Similar Articles
20 similar articles found (search time: 62 ms)
1.
Nath AR, Beauchamp MS. NeuroImage 2012;59(1):781-787.
The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception.
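At the group level, the key analysis in this abstract reduces to an across-subject correlation between left STS response amplitude and the likelihood of reporting the illusion. A minimal sketch of that kind of computation, using made-up example numbers rather than the study's data:

    import numpy as np

    # Hypothetical per-subject values (illustrative only, not the study's data):
    # mean BOLD amplitude in the left STS for McGurk stimuli (% signal change)
    sts_amplitude = np.array([0.10, 0.15, 0.22, 0.30, 0.35, 0.41, 0.48, 0.55])
    # proportion of McGurk trials on which the illusory (fused) percept was reported
    mcgurk_rate = np.array([0.05, 0.20, 0.30, 0.45, 0.50, 0.70, 0.80, 0.90])

    # Pearson correlation across subjects: a positive r means stronger left STS
    # responses go with a higher likelihood of perceiving the illusion.
    r = np.corrcoef(sts_amplitude, mcgurk_rate)[0, 1]
    print(f"across-subject correlation r = {r:.2f}")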

2.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), plus in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

3.
Beauchamp MS, Yasar NE, Frye RE, Ro T. NeuroImage 2008;41(3):1011-1020.
Human superior temporal sulcus (STS) is thought to be a key brain area for multisensory integration. Many neuroimaging studies have reported integration of auditory and visual information in STS but less is known about the role of STS in integrating other sensory modalities. In macaque STS, the superior temporal polysensory area (STP) responds to somatosensory, auditory and visual stimulation. To determine if human STS contains a similar area, we measured brain responses to somatosensory, auditory and visual stimuli using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). An area in human posterior STS, STSms (multisensory), responded to stimulation in all three modalities. STSms responded during both active and passive presentation of unisensory somatosensory stimuli and showed larger responses for more intense vs. less intense tactile stimuli, hand vs. foot, and contralateral vs. ipsilateral tactile stimulation. STSms showed responses of similar magnitude for unisensory tactile and auditory stimulation, with an enhanced response to simultaneous auditory-tactile stimulation. We conclude that STSms is important for integrating information from the somatosensory as well as the auditory and visual modalities, and could be the human homolog of macaque STP.

4.
Stevenson RA, James TW. NeuroImage 2009;44(3):1210-1223.
The superior temporal sulcus (STS) is a region involved in audiovisual integration. In non-human primates, multisensory neurons in STS display inverse effectiveness. In two fMRI studies using multisensory tool and speech stimuli presented at parametrically varied levels of signal strength, we show that the pattern of neural activation in human STS is also inversely effective. Although multisensory tool-defined and speech-defined regions of interest were non-overlapping, the pattern of inverse effectiveness was the same for tools and speech across regions. The findings suggest that, even though there are sub-regions in STS that are speech-selective, the manner in which visual and auditory signals are integrated in multisensory STS is not specific to speech.
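Inverse effectiveness, as used here, means that the benefit of combining modalities is proportionally largest when the individual signals are weakest. A hedged sketch of one way such an index could be computed from condition-wise response estimates; the numbers and the exact index are assumptions, not the authors' pipeline:

    import numpy as np

    # Hypothetical response estimates (% signal change) at three parametric
    # levels of signal strength, from weak to strong (illustrative values only).
    signal_levels = ["low", "medium", "high"]
    resp_A  = np.array([0.10, 0.30, 0.60])   # auditory-only
    resp_V  = np.array([0.08, 0.25, 0.55])   # visual-only
    resp_AV = np.array([0.30, 0.60, 1.05])   # audiovisual

    # Multisensory gain relative to the summed unisensory responses.
    # Inverse effectiveness predicts the relative gain is largest at low signal strength.
    gain = (resp_AV - (resp_A + resp_V)) / (resp_A + resp_V)
    for level, g in zip(signal_levels, gain):
        print(f"{level:>6}: relative multisensory gain = {g:+.2f}")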

5.
The temporal synchrony of auditory and visual signals is known to affect the perception of an external event, yet it is unclear what neural mechanisms underlie the influence of temporal synchrony on perception. Using parametrically varied levels of stimulus asynchrony in combination with BOLD fMRI, we identified two anatomically distinct subregions of multisensory superior temporal cortex (mSTC) that showed qualitatively distinct BOLD activation patterns. A synchrony-defined subregion of mSTC (synchronous > asynchronous) responded only when auditory and visual stimuli were synchronous, whereas a bimodal subregion of mSTC (auditory > baseline and visual > baseline) showed significant activation to all presentations, but showed monotonically increasing activation with increasing levels of asynchrony. The presence of two distinct activation patterns suggests that the two subregions of mSTC may rely on different neural mechanisms to integrate audiovisual sensory signals. An additional whole-brain analysis revealed a network of regions responding more with synchronous than asynchronous speech, including right mSTC, and bilateral superior colliculus, fusiform gyrus, lateral occipital cortex, and extrastriate visual cortex. The spatial location of individual mSTC ROIs was much more variable in the left than right hemisphere, suggesting that individual differences may contribute to the right lateralization of mSTC in a group SPM. These findings suggest that bilateral mSTC is composed of distinct multisensory subregions that integrate audiovisual speech signals through qualitatively different mechanisms, and may be differentially sensitive to stimulus properties including, but not limited to, temporal synchrony.  相似文献   
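The two subregions described above are distinguished purely by which contrasts a voxel (or ROI) satisfies: synchronous > asynchronous on the one hand, versus auditory > baseline and visual > baseline on the other. A toy illustration of that sorting logic, with assumed variable names and thresholds rather than the authors' actual pipeline:

    def classify_mstc_voxel(beta_sync, beta_async, beta_aud, beta_vis, thr=0.0):
        """Toy classification from condition-wise response estimates (e.g., GLM betas).
        Labels and threshold are illustrative only; the ordering arbitrarily gives
        priority to the synchrony contrast if both criteria hold."""
        if (beta_sync - beta_async) > thr:        # synchronous > asynchronous contrast
            return "synchrony-defined mSTC"
        if beta_aud > thr and beta_vis > thr:     # auditory > baseline AND visual > baseline
            return "bimodal mSTC"
        return "unclassified"

    print(classify_mstc_voxel(beta_sync=0.8, beta_async=0.1, beta_aud=0.2, beta_vis=0.0))
    print(classify_mstc_voxel(beta_sync=0.5, beta_async=0.6, beta_aud=0.7, beta_vis=0.6))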

6.
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in similar brain areas as those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (B/W alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes are dependent on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).

7.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006;32(1):423-431.
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, during which no scanner noise was present, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered together with a control stimulus of the other modality, whereas speech cues of both modalities were delivered during the bimodal (AV) condition. In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in Broca's area (BA 44), the medial frontal region (BA 8), and the anterior ventrolateral prefrontal region (BA 47) of the left hemisphere during the V condition, in which lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on right-hemisphere activity during the cross-modal interaction of audiovisual speech perception.

8.
Watkins S, Shams L, Tanaka S, Haynes JD, Rees G. NeuroImage 2006;31(3):1247-1256.
When a single brief visual flash is accompanied by two auditory bleeps, it is frequently perceived incorrectly as two flashes. Here, we used high field functional MRI in humans to examine the neural basis of this multisensory perceptual illusion. We show that activity in retinotopic visual cortex is increased by the presence of concurrent auditory stimulation, irrespective of any illusory perception. However, when concurrent auditory stimulation gave rise to illusory visual perception, activity in V1 was enhanced, despite auditory and visual stimulation being unchanged. These findings confirm that responses in human V1 can be altered by sound and show that they reflect subjective perception rather than the physically present visual stimulus. Moreover, as the right superior temporal sulcus and superior colliculus were also activated by illusory visual perception, together with V1, they provide a potential neural substrate for the generation of this multisensory illusion.

9.
Kang E, Lee DS, Kang H, Lee JS, Oh SH, Lee MC, Kim CS. NeuroImage 2004;22(3):1173-1181.
Brain plasticity underlying the gain of auditory sensation and/or auditory language was investigated in deaf children with early-onset deafness after cochlear implantation (CI) surgery. The study examined both brain glucose metabolism and auditory speech learning, using 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and the Central Institute for the Deaf (CID) test respectively, both before and after CI surgery. In a within-group analysis comparing the pre-CI and post-CI PET results, CI itself resulted in an increase in glucose metabolism in the medial visual cortex, the bilateral thalamus, and the posterior cingulate. Compared with normal-hearing controls, the brain activity of the deaf children after CI was greater in the medial visual cortex and bilateral occipito-parietal junctions. Better speech perception ability was associated with increased activity in higher visual areas, such as the middle occipito-temporal junction (hMT/V5) and the posterior inferior temporal region (BA 21/37) in the left hemisphere, and with decreased activity in the right inferior parieto-dorsal prefrontal region. These findings suggest that speech learning placed a greater demand on the visual and visuospatial processing subserved by the early visual cortex and parietal cortices. However, only those deaf children who successfully learned auditory language after CI relied more on visual motion perception of mouth movements in the left hMT/V5 region and less on somatosensory function in the right parieto-frontal region.

10.
Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion.

11.
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

12.
Rimol LM, Specht K, Weis S, Savoy R, Hugdahl K. NeuroImage 2005;26(4):1059-1067.
The objective of this study was to investigate phonological processing in the brain by using sub-syllabic speech units with rapidly changing frequency spectra. We used isolated stop consonants extracted from natural speech consonant-vowel (CV) syllables, which were digitized and presented through headphones in a functional magnetic resonance imaging (fMRI) paradigm. The stop consonants were contrasted with CV syllables. In order to control for general auditory activation, we used duration- and intensity-matched noise as a third stimulus category. The subjects were seventeen right-handed, healthy male volunteers. BOLD activation responses were acquired on a 1.5-T MR scanner. The auditory stimuli were presented through MR compatible headphones, using an fMRI paradigm with clustered volume acquisition and 12 s repetition time. The consonant vs. noise comparison resulted in unilateral left lateralized activation in the posterior part of the middle temporal gyrus and superior temporal sulcus (MTG/STS). The CV syllable vs. noise comparison resulted in bilateral activation in the same regions, with a leftward asymmetry. The reversed comparisons, i.e., noise vs. speech stimuli, resulted in right hemisphere activation in the supramarginal and superior temporal gyrus, as well as right prefrontal activation. Since the consonant stimuli are unlikely to have activated a semantic-lexical processing system, it seems reasonable to assume that the MTG/STS activation represents phonetic/phonological processing. This may involve the processing of both spectral and temporal features considered important for phonetic encoding.
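The "clustered volume acquisition" paradigm mentioned above is a sparse-sampling trick: with a long TR, all slices are collected in one short burst and the auditory stimuli are played in the silent gap between bursts, so the evoked BOLD response peaks near the next acquisition. A rough timing sketch under assumed values; the 12 s TR is from the abstract, while the burst duration and HRF peak latency are assumptions:

    TR = 12.0        # repetition time in seconds (stated in the abstract)
    acq_dur = 2.0    # assumed duration of one clustered acquisition burst (not stated)
    hrf_peak = 5.0   # typical BOLD peak latency in seconds (textbook value, assumption)

    # Present the stimulus so that its BOLD peak coincides with the next burst.
    stim_onset = TR - hrf_peak                    # seconds after the start of each TR
    silent_gap = (acq_dur, TR)                    # window free of scanner noise
    assert silent_gap[0] < stim_onset < silent_gap[1], "stimulus would overlap scanner noise"
    print(f"silent gap: {silent_gap[0]:.0f}-{silent_gap[1]:.0f} s; "
          f"present stimulus at t = {stim_onset:.0f} s into each TR")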

13.
In visual perception of emotional stimuli, low- and high-level appraisal processes have been found to engage different neural structures. Beyond emotional facial expression, emotional prosody is an important auditory cue for social interaction. Neuroimaging studies have proposed a network for emotional prosody processing that involves a right temporal input region and explicit evaluation in bilateral prefrontal areas. However, the comparison of different appraisal levels has so far relied upon using linguistic instructions during low-level processing, which might confound effects of processing level and linguistic task. In order to circumvent this problem, we examined processing of emotional prosody in meaningless speech during gender labelling (implicit, low-level appraisal) and emotion labelling (explicit, high-level appraisal). While bilateral amygdala, left superior temporal sulcus and right parietal areas showed stronger blood oxygen level-dependent (BOLD) responses during implicit processing, areas with stronger BOLD responses during explicit processing included the left inferior frontal gyrus, bilateral parietal, anterior cingulate and supplemental motor cortex. Emotional versus neutral prosody evoked BOLD responses in right superior temporal gyrus, bilateral anterior cingulate, left inferior frontal gyrus, insula and bilateral putamen. Basal ganglia and right anterior cingulate responses to emotional versus neutral prosody were particularly pronounced during explicit processing. These results are in line with an amygdala-prefrontal-cingulate network controlling different appraisal levels, and suggest a specific role of the left inferior frontal gyrus in explicit evaluation of emotional prosody. In addition to brain areas commonly related to prosody processing, our results suggest specific functions of anterior cingulate and basal ganglia in detecting emotional prosody, particularly when explicit identification is necessary.

14.
Rimol LM, Specht K, Hugdahl K. NeuroImage 2006;30(2):554-562.
Previous neuroimaging studies have consistently reported bilateral activation to speech stimuli in the superior temporal gyrus (STG) and have identified an anteroventral stream of speech processing along the superior temporal sulcus (STS). However, little attention has been devoted to the possible confound of individual differences in hemispheric dominance for speech. The present study was designed to test for speech-selective activation while controlling for inter-individual variance in auditory laterality, by using only subjects with at least 10% right ear advantage (REA) on the dichotic listening test. Eighteen right-handed, healthy male volunteers (median age 26) participated in the study. The stimuli were words, syllables, and sine wave tones (220-2600 Hz), presented in a block design. Comparing words > tones and syllables > tones yielded activation in the left posterior MTG and the lateral STG (upper bank of STS). In the right temporal lobe, the activation was located in the MTG/STS (lower bank). Comparing left and right temporal lobe cluster sizes from the words > tones and syllables > tones contrasts on single-subject level demonstrated a statistically significant left lateralization for speech sound processing in the STS/MTG area. The asymmetry analyses suggest that dichotic listening may be a suitable method for selecting a homogenous group of subjects with respect to left hemisphere language dominance.
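The inclusion criterion above (at least a 10% right ear advantage on the dichotic listening test) is usually computed as a laterality index over correctly reported right- and left-ear items. The abstract does not state the exact formula, so the common 100 * (R - L) / (R + L) version is assumed in this sketch, with made-up counts rather than the study's data:

    def right_ear_advantage(right_correct: int, left_correct: int) -> float:
        """Laterality index in percent; positive values indicate a right ear advantage.
        (One common formula; the exact one used by the authors is not stated.)"""
        return 100.0 * (right_correct - left_correct) / (right_correct + left_correct)

    # Hypothetical report counts (right ear, left ear) for three listeners.
    subjects = {"s01": (22, 14), "s02": (18, 17), "s03": (25, 10)}
    for sid, (r_hits, l_hits) in subjects.items():
        rea = right_ear_advantage(r_hits, l_hits)
        decision = "include" if rea >= 10.0 else "exclude"   # the study's 10% threshold
        print(f"{sid}: REA = {rea:5.1f}% -> {decision}")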

15.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory > Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word + pseudoword > rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word > pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word > pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

16.
We presented phonetically matching and conflicting audiovisual vowels to 10 dyslexic and 10 fluent-reading young adults during "clustered volume acquisition" functional magnetic resonance imaging (fMRI) at 3 T. We further assessed co-variation between the dyslexic readers' phonological processing abilities, as indexed by neuropsychological test scores, and BOLD signal change within the visual cortex, auditory cortex, and Broca's area. Both dyslexic and fluent readers showed increased activation during observation of phonetically conflicting compared to matching vowels within the classical motor speech regions (Broca's area and the left premotor cortex), this activation difference being more extensive and bilateral in the dyslexic group. The between-group activation difference in the conflicting > matching contrast reached significance in the motor speech regions and in the left inferior parietal lobule, with dyslexic readers exhibiting stronger activation than fluent readers. The dyslexic readers' BOLD signal change co-varied with their phonological processing abilities within the visual cortex and Broca's area, and to a lesser extent within the auditory cortex. We suggest that these findings reflect dyslexic readers' greater use of motor-articulatory and visual strategies during phonetic processing of audiovisual speech, possibly to compensate for their difficulties in auditory speech perception.

17.
Specht K, Reul J. NeuroImage 2003;20(4):1944-1954.
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between stimulus types were interpreted as an automatic effect and were not affected by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activations of the primary and secondary auditory cortex. The strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish between different subsystems, showing activation extending from posterior to anterior for speech and speech-like information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization for verbal stimuli to the left and for sounds to the right was already detectable when short stimuli were used.

18.
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatiotemporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /ba/, incongruent auditory /ba/ synchronized with visual /ga/, auditory-only /ba/, and visual-only /ba/ and /ga/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 ms. The CDRs demonstrated complex spatiotemporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area [Miller, L.M., d'Esposito, M., 2005. Perceptual fusion and stimulus coincidence in the cross-modal integration of speech. Journal of Neuroscience 25, 5884-5893]. The importance of spatiotemporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (<100 ms) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions, was observed at approximately 160 to 220 ms. The STS was neither the earliest nor most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response.

19.
Shahin AJ, Bishop CW, Miller LM. NeuroImage 2009;44(3):1133-1143.
The brain uses context and prior knowledge to repair degraded sensory inputs and improve perception. For example, listeners hear speech continuing uninterrupted through brief noises, even if the speech signal is artificially removed from the noisy epochs. In a functional MRI study, we show that this temporal filling-in process is based on two dissociable neural mechanisms: the subjective experience of illusory continuity, and the sensory repair mechanisms that support it. Areas mediating illusory continuity include the left posterior angular gyrus (AG) and superior temporal sulcus (STS) and the right STS. Unconscious sensory repair occurs in Broca's area, bilateral anterior insula, and pre-supplementary motor area. The left AG/STS and all the repair regions show evidence for word-level template matching and communicate more when fewer acoustic cues are available. These results support a two-path process where the brain creates coherent perceptual objects by applying prior knowledge and filling-in corrupted sensory information.

20.