20 similar documents found; search took 93 ms.
1.
Yi Du, Bradley R. Buchsbaum, Cheryl L. Grady, Claude Alain. Proceedings of the National Academy of Sciences of the United States of America, 2014, 111(19): 7126–7131
Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (–12, –9, –6, –2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca’s area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca’s area exhibited effective phoneme categorization when SNR ≥ –6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.

The perception and identification of speech signals have traditionally been attributed to the superior temporal cortices (1–3).
However, the speech motor system (SMS)—the premotor cortex (PMC) and the posterior inferior frontal gyrus (IFG), including Broca’s area—that traditionally supports speech production is also implicated in speech perception tasks as revealed by functional magnetic resonance imaging (fMRI) (4–8), magnetoencephalography (9), electrocorticography in patients (10), and transcranial magnetic stimulation (TMS) (11, 12). Although there is little doubt about these redundant representations, contentious debate remains about the role of the SMS in speech perception. The idea of action-based (articulatory) representations of speech tokens was proposed long ago in the motor theory of speech perception (13) and has been revived recently with the discovery of “mirror neurons” (14). However, empirical evidence does not support a strong version of the motor theory (15). Instead, current theories of speech processing posit that the SMS may implement a sensorimotor integration function to facilitate speech perception (2, 16–18). Specifically, the SMS generates internal models that predict sensory consequences of articulatory gestures under consideration, and such forward predictions are matched with acoustic representations in sensorimotor interface areas located in the left posterior superior temporal gyrus (pSTG) and/or left inferior parietal lobule (IPL) to constrain perception (17, 18). Forward sensorimotor mapping may sharpen the perceptual acuity of the sensory system to the expected inputs via a top–down gain allocation mechanism (16), which, we assume, would be especially useful for disambiguating phonological information under adverse listening conditions. 
However, the assumption that the SMS is more robust than the auditory cortex in phonological processing in noise, so as to achieve successful forward mapping during speech perception, has not yet been substantiated.

In addition, there is a debate about whether the motor function is (11) or is not (16) essential for speech perception. Studies using TMS have found that stimulation of the PMC resulted in impaired phonetic discrimination in noise (11) but had no effect on phoneme identification under optimal listening conditions (16), suggesting a circumstantial recruitment of the SMS in speech perception. Moreover, neuroimaging studies have shown elevated activity in the SMS as speech intelligibility decreases (5, 17–21). For instance, there was greater activation in the PMC or Broca’s area when participants listened to distorted relative to clear speech (19), or nonnative rather than native speech (17, 18). Activity in the left IFG increased as temporal compression of the speech signals increased, until comprehension failed at the most compressed levels (20). For speech-in-noise perception, stronger activation in the left PMC and IFG was observed at lower signal-to-noise ratios (SNRs) (21), and bilateral IFG activity was positively correlated with SNR-modulated reaction time (RT) (5). These findings have given rise to the hypothesis that the SMS contributes to speech-in-noise perception in an adaptive and task-specific manner. Presumably, under optimal listening conditions (i.e., no background noise), speech perception emerges primarily from acoustic representations within the auditory system with little or no support from the SMS. In contrast, the SMS would play a greater role in speech perception when the speech signal is impoverished under adverse listening conditions. However, there is likely a limit to the extent to which the SMS can compensate for poor SNR.
That is, in some cases, information from articulatory commands fails to generate plausible predictions regarding the speech signals. Thus, the forward mapping may adaptively change with SNR in a linear or a convex pattern (in the convex case, forward-mapping efficiency peaks at a certain SNR and falls off as the SNR increases or decreases). However, the SNR conditions under which the SMS can successfully compensate for perception of impoverished speech signals through such a forward mapping mechanism are unknown.

In the current fMRI study, 16 young participants identified English phoneme tokens (/ba/, /ma/, /da/, and /ta/) masked by broadband noise at multiple SNR levels (–12, –9, –6, –2, 8 dB, and no noise) via button press. A subvocal production task was also included at the end of scanning, in which participants were instructed to repetitively and silently pronounce the four phonemes. Univariate general linear model (GLM) analysis and multivariate pattern analysis (MVPA) (22–25) were combined to investigate the recruitment [mean blood oxygenation level-dependent (BOLD) activation] and phoneme discriminability (spatial distribution of activity) of the SMS during speech-in-noise perception. MVPA compares the distributed activity patterns evoked by different stimuli/conditions across voxels and reveals the within-subject consistency of the activation patterns. It is robust to individual anatomical variability, is sensitive to small differences in activation, and provides a powerful tool for examining the processes underlying speech categorization (25).
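The MVPA logic described above can be illustrated with a minimal correlation-based sketch. The function name and the simulated voxel data are hypothetical; the study used fMRI patterns and more elaborate analyses, but the core idea is the same: category information is present when within-category pattern similarity exceeds between-category similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40  # hypothetical ROI size and trials per phoneme

def pattern_discriminability(patterns_a, patterns_b):
    """Split-half correlation MVPA (a sketch): patterns are (trials, voxels)
    arrays. Discriminability = mean within-category minus between-category
    split-half pattern correlation."""
    half = patterns_a.shape[0] // 2
    mean_a1, mean_a2 = patterns_a[:half].mean(0), patterns_a[half:].mean(0)
    mean_b1, mean_b2 = patterns_b[:half].mean(0), patterns_b[half:].mean(0)
    within = (np.corrcoef(mean_a1, mean_a2)[0, 1] +
              np.corrcoef(mean_b1, mean_b2)[0, 1]) / 2
    between = (np.corrcoef(mean_a1, mean_b2)[0, 1] +
               np.corrcoef(mean_b1, mean_a2)[0, 1]) / 2
    return within - between

# Simulated voxel patterns: /ba/ and /da/ evoke distinct mean patterns plus noise
signal_ba = rng.normal(0, 1, n_voxels)
signal_da = rng.normal(0, 1, n_voxels)
trials_ba = signal_ba + rng.normal(0, 1, (n_trials, n_voxels))
trials_da = signal_da + rng.normal(0, 1, (n_trials, n_voxels))

d = pattern_discriminability(trials_ba, trials_da)
print(d)  # positive when the patterns carry category information
```

A positive value indicates that the multivoxel patterns distinguish the two phonemes; as noise in the input grows, this value shrinks toward zero.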
We predicted that (i) because the dorsal auditory stream (i.e., IFG, PMC, pSTG, and IPL) supporting sensorimotor integration is activated as a result of task-related speech perception (5, 17–21) and phonological working memory processes (26–28), the mean BOLD activity in those regions would negatively correlate with SNR-manipulated accuracy (increasing activity with increasing difficulty), supporting the compensatory recruitment of the SMS under adverse listening conditions; (ii) to implement effective forward sensorimotor mapping, the SMS would exhibit stronger multivoxel phoneme discrimination than auditory regions under noisy listening conditions; and (iii) when SNR decreases, the difference in phoneme discriminability between the SMS and auditory regions may increase linearly, or increase first and then decrease at a certain SNR level because of failed forward prediction processes under extensive noise interference. That is, the efficiency of the forward mapping would adaptively change with SNR in a linear or a convex pattern, respectively.
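The SNR manipulation used in such stimuli can be sketched concretely. This is a hedged illustration using the standard RMS definition of SNR, not necessarily the authors' exact stimulus-generation procedure; the helper name and test signal are illustrative.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (standard RMS definition), then return the mixture."""
    rms_s = np.sqrt(np.mean(speech ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    gain = rms_s / (rms_n * 10 ** (snr_db / 20))
    return speech + gain * noise

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)   # stand-in for a phoneme token
noise = rng.normal(0, 1, t.size)       # broadband masker

for snr in (-12, -9, -6, -2, 8):
    mix = mix_at_snr(speech, noise, snr)
    scaled_noise = mix - speech
    achieved = 20 * np.log10(np.sqrt(np.mean(speech ** 2)) /
                             np.sqrt(np.mean(scaled_noise ** 2)))
    print(snr, round(achieved, 2))  # achieved SNR matches the target
```

At –12 dB the masker carries roughly sixteen times the speech power, which is why phoneme identity becomes hard to recover from the acoustics alone.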
2.
Sammual Yu-Lut Leung, Keith Man-Chung Wong, Vivian Wing-Wah Yam. Proceedings of the National Academy of Sciences of the United States of America, 2016, 113(11): 2845–2850
A series of mono- and dinuclear alkynylplatinum(II) terpyridine complexes containing the hydrophilic oligo(para-phenylene ethynylene) with two 3,6,9-trioxadec-1-yloxy chains was designed and synthesized. The mononuclear alkynylplatinum(II) terpyridine complex was found to display a very strong tendency toward the formation of supramolecular structures. Interestingly, additional end-capping with another platinum(II) terpyridine moiety of various steric bulk at the terminal alkyne would lead to the formation of nanotubes or helical ribbons. These desirable nanostructures were found to be governed by the steric bulk on the platinum(II) terpyridine moieties, which modulates the directional metal−metal interactions and controls the formation of nanotubes or helical ribbons. Detailed analysis of temperature-dependent UV-visible absorption spectra of the nanostructured tubular aggregates also provided insights into the assembly mechanism and showed the role of metal−metal interactions in the cooperative supramolecular polymerization of the amphiphilic platinum(II) complexes.

Square-planar d8 platinum(II) polypyridine complexes have long been known to exhibit intriguing spectroscopic and luminescence properties (1–54) as well as interesting solid-state polymorphism associated with metal−metal and π−π stacking interactions (1–14, 25). Earlier work by our group showed the first example, to our knowledge, of an alkynylplatinum(II) terpyridine system [Pt(tpy)(C≡CR)]+ that incorporates σ-donating and solubilizing alkynyl ligands together with the formation of Pt···Pt interactions to exhibit notable color changes and luminescence enhancements on solvent composition change (25) and polyelectrolyte addition (26).
This approach has provided access to alkynylplatinum(II) terpyridine and other related cyclometalated platinum(II) complexes with functionalities that can self-assemble into metallogels (27–31), liquid crystals (32, 33), and other molecular architectures, such as hairpin conformations (34), helices (35–38), nanostructures (39–45), and molecular tweezers (46, 47), as well as having a wide range of applications in molecular recognition (48–52), biomolecular labeling (48–52), and materials science (53, 54). Recently, metal-containing amphiphiles have also emerged as building blocks for supramolecular architectures (42–44, 55–59). Their self-assembly has been found to yield different molecular architectures with unprecedented complexity through multiple noncovalent interactions on the introduction of external stimuli (42–44, 55–59).

Helical architecture is one of the most exciting self-assembled morphologies because of its unique functional and topological properties (60–69). Helical ribbons composed of amphiphiles, such as diacetylenic lipids, glutamates, and peptide-based amphiphiles, are often precursors for the growth of tubular structures on an increase in the width or the merging of the edges of ribbons (64, 65). Recently, the optimization of nanotube formation vs. helical nanostructures has attracted considerable interest; it can be achieved through a fine interplay of the amphiphilic properties of the molecules (66), the choice of counteranions (67, 68), or the pH of the media (69), which govern the self-assembly of molecules into the desired aggregates of helical ribbons or nanotube scaffolds. However, precise control of supramolecular morphology between helical ribbons and nanotubes remains challenging, particularly for polycyclic aromatics in the field of molecular assembly (64–69).
Oligo(para-phenylene ethynylene)s (OPEs) with solely π−π stacking interactions are well recognized to self-assemble into supramolecular systems of various nanostructures but rarely result in the formation of tubular scaffolds (70–73). In view of the rich photophysical properties of square-planar d8 platinum(II) systems and their propensity toward the formation of directional Pt···Pt interactions in distinctive morphologies (27–31, 39–45), it is anticipated that such directional and noncovalent metal−metal interactions might be capable of directing or dictating molecular ordering and alignment to give the desired nanostructures of helical ribbons or nanotubes in a precise and controllable manner.

Herein, we report the design and synthesis of mono- and dinuclear alkynylplatinum(II) terpyridine complexes containing hydrophilic OPEs with two 3,6,9-trioxadec-1-yloxy chains. The mononuclear alkynylplatinum(II) terpyridine complex, with its amphiphilic properties, is found to show a strong tendency toward the formation of supramolecular structures on diffusion of diethyl ether into dichloromethane or dimethyl sulfoxide (DMSO) solution. Interestingly, additional end-capping with another platinum(II) terpyridine moiety of various steric bulk at the terminal alkyne results in nanotubes or helical ribbons in the self-assembly process. To the best of our knowledge, this finding represents the first example of the utilization of the steric bulk of the moieties, which modulates the formation of directional metal−metal interactions, to precisely control the formation of nanotubes or helical ribbons in the self-assembly process. Application of the nucleation–elongation model to this assembly process by UV-visible (UV-vis) absorption spectroscopic studies has elucidated the nature of the molecular self-assembly and, more importantly, has revealed the role of metal−metal interactions in the formation of these two types of nanostructures.
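For reference, cooperative (nucleation–elongation) supramolecular polymerization is typically analyzed by fitting temperature-dependent UV-vis melting curves. The abstract does not give the equations, so the following is the widely used elongation-regime expression from the standard cooperative-growth treatment, stated here as an assumption about the form of the analysis rather than the authors' exact model:

```latex
% Elongation regime (T < T_e): fraction of aggregated molecules
\phi_n(T) \;\simeq\; \phi_{\mathrm{SAT}}
\left( 1 - \exp\!\left[ \frac{-\,\Delta H_e}{R\,T_e^{\,2}}\,\bigl(T - T_e\bigr) \right] \right)
```

where $T_e$ is the elongation temperature, $\Delta H_e$ the enthalpy release on elongation, $R$ the gas constant, and $\phi_{\mathrm{SAT}}$ a normalization constant such that $\phi_n/\phi_{\mathrm{SAT}}$ approaches unity well below $T_e$. A sharp onset of aggregation at $T_e$ in the experimental curve is the signature of cooperative, rather than isodesmic, growth.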
4.
Jane Hornickel, Steven G. Zecker, Ann R. Bradlow, Nina Kraus. Proceedings of the National Academy of Sciences of the United States of America, 2012, 109(41): 16731–16736
Children with dyslexia often exhibit increased variability in sensory and cognitive aspects of hearing relative to typically developing peers. Assistive listening devices (classroom FM systems) may reduce auditory processing variability by enhancing acoustic clarity and attention. We assessed the impact of classroom FM system use for 1 year on auditory neurophysiology and reading skills in children with dyslexia. FM system use reduced the variability of subcortical responses to sound, and this improvement was linked to concomitant increases in reading and phonological awareness. Moreover, response consistency before FM system use predicted gains in phonological awareness. A matched control group of children with dyslexia attending the same schools who did not use the FM system did not show these effects. Assistive listening devices can improve the neural representation of speech and impact reading-related skills by enhancing acoustic clarity and attention, reducing variability in auditory processing.

Children with dyslexia, reading impairment not caused by deficits in ability or opportunity (1), often have difficulties with orienting and maintaining attention (2, 3). Although the ability to direct attention is still developing during the elementary school years (4), dyslexics have poorer task-dependent attentional shifting in both auditory and visual modalities than their typically developing peers even into adulthood (2, 3). These deficits may impact and be impacted by heightened variability in sensory processes, such as inconsistent representations of speech by the auditory nervous system, and could contribute to documented impairments in auditory processing (5–7) and difficulty with meaningfully disambiguating speech sounds (8). Children with dyslexia can exhibit abnormal subcortical processing of speech, particularly in response to acoustic elements crucial for differentiating speech sounds (9–11).
Deficient auditory sensory representation and unsuccessful disambiguation of speech likely contribute to the well-documented impairments in phonological awareness and phonological memory seen in children with dyslexia (12–14), with auditory processing skills in prereaders predicting later language skill (15, 16). Because the auditory system integrates both sensory and cognitive facets of hearing, we suggest that through repeated, impaired interaction with sound, children with dyslexia can develop abnormal sensory representations of speech as well as abnormal cognitive skills for the interpretation of speech. For example, a child who repeatedly misperceives the sound “cat” as “bat” or “pat” fails to make a robust sound-to-meaning connection between those sounds and their referent. However, because of this same integrative nature of the auditory system, deficient function can be improved with auditory training.

Auditory perception and neurophysiology can be altered with auditory training (17–23). These changes can be traced directly to cross-cortical and descending cortical influence on neural receptivity in animal models and are driven by the behavioral importance of sounds (18, 24). In humans, attention and working memory are important components of training-related changes (25) and may serve to direct descending cortical influence on auditory sensory function. Computer-based perceptual games, musical training, and language learning can provide effective training for children with developmental learning disorders, such as dyslexia, because they actively engage attention to sound. Classroom assistive listening devices, which can be worn throughout the school day, can also improve auditory processing by engendering enhancements in attention, as reported by both teachers and students (26–28). Assistive listening devices (i.e., classroom FM systems) also result in neurophysiologic enhancements in response to attended vs. ignored sounds (29).
Such systems increase the signal-to-noise ratio of the speaker of interest (e.g., the teacher) (30) and create a more stable acoustic input by reducing the impact of background noise on the most vulnerable portion of speech sounds (31). These acoustic enhancements, along with accompanying improvements in auditory attention, lead to boosts in academic achievement, literacy, and phonological awareness, with the greatest benefits seen for children with learning impairments (32–34).

What are the biological mechanisms by which classroom FM system use improves auditory attention and phonological awareness in children with dyslexia? How might these benefits translate to the neural representation of speech? Here, we investigated the impact of classroom FM system use on auditory brainstem encoding of stop consonants, which can be deficient in children with dyslexia. Auditory brainstem function is stable from test to retest in the absence of intervention (35, 36), but can be altered by short-term auditory training (19, 20, 22), lifelong experience such as musical training and language experience (37, 38), and directed attention (39). Here we assessed auditory brainstem responses and reading performance in children with dyslexia before and after classroom FM system use for one academic year. We hypothesized that enhanced neural consistency would accompany improvement in reading skills in children wearing the FM systems but not in a control group of dyslexic children in the same classrooms who did not wear assistive listening devices. We further hypothesized that neural consistency would improve pervasively throughout the recording session and not simply offset neural fatigue.
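Trial-to-trial response consistency of the kind measured above is commonly quantified by correlating averages of independent halves of the recorded trials. The sketch below illustrates that general approach on simulated data; the function name, waveform, and noise levels are hypothetical and are not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def response_consistency(trials):
    """Split-half consistency (a sketch): correlate the average waveforms
    of the first and second halves of trials."""
    half = len(trials) // 2
    a = trials[:half].mean(axis=0)
    b = trials[half:].mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

# Simulated subcortical response: a fixed waveform plus trial-to-trial noise
t = np.linspace(0, 0.05, 500)            # 50-ms epoch
waveform = np.sin(2 * np.pi * 100 * t)   # stand-in for a speech-evoked response

def simulate(noise_sd, n_trials=300):
    return waveform + rng.normal(0, noise_sd, (n_trials, t.size))

stable = response_consistency(simulate(noise_sd=1.0))
variable = response_consistency(simulate(noise_sd=8.0))
print(stable, variable)  # consistency drops as trial-to-trial variability grows
```

In this framing, the reported FM-system benefit corresponds to the `variable` condition moving toward the `stable` one after a year of device use.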
5.
Mihaela Mihailescu, Dmitriy Krepkiy, Mirela Milescu, Klaus Gawrisch, Kenton J. Swartz, Stephen White. Proceedings of the National Academy of Sciences of the United States of America, 2014, 111(50): E5463–E5470
Protein toxins from tarantula venom alter the activity of diverse ion channel proteins, including voltage-, stretch-, and ligand-activated cation channels. Although tarantula toxins have been shown to partition into membranes, and the membrane is thought to play an important role in their activity, the structural interactions between these toxins and lipid membranes are poorly understood. Here, we use solid-state NMR and neutron diffraction to investigate the interactions between a voltage sensor toxin (VSTx1) and lipid membranes, with the goal of localizing the toxin in the membrane and determining its influence on membrane structure. Our results demonstrate that VSTx1 localizes to the headgroup region of lipid membranes and produces a thinning of the bilayer. The toxin orients such that many basic residues are in the aqueous phase, all three Trp residues adopt interfacial positions, and several hydrophobic residues are within the membrane interior. One remarkable feature of this preferred orientation is that the surface of the toxin that mediates binding to voltage sensors is ideally positioned within the lipid bilayer to favor complex formation between the toxin and the voltage sensor.

Protein toxins from venomous organisms have been invaluable tools for studying the ion channel proteins they target. For example, in the case of voltage-activated potassium (Kv) channels, pore-blocking scorpion toxins were used to identify the pore-forming region of the channel (1, 2), and gating modifier tarantula toxins that bind to S1–S4 voltage-sensing domains have helped to identify structural motifs that move at the protein–lipid interface (3–5).
In many instances, these toxin–channel interactions are highly specific, allowing them to be used in target validation and drug development (6–8).

Tarantula toxins are a particularly interesting class of protein toxins that have been found to target all three families of voltage-activated cation channels (3, 9–12), stretch-activated cation channels (13–15), as well as ligand-gated ion channels as diverse as acid-sensing ion channels (ASIC) (16–21) and transient receptor potential (TRP) channels (22, 23). The tarantula toxins targeting these ion channels belong to the inhibitor cystine knot (ICK) family of venom toxins that are stabilized by three disulfide bonds at the core of the molecule (16, 17, 24–31). Although conventional tarantula toxins vary in length from 30 to 40 aa and contain one ICK motif, the recently discovered double-knot toxin (DkTx) that specifically targets TRPV1 channels contains two separable lobes, each containing its own ICK motif (22, 23).

One unifying feature of all tarantula toxins studied thus far is that they act on ion channels by modifying the gating properties of the channel. The best studied of these are the tarantula toxins targeting voltage-activated cation channels, where the toxins bind to the S3b–S4 voltage sensor paddle motif (5, 32–36), a helix-turn-helix motif within S1–S4 voltage-sensing domains that moves in response to changes in membrane voltage (37–41). Toxins binding to S3b–S4 motifs can influence voltage sensor activation, opening and closing of the pore, or the process of inactivation (4, 5, 36, 42–46). The tarantula toxin PcTx1 can promote opening of ASIC channels at neutral pH (16, 18), and DkTx opens TRPV1 in the absence of other stimuli (22, 23), suggesting that these toxins stabilize open states of their target channels.

For many of these tarantula toxins, the lipid membrane plays a key role in the mechanism of inhibition.
Strong membrane partitioning has been demonstrated for a range of toxins targeting S1–S4 domains in voltage-activated channels (27, 44, 47–50), and for GsMTx4 (14, 50), a tarantula toxin that inhibits opening of stretch-activated cation channels in astrocytes, as well as the cloned stretch-activated Piezo1 channel (13, 15). In experiments on stretch-activated channels, both the d- and l-enantiomers of GsMTx4 are active (14, 50), implying that the toxin may not bind directly to the channel. In addition, both forms of the toxin alter the conductance and lifetimes of gramicidin channels (14), suggesting that the toxin inhibits stretch-activated channels by perturbing the interface between the membrane and the channel. In the case of Kv channels, the S1–S4 domains are embedded in the lipid bilayer and interact intimately with lipids (48, 51, 52), and modification of the lipid composition can dramatically alter gating of the channel (48, 53–56). In one study on the gating of the Kv2.1/Kv1.2 paddle chimera (53), the tarantula toxin VSTx1 was proposed to inhibit Kv channels by modifying the forces acting between the channel and the membrane. Although these studies implicate a key role for the membrane in the activity of Kv and stretch-activated channels, and for the action of tarantula toxins, the influence of the toxin on membrane structure and dynamics has not been directly examined. The goal of the present study was to localize a tarantula toxin in membranes using structural approaches and to investigate the influence of the toxin on the structure of the lipid bilayer.
6.
Miklos Jaszberenyi, Ferenc G. Rick, Petra Popovics, Norman L. Block, Marta Zarandi, Ren-Zhi Cai, Irving Vidaurre, Luca Szalontay, Arumugam R. Jayakumar, Andrew V. Schally. Proceedings of the National Academy of Sciences of the United States of America, 2014, 111(2): 781–786
The dismal prognosis of malignant brain tumors drives the development of new treatment modalities. In view of the multiple activities of growth hormone-releasing hormone (GHRH), we hypothesized that pretreatment with a GHRH agonist, JI-34, might increase the susceptibility of U-87 MG glioblastoma multiforme (GBM) cells to subsequent treatment with the cytotoxic drug, doxorubicin (DOX). This concept was corroborated by our findings, in vivo, showing that the combination of the GHRH agonist, JI-34, and DOX inhibited the growth of GBM tumors, transplanted into nude mice, more than DOX alone. In vitro, the pretreatment of GBM cells with JI-34 potentiated inhibitory effects of DOX on cell proliferation, diminished cell size and viability, and promoted apoptotic processes, as shown by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide proliferation assay, ApoLive-Glo multiplex assay, and cell volumetric assay. Proteomic studies further revealed that the pretreatment with GHRH agonist evoked differentiation decreasing the expression of the neuroectodermal stem cell antigen, nestin, and up-regulating the glial maturation marker, GFAP. The GHRH agonist also reduced the release of humoral regulators of glial growth, such as FGF basic and TGFβ. Proteomic and gene-expression (RT-PCR) studies confirmed the strong proapoptotic activity (increase in p53, decrease in v-myc and Bcl-2) and anti-invasive potential (decrease in integrin α3) of the combination of GHRH agonist and DOX. These findings indicate that the GHRH agonists can potentiate the anticancer activity of the traditional chemotherapeutic drug, DOX, by multiple mechanisms including the induction of differentiation of cancer cells.

Glioblastoma multiforme (GBM) is one of the most aggressive human cancers, and the afflicted patients inevitably succumb. The dismal outcome of this malignancy demands great efforts to find improved methods of treatment (1).
Many compounds have been synthesized in our laboratory in the past few years that have proven to be effective against diverse malignant tumors (2–14). These are peptide analogs of hypothalamic hormones: luteinizing hormone-releasing hormone (LHRH), growth hormone-releasing hormone (GHRH), somatostatin, and analogs of other neuropeptides such as bombesin and gastrin-releasing peptide. The receptors for these peptides have been found to be widely distributed in the human body, including in many types of cancers (2–14). The regulatory functions of these hypothalamic hormones and other neuropeptides are not confined to the hypothalamo–hypophyseal system or, even more broadly, to the central nervous system (CNS). In particular, GHRH can induce the differentiation of ovarian granulosa cells and other cells in the reproductive system and function as a growth factor in various normal tissues, benign tumors, and malignancies (2–4, 6, 11, 14–18). Previously, we also reported that antagonistic cytotoxic derivatives of some of these neuropeptides are able to inhibit the growth of several malignant cell lines (2–14).

Our earlier studies showed that treatment with antagonists of LHRH or GHRH rarely effects complete regression of glioblastoma-derived tumors (5, 7, 10, 11). Previous studies also suggested that growth factors such as EGF, or agonistic analogs of LHRH serving as carriers for cytotoxic analogs and functioning as growth factors, may sensitize cancer cells to cytotoxic treatments (10, 19) through the activation of maturation processes.
We therefore hypothesized that pretreatment with one of our GHRH agonists, such as JI-34 (20), which has shown effects on growth and differentiation in other cell lines (17, 18, 21, 22), might decrease the pluripotency and the adaptability of GBM cells and thereby increase their susceptibility to cytotoxic treatment.

In vivo, tumor cells were implanted into athymic nude mice, tumor growth was recorded weekly, and final tumor mass was measured upon autopsy. In vitro, proliferation assays were used for the determination of neoplastic proliferation and cell growth. Changes in stem (nestin) and maturation (GFAP) antigen expression were evaluated with Western blot studies in vivo and with immunocytochemistry in vitro. The production of glial growth factors (FGF basic, TGFβ) was verified by ELISA. Further, using the Human Cancer Pathway Finder real-time quantitative PCR, numerous genes that play a role in the development of cancer were evaluated. We placed particular emphasis on the measurement of apoptosis, using the ApoLive-Glo Multiplex Assay kit and by detection of the expression of the proapoptotic p53 protein. This overall approach permitted the evaluation of the effect of the GHRH agonist, JI-34, on the response to chemotherapy with doxorubicin.
8.
Weiyi Ma, William Forde Thompson. Proceedings of the National Academy of Sciences of the United States of America, 2015, 112(47): 14563–14568
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin’s hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.

Emotional responses to environmental events are essential for human survival. In contexts that have implications for survival and reproduction, the amygdala transmits signals to the hypothalamus, which releases hormones that activate the autonomic nervous system and cause physiological changes, such as increased heart rate, respiration, and blood pressure (1). These bodily changes contribute to the experience of emotion (2), and function to prepare an organism to respond effectively to biologically significant events in the environment (3).

Throughout the arts and media, environmental conditions have been used to connote an emotional character. For example, the acoustic soundscape of film and television can powerfully affect a viewer’s perspectives on the narrative (4).
Thus, human emotions appear to track changes in the acoustic environment, but it is unclear how they do this. One possibility is that the acoustic attributes that convey emotional states in speech and music also trigger emotional responses in environmental sounds. This possibility is implied within Charles Darwin’s theory that speech and music originated from a common precursor that developed from “the imitation and modification of various natural sounds, the voices of other animals, and man’s own instinctive cries” (5). Darwin also argued that this primitive system would have been especially useful in the expression of emotion. Modern day music, he reasoned, was a behavioral remnant of this early system of communication (5, 6).

This hypothesis has been elaborated and restated by modern researchers as the “musical protolanguage hypothesis”: speech and music share a common ancestral precursor of a songlike communication system (or musical protolanguage) used in courtship and territoriality and in the expression of emotion, which is based on the imitation and modification of environmental sounds (6–10). Environmental sounds carry biologically significant information reflected in our emotional responses to such sounds. To express an emotional state, early hominins might have selectively imitated and manipulated abstract attributes of environmental sounds that have broad biological significance, vocally modulating pitch, intensity, and rate while disregarding the attributes of sound that are specific to individual sources. Extracting and transposing biologically significant cues in the environment to contexts beyond their original source allowed a new channel of emotional communication to emerge (11–14).

The musical protolanguage hypothesis is supported by recent evidence that speech and music share underlying cognitive and neural resources (15–22), and draw on a common code of acoustic attributes when used to communicate emotional states (23–31).
In their review of emotional expression in speech and music, Juslin and Laukka found that higher pitch, increased intensity, and faster rate were associated with more excited and positive emotions in both speech and music (23). More recently, it has been demonstrated that the spectra associated with certain major and minor intervals are similar to the spectra of excited and subdued speech, respectively (26, 27), a finding corroborated in acoustic analyses of South Indian music and speech (28). Furthermore, deficits in music processing are associated with reduced sensitivity to emotional speech prosody (32), whereas enhancements of the capacity to process music are correlated with improved sensitivity to emotional speech prosody (33, 34). For example, a study on individuals with congenital amusia, a neurodevelopmental disorder characterized by deficits in processing acoustic and structural attributes of music, showed that amusic individuals were worse than matched controls at decoding emotional prosody in speech, supporting speculations that music and language share mechanisms that trigger emotional responses to acoustic attributes (32).

Changes in three acoustic attributes are especially important for communicating emotion in speech and music: frequency spectrum, intensity, and rate (23–25). Darwin’s hypothesis implies that these attributes are tracked by human emotions because they reflect biologically significant information about sound sources, such as their size, proximity, and speed. More specifically, the musical protolanguage hypothesis predicts that acoustic attributes that influence the emotional character of speech and music should also have emotional significance when arising from environmental sounds (5).

The present study tested the hypothesis that changes in the frequency spectrum, intensity, and rate of environmental sounds are associated with changes in the perceived valence and arousal of those sounds (23–25).
Because the sources and nature of environmental sounds vary considerably according to geographic location, environmental sounds are defined as any acoustic stimuli that can be heard in daily life that are neither musical nor linguistic. Thus, four types of environmental sounds were considered (human actions, animal sounds, machine noise, sounds in nature), each containing six exemplars. For each of these 24 environmental sounds, we manipulated the frequency spectrum, intensity, and rate. In accordance with the circumplex model of emotion, we obtained ratings of the perceived difference in valence (negative to positive) and arousal (calm to energetic) for stimulus pairs that differed in just one of the three manipulated attributes (35, 36). Although not all environmental sounds have a clearly perceptible fundamental frequency, research on pitch sensations for nonperiodic sounds confirms that individuals are sensitive to salient spectral regions and can detect when such regions are shifted (37, 38).
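The exact signal-processing pipeline used to impose these manipulations is not described in this excerpt. As a rough illustration only, two of the three attributes (intensity and rate) can be altered directly on raw audio samples; the function names, sample rate, and toy stimulus below are our own assumptions, not the study's materials:

```python
import math
import random

SR = 22050  # sample rate in Hz (illustrative; the study's stimulus specs are not given here)

def change_intensity(samples, gain_db):
    """Scale amplitude by a gain expressed in decibels."""
    g = 10 ** (gain_db / 20)
    return [s * g for s in samples]

def change_rate(samples, factor):
    """Play back `factor` times faster via linear-interpolation resampling.
    Naive approach: unlike proper time-stretching, it also shifts the spectrum."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        t = i * factor
        j = min(int(t), len(samples) - 2)
        frac = t - j
        out.append(samples[j] * (1 - frac) + samples[j + 1] * frac)
    return out

# Toy "environmental sound": one second of exponentially decaying noise
rng = random.Random(0)
sound = [rng.uniform(-1, 1) * math.exp(-5 * i / SR) for i in range(SR)]

louder = change_intensity(sound, 6.0)  # +6 dB, roughly double the amplitude
faster = change_rate(sound, 1.5)       # 1.5x rate, proportionally shorter
```

A frequency-spectrum shift of the kind the study describes would normally be done with dedicated pitch-shifting or equalization tools, which is omitted here for brevity.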
9.
Adam T. Tierney Jennifer Krizman Nina Kraus 《Proceedings of the National Academy of Sciences of the United States of America》2015,112(32):10062-10067
Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

By age six, the brain has reached 90% of its adult size (1). However, the years between childhood and young adulthood are marked by a host of subtler neural developments. Myelination and synaptic pruning (2–5) lead to a decrease in gray matter and an increase in white matter (6–13). Resting-state oscillations decline (14–16), and passive evoked responses to sound change in complex ways. Cortically, the P1, which is a positive deflection at around 50 ms generated within lateral Heschl’s gyrus (17), declines whereas the N1, a negative deflection at around 100 ms generated within primary and secondary auditory cortices (18–20), increases (21–23).
Subcortically, the trial-by-trial consistency of the response declines (24, 25). An open question is how experience interacts with this developmental plasticity during adolescence. Is the transition from the plasticity of childhood to the stability of adulthood malleable by experience? And if so, what types of enrichment have the greatest impact on the development of the neural mechanisms contributing to auditory and language skills?

Music training is an enrichment program commonly available to high school students, and its neural and behavioral consequences are well-understood (for a review, see ref. 26). Studies comparing nonmusicians with musicians who began training early in life have revealed a “signature” set of enhancements associated with musical experience (27, 28). Relative to nonmusician peers, musicians tend to show enhanced speech-in-noise perception (29–34), verbal memory (30–33, 35–38), phonological skills (39–45), and reading (46–50), although not without exception (51, 52). Music training has also been linked to enhancements in the encoding of sound throughout the auditory system. For example, musicians show an enhanced N1 (53–56). These enhancements extend to the subcortical auditory system, with musicians showing responses to sound that are faster (55, 57–61), are degraded less by background noise (32, 61), represent speech formant structure more robustly (32, 62–64), differentiate speech sounds to a greater extent (65–67), track stimulus pitch more accurately (68, 69), and are more consistent across trials (59, 70). In adolescence, music training leads to faster responses to speech in noise (71), but the extent to which adolescent music training can confer other aspects of the musician signature remains unknown.

Motivated by a conceptual framework in which auditory enrichment interacts with the auditory processes that remain under development during adolescence, we undertook a school-based longitudinal study of adolescent auditory enrichment.
We focused on objective biological measures of sound processing that (i) have shown developmental plasticity during adolescence in the absence of intervention and (ii) contribute to the “neural signature” of musicianship: the consistency of the subcortical response to speech and the magnitude of the cortical onset response to speech. Subcortical response consistency peaks in childhood, waning into young adulthood (24), coinciding with a period when learning a second language becomes more difficult than earlier in life (72). Response consistency tracks with language skills (73) and is enhanced in musicians (59, 70). Accordingly, we predicted that music training in adolescence prolongs this period of heightened auditory stability. Moreover, given that the cortical N1 onset response emerges during adolescence while the P1 response declines (17, 18, 21–23), and that N1 is enhanced in younger and older musicians (53–56), we predicted that music training during adolescence would accelerate the development of the cortical onset response.

To test these hypotheses, we followed two groups of high school students longitudinally, testing them just before they entered high school (mean age 14.7) and again 3 y later during their last year of school. One group (n = 19) engaged in music training in which they performed music from written notation in a group setting whereas the active control group (n = 21) engaged in Junior Reserve Officers Training Corps (JROTC) training. Both types of training required investment of time and effort and emphasized the development of self-discipline, dedication, and determination; however, only the music training targeted auditory function. Both activities were part of the high school curriculum, which was otherwise identical for both groups.
We also tested students’ language skills (phonological memory, phonological awareness, and rapid naming ability) to determine whether in-school music engendered benefits for literacy skills, a prediction consistent with cross-sectional studies (39–45). The two groups were matched demographically and on all outcome measures at the start of the study:

Demographic information            Music training   JROTC training
No. female                         8                8
Age at pretest                     14.66 (0.42)     14.72 (0.38)
Nonverbal IQ scores at pretest     51.74 (9.88)     51.14 (4.75)
Avg degree of maternal education*  2.53 (0.84)      2.4 (0.75)
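To see numerically what "matched at pretest" means, a Welch two-sample t statistic can be computed from the summary values reported above (means, SDs, and the group sizes n = 19 and n = 21); this is a back-of-the-envelope sketch, not the paper's own statistical procedure:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic from summary statistics
    (mean, SD, group size for each group)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Age at pretest: music group (n = 19) vs. JROTC group (n = 21)
t_age = welch_t(14.66, 0.42, 19, 14.72, 0.38, 21)
print(round(t_age, 2))  # ≈ -0.47, far below any conventional significance threshold
```

The same check applied to the nonverbal IQ and maternal-education rows likewise yields |t| values well under 1, consistent with the claim that the groups were matched at baseline.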