Similar Articles
1.
An overview of the computational prediction of emotional responses to music is presented. The communication of emotion through music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or induced by music. Unlike the present work, however, relatively few studies have attempted to model continua of expressed emotion using a variety of musical features drawn from audio-based representations in a correlation design. The construction of the computational model is divided into four phases, each with a different evaluation focus: theoretical selection of relevant features, empirical assessment of feature validity, the actual feature selection, and overall evaluation of the model. Existing research on music and emotion and on the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotion in the context of film soundtracks are used to demonstrate each phase of model construction. These models explain the greater part of listeners' self-reported emotions for music and show potential to generalize across genres within Western music. Possible applications of such computational models of emotion are discussed.
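To make the correlation design concrete, here is a minimal sketch of the kind of feature-to-emotion regression model the abstract describes, assuming precomputed per-excerpt features; the feature set, data, and coefficients are illustrative placeholders, not material from the study.

```python
# Sketch: linear regression from musical features to rated emotion (valence),
# mirroring the feature-selection and overall-evaluation phases described above.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_excerpts = 110  # e.g., film-soundtrack excerpts (hypothetical count)

# Hypothetical audio-derived features: tempo (BPM), mode (0=minor, 1=major),
# articulation (0=legato .. 1=staccato), spectral brightness.
X = np.column_stack([
    rng.uniform(60, 180, n_excerpts),
    rng.integers(0, 2, n_excerpts),
    rng.uniform(0, 1, n_excerpts),
    rng.uniform(0, 1, n_excerpts),
])
# Synthetic valence ratings loosely driven by mode and tempo plus noise.
y = 0.8 * X[:, 1] + 0.004 * X[:, 0] + rng.normal(0, 0.3, n_excerpts)

model = LinearRegression()
# Cross-validated R^2 stands in for the "overall evaluation" phase.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```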

2.
The present study investigated the functional neuroanatomy of transient mood changes in response to Western classical music. In a pilot experiment, 53 healthy volunteers (mean age: 32.0; SD = 9.6) evaluated their emotional responses to 60 classical pieces using a visual analogue scale (VAS) ranging from 0 (sad) through 50 (neutral) to 100 (happy). Twenty pieces were found to induce the intended emotional states reliably: 5 happy, 5 sad, and 10 emotionally unevocative, neutral pieces. In a subsequent functional magnetic resonance imaging (fMRI) study, blood oxygenation level dependent (BOLD) signal contrast was measured in response to the mood state induced by each musical stimulus in a separate group of 16 healthy participants (mean age: 29.5; SD = 5.5). Mood ratings made on a VAS during scanning confirmed the emotional valence of the selected stimuli. Increased BOLD signal contrast during presentation of happy music was found in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas. With sad music, increased BOLD responses were noted in the hippocampus/amygdala and auditory association areas. Presentation of neutral music was associated with increased BOLD responses in the insula and auditory association areas. Our findings suggest that an emotion-processing network responding to music integrates the ventral and dorsal striatum, areas involved in reward experience and movement; the anterior cingulate, which is important for directing attention; and medial temporal areas traditionally implicated in the appraisal and processing of emotion.

3.
The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided atrophy in brain areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also have difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from the classical or film repertoire, but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), patients with Alzheimer's disease (n = 12), and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities (unfamiliar musical tunes and unknown faces), as well as volumetric MRI. Patients with SD were the most impaired in the recognition of facial and musical emotions, particularly negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala, and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotion in facial and musical stimuli, but also indicate that the recognition of emotion from music draws on brain regions associated with semantics in language.

4.
The present study investigated emotional responses to music using multidimensional scaling (MDS) analysis in patients with right or left medial temporal lobe (MTL) lesions and matched normal controls (NC). Participants were required to evaluate the emotional dissimilarity of nine musical excerpts selected to express graduated changes along the valence and arousal dimensions. They rated the dissimilarity between pairs of stimuli on an eight-point scale, and the resulting matrices were submitted to an MDS analysis. The results showed that patients did not differ from NC participants in evaluating the emotional feelings induced by the musical excerpts, suggesting that all participants were able to distinguish fine-grained emotions. We conclude that the ability to detect and use emotional valence and arousal when making dissimilarity judgments is not strongly impaired by a right or left MTL lesion. This finding has important clinical implications and is discussed in light of current neuropsychological studies of emotion. It suggests that emotional responses to music can be at least partially preserved at a non-verbal level in patients with unilateral temporal lobe damage that includes the amygdala.
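As a concrete illustration of the analysis pipeline, the sketch below submits a symmetrized matrix of pairwise dissimilarity ratings for nine excerpts to two-dimensional MDS; the matrix values are random placeholders, not data from the study, and the recovered axes would then be interpreted (e.g., as valence and arousal).

```python
# Sketch: two-dimensional MDS on a 9x9 matrix of averaged pairwise
# dissimilarity ratings (values here are random placeholders).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n = 9  # nine musical excerpts
ratings = rng.uniform(1, 8, size=(n, n))   # 8-point dissimilarity scale
D = (ratings + ratings.T) / 2              # symmetrize the rating matrix
np.fill_diagonal(D, 0)                     # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per excerpt
print(coords)                   # axes are interpreted post hoc
```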

5.
Seizure, 2014, 23(7): 533-536
Purpose: Patients with temporal lobe epilepsy (TLE) often show impaired cognitive processing in several domains. We aimed to evaluate whether musical ability is also impaired in TLE. Methods: We enrolled patients with lesional TLE and no other neurological or psychiatric disorder. The side and etiology of the epilepsy were confirmed by EEG and MRI. We applied a self-developed test of musical ability that evaluates the ability to identify melodies, pitch, rhythm, and the emotional content of music, and compared the patients' results with those of age- and sex-matched healthy control subjects. None of the patients or controls had specific musical training. Results: Patients with left TLE scored significantly lower on melody recognition, and patients with right TLE scored significantly lower on identifying emotion in music; no other aspect of musical ability differed significantly between left and right TLE. Compared with healthy subjects, the total score was significantly lower in patients with left TLE but not right TLE. There were no differences with respect to sex. Conclusion: Our data confirm that, in individuals without musical training, melody recognition shows left-hemisphere dominance whereas the identification of emotion in music shows right-hemisphere dominance. Furthermore, they show that the impairment of cognitive processing in TLE extends even to higher cognitive functions such as music processing, although this impairment was mild.

6.
Here, we used functional magnetic resonance imaging to test for lateralization of the brain regions specifically involved in the recognition of negatively and positively valenced musical emotions. Manipulating two major musical features (mode and tempo), so that emotional perception varied along the happiness-sadness axis, was shown principally to involve subcortical and neocortical structures known to intervene in emotion processing in other modalities. In particular, the minor mode (sad excerpts) involved the left orbitofrontal and mid-dorsolateral frontal cortex, which does not confirm the valence lateralization model. We also show that the recognition of emotions elicited by variations of these two perceptual determinants relies on both common (BA 9) and distinct neural mechanisms.

7.
The role of the amygdala in the recognition of danger is well established for visual stimuli such as faces. A similar role for another class of emotionally potent stimuli, music, has recently been suggested by the study of epileptic patients with unilateral resection of the anteromedial temporal lobe [Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., & Baulac, M., et al. (2005). Impaired recognition of scary music following unilateral temporal lobe excision. Brain, 128(Pt 3), 628-640]. The goal of the present study was to assess the specific role of the amygdala in the recognition of fear from music. To this end, we investigated a rare subject, S.M., who has complete bilateral damage relatively restricted to the amygdala and not encompassing other sectors of the temporal lobe. In Experiment 1, S.M. and four matched controls were asked to rate the intensity of fear, peacefulness, happiness, and sadness in computer-generated instrumental music purposely created to express those emotions. Subjects also rated the arousal and valence of each musical stimulus, and an error detection task assessed basic auditory perceptual function. S.M. performed normally on this perceptual task but was selectively impaired in the recognition of scary and sad music; her recognition of happy music was normal. Furthermore, S.M. judged the scary music to be less arousing and the peaceful music less relaxing than did the controls. Overall, the pattern of impairment in S.M. is similar to that previously reported in patients with unilateral anteromedial temporal lobe damage. S.M.'s impaired emotional judgments occur in the face of otherwise intact processing of the musical features that determine emotion; her use of tempo and mode cues in distinguishing happy from sad music was also spared. Thus, the amygdala appears to be necessary for the emotional processing of music rather than for perceptual processing itself.

8.
The present study used pleasant and unpleasant music to evoke emotion and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. Unpleasant (permanently dissonant) music contrasted with pleasant (consonant) music showed activations of amygdala, hippocampus, parahippocampal gyrus, and temporal poles. These structures have previously been implicated in the emotional processing of stimuli with (negative) emotional valence; the present data show that a cerebral network comprising these structures can be activated during the perception of auditory (musical) information. Pleasant (contrasted to unpleasant) music showed activations of the inferior frontal gyrus (IFG, inferior Brodmann's area (BA) 44, BA 45, and BA 46), the anterior superior insula, the ventral striatum, Heschl's gyrus, and the Rolandic operculum. IFG activations appear to reflect processes of music-syntactic analysis and working memory operations. Activations of Rolandic opercular areas possibly reflect the activation of mirror-function mechanisms during the perception of the pleasant tunes. Rolandic operculum, anterior superior insula, and ventral striatum may form a motor-related circuitry that serves the formation of (premotor) representations for vocal sound production during the perception of pleasant auditory information. In all of the mentioned structures, except the hippocampus, activations increased over time during the presentation of the musical stimuli, indicating that the effects of emotion processing have temporal dynamics; the temporal dynamics of emotion have so far mainly been neglected in the functional imaging literature.

9.
Satoh M, Nakase T, Nagata K, Tomimoto H. Neurocase, 2011, 17(5): 410-417
Recent case studies have suggested that emotion perception and the emotional experience of music involve independent cognitive processing. We report a patient who showed selective impairment of emotional experience only when listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion when listening to music, even music to which he had listened with pleasure before his illness. On neuropsychological assessment, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessment revealed no abnormality in the perception of elementary components of music or in the perception of musical expression and emotion. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that the emotional experience of music can be selectively impaired without any disturbance of other musical or neuropsychological abilities, and that the right parietal lobe may participate in the emotional experience of listening to music.

10.
Difficulties in recognizing emotions in expressive faces have been reported in people with 22q11.2 deletion syndrome (22q11.2DS). However, although low-intensity expressive faces are frequent in everyday life, nothing is known about these individuals' ability to perceive facial emotion as a function of expression intensity. In a visual matching task, children and adolescents with 22q11.2DS, as well as gender- and age-matched healthy participants, were asked to categorise the emotion of a target face among six possible expressions. Static pictures of morphs between neutrality and full expressions were used to parametrically manipulate the intensity of the target face. Compared with healthy controls, the 22q11.2DS group showed higher perception thresholds (i.e. a more intense expression was needed to perceive the emotion) and lower accuracy for the most expressive faces, indicating reduced categorisation abilities. The number of intrusions (i.e. instances in which one emotion was perceived as another) and a more gradual perception performance indicated smooth boundaries between emotional categories. Correlational analyses with neuropsychological and clinical measures suggested that reduced visual skills may be associated with impaired categorisation of facial emotions. Overall, the present study indicates that children and adolescents with 22q11.2DS have greater difficulty perceiving emotion in low-intensity expressive faces, a difficulty underpinned by emotional categories that are not sharply organised. It also suggests that these difficulties may be associated with impaired visual cognition, a hallmark of the cognitive deficits observed in the syndrome. These findings open promising avenues for future experimental and clinical investigations.
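A minimal sketch of how a perception threshold of this kind can be estimated: recognition accuracy is fitted with a logistic psychometric function of morph intensity, and the threshold is read off at the curve's midpoint. The response data and the two-parameter form below are illustrative assumptions, not the study's actual analysis.

```python
# Sketch: estimate an emotion-perception threshold by fitting a logistic
# psychometric function to recognition accuracy across morph intensities.
# Responses are simulated; in the study they would come from the matching task.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Probability of perceiving the emotion at morph intensity x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

intensities = np.linspace(0.1, 1.0, 10)   # 10% ... 100% morphs
p_correct = np.array([.08, .12, .20, .35, .55, .70, .82, .90, .94, .96])

params, _ = curve_fit(psychometric, intensities, p_correct, p0=[0.5, 10.0])
print(f"estimated threshold: {params[0]:.2f}, slope: {params[1]:.1f}")
```

A higher fitted threshold for one group than another would correspond to the group difference reported above.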

11.
Empirical research examining the situational characteristics of gambling and their effect on gambling behaviour is limited but growing. This experimental pilot investigation reports the first empirical study of the combined effects of music and light on gambling behaviour. While playing an online version of roulette, 56 participants took part in one of four experimental conditions (14 participants per condition): (1) gambling with fast-tempo music under normal (white) light; (2) gambling with fast-tempo music under red light; (3) gambling with slow-tempo music under normal (white) light; and (4) gambling with slow-tempo music under red light. Risk (dollars spent) per spin and speed of bets were measured as indicators of gambling behaviour. A significant main effect on speed of bets was found for musical tempo, but not for light, and no significant effects were found for risk per spin for either independent variable. There was, however, a significant interaction between light and music for speed of bets: planned comparisons revealed that fast-tempo music under red light resulted in faster gambling. These findings are discussed along with the methodological limitations and potential implications for various stakeholders, including the gambling industry and practitioners.
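To illustrate the 2 (tempo) x 2 (light) between-subjects design, here is a hedged sketch of the corresponding analysis on simulated data with 14 participants per cell, as above; the effect sizes, units, and variable names are invented for demonstration only.

```python
# Sketch: 2 (music tempo) x 2 (light colour) between-subjects ANOVA on
# speed of bets. The data are simulated with a tempo main effect and a
# tempo x light interaction, echoing the pattern reported above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
rows = []
for tempo in ("fast", "slow"):
    for light in ("white", "red"):
        mean_speed = (10
                      + (2 if tempo == "fast" else 0)
                      + (1 if (tempo, light) == ("fast", "red") else 0))
        speed = rng.normal(mean_speed, 1.5, size=14)  # bets/min (illustrative)
        rows += [{"tempo": tempo, "light": light, "speed": s} for s in speed]

df = pd.DataFrame(rows)
model = ols("speed ~ C(tempo) * C(light)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```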

12.
Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS.
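As an illustration of what "evoked" (phase-locked) alpha means in practice, the sketch below averages simulated single-channel epochs before band-pass filtering at 8-12 Hz, so that non-phase-locked activity cancels out in the average; the data, channel, and filter settings are synthetic stand-ins, not the study's recordings or pipeline.

```python
# Sketch: evoked (phase-locked) alpha power at one fronto-central channel.
# Epochs are averaged *before* filtering, so induced (non-phase-locked)
# activity largely cancels. Epochs are simulated: trials x time samples.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                         # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1-s epochs
rng = np.random.default_rng(3)

def evoked_alpha_power(epochs):
    evoked = epochs.mean(axis=0)                     # average across trials
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, evoked)                   # 8-12 Hz band
    return np.mean(alpha ** 2)                       # mean band power

# Simulated trials: a phase-locked 10 Hz component for "happy" excerpts only.
happy = np.array([0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
                  for _ in range(40)])
sad = np.array([rng.normal(0, 1, t.size) for _ in range(40)])

print("happy:", evoked_alpha_power(happy), " sad:", evoked_alpha_power(sad))
```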

13.
We examined the integrative process between emotional facial expressions and musical excerpts using an affective priming paradigm. Happy or sad musical stimuli were presented after happy or sad facial images during electroencephalography (EEG) recordings, and participants judged the affective congruency of the presented face–music pairs. Emotionally congruent pairs were judged more rapidly than incongruent pairs. In addition, the EEG data showed that incongruent musical targets elicited a larger N400 component than congruent pairs. These effects occurred in nonmusicians as well as musicians. In sum, emotional integrative processing of face–music pairs was facilitated for congruent musical targets and inhibited for incongruent ones, and this process was not significantly modulated by individual musical experience. This is the first study of musical stimuli primed by facial expressions to demonstrate that the N400 component reflects the affective priming effect.
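A minimal sketch of how an N400 effect of this kind is typically quantified: the ERP is averaged per condition and the mean amplitude in a 300-500 ms post-stimulus window is compared across conditions. The epochs below are simulated, and the window and channel choice are common-practice assumptions rather than the study's exact parameters.

```python
# Sketch: N400 quantified as mean ERP amplitude in a 300-500 ms window,
# comparing congruent vs incongruent face-music pairs. Epochs are simulated;
# a real pipeline would use baseline-corrected EEG at a centro-parietal site.
import numpy as np

fs = 500                           # sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)   # epoch: -200 ... 800 ms
rng = np.random.default_rng(4)

def simulate_epochs(n400_amp, n_trials=60):
    # Negative-going deflection peaking near 400 ms, plus noise.
    component = n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return component + rng.normal(0, 2, size=(n_trials, t.size))

def n400_mean(epochs):
    erp = epochs.mean(axis=0)            # condition-average ERP
    window = (t >= 0.3) & (t <= 0.5)     # 300-500 ms window
    return erp[window].mean()

congruent = simulate_epochs(n400_amp=-1.0)
incongruent = simulate_epochs(n400_amp=-4.0)   # larger (more negative) N400
print("congruent:", n400_mean(congruent),
      "incongruent:", n400_mean(incongruent))
```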

14.
Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means of expressing emotion, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing 'biologically relevant' emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness, and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations, and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions is shared with circuitry that evolved for vocalizations. Overall, our results show that the processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.

15.
OBJECTIVE: Blunted affect is a major symptom of schizophrenia, and affective deficits clinically encompass deficits in expressiveness. Emotion research and ethological studies have shown that patients with schizophrenia are impaired in various modalities of expressiveness (posed and spontaneous emotional expressions, coverbal gestures, and smiles). Similar deficits have been described in depression, but comparative studies have yielded mixed results. Our aim was to study and compare facial expressive behaviors related to affective deficits in patients with schizophrenia, depressed patients, and nonpatient comparison subjects. METHOD: Fifty-eight nondepressed inpatients with schizophrenia, 25 nonpsychotic inpatients with unipolar depression, and 25 nonpatient comparison subjects were asked to reproduce facial emotional expressions and then to speak about a specific emotion for 2 minutes; six cross-cultural emotions were tested each time. Facial emotional expressions were rated with the Facial Action Coding System, and the numbers of facial coverbal gestures (facial expressions tied to speech) and of words were counted. RESULTS: Relative to nonpatient comparison subjects, both patient groups were impaired on all expressive variables. Few differences were found between schizophrenia and depression: depressed subjects produced fewer spontaneous expressions of emotions other than happiness but overall appeared more expressive. Fifteen patients with schizophrenia were tested both off and on typical or atypical antipsychotic medication, and no differences in performance were found. CONCLUSIONS: Patients with schizophrenia and patients with depression presented similar deficits across expressive modalities: posed and spontaneous emotional expression, smiling, coverbal gestures, and verbal output.

16.
Experimental investigations of cross-cultural music perception and cognition reported during the past decade are described. As globalization and Western music homogenize the world's musical environment, it is imperative that diverse music and musical contexts be documented. Processes of music perception include grouping and segmentation, statistical learning and sensitivity to tonal and temporal hierarchies, and the development of tonal and temporal expectations. The interplay of auditory, visual, and motor modalities is discussed in light of synchronization and the way music moves us via emotional response. Further research is needed to test deep-rooted psychological assumptions about music cognition with diverse materials and groups in dynamic contexts. Although empirical musicology provides keystones to unlock musical structures and organization, the psychological reality of those theorized structures for listeners and performers, and the broader implications for theories of music perception and cognition, await investigation.

17.
Several studies have attempted to investigate how the brain codes emotional value when processing music of contrasting levels of dissonance; however, the lack of control over specific musical structural characteristics (i.e., dynamics, rhythm, melodic contour or instrumental timbre), which are known to affect perceived dissonance, rendered results difficult to interpret. To account for this, we used functional imaging with an optimized control of the musical structure to obtain a finer characterization of brain activity in response to tonal dissonance. Behavioral findings supported previous evidence for an association between increased dissonance and negative emotion. Results further demonstrated that the manipulation of tonal dissonance through systematically controlled changes in interval content elicited contrasting valence ratings but no significant effects on either arousal or potency. Neuroscientific findings showed an engagement of the left medial prefrontal cortex (mPFC) and the left rostral anterior cingulate cortex (ACC) while participants listened to dissonant compared to consonant music, converging with studies that have proposed a core role of these regions during conflict monitoring (detection and resolution), and in the appraisal of negative emotion and fear-related information. Both the left and right primary auditory cortices showed stronger functional connectivity with the ACC during the dissonant portion of the task, implying a demand for greater information integration when processing negatively valenced musical stimuli. This study demonstrated that the systematic control of musical dissonance could be applied to isolate valence from the arousal dimension, facilitating a novel access to the neural representation of negative emotion.
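As a simple illustration of controlling interval content, the sketch below synthesizes a consonant dyad (a perfect fifth, frequency ratio 3:2) and a sharply dissonant one (a minor second, approximately 16:15) from pure tones; this is a toy construction to show the principle, not the study's stimulus set.

```python
# Sketch: two dyads differing only in interval content — a consonant
# perfect fifth (3:2) vs a dissonant minor second (~16:15) — built from
# pure tones over the same 440 Hz base.
import numpy as np
from scipy.io import wavfile

fs = 44100                       # audio sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)    # 2-second stimuli
base = 440.0                     # A4

def dyad(ratio):
    tone = (np.sin(2 * np.pi * base * t)
            + np.sin(2 * np.pi * base * ratio * t))
    # Normalize and convert to 16-bit PCM.
    return (0.4 * tone / np.abs(tone).max() * 32767).astype(np.int16)

wavfile.write("consonant_fifth.wav", fs, dyad(3 / 2))     # consonant
wavfile.write("dissonant_second.wav", fs, dyad(16 / 15))  # dissonant
```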

18.
We have previously shown that anteromedial temporal lobe resection can impair the recognition of scary music (Gosselin et al., 2005). Other studies (Adolphs et al., 2001; Anderson et al., 2000) have obtained similar results with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls on two emotional tasks. In the musical emotion task, stimuli evoked fear, peacefulness, happiness, or sadness, and participants rated on 10-point scales the extent to which each stimulus expressed these four emotions. The facial emotion task used morphed stimuli whose expressions varied from faint to pronounced and evoked fear, happiness, sadness, surprise, anger, or disgust; participants were asked to select the appropriate label. Most patients were found to be impaired in the recognition of both scary music and fearful faces, and performance on the two tasks was correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved while recognition of scary music is impaired. Such a dissociation, found in two cases, suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks; this hypothesis is discussed in light of the current literature.

19.
Research indicates that music therapists are likely to make use of computer software designed to measure changes in the way a patient and therapist use music in music therapy sessions. A proof-of-concept study investigated whether music analysis algorithms designed to retrieve information from commercial music recordings can be adapted to meet the needs of music therapists. Computational music analysis techniques were applied first to multi-track audio recordings of simulated sessions and then to recordings of individual music therapy sessions, recorded by a music therapist as part of her ongoing practice with patients with acquired brain injury. The therapist wanted to evaluate two hypotheses: first, whether changes in her tempo were affecting the tempo of a patient's playing on acoustic percussion instruments, and second, whether her musical interventions were helping the patient reduce habituated rhythmic patterning. Diagrams giving a quick overview of the instrumental activity within each session (when, and for how long, each instrument was played) were generated automatically, and computational analysis was then applied to musical areas of specific interest. The results of the interdisciplinary teamwork, audio recording tests, computer analysis tests, and music therapy field tests are presented and discussed.
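A hedged sketch of the tempo comparison described above: a standard beat tracker is run on each instrument track and local tempo is derived from inter-beat intervals. The file names are hypothetical, and librosa's stock beat tracker stands in for the adapted music-information-retrieval algorithms used in the study.

```python
# Sketch: local tempo curves for two instrument tracks of a session,
# approximating the therapist/patient tempo comparison described above.
import numpy as np
import librosa

def tempo_curve(path):
    y, sr = librosa.load(path, sr=None, mono=True)       # one instrument track
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)  # detected beats
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    ibi = np.diff(beat_times)     # inter-beat intervals in seconds
    return 60.0 / ibi             # local tempo (BPM), beat by beat

# Hypothetical per-instrument stems from a multi-track session recording.
therapist_bpm = tempo_curve("session01_therapist_piano.wav")
patient_bpm = tempo_curve("session01_patient_drum.wav")
print(therapist_bpm[:8])
print(patient_bpm[:8])
```

Plotting the two curves against session time would show whether the patient's tempo follows the therapist's changes, which is the first of the two hypotheses above.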
