Similar Documents
20 similar documents found (search time: 31 ms)
1.
Very little is known about the use of gesture by children with developmental language disorders (DLDs). This case study of 'Lucy', a child aged 4;10 with a DLD, expands on what is known and in particular focuses on a type of idiosyncratic "rhythmic gesture" (RG) not previously reported. A fine-grained qualitative analysis was carried out of video recordings of Lucy in conversation with the first author. This revealed that Lucy's RG was closely integrated in complex ways with her use of other gesture types, speech rhythm, word juncture, syntax, pragmatics, discourse, visual processing and processing demands generally. Indeed, the only satisfactory way to explain it was as a partial byproduct of such interactions. These findings support the theoretical accounts of gesture which see it as just one component of a multimodal, integrated signalling system (e.g. Goldin-Meadow, S. (2000). Beyond words: The importance of gesture to researchers and learners. Child Development, 71(1), 231-239), and emergentist accounts of communication impairment which regard compensatory adaptation as integral (e.g. Perkins, M. R. (2007). Pragmatic Impairment. Cambridge: Cambridge University Press).

2.
Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while viewing videos of an experimenter producing communicative, Participant‐Directed Gestures (PDG) (e.g., “Hello, come here”), noncommunicative Self‐adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant‐Directed Sentences (PDS), matched in content to PDG, (2) Third‐person Sentences (3PS), describing a character's actions from a third‐person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface‐based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third‐person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant‐Directed Sentences to Third‐person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant‐directed communicative intent through gesture and language. Hum Brain Mapp 37:3444–3461, 2016. 
© 2016 Wiley Periodicals, Inc.

3.
During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.

4.
Background: Conveying instructions is an everyday use of language, and gestures are likely to be a key feature of this. Although co-speech iconic gestures are tightly integrated with language, and people with aphasia (PWA) produce procedural discourses impaired at a linguistic level, no previous studies have investigated how PWA use co-speech iconic gestures in these contexts.

Aims: This study investigated how PWA communicated meaning using gesture and language in procedural discourses, compared with neurologically healthy people (NHP). We aimed to identify the relative relationship of gesture and speech, in the context of impaired language, both overall and in individual events.

Methods & Procedures: Twenty-nine PWA and 29 NHP produced two procedural discourses. The structure and semantic content of language of the whole discourses were analysed through predicate argument structure and spatial motor terms, and gestures were analysed for frequency and semantic form. Gesture and language were analysed in two key events, to determine the relative information presented in each modality.

Outcomes & Results: PWA and NHP used similar frequencies and forms of gestures, although PWA used syntactically simpler language and fewer spatial words. This meant, overall, relatively more information was present in PWA gesture. This finding was also reflected in the key events, where PWA used gestures conveying rich semantic information alongside semantically impoverished language more often than NHP.

Conclusions: PWA gestures, containing semantic information omitted from the concurrent speech, may help listeners with meaning when language is impaired. This finding indicates gesture should be included in clinical assessments of meaning-making.

5.
Background: Speech-language pathologists considering the use of gesture as a therapeutic modality for clients with aphasia must first evaluate the integrity of their clients' gesture systems. Questions arise with respect to which behaviours to assess and how to assess the chosen behaviours. There has been a long-held belief that tests of limb apraxia and pantomime provide valid information about candidacy for gesture-based interventions, yet the theoretical and empirical basis of this assumption is limited. Further, the relationship between conversational gesture skill and limb apraxia in co-occurring aphasia has been largely unexplored. It is possible that a client's gesture performance in natural conversation provides more valid information about gesture treatment candidacy than do tests of limb apraxia.

Aims: This study aimed to investigate the relationship between the presence of limb apraxia and conversational gesture use in speakers with nonfluent aphasia. Following the assumption that limb praxis and conversational gesture reflect differing underlying processing, it was hypothesised that speakers with aphasia and limb apraxia would produce the full range of conversational gesture types in a conversational context. Further, it was hypothesised that speakers with demonstrated pantomime deficits on formal tests of pantomime would produce pantomimes naturally in conversation. Thus, a dissociation would be demonstrated between the processing responsible for gesture production as measured in limb apraxia tests and that subserving the production of conversational gesture.

Methods & Procedures: Seven participants with nonfluent aphasia and ideomotor and conceptual limb apraxia conversed in a semi-structured conversation with the researcher. All arm and hand gestures produced by the participants were counted and rated according to guidelines provided by Hermann, Reichle, and Lucius-Hoene (1988), and the time they spent in either gesture or spoken expression was compared. Correlations were calculated between limb apraxia scores and proportions of meaning-laden gestures used in conversation.

Outcomes & Results: All seven participants produced a wide range of gesture types. Participants with limited verbal output produced large amounts of meaning-laden gesture. Importantly, even participants with severe limb apraxia produced high proportions of meaning-laden gestures (codes and pantomimes) in the natural setting. There were no significant relationships found between scores on limb apraxia tests and natural gesture use.

Conclusions: Patients with nonfluent aphasia and limb apraxia may still use meaningful conversational gesture in naturalistic settings. Tests of limb apraxia may be poor predictors of use of lexical gesture. Thus, clinicians are advised to sample lexical gesture use in spontaneous interactions.

6.
The functional role of the left ventral occipito‐temporal cortex (vOT) in visual word processing has been studied extensively. A prominent observation is higher activation for unfamiliar but pronounceable letter strings compared to regular words in this region. Some functional accounts have interpreted this finding as driven by top‐down influences (e.g., Dehaene and Cohen [2011]: Trends Cogn Sci 15:254–262; Price and Devlin [2011]: Trends Cogn Sci 15:246–253), while others have suggested a difference in bottom‐up processing (e.g., Glezer et al. [2009]: Neuron 62:199–204; Kronbichler et al. [2007]: J Cogn Neurosci 19:1584–1594). We used dynamic causal modeling for fMRI data to test bottom‐up and top‐down influences on the left vOT during visual processing of regular words and unfamiliar letter strings. Regular words (e.g., taxi) and unfamiliar letter strings of pseudohomophones (e.g., taksi) were presented in the context of a phonological lexical decision task (i.e., "Does the item sound like a word?"). We found no differences in top‐down signaling, but a strong increase in bottom‐up signaling from the occipital cortex to the left vOT for pseudohomophones compared to words. This finding can be linked to functional accounts which assume that the left vOT contains neurons tuned to complex orthographic features such as morphemes or words (e.g., Dehaene and Cohen [2011]: Trends Cogn Sci 15:254–262; Kronbichler et al. [2007]: J Cogn Neurosci 19:1584–1594): For words, bottom‐up signals converge onto a matching orthographic representation in the left vOT. For pseudohomophones, the propagated signals do not converge, but (partially) activate multiple orthographic word representations, reflected in increased effective connectivity. Hum Brain Mapp 35:1668–1680, 2014. © 2013 Wiley Periodicals, Inc.

7.
Jake Kurczek. Aphasiology, 2013, 27(6–7): 700–712
Background: Discourse cohesion and coherence give our communication continuity. Deficits in cohesion and coherence have been reported in patients with cognitive-communication disorders (e.g., TBI, dementia). However, the diffuse nature of pathology and widespread cognitive deficits in these disorders have made it challenging to identify the specific neural substrates and cognitive systems critical for cohesion and coherence.

Aims: Taking advantage of a rare patient group with selective and severe declarative memory impairments, the current study attempts to isolate the contribution of declarative memory to the successful use of cohesion and coherence in discourse.

Methods & Procedures: Cohesion and coherence were examined in the discourse of six participants with hippocampal amnesia and six demographically matched comparison participants. Specifically, this study (1) documents the frequency, type, and completeness of cohesive ties; (2) evaluates discourse for local and global coherence; and (3) compares use of cohesive ties and coherence ratings in amnesia and healthy participants.

Outcomes & Results: Overall, amnesia participants produced fewer cohesive ties per T-unit, their ties were more often judged to be incomplete, and their local coherence was rated consistently lower than that of comparison participants.

Conclusions: These findings suggest that declarative memory may contribute to the discursive use of cohesion and coherence. Broader notions of cohesion, or interactional cohesion, i.e., cohesion across speakers (two or more people), time (days, weeks), and communicative resources (gesture), warrant further study as the experimental tasks used in the literature, and here, may actually underestimate or overestimate the extent of impairment.

8.
Atomoxetine improves inhibitory control and visual processing in healthy volunteers and adults with attention‐deficit/hyperactivity disorder (ADHD). However, little is known about the neural correlates of these two functions after chronic treatment with atomoxetine. This study aimed to use the counting Stroop task with functional magnetic resonance imaging (fMRI) and the Cambridge Neuropsychological Test Automated Battery (CANTAB) to investigate the changes related to inhibitory control and visual processing in adults with ADHD. This study is an 8‐week, placebo‐controlled, double‐blind, randomized clinical trial of atomoxetine in 24 drug‐naïve adults with ADHD. We investigated the changes of treatment with atomoxetine compared to placebo‐treated counterparts using the counting Stroop fMRI and two CANTAB tests: rapid visual information processing (RVP) for inhibitory control and delayed matching to sample (DMS) for visual processing. Atomoxetine decreased activations in the right inferior frontal gyrus and anterior cingulate cortex, which were correlated with the improvement in inhibitory control assessed by the RVP. Also, atomoxetine increased activation in the left precuneus, which was correlated with the improvement in the mean latency of correct responses assessed by the DMS. Moreover, anterior cingulate activation in the pre‐treatment was able to predict the improvements of clinical symptoms. Treatment with atomoxetine may improve inhibitory control to suppress interference and may enhance the visual processing to process numbers. In addition, the anterior cingulate cortex might play an important role as a biological marker for the treatment effectiveness of atomoxetine. Hum Brain Mapp 38:4850–4864, 2017. © 2017 Wiley Periodicals, Inc.

9.
Background: Co-verbal gestures refer to hand or arm movements made during speaking. Spoken language and gestures have been shown to be tightly integrated in human communication.

Aims: The present study investigated whether co-verbal gesture use was associated with lexical retrieval in connected speech in unimpaired speakers and persons with aphasia (PWA).

Methods & Procedures: Narrative samples of 58 fluent PWA and 58 control speakers were extracted from Cantonese AphasiaBank. Based on the indicators of word-finding difficulty (WFD) in connected speech adapted from previous research and a gesture annotation system with independent coding of gesture forms and functions, all WFD instances were identified. The presence and type of gestures accompanying each incident of WFD were then annotated. Finally, whether the use of gesture was accompanied by resolution of WFD (i.e., the corresponding target word could be retrieved) was examined.

Outcomes & Results: Employment of co-verbal gesture did not seem to be related to the success of word retrieval. PWA’s naming ability at single-word level and their overall language ability (as reflected by the aphasia quotient of the Cantonese version of the Western Aphasia Battery) were found to be the two strongest predictors of success rate of resolving WFD.

Conclusions: The Lexical Retrieval Hypothesis highlighting the facilitative functions of iconic and metaphoric gestures in lexical retrieval was not supported. Challenges in conducting research related to WFD, and the clinical implications in gesture-based language intervention for PWA were discussed.


10.
Gestures represent an integral aspect of interpersonal communication, and they are closely linked with language and thought. Brain regions for language processing overlap with those for gesture processing. Two types of gesticulation, beat gestures and metaphoric gestures, are particularly important for understanding the taxonomy of co‐speech gestures. Here, we investigated gesture production during taped interviews with respect to regional brain volume. First, we were interested in whether beat gesture production is associated with similar regions as metaphoric gesture. Second, we investigated whether cortical regions associated with metaphoric gesture processing are linked to gesture production based on correlations with brain volumes. We found that beat gestures are uniquely related to regional volume in cerebellar regions previously implicated in discrete motor timing. We suggest that these gestures may be an artifact of the timing processes of the cerebellum that are important for the timing of vocalizations. Second, our findings indicate that brain volumes in regions of the left hemisphere previously implicated in metaphoric gesture processing are positively correlated with metaphoric gesture production. Together, this novel work extends our understanding of left hemisphere regions associated with gesture to indicate their importance in gesture production, and also suggests that beat gestures may be especially unique. This provides important insight into the taxonomy of co‐speech gestures, and also further insight into the general role of the cerebellum in language. Hum Brain Mapp 36:4016–4030, 2015. © 2015 Wiley Periodicals, Inc.

11.
Body orientation and eye gaze influence how information is conveyed during face-to-face communication. However, the neural pathways underpinning the comprehension of social cues in everyday interaction are not known. In this study we investigated the influence of addressing vs. non-addressing body orientation on the neural processing of speech accompanied by gestures.

While in an fMRI scanner, participants viewed short video clips of an actor speaking sentences with object- (O; e.g., shape) or person-related content (P; e.g., saying goodbye) accompanied by iconic (e.g., circle) or emblematic gestures (e.g., waving), respectively. The actor's body was oriented either toward the participant (frontal, F) or toward a third person (lateral, L) not visible.

For frontal vs. lateral actor orientation (F > L), we observed activation of bilateral occipital, inferior frontal, medial frontal, right anterior temporal and left parietal brain regions. Additionally, we observed activity in the occipital and anterior temporal lobes due to an interaction effect between actor orientation and content of the communication (PF > PL) > (OF > OL).

Our findings indicate that social cues influence the neural processing of speech-gesture utterances. Mentalizing (the process of inferring the mental state of another individual) could be responsible for these effects. In particular, socially relevant cues seem to activate regions of the anterior temporal lobes if abstract person-related content is communicated by speech and gesture. These new findings illustrate the complexity of interpersonal communication, as our data demonstrate that multisensory information pathways interact at both perceptual and semantic levels.

12.
Gestures are an important part of interpersonal communication, for example by illustrating physical properties of speech contents (e.g., “the ball is round”). The meaning of these so‐called iconic gestures is strongly intertwined with speech. We investigated the neural correlates of the semantic integration for verbal and gestural information. Participants watched short videos of five speech and gesture conditions performed by an actor, including variation of language (familiar German vs. unfamiliar Russian), variation of gesture (iconic vs. unrelated), as well as isolated familiar language, while brain activation was measured using functional magnetic resonance imaging. For familiar speech with either of both gesture types contrasted to Russian speech‐gesture pairs, activation increases were observed at the left temporo‐occipital junction. Apart from this shared location, speech with iconic gestures exclusively engaged left occipital areas, whereas speech with unrelated gestures activated bilateral parietal and posterior temporal regions. Our results demonstrate that the processing of speech with speech‐related versus speech‐unrelated gestures occurs in two distinct but partly overlapping networks. The distinct processing streams (visual versus linguistic/spatial) are interpreted in terms of “auxiliary systems” allowing the integration of speech and gesture in the left temporo‐occipital region. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.

13.
There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High‐contrast red–green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross‐adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross‐adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas.

14.
To date, empirical research relating to responsible gambling features has been sparse. A Delphi-based study rated the perceived effectiveness of 45 responsible gambling (RG) features in relation to 20 distinct gambling type games. Participants were 61 raters from seven countries and included responsible gambling experts (n = 22), treatment providers (n = 19) and recovered problem gamblers (n = 20). The most highly recommended RG features could be divided into three groups: (1) player-initiated tools focused on aiding player behavior; (2) RG features related to informed player choice; (3) RG features focused on gaming company actions. Overall, player control over personal limits was favoured more than gaming-company-controlled limits, although mandatory use of such features was often recommended. The study found that recommended RG features varied considerably between game types, according to their structural characteristics. Also, online games had the possibility to provide many more RG features than traditional (offline) games. The findings draw together knowledge about the effectiveness of RG features for specific game types. This should aid objective, cost-effective, evidence-based decisions on which RG features to include in an RG strategy, according to a specific portfolio of games. The findings of this study will be available via a web-based tool, known as the Responsible Gambling Knowledge Centre (RGKC).

15.
Communicative intentions are transmitted by many perceptual cues, including gaze direction, body gesture, and facial expressions. However, little is known about how these visual social cues are integrated over time in the brain and, notably, whether this binding occurs in the emotional or the motor system. By coupling magnetic resonance and electroencephalography imaging in humans, we were able to show that, 200 ms after stimulus onset, the premotor cortex integrated gaze, gesture, and emotion displayed by a congener. At earlier stages, emotional content was processed independently in the amygdala (170 ms), whereas directional cues (gaze direction with pointing gesture) were combined at ~190 ms in the parietal and supplementary motor cortices. These results demonstrate that the early binding of visual social signals displayed by an agent engaged the dorsal pathway and the premotor cortex, possibly to facilitate the preparation of an adaptive response to another person's immediate intention.

16.
During face‐to‐face communication, body orientation and coverbal gestures influence how information is conveyed. The neural pathways underpinning the comprehension of such nonverbal social cues in everyday interaction are to some part still unknown. During fMRI data acquisition, 37 participants were presented with video clips showing an actor speaking short sentences. The actor produced speech‐associated iconic gestures (IC) or no gestures (NG) while he was visible either from an egocentric (ego) or from an allocentric (allo) position. Participants were asked to indicate via button press whether they felt addressed or not. We found a significant interaction of body orientation and gesture in addressment evaluations, indicating that participants evaluated IC‐ego conditions as most addressing. The anterior cingulate cortex (ACC) and left fusiform gyrus were stronger activated for egocentric versus allocentric actor position in gesture context. Activation increase in the ACC for IC‐ego>IC‐allo further correlated positively with increased addressment ratings in the egocentric gesture condition. Gesture‐related activation increase in the supplementary motor area, left inferior frontal gyrus and right insula correlated positively with gesture‐related increase of addressment evaluations in the egocentric context. Results indicate that gesture use and body‐orientation contribute to the feeling of being addressed and together influence neural processing in brain regions involved in motor simulation, empathy and mentalizing. Hum Brain Mapp 36:1925–1936, 2015. © 2015 Wiley Periodicals, Inc.

17.
This study examined the relationship between repetitive behaviors and sensory processing issues in school-aged children with high functioning autism (HFA). Children with HFA (N = 61) were compared to healthy, typical controls (N = 64) to determine the relationship between these behavioral classes and to examine whether executive dysfunction explained any relationship between the variables. Particular types of repetitive behavior (i.e., stereotypy and compulsions) were related to sensory features in autism; however, executive deficits were only correlated with repetitive behavior. This finding suggests that executive dysfunction is not the shared neurocognitive mechanism that accounts for the relationship between restricted, repetitive behaviors and aberrant sensory features in HFA. Group status, younger chronological age, presence of sensory processing issues, and difficulties with behavior regulation predicted the presence of repetitive behaviors in the HFA group.

18.
To find important objects, we must focus on our goals, ignore distractions, and take our changing environment into account. This is formalized in models of visual search whereby goal-driven, stimulus-driven, and history-driven factors are integrated into a priority map that guides attention. Stimulus history robustly influences where attention is allocated even when the physical stimulus is the same: when a salient distractor is repeated over time, it captures attention less effectively. A key open question is how we come to ignore salient distractors when they are repeated. Goal-driven accounts propose that we use an active, expectation-driven mechanism to attenuate the distractor signal (e.g., predictive coding), whereas stimulus-driven accounts propose that the distractor signal is attenuated because of passive changes to neural activity and inter-item competition (e.g., adaptation). To test these competing accounts, we measured item-specific fMRI responses in human visual cortex during a visual search task where trial history was manipulated (colors unpredictably switched or were repeated). Consistent with a stimulus-driven account of history-based distractor suppression, we found that repeated singleton distractors were suppressed starting in V1, and distractor suppression did not increase in later visual areas. In contrast, we observed signatures of goal-driven target enhancement that were absent in V1, increased across visual areas, and were not modulated by stimulus history. Our data suggest that stimulus history does not alter goal-driven expectations, but rather modulates canonically stimulus-driven sensory responses to contribute to a temporally integrated representation of priority.

SIGNIFICANCE STATEMENT: Visual search refers to our ability to find what we are looking for in a cluttered visual world (e.g., finding your keys). To perform visual search, we must integrate information about our goals (e.g., “find the red keychain”), the environment (e.g., salient items capture your attention), and changes to the environment (i.e., stimulus history). Although stimulus history impacts behavior, the neural mechanisms that mediate history-driven effects remain debated. Here, we leveraged fMRI and multivariate analysis techniques to measure history-driven changes to the neural representation of items during visual search. We found that stimulus history influenced the representation of a salient “pop-out” distractor starting in V1, suggesting that stimulus history operates via modulations of early sensory processing rather than goal-driven expectations.

19.
The role of iconic gestures in speech disambiguation: ERP evidence
The present series of experiments explored the extent to which iconic gestures convey information not found in speech. Electroencephalogram (EEG) was recorded as participants watched videos of a person gesturing and speaking simultaneously. The experimental sentences contained an unbalanced homonym in the initial part of the sentence (e.g., She controlled the ball ...) and were disambiguated at a target word in the subsequent clause (which during the game ... vs. which during the dance ...). Coincident with the initial part of the sentence, the speaker produced an iconic gesture which supported either the dominant or the subordinate meaning. Event-related potentials were time-locked to the onset of the target word. In Experiment 1, participants were explicitly asked to judge the congruency between the initial homonym-gesture combination and the subsequent target word. The N400 at target words was found to be smaller after a congruent gesture and larger after an incongruent gesture, suggesting that listeners can use gestural information to disambiguate speech. Experiment 2 replicated the results using a less explicit task, indicating that the disambiguating effect of gesture is somewhat task-independent. Unrelated grooming movements were added to the paradigm in Experiment 3. The N400 at subordinate targets was found to be smaller after subordinate gestures and larger after dominant gestures as well as grooming, indicating that an iconic gesture can facilitate the processing of a lesser frequent word meaning. The N400 at dominant targets no longer varied as a function of the preceding gesture in Experiment 3, suggesting that the addition of meaningless movements weakened the impact of gesture. Thus, the integration of gesture and speech in comprehension does not appear to be an obligatory process but is modulated by situational factors such as the amount of observed meaningful hand movements.

20.
Objective: This study explored the perceived impact of parental drinking on children in a South African township where alcohol abuse is prevalent and high levels of existing poverty and violence may exacerbate potential consequences on children.

Method: Qualitative in-depth interviews were conducted with 92 male and female participants recruited from alcohol-serving venues in Cape Town, South Africa.

Results: Grounded theory analyses revealed three major aspects of parental drinking — intoxication, venue attendance and expenditures on alcohol — which participants linked to negative proximal outcomes (e.g., child neglect, abuse and exposure to alcohol culture) and long-term outcomes (e.g., fractured parent–child relationships and problematic youth behaviours). In addition, preliminary accounts from some participants suggested that parents may experience tensions between desires to reduce drinking for child-related reasons and complex factors maintaining their drinking behaviour, including the use of alcohol to cope with stressors and trauma.

Conclusions: This study provides novel insights into the consequences and motivations of parental drinking in a high-risk context. Contextual risks (e.g., poverty and violence) that exacerbate the impact of parental drinking on children may be the same factors that continue to shape intergenerational alcohol use in this community. Findings highlight opportunities for further research and interventions to support child protection in South Africa.
