Similar Literature
20 similar documents retrieved.
1.
The mechanisms of attention prioritize sensory input for efficient perceptual processing. Influential theories suggest that attentional biases are mediated via preparatory activation of task-relevant perceptual representations in visual cortex, but the neural evidence for a preparatory coding model of attention remains incomplete. In this experiment, we tested core assumptions underlying a preparatory coding model for attentional bias. Exploiting multivoxel pattern analysis of functional neuroimaging data obtained during a non-spatial attention task, we examined the locus, time-course, and functional significance of shape-specific preparatory attention in the human brain. Following an attentional cue, yet before the onset of a visual target, we observed selective activation of target-specific neural subpopulations within shape-processing visual cortex (lateral occipital complex). Target-specific modulation of baseline activity was sustained throughout the duration of the attention trial, and the degree of target specificity that characterized preparatory activation patterns correlated with perceptual performance. We conclude that top-down attention selectively activates target-specific neural codes, providing a competitive bias favoring task-relevant representations over competing representations distributed within the same subregion of visual cortex.
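To make the decoding logic above concrete, here is a minimal sketch of a leave-one-run-out MVPA analysis; the data, trial counts, and classifier are hypothetical stand-ins rather than the study's actual pipeline.

```python
# Minimal sketch of decoding the cued target shape from preparatory (pre-target)
# voxel patterns, in the spirit of the MVPA described above. All data here are
# synthetic, and the classifier/cross-validation choices are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200                      # hypothetical trial and voxel counts
X = rng.normal(size=(n_trials, n_voxels))          # preparatory-period LOC voxel patterns
y = rng.integers(0, 2, size=n_trials)              # which of two shapes was cued
runs = np.repeat(np.arange(6), n_trials // 6)      # scanner-run labels for cross-validation

# Above-chance decoding of the cued shape before target onset would indicate
# target-specific preparatory activation of the kind reported in the abstract.
scores = cross_val_score(LinearSVC(), X, y, cv=GroupKFold(n_splits=6), groups=runs)
print(f"mean leave-one-run-out decoding accuracy: {scores.mean():.2f}")
```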

2.
To efficiently extract visual information from complex visual scenes to guide behavior and thought, visual input needs to be organized into discrete units that can be selectively attended and processed. One important selection unit is the visual object. A crucial factor determining object-based selection is the grouping between visual elements. Although human lesion data have pointed to the importance of the parietal cortex in object-based representations, our understanding of these parietal mechanisms in normal human observers remains largely incomplete. Here we show that grouped shapes elicited lower functional MRI (fMRI) responses than ungrouped shapes in inferior intraparietal sulcus (IPS) even when grouping was task-irrelevant. This relative ease of representing grouped shapes allowed more shape information to be passed onto later stages of visual processing, such as information storage in superior IPS, and may explain why grouped visual elements are easier to perceive than ungrouped ones after parietal brain lesions. These results are discussed within a neural object file framework, which argues for distinctive neural mechanisms supporting object individuation and identification in visual perception.

3.
OBJECTIVES: To examine the association between visual attention/processing speed and mobility in older adults. DESIGN: Cross-sectional. SETTING: Clinical research unit of a department of ophthalmology. PARTICIPANTS: Three hundred forty-two older adults (aged 55-85) living independently in the community recruited from primary eye care practices. MEASUREMENTS: In addition to demographic, health, and functional information, the following variables were collected at the second annual visit of a prospective study on mobility: a test of visual attention/processing speed; a performance mobility assessment; and self-reported measures of falls, falls efficacy, mobility/balance, and physical activity. RESULTS: Lower scores on visual attention/processing speed were significantly related to poorer scores on the performance mobility assessment, even after adjustment for age, sex, race, education, number of chronic medical conditions, cognitive status, depressive symptoms, visual acuity, and contrast sensitivity (P=.04). Scores on the visual attention/processing speed test were unrelated to the self-reported measures of mobility. CONCLUSION: Results imply that visual attention impairment/slowed visual processing speed in older adults is independently associated with mobility problems. Interventions to reverse or minimize the progression of mobility dysfunction in older adults should take this common aging-related deficit in visual processing into account.
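A sketch of the covariate-adjusted analysis described above might look as follows; the variable names and simulated data are assumptions for illustration, not the study's dataset.

```python
# Sketch of a covariate-adjusted association analysis like the one reported above:
# regress the performance mobility score on visual attention/processing speed while
# controlling for the listed covariates. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 342
df = pd.DataFrame({
    "mobility_score": rng.normal(size=n),        # performance mobility assessment
    "processing_speed": rng.normal(size=n),      # visual attention / processing speed test
    "age": rng.uniform(55, 85, size=n),
    "female": rng.integers(0, 2, size=n),
    "chronic_conditions": rng.poisson(2, size=n),
    "visual_acuity": rng.normal(size=n),
    "contrast_sensitivity": rng.normal(size=n),
})
fit = smf.ols(
    "mobility_score ~ processing_speed + age + female + chronic_conditions"
    " + visual_acuity + contrast_sensitivity",
    data=df,
).fit()
# The coefficient (and p-value) on processing_speed is the adjusted association of interest.
print(fit.params["processing_speed"], fit.pvalues["processing_speed"])
```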

4.
5.
Macaques, like humans, rapidly orient their attention in the direction other individuals are looking. Both cortical and subcortical pathways have been proposed as neural mediators of social gaze following, but neither pathway has been characterized electrophysiologically in behaving animals. To address this gap, we recorded the activity of single neurons in the lateral intraparietal area (LIP) of rhesus macaques to determine whether and how this area might contribute to gaze following. A subset of LIP neurons mirrored observed attention by firing both when the subject looked in the preferred direction of the neuron, and when observed monkeys looked in the preferred direction of the neuron, despite the irrelevance of the monkey images to the task. Importantly, the timing of these modulations matched the time course of gaze-following behavior. A second population of neurons was suppressed by social gaze cues, possibly subserving task demands by maintaining fixation on the observed face. These observations suggest that LIP contributes to sharing of observed attention and link mirror representations in parietal cortex to a well-studied imitative behavior.

6.
Compared to European-Americans, African-Americans have a greater probability of becoming infected with HIV, as well as worse outcomes when they become infected. Therefore, adequate health communications should ensure that they capture the attention of African-Americans and do not perpetuate disadvantages relative to European-Americans. The objective of this report was to examine if racial disparities in attention to health information parallel racial disparities in health outcomes. Participants were clients of a public health clinic (Study 1 n = 64; Study 2 n = 55). Unobtrusive observation in a public health waiting room, message reading times, and response-time on a modified flanker task were used to examine attention to HIV- and flu-information across racial groups. In Study 1, participants were observed for the duration of their time in a public health clinic waiting room (average duration: 31 min). In Study 2, participants completed tasks in a private room at the public health clinic (average duration: 21 min). Across all attention measures, results suggest an interaction between race and information type on attention to health information. In particular, African-Americans differentially attended to information as a function of information type, with decreased attention to HIV- versus flu-information. In contrast, European-Americans attended equally to both HIV- and flu-information. As such, disparities in attention yielded less access to certain health information for African- than European-Americans in a health setting. The identified disparities in attention are particularly problematic because they disadvantage African-Americans at a time of great effort to correct racial disparities. Modifying the framing of health information in ways that ensure attention by all racial groups may be a strategy to increase attention, and thereby reduce disparities in health outcomes. Future research should find solutions that increase attentional access to health communications for all groups.

7.
Sharp-wave ripples (SWRs) are highly synchronous neuronal activity events. They have been predominantly observed in the hippocampus during offline states such as pause in exploration, slow-wave sleep, and quiescent wakefulness. SWRs have been linked to memory consolidation, spatial navigation, and spatial decision-making. Recently, SWRs have been reported during visual search, a form of remote spatial exploration, in macaque hippocampus. However, the association between SWRs and multiple forms of awake conscious and goal-directed behavior is unknown. We report that ripple activity occurs in macaque visual areas V1 and V4 during focused spatial attention. The occurrence of ripples is modulated by stimulus characteristics, increased by attention toward the receptive field, and by the size of the attentional focus. During attention cued to the receptive field, the monkey’s reaction time in detecting behaviorally relevant events was reduced by ripples. These results show that ripple activity is not limited to hippocampal activity during offline states, rather they occur in the neocortex during active attentive states and vigilance behaviors.

Hippocampal sharp-wave ripples (SWR, ripples) are large amplitude deflections (sharp-waves) of the local field potential (LFP) in the hippocampus of rodents, humans, and nonhuman primates, associated with a brief fast oscillatory pattern (ripple). Ripple oscillations vary in frequency from 140 to 200 Hz in rodent and 80 to 180 Hz in nonhuman primates and humans (16). SWRs occur at ~0.5 Hz in the hippocampus, 0.1 to 0.5 Hz in the posterior parietal, retrosplenial, cingulate, and at 0.05 Hz in somatosensory, motor, and visual cortices during nonrapid eye movement (NREM) sleep (7). During hippocampal SWRs, 15% of hippocampal pyramidal cells discharge synchronously, which triggers activation in cortical areas, but suppression in midbrain and brainstem regions (2, 3).Ripples support memory consolidation by transferring information acquired during waking to cortical networks during sleep and quiescence (79). Consolidation occurs through temporal replay of event-related activity in the hippocampus during ripples (1015). SWRs are also predictive of future trajectory and performance during spatial navigation tasks (1618). Finally, they are implicated in the correct temporal sequencing of place cell activity preceding novel spatial experiences (preplay) (19). Memory consolidation in the visual cortex requires NREM sleep spindle activity (20), which are coordinated with hippocampal ripples (21), increasing hippocampal–neocortical coupling (22) and associated information transfer.In rodents, hippocampal SWRs are pronounced during offline states (23, 24), but they occur during awake states in humans (25), as well as nonhuman primates during visual search and goal-directed visual exploration, termed exploratory SWRs (26, 27). Hippocampal SWR occurrence of monkeys is increased when the subject’s gaze is focused near a target object during search or when patients observe familiar pictures of scenes or faces (5, 26, 28). Here, ripple rates also increased during free recall along with a high-frequency band activation around the time of ripple in the visual cortex, suggesting a role of SWRs in activating the visual cortex during episodic and semantic memory retrieval (5, 19, 26, 2831). Memory and attention are intertwined, and working memory and attention affect neural activity in striate and extrastriate cortex similarly (32, 33). Given that ripples are strongly linked to memory, it is tempting to speculate that they are linked to attention.Attention affects components of neural circuits, that in the hippocampus drive ripples. Sharp-waves are generated by excitatory afferents from CA3 to CA1 (34, 35), but ripples are evoked by parvalbumin-positive (PV+) interneurons inside CA1 (36). Optogenetic activation of PV+ cells induces hippocampal ripples (37), and PV+ cells can be active during each ripple cycle (38, 39). Narrow spiking cells, often associated with PV+ cells (40) are more affected by spatial attention than broad spiking cells (41). Hence, we hypothesize that ripples increase in the visual cortex when animals are cued to attend to the receptive field (RF, referred to as cue RF conditions), relative to when animals are cued to attend to the opposite hemifield (cue away conditions), as PV+ drive is likely increased.Attention might also increase ripple rates through cholinergic mechanisms. Ripples during offline states coincide with reduced septal acetylcholine (ACh) release into the hippocampus, and cholinergic suppression of hippocampal SWR impairs spatial working memory (42). 
ACh plays an important role in working memory and spatial attention, and hence SWRs should decrease during cue RF conditions, if ACh levels are increased. However, cholinergic receptor distribution differs between the hippocampus and primate visual cortex. In the human hippocampus, M1 receptors are predominantly expressed in excitatory pyramidal cells (43), while in the (primate) visual cortex, they are predominantly expressed on inhibitory interneurons, and here especially on PV+ cells (44). Hence, increased ACh levels with attention (45) might trigger higher ripple rates in the visual cortex. SWRs have been suggested to be an alternative working memory system to the proposed system of “delay activity” or “neuronal chaining” and theta activity (46, 47). If SWRs help refocus by “reminding” the system about current task demands, then we hypothesize that ripples increase when animals are required to attend to the RF under cue RF conditions.
In rodents, somatostatin-positive (SOM+) interneurons are suppressed during ripple episodes (39), and SOM+ cell activity is linked to surround suppression (48). Whether similar mechanisms occur in the primate neocortex, where SOM+ is not a good marker for interneurons (49, 50), is unknown. However, primate calbindin-positive (CB+) cells may be homologous to rodent SOM+ cells, and may serve a similar function. If so, spatial attention, which causes surround “exclusion” (51), might do so through reduced CB+ activity (51, 52), and thereby increase ripple rates. The link between increased SOM+ activity and reduced ripple rates (in rodents) further suggests that larger stimuli, inducing surround suppression and higher SOM+ (CB+) cell activity, result in reduced ripple rates.
To examine these predictions, we recorded LFPs and spiking activity in visual areas V1 and V4 of two male macaque monkeys performing a cued spatial attention task. Ripple activity was detected in both regions, ripples occurred more often when monkeys were cued to deploy attention to the RF of the recorded neurons, smaller stimuli resulted in higher ripple rates than larger stimuli, and ripple occurrence was predictive of better behavioral performance. Thus, ripples occur in cortical visual areas and are involved in cognitive functions beyond memory consolidation and retrieval.
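Because the argument turns on detecting ripple events in the LFP, a conventional detection sketch is shown below; the band limits, threshold, and minimum duration are common conventions and assumptions, not necessarily the authors' exact parameters.

```python
# Sketch of a conventional ripple-detection pipeline on one LFP channel: band-pass
# in the ripple band, take the Hilbert envelope, and keep supra-threshold segments.
# Band limits, z-threshold, and minimum duration are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(80, 180), z_thresh=3.0, min_dur=0.015):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, lfp)))        # ripple-band amplitude envelope
    z = (envelope - envelope.mean()) / envelope.std()
    above = z > z_thresh
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    # keep only events that last at least min_dur seconds
    return [(s / fs, e / fs) for s, e in zip(starts, ends) if (e - s) / fs >= min_dur]

fs = 1000.0
lfp = np.random.default_rng(2).normal(size=int(10 * fs))   # 10 s of synthetic LFP
print(detect_ripples(lfp, fs))
```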

8.
In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for greater than 200 ms. Emotional faces also affected saccade velocity and reduced pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical–subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits.An important issue in understanding the processing of significant affective stimuli is the extent to which these stimuli compete with ongoing tasks in normal healthy individuals. It is generally agreed that there exists attentional bias toward emotional faces in humans as well as other primates (1, 2). However, it is uncertain whether emotional faces trigger attentional capture, which we define as an immediate shift of visual attention at the expense of other stimuli.Affective reactions can be evoked with minimal cognitive processing (3). For instance, presentation of emotional faces under reduced awareness by masking activates the amygdala (4, 5) and produces pupillary (6) and skin conductance responses (7). An immediate response to affectively salient stimuli is thought possible through a direct subcortical pathway via the amygdala, bypassing the primary sensory cortex (8). This activity could then influence allocation of attentional resources in the cortex (9, 10). Consistent with this, emotional faces capture attention in visual search (1113), even when irrelevant to the task at hand. Based on these findings, one would predict that emotional faces will interfere with a primary attention task.However, shifts of attention to emotional stimuli are likely not obligatory in every circumstance. Some studies indicate that capture only occurs in low perceptual load conditions (14, 15). Functional MRI (fMRI) has shown that the amygdala response to affective stimuli is modulated by task demands (16, 17). Also, capture is not entirely stimulus driven, but may be dependent on overlap between the current attentional set and the stimulus that does the capturing (18, 19). In a larger sense, goals and expectations influence capture (20). These factors are more cognitive in nature and possibly involve cortical processing. It has been proposed that the subcortical affective response depends on cortical processing (21). 
When cortical resources are fully taken up by a primary task, affective stimuli may not get any processing advantage and therefore may not interfere with the task.
In the present study, three monkeys detected a subtle color change in an attended target while faces of conspecifics were presented as distracters (Fig. 1). Attentional capture of the faces was measured in terms of reduction in sensitivity for detecting the color change. We found that face distracters did influence monkeys’ performance and reaction time (RT), as well as affected their eye velocity and pupillary dilatation, especially when the faces had a threat expression. Importantly, these influences were dependent on presentation duration of the face images, suggesting that shifts of attention toward the faces were not immediate.
Fig. 1. Methods. (A) Illustration of screen events in the task. (B) Timeline of screen events with possible times that image onset and target/distracter change could occur. (C) Examples of facial expressions of one individual in the stimulus set. From left to right: neutral, threat, fear grin, and lip smack. Fear grin and lip smack were combined.
Although different viewpoints predict different roles for cortical and subcortical pathways, there seems to be general agreement that preexisting bias affects how attention is allocated. For example, anxiety is often associated with bias toward fear-relevant information (22–26). To manipulate our subjects’ bias toward faces, we administered the hormone oxytocin (OT). It has been shown that inhalation of OT increases attention to eyes (27, 28) and ability to read emotions from facial expressions (29). In monkeys, OT was shown to blunt social vigilance (30). We found that OT reduced interference on our task, indicating a link between oxytocinergic circuits and attentional circuits.

9.
The precise mechanisms by which the information ecosystem polarizes society remain elusive. Focusing on political sorting in networks, we develop a computational model that examines how social network structure changes when individuals participate in information cascades, evaluate their behavior, and potentially rewire their connections to others as a result. Individuals follow proattitudinal information sources but are more likely to first hear and react to news shared by their social ties and only later evaluate these reactions by direct reference to the coverage of their preferred source. Reactions to news spread through the network via a complex contagion. Following a cascade, individuals who determine that their participation was driven by a subjectively “unimportant” story adjust their social ties to avoid being misled in the future. In our model, this dynamic leads social networks to politically sort when news outlets differentially report on the same topic, even when individuals do not know others’ political identities. Observational follow network data collected on Twitter support this prediction: We find that individuals in more polarized information ecosystems lose cross-ideology social ties at a rate that is higher than predicted by chance. Importantly, our model reveals that these emergent polarized networks are less efficient at diffusing information: Individuals avoid what they believe to be “unimportant” news at the expense of missing out on subjectively “important” news far more frequently. This suggests that “echo chambers”—to the extent that they exist—may not echo so much as silence.

By standard measures, political polarization in the American mass public is at its highest point in nearly 50 y (1). The consequences of this fundamental and growing societal divide are potentially severe: High levels of polarization reduce policy responsiveness and have been associated with decreased social trust (2), acceptance of and dissemination of misinformation (3), democratic erosion (4), and in extreme cases even violence (5). While policy divides have traditionally been thought to drive political polarization, recent research suggests that political identity may play a stronger role (6, 7). Yet people’s political identities may be increasingly less visible to those around them: Many Americans avoid discussing and engaging with politics and profess disdain for partisanship (8), and identification as “independent” from the two major political parties is higher than at any point since the 1950s (9). Taken together, these conflicting patterns complicate simple narratives about the mechanisms underlying polarization. Indeed, how macrolevel divisions relate to the preferences, perceptions, and interpersonal interactions of individuals remains a significant puzzle.A solution to this puzzle is particularly elusive given that many Americans, increasingly wary of political disagreement, avoid signaling their politics in discussions and self-presentation and thus lack direct information about the political identities of their social connections (10). However, regardless of individuals’ perceptions about each other, the information ecosystem around them—the collection of news sources available to society—reflects, at least to some degree, the structural divides of the political and economic system (11, 12). Traditional accounts of media-driven polarization have emphasized a direct mechanism: Individuals are influenced by the news they consume (13) but also tend to consume news from outlets that align with their politics (14, 15), thereby reinforcing their views and shifting them toward the extremes (16, 17). However, large-scale behavioral studies have offered mixed evidence of these mechanisms (18, 19), including evidence that many people encounter a significant amount of counter-attitudinal information online (2022). Furthermore, instead of directly tuning into news sources, individuals often look to their immediate social networks to guide their attention to the most important issues (2327). Therefore, it is warranted to investigate how the information ecosystem may impact society beyond direct influence on individual opinions.Here, we examine media-driven polarization as a social process (28) and propose a mechanism—information cascades—by which a polarized information ecosystem can indirectly polarize society by causing individuals to self-sort into emergent homogeneous social networks even when they do not know others’ political identities. Information cascades, in which individuals observe and adopt the behavior of others, allow the actions of a few individuals to quickly propagate through a social network (29, 30). Found in social systems ranging from fish schools (31) and insect swarms (32) to economic markets (33) and popular culture (29), information cascades are a widespread social phenomenon that can greatly impact collective behavior such as decision making (34). 
Online social media platforms are especially prone to information cascades since the primary affordances of these services involve social networking and information sharing (35–38): For example, users often see and share posts of social connections without ever reading the source material (e.g., a shared news article) (39). In addition to altering beliefs and behavior, information cascades can also affect social organization: For instance, retweet cascades on Twitter lead to bursts of unfollowing and following activity (40) that indicate sudden shifts in social connections as a direct result of information spreading through the social network. While research so far has been agnostic as to the content of the information shared during a cascade, it is plausible that information from partisan news outlets could create substantial changes in networks of individuals.
We therefore propose that the interplay between network-altering cascades and an increasingly polarized information ecosystem could result in politically sorted social networks, even in the absence of partisan cues. While we do not argue that this mechanism is the only driver of political polarization—a complex phenomenon likely influenced by several factors—we do argue that the interplay between information and social organization could be one driver that is currently overlooked in discussions of political polarization. We explore this proposition by developing a general theoretical model. After presenting the model, we use Twitter data to probe some of its predictions. Finally, we use the model to explore how the emergence of politically sorted networks might alter information diffusion.
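A toy agent-based sketch of the cascade-then-rewire mechanism described above is given below; the network, seeding rule, contagion threshold, and rewiring rule are all illustrative assumptions rather than the authors' model specification.

```python
# Toy sketch of "cascade, evaluate, rewire": agents share a story when enough
# neighbors have shared it (complex contagion), then drop the tie that misled them
# if their own preferred outlet rated the story unimportant. All parameters are
# illustrative assumptions, not the published model.
import random
import networkx as nx

random.seed(3)
G = nx.watts_strogatz_graph(200, 8, 0.1)              # social network
party = {n: random.choice("AB") for n in G}           # hidden political identity
THRESHOLD = 2                                          # sharing neighbors needed to join a cascade

def run_cascade(G, importance):
    """One story; `importance` says whether each side's outlet rates it important."""
    shared = {n for n in G if random.random() < 0.02 and importance[party[n]]}
    spreading = True
    while spreading:
        spreading = False
        for n in G:
            if n not in shared and sum(m in shared for m in G[n]) >= THRESHOLD:
                shared.add(n)
                spreading = True
    # evaluation step: sharers whose own outlet called the story unimportant rewire
    for n in list(shared):
        if not importance[party[n]]:
            sharing_ties = [m for m in G[n] if m in shared]
            if sharing_ties:
                G.remove_edge(n, random.choice(sharing_ties))
                G.add_edge(n, random.choice([m for m in G if m != n and not G.has_edge(n, m)]))

for _ in range(200):                                   # outlets differentially cover many stories
    side = random.choice("AB")
    run_cascade(G, {"A": side == "A", "B": side == "B"})

# political sorting shows up as a falling share of cross-party ties
cross = sum(party[u] != party[v] for u, v in G.edges()) / G.number_of_edges()
print(f"cross-party tie fraction after rewiring: {cross:.2f}")
```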

10.
Organisms with complex visual systems rarely respond to just the sum of all visual stimuli impinging on their eyes. Often, they restrict their responses to stimuli in a temporarily selected region of the visual field (selective visual attention). Here, we investigate visual attention in the fly Drosophila during tethered flight at a torque meter. Flies can actively shift their attention; however, their attention can be guided to a certain location by external cues. Using visual cues, we can direct the attention of the fly to one or the other of the two visual half-fields. The cue can precede the test stimulus by several seconds and may also be spatially separated from the test by at least 20° and yet attract attention. This kind of external guidance of attention is found only in the lower visual field.

11.
12.
Percolation theory has been widely used to study phase transitions in network systems. It has also successfully explained various macroscopic spreading phenomena across different fields. Yet, the theoretical frameworks have been focusing on direct interactions among nodes, while recent empirical observations have shown that indirect interactions are common in many network systems like social and ecological networks, among others. By investigating the detailed mechanism of both direct and indirect influence on scientific collaboration networks, here we show that indirect influence can play the dominant role in behavioral influence. To address the lack of theoretical understanding of such indirect influence on the macroscopic behavior of the system, we propose a percolation mechanism of indirect interactions called induced percolation. Surprisingly, our model exhibits a unique anisotropy property. Specifically, directed networks show first-order abrupt transitions as opposed to the second-order continuous transition in the same network structure but with undirected links. A mix of directed and undirected links leads to rich hybrid phase transitions. Furthermore, a unique feature of the nonmonotonic pattern is observed in network connectivities near the critical point. We also present an analytical framework to characterize the proposed induced percolation, paving the way to further understanding network dynamics with indirect interactions.

Percolation theory (1) is one of the most prominent frameworks within statistical physics. Initially developed (2, 3) to explain the chemical formation of large macromolecules, it has been recently used to study various dynamical processes in complex networks (49). Examples include the use of bond percolation (9, 10) to study the wide spread of rumors over online social media and outbreaks of infectious diseases on structured populations. Site percolation (4, 5, 11) has been employed to study the cascading failures of infrastructure networks (6, 1216) and the resilience of protein–protein interaction networks (17). Likewise, bootstrap percolation (18), k-core (1921), and linear threshold percolation (7, 2224) have enabled the study of the spreading of behaviors over social networks. Finally, the so-called explosive percolation (25) has allowed a better characterization of systems’ structural transitions when they are growing or can adapt, whereas core percolation (26, 27) has contributed significantly to insights into nondeterministic polynomial problems. Common to all these percolation models is that they have successfully described various important dynamical phenomena by considering different direct interactions (8, 9, 28) among network nodes; in particular, they have captured the behavior of network systems as given by phase transitions (4, 8, 9, 28, 29).Our study is motivated by recent evidence that there are many systems in which indirect interactions play a major role in their spreading dynamics (3035). Such underlying indirect interactions have important implications not only on the dynamics of the system but also on the evolution and the emergence of network structures. For example, Christakis and Fowler (30, 31) found that for the spreading of many social behaviors, such as drug (36) and alcohol addictions (37) and obesity (30), an individual can span their influence to their friends around three degrees of separation (friend of a friend’s friend). This phenomenon is also widely known as “three degrees of influence” in social science. In ecological networks, Guimarães et al. (32, 33) discovered in 2017 that indirect effects contribute strongly to the trait coevolution among reciprocal species, which can alter environmental selection and promote the evolution of species.Despite the ubiquity of indirect influence in various real-world systems, few studies have examined the exact mechanisms by which the indirect influences occur, or the relative strengths between direct and indirect influences. Here, based on empirical analyses of scientific collaboration networks, we reveal that indirect influence occurs through next-nearest neighbors and can be the dominant mechanism through which research interests change; on the contrary, evidence of direct (nearest) influence is relatively weak.However, on the theoretical front, up to now there has been no percolation-based theoretical model to describe the underlying mechanism of indirect influence or its distinctions with existing percolation models in terms of the macroscopic behaviors. For either regular networks or complex networks, various percolation models like bond, site, bootstrap, k-core, linear threshold and core, etc., are always based on direct interactions (8, 9, 28) among nodes. In essence, all of these models only take into account the existence and the strength of directly connected nodes, regardless of any indirect influences of other nodes. Hence, they are not suitable for describing the indirect mechanism. 
Here, we propose a percolation framework called induced percolation to theoretically study the impact of such an indirect mechanism on the whole system.
Our results show that indirect interactions lead to a unique macroscopic behavior characterized by anisotropy and phase transitions and different spreading outcomes compared to the direct influence mechanisms. Specifically, we study the most general scenario in which links can have directions and report that varying the links’ directionality could change the order of the phase transition. This is in sharp contrast to previous percolation models, for which the nature of the phase transitions is not affected by the directionality of links. Such rich phase transition behavior is further illustrated in our simulations on empirical networks. To the best of our knowledge, the phenomenon of directionality-related order of the phase transitions only exists in some special cases of core percolation (27), whereas it is shown to be a generic feature in our indirect interaction model.
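The contrast drawn above between direct (nearest-neighbor) and indirect (next-nearest-neighbor) influence can be illustrated with a toy spreading simulation; this is only a hedged sketch of that distinction, not the authors' induced-percolation model.

```python
# Toy illustration of direct vs. indirect (next-nearest-neighbor) influence spreading
# on a random graph. This is NOT the induced-percolation model proposed in the paper,
# only a sketch of why second-neighbor interactions enlarge the reachable fraction.
import random
import networkx as nx

random.seed(4)
G = nx.erdos_renyi_graph(500, 0.01)

def spread(G, p, hops):
    """Each newly active node influences nodes up to `hops` away with probability p."""
    active = {random.choice(list(G))}
    frontier = set(active)
    while frontier:
        new = set()
        for n in frontier:
            reachable = nx.single_source_shortest_path_length(G, n, cutoff=hops)
            for m in reachable:
                if m not in active and random.random() < p:
                    new.add(m)
        active |= new
        frontier = new
    return len(active) / G.number_of_nodes()

print("direct only        :", spread(G, p=0.05, hops=1))
print("including indirect :", spread(G, p=0.05, hops=2))   # two-hop (next-nearest) influence
```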

13.
Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants’ gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction.Cocaine dependence is a chronically relapsing disorder defined by uncontrolled and compulsive drug use (1). Despite severe negative consequences including disrupted social relationships, loss of employment, and somatic and psychiatric illnesses, an addicted person’s life is often centered around the drug of choice and activities related to it (2). Therefore, drug use is classified as a major social, legal, and public health problem (3). After cannabis, cocaine is the second most prevalent illegal drug in the United States and Europe (4, 5), with a lifetime prevalence among young adults of 6.3% in Europe (15- to 34-y-olds) (4) and 13.3% in the United States (18- to 25-y-olds) (5).Social cognition and social support for drug users are of great clinical relevance, as they have been reported to influence onset of drug use and development of substance use disorders, and treatment success in patients with substance use disorders (6, 7). Impairments in social cognition may augment the risk of social isolation, aggression, and depression, likely supporting the vicious circle of drug use (8). Additionally, impaired social cognition may contribute to the decay of social relationships in addicted patients (9) with negative consequences for treatment success given that higher social support predicted longer abstinence duration (10). Furthermore, no efficient pharmacological treatment for cocaine addiction is currently available (11), and treatment approaches such as cognitive behavioral therapy rely, at least in part, on the emotional responsiveness and social abilities of drug users (12). 
Previous results suggest that cocaine users (CUs) show impairments in different facets of social cognition, particularly in emotional empathy, mental perspective taking, and emotion recognition in prosody, which are related to deficits in real-life social behavior such as fewer social contacts and more criminal offenses (13, 14). Furthermore, in money distribution games, CUs act more self-servingly and less altruistically than stimulant-naïve controls (15). Volkow et al. (9) postulated that neuroadaptations in the reward systems of drug users (e.g., ventral striatum and orbitofrontal cortex) alter reward processing such that the value of the abused drug is enhanced and concurrently the value of nondrug rewards, including social interaction, is reduced. Consequently, general social competence might become impaired and promote antisocial and criminal behavior. This may explain why social consequences of drug use (e.g., imprisonment or familial problems) do not prompt drug-addicted people to quit using the drug as well as how they contribute to increased drug use and transition from recreational drug use to addiction (9). However, whereas altered processing of monetary rewards has been reported in CUs (16), social reward processing has not been studied yet, neither on the psychological nor the neural level. Therefore, it remains elusive whether CUs (i) show behavioral differences to reward stemming from social interactions and, if so, (ii) which neural adaptations within reward circuitry underlie these potential changes in social interaction behavior.An essential part of social interaction is the phenomenon of “social gaze,” which has two aspects: Gaze can be used by the gazing person as a deictic cue to manipulate the attention of others, and can be read out by observers as a hint toward attentional focus of the gazing person (17). Both aspects can converge in joint attention (JA), which is a central element of social interaction (18) and is established when a person follows the direction of another person’s gaze so that both attend to the same object (19). Engagement in JA is considered to reflect our understanding of another person’s point of view (20). The capacity of JA emerges at 8–12 mo of age (21) and is predictive for later language learning (22) and the development of more advanced social skills such as mental perspective taking (e.g., the attribution of intentions and goals to others, also known as theory of mind) (23). Impaired JA is a core symptom of autism spectrum disorders (24).To test for social gaze differences between CUs and healthy controls (HCs), we applied a paradigm designed to capture the reciprocal and interactive nature of JA (25) (Fig. S1), where participants engage in an online interaction with an anthropomorphic virtual character in real time. Compared with self-initiated nonjoint attention (NJA; i.e., if the counterpart does not follow one’s gaze but rather pays attention to another object), self-initiated JA (i.e., if the counterpart follows one’s own gaze) is perceived as more pleasurable and associated with stronger activation of reward-related brain areas in healthy controls (25). This rewarding nature of JA might underlie the human motivation to engage in the sharing of experiences that emerges in early childhood (22, 25).It has been suggested that changes in social reward processing might underlie alterations in social behavior and cognition in CUs (9). 
Here we conducted two studies assessing JA processing, which constitutes an elegant approach to investigate basic social interaction patterns related to social reward processing (25), in CUs and stimulant-naïve HCs by means of behavioral, psychophysiological, and functional brain-imaging methods. In study I, a large sample of relatively pure CUs with few psychiatric comorbidities (n = 80) and stimulant-naïve HCs (n = 63) completed an interactive JA task (25) while valence and arousal ratings, error scores, reaction time, and pupil size were obtained. Pupil dilation provides an objective index of affective processing (26, 27). Based on the observations obtained in study I, we further investigated the neural correlates of the blunted emotional response to social gaze in subsamples of 16 CUs and 16 HCs using functional magnetic resonance imaging (fMRI) during an abridged version of the paradigm (study II). We hypothesized that altered emotional responses to JA are accompanied by less pronounced activation in reward-related brain areas of CUs.  相似文献   

14.
Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked, if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.

A recent series of reports suggests a link between visual motion processing and social cognition. The human brain appears to fabricate a subtle, false motion signal when looking at another person. The fictitious motion streams in a beam from the other person toward the object of that person’s attention (13). The signal can be observed directly with functional MRI in motion-processing cortical areas, and it can be observed indirectly by how it causes a motion aftereffect in the area of a scene between a face and the object of the face’s attention. The signal, however, is perceptually subthreshold—people are not explicitly aware of it. The functional purpose, if any, of this subthreshold false motion signal is not known, although we speculated it is part of the social toolkit for modeling the attention of others. Because previous studies were correlative—showing a correlation between social cognition and an internally generated motion signal—the causal relationship is not known (4, 5). To establish this new subfield of study in which social cognition taps into preexisting perceptual machinery to model the mind states of others, a direct causal experiment is needed. Here, we provide that test. We asked, if we contaminate a participant’s visual world with a subthreshold motion that streams from another person toward an object, can we manipulate the participant’s perception of that other person’s attention? The results demonstrated a behaviorally meaningful impact of subthreshold motion on social judgments. It explains why the human brain fabricates a motion signal during social cognition. Modeling the attention state of others is a crucial part of social cognition (69), and recruiting the motion-processing system evidently contributes to that model. It may have proved adaptive to co-opt the brain’s existing motion-processing mechanism to encode sources and targets of attention, in essence drawing a quick visual sketch with moving arrows to help keep track of who is attending to what in a complex environment. We suggest that the beam of motion represents, instantaneously, the relationship between an agent and the target of its attention. In this interpretation, the subthreshold motion signal balances two adaptive pressures: it is strong enough to influence social cognition in a meaningful direction while at the same time not so strong that it materially interferes with the normal motion perception of real objects. This type of social-cognitive model, borrowing low-level perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. It is a common belief across time and cultures that attentive gaze comes with a palpable outward flow and that other properties of the mind are linked to specific physical auras and flows. The present findings, therefore, may have larger implications for human psychology and cultural belief.  相似文献   

15.
The attention system in patients with liver cirrhosis has not yet been fully investigated. We therefore studied visual attention orienting in cirrhotic patients without overt hepatic encephalopathy. Seventy cirrhotic patients without overt hepatic encephalopathy (aged 57±10 yr., mean±s.d.) and 55 controls (aged 49±12 yr.) were enrolled. Visual attention orienting was evaluated by a computerized neuropsychological test. The Reitan A test, commonly used to detect subclinical hepatic encephalopathy, was used to evaluate mental performance. Psychometric test performance was reduced in cirrhotics compared to controls (attention test: neutral condition =495±149 vs. 401±98 msec; valid condition =434±110 vs. 398±84 msec; invalid condition =485±146 vs. 392±110 msec; p<0.001; Reitan A test =52±20 vs. 35±11 sec., p<0.001). The attention effect of the cue was found both in controls and cirrhotics; however, it was significantly higher in cirrhotics than in controls (61±111 vs. 33±41 msec; p<0.002). The attention effect was directly correlated with Reitan A test (r=0.23, p=0.05) in cirrhotics. In conclusion, in cirrhotic patients without overt hepatic encephalopathy, visual attention orienting was present and focusing to an indexed location had a higher effect on reaction time compared to controls, possibly because of reduced basal arousal.
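The cue "attention effect" reported above is the per-subject reaction-time cost of an invalid versus a valid cue; a small sketch of how it could be computed from trial-level data follows (column names are assumptions for illustration).

```python
# Sketch of computing the per-subject cue "attention effect" (invalid minus valid RT)
# in a Posner-style orienting task from trial-level data. Column names are assumptions.
import pandas as pd

trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "cue":     ["valid", "invalid", "valid", "invalid"] * 2,
    "rt_ms":   [420, 500, 430, 470, 390, 400, 385, 405],
})
per_subject = trials.pivot_table(index="subject", columns="cue", values="rt_ms", aggfunc="mean")
per_subject["attention_effect"] = per_subject["invalid"] - per_subject["valid"]
print(per_subject)
```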

16.
The variable resolution and limited processing capacity of the human visual system require us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects, they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness.
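A tiny numerical sketch of the sampling heuristic suggested above follows; the multiplicative shading model is an assumption used only to illustrate why bright regions are diagnostic of reflectance.

```python
# Toy illustration of why the brightest parts of an object's luminance distribution are
# especially informative about surface reflectance: under multiplicative shading that only
# attenuates light, the least-shaded (brightest) pixels lie closest to the reflectance-times-
# illuminant ceiling. The shading model below is an assumption, not the paper's renderer.
import numpy as np

rng = np.random.default_rng(5)
reflectance = 0.6                                   # true surface reflectance
illuminant = 1.0                                    # nominal illumination level
shading = rng.beta(2, 2, size=10_000)               # local attenuation from 3D shape/shadows
luminance = reflectance * illuminant * shading      # luminance samples across the object

print("mean-luminance estimate :", round(luminance.mean(), 3))
print("bright-region estimate  :", round(np.percentile(luminance, 99), 3))
print("true reflectance        :", reflectance)
```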

17.
18.
Coordination among social animals requires rapid and efficient transfer of information among individuals, which may depend crucially on the underlying structure of the communication network. Establishing the decision-making circuits and networks that give rise to individual behavior has been a central goal of neuroscience. However, the analogous problem of determining the structure of the communication network among organisms that gives rise to coordinated collective behavior, such as is exhibited by schooling fish and flocking birds, has remained almost entirely neglected. Here, we study collective evasion maneuvers, manifested through rapid waves, or cascades, of behavioral change (a ubiquitous behavior among taxa) in schooling fish (Notemigonus crysoleucas). We automatically track the positions and body postures, calculate visual fields of all individuals in schools of ∼150 fish, and determine the functional mapping between socially generated sensory input and motor response during collective evasion. We find that individuals use simple, robust measures to assess behavioral changes in neighbors, and that the resulting networks by which behavior propagates throughout groups are complex, being weighted, directed, and heterogeneous. By studying these interaction networks, we reveal the (complex, fractional) nature of social contagion and establish that individuals with relatively few, but strongly connected, neighbors are both most socially influential and most susceptible to social influence. Furthermore, we demonstrate that we can predict complex cascades of behavioral change at their moment of initiation, before they actually occur. Consequently, despite the intrinsic stochasticity of individual behavior, establishing the hidden communication networks in large self-organized groups facilitates a quantitative understanding of behavioral contagion.The social transmission of behavioral change is central to collective animal behavior. For many mobile groups, such as schooling fish and flocking birds, social contagion can be fast, resulting in dramatic waves of response (16). Such waves are evident in particular when individuals are under threat of attack from predators (1). Despite the ubiquity and importance of behavioral contagion, and the fact that survival depends on how individual interactions scale to collective properties (2), we still know very little about the sensory basis and mechanism of such coordinated collective response.In the early 20th century, Edmund Selous proposed that rapid waves of turning in large flocks of birds resulted from a direct transference of thoughts among animals: “They must think collectively, all at the same time… a flash out of so many brains” (3). By the mid- 1950s, however, attention had turned from telepathy to synchrony arising from the rapid transmission of local behavioral response to neighbors, with some of the first experimental studies of cascading behavioral change undertaken by Dimitrii Radakov (4). Radakov (4) hand-traced the paths of each fish, frame-by-frame, revealing that the speed of the “wave of agitation” could propagate much faster than the maximum swim speed of individuals. Using similar methodology, Treherne and Foster (5) studied rapid waves of escape response in marine skaters, describing what they saw as “the Trafalgar effect” in reference to the speed of communication, via signaling flags, among ships in the British Navy’s fleet at the battle of Trafalgar in 1805. 
Signals observable at a distance allowed information to travel much faster than the ships could move themselves. Since these studies, similar behavioral cascades have been found in many other organisms (2, 68).Describing general “macroscopic” properties, such as the speed or direction of behavioral waves, is relatively straightforward. Revealing the nature of social interactions by which information propagates among individuals, however, has proven much more difficult. In many situations, such as when a predator attacks a group (1) or when artificial stimuli are used, it is not possible to differentiate between the propagation of behavior via social contagion and the propagation of behavior resulting from direct response to the stimulus, or some combination of both. For example, the sound of an object dropped into the water (9) creates a near-instantaneous acoustic cue typically available to all individuals. This problem is further exacerbated by the fact that response latency associated with direct behavioral response increases with distance to the stimulus (10); thus, the null expectation for asocial response by members of a group to a stimulus would be a fast wave of response (appearing to travel via contagion) from the stimulus outward.In previous studies, therefore, it has not been possible to isolate the social component of rapid collective response. Although simulations can qualitatively reproduce phenomena reminiscent of such waves (6, 11), the underlying assumptions made may be incorrect. For example, a predominant paradigm has been to consider individuals as “self-propelled particles,” which (inspired by collective processes in physical systems) interact with neighbors through social “forces” (1215). In such models, it is usually assumed that they do so with neighbors within a fixed distance [a “metric range” (12, 13)] or with a fixed number of near-neighbors regardless of their distance [a “topological range” (15)]. These assumptions, although mathematically convenient, do not necessarily represent what is convenient, or appropriate, for neural sensing and decision making. Furthermore, it has been shown that these representations poorly reflect the sensory information used during social response in schooling golden shiner fish (16), the focal species of the present work.A major challenge in the study of collective animal behavior is that the pathways of communication are not directly observable. In the study of isolated organisms, it has long been realized that mapping the physical and functional connectivity of neural networks is essential to developing a quantitative and predictive science of how individual behavior is generated. By contrast, in the study of mobile animal groups, the analogous issue of determining the structure of the sensory networks by which interactions, and the resulting group behavior, are mediated remains to be explored. The structure and heterogeneity of networks are known to have a profound impact on contagious processes in general, from spreading neural electrical activity (17), innovations (18), disease (19), or power grid (20) failure. 
In all such scenarios, predicting the magnitude of contagion and identifying influential nodes (either in terms of their capacity to instigate or inhibit widespread contagion) are crucial.Several measures have become prominent predictors of influence in networks, including an individual’s degree (number of connections) (21) and betweenness-centrality (the number of shortest paths that pass through a focal individual) (22). In the study of contagious disease, those individuals who have a large number of social contacts (a high degree), yet whose contacts do not form a tight clique [i.e., have a low clustering coefficient (19)], have the capacity to be “superspreaders,” allowing infection to spread extensively (23). Although disease transmission and social contagion are similar in some respects, there are important differences. Whereas disease transmission can follow contact with a single infected individual (simple contagion), in many social processes, behavioral change depends on reinforcement via multiple contacts [complex contagion (24)].Here, we focus on studying rapid waves of behavioral change in the context of collective evasion, using strongly schooling fish (golden shiners) as an experimental study system. To uncover the process by which this behavioral change spreads, we exploit the fact that shiners, like other fish, exhibit “fast-start” behavior when they perceive an aversive stimulus (e.g., via the visual, acoustic, or mechanosensory system) (25), and occasionally do so in the absence of any external stimulus. Studying fast-start evasion resulting from spontaneous startle events, instead of presenting a stimulus visible to multiple individuals, offers us the opportunity to identify the initiator of escape waves unambiguously and to avoid confounding social and asocial factors.Because fast-start is mediated by a reflex circuit involving a pair of giant neurons, the Mauthner cells (26), it may be expected that individual fish would be unable to establish the causal factor for escape in others (i.e., whether it resulted from a real threat or not). We first test this hypothesis by comparing evasion response resulting from spontaneous startles with evasion response resulting from an experimentally controlled alarming mechanosensory stimulus, and find no difference in response. This result suggests that when responding to fast-start behavior, golden shiners do not differentiate between threat-induced and spontaneous startles, and is consistent with previous experiments on birds (27), and also with theoretical predictions suggesting that the risk of predation makes it simply too costly for vulnerable organisms, like golden shiners, to wait to determine if the escape motion of others is associated with real danger (28).To investigate the mechanism of transmission of evasion behavior, we performed a detailed analysis of 138 spontaneous evasion maneuvers in schools of 150 ± 4 freely swimming fish (body length of  ≈  4.5–5 cm, spontaneous evasion experiments minimally interfered with schooling behavior). Due to the importance of visual cues in this species (16, 29), we reconstruct a planar representation of each fish’s visual field using ray casting to approximate the pathways of light onto the retina, based on automated estimation of the body posture and eye position of each individual (SI Appendix). This representation reveals the underlying visual information available to each fish. 
Because we can determine unambiguously the initiator and first responder (the first and second individuals to startle) of any behavioral cascade, and only social cues are present, we can investigate the nature of social contagion in this system by asking what sensory information is predictive of whether or not an individual will be the first to respond.
This approach allows us to study how individuals translate sensory information to motor response (evasion) and, consequently, to reveal the social cues that inform individual decision making in this behavioral context. Knowing these cues then allows us to reconstruct quantitative interaction networks by which evasion behavior propagates across groups. We address a key question: From the structural properties of the network alone, is it possible to predict whether a given individual’s startle will result in a behavioral cascade, and of what magnitude? We also reveal the general nature of this contagion process and the relationship between spatial position and social influence, and susceptibility to social influence, in large, mobile animal groups.  相似文献
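As a toy illustration of the structural network measures mentioned in this entry (degree, betweenness-centrality, clustering coefficient) and of the "high degree, low clustering" superspreader profile, here is a hedged sketch using networkx on a small, made-up interaction network; the edge list and scoring rule are purely illustrative, not the study's analysis:

```python
import networkx as nx

# Hypothetical toy network: an edge means two individuals can see each other.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5), (5, 6), (4, 6)]
G = nx.Graph()
G.add_edges_from(edges)

degree      = dict(G.degree())              # number of visual neighbours
betweenness = nx.betweenness_centrality(G)  # share of shortest paths through a node
clustering  = nx.clustering(G)              # how tightly a node's neighbours interconnect

# High degree combined with low clustering is the classic "superspreader" profile:
# many contacts that do not already see one another, so a startle can reach
# otherwise weakly connected parts of the group.
spreader_score = {n: degree[n] * (1.0 - clustering[n]) for n in G.nodes}
ranked = sorted(spreader_score.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

In an analysis along the lines described above, such metrics would be computed on networks built from the reconstructed visual fields and then tested as predictors of whether, and how far, an individual's startle cascades.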

19.
Scientists have long proposed that memory representations control the mechanisms of attention that focus processing on the task-relevant objects in our visual field. Modern theories specifically propose that we rely on working memory to store the object representations that provide top-down control over attentional selection. Here, we show that the tuning of perceptual attention can be sharply accelerated after 20 min of noninvasive brain stimulation over medial-frontal cortex. Contrary to prevailing theories of attention, these improvements did not appear to be caused by changes in the nature of the working memory representations of the search targets. Instead, improvements in attentional tuning were accompanied by changes in an electrophysiological signal hypothesized to index long-term memory. We found that this pattern of effects was reliably observed when we stimulated medial-frontal cortex, but when we stimulated posterior parietal cortex, we found that stimulation directly affected the perceptual processing of the search array elements, not the memory representations providing top-down control. Our findings appear to challenge dominant theories of attention by demonstrating that changes in the storage of target representations in long-term memory may underlie rapid changes in the efficiency with which humans can find targets in arrays of objects.
The cognitive and neural mechanisms that tune visual attention to select certain targets are not completely understood despite decades of intensive study (1, 2). Attention can clearly be tuned to certain object features (similar to tuning a radio to a specific station, also known as an attentional set), but how this tuning occurs as we search for certain objects in our environment is still a matter of debate. The prevailing theoretical view is that working memory representations of target objects provide top-down control of attention as we perform visual search for these objects embedded in arrays of distractors (3–7). However, an alternative view is that long-term memory representations play a critical role in the top-down control of attention, enabling us to guide attention based on the more enduring representations of this memory store (8–16). To distinguish between these competing theoretical perspectives, we used transcranial direct-current stimulation (tDCS) to manipulate activity in the brain causally (17), and combined this causal manipulation of neural activity with electrophysiological measurements that are hypothesized to index the working memory and long-term memory representations that guide visual attention to task-relevant target objects.
To determine the nature of the working memory and long-term memory representations that control visual attention during search, we simultaneously measured two separate human event-related potentials (ERPs) (8, 18, 19). The contralateral delay activity (or CDA) of subjects’ ERPs provides a measure of the maintenance of target object representations in visual working memory (20, 21). The CDA is a large negative waveform that is maximal over posterior cortex, contralateral to the position of a remembered item. This large-amplitude lateralized negativity is observed even when nonspatial features are being remembered, and persists as information is held in working memory to perform a task. A separate component, termed the anterior P1, or P170, is hypothesized to measure the build-up of long-term memory representations.
The anterior P1 is a positive waveform that is maximal over frontal cortex and becomes increasingly negative as exposures to a stimulus accumulate traces in long-term memory (8, 19, 22). This component is thought to reflect the accumulation of information that supports successful recognition of a stimulus on the basis of familiarity (23). For example, the anterior P1 amplitude can be used to predict subsequent recognition memory for a stimulus observed hundreds of stimuli in the past (i.e., across minutes to hours of time) (23) (additional information on the critical features of these ERP components is provided in SI Materials and Methods). We used simultaneous measurements of the CDA and anterior P1 to determine the role that working memory and long-term memory representations play in the tuning of attention following brain stimulation.
Our tDCS targeted the medial-frontal region in our first experiments (Fig. 1A) because anodal stimulation of this area results in rapid improvement of simple visual discriminations relative to baseline sham conditions (24). If it is possible to induce rapid improvements in the selection of targets among distractors as humans perform search, then the competing theories of visual attention would account for the accelerated tuning of attention in different ways. The theories that propose working memory representations provide top-down control of visual attention predict that the stimulation-induced improvement in visual search will be due to changes in the nature of the visual working memory representations indexed by the CDA component (Fig. 1 B and C). Specifically, the CDA elicited by the target cue presented on each trial should increase in amplitude, relative to the sham condition, to explain the improvement of attentional selection during search. This type of modulation is expected if working memory-driven theories of attention are correct, based on previous evidence that the CDA is larger on trials of a short-term memory task when performed correctly compared with incorrect trials (20). In contrast, theories that propose long-term memory representations rapidly assume control of attention during visual search predict that the stimulation-induced improvement will be due to changes in the long-term memory representations indexed by the anterior P1 elicited by the target cue presented on each trial. Specifically, we should see the anterior P1 exhibit a more negative potential as search improves following stimulation.
Fig. 1. tDCS model, task, and results of experiment 1. (A) Modeled distribution of current during frontocentral midline anodal tDCS on top and front views of a 3D reconstruction of the cortical surface. (B) Task-relevant cue (green Landolt C in this example) signaled the shape of the target in the upcoming search array. Subjects searched for the same target across a run of three to seven trials. Central fixation was maintained for the trial duration. (C) Representative anterior P1, CDA, and N2pc from repetition 1 in the sham condition show each component’s distinctive temporal and spatial profile, with analysis windows shaded in gray. Mean RTs (D), N2pc amplitudes (E), anterior P1 amplitudes (F), and CDA amplitudes (G) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Error bars are ±1 SEM. Red shading highlights dynamics across trials 1 and 2.
Grand average ERP waveforms from the frontal midline electrode (Fz) synchronized to cue onset are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. The measurement window of the anterior P1 is shaded in gray. (H) Relationship between logarithmic rate parameter enhancements for mean anterior P1 amplitude and RT after anodal stimulation relative to sham.
Each subject completed anodal and sham tDCS sessions on different days, with order counterbalanced across subjects (n = 18). Immediately after 20 min of tDCS over medial-frontal (experiments 1 and 2) or right parietal (experiment 3) regions of the head (see the current-flow model for experiment 1 in Fig. 1A, and additional information about stimulation locations in SI Materials and Methods), we recorded subjects’ ERPs while they completed a visual search task. In this search task, the target was cued at the beginning of each trial (Fig. 1 B and C). The task-relevant cue signaled the identity of the target that could appear in the search array presented a second later. In experiments 1 and 3, the targets and distractors were Landolt-C stimuli, and in experiment 2, they were pictures of real-world objects. A task-irrelevant item was presented with each cue to balance the hemispheric visual input so that the lateralized ERPs that elicit the CDA could be unambiguously interpreted (25). The key manipulation in this task was that the target remained the same for three to seven consecutive trials (length of run randomized) before it was changed to a different object. These target repetitions allowed us to observe attentional tuning becoming more precise across trials.
We found that anodal medial-frontal tDCS in experiment 1 accelerated the rate of attentional tuning across trials, as evidenced by the speed of behavior and by the attention-indexing ERPs elicited by the search arrays (Fig. 1 D and E). First, in the baseline sham condition, we observed that subjects became faster at searching for the target across the same-target runs of trials, as shown by reaction time (RT) speeding (F(2,34) = 6.031, P = 0.007) (additional analyses of the sham condition and analyses verifying the absence of effects on accuracy are provided in Fig. S1A and SI Materials and Methods). However, following anodal stimulation, subjects’ RTs sped up dramatically, such that search RTs reached floor levels within a single trial. This striking causal aftereffect of anodal tDCS was evidenced by a stimulation condition × target repetition interaction on RTs (F(2,34) = 3.735, P = 0.042), with this RT effect being significant between the first two trials of search for a particular Landolt C (F(1,17) = 6.204, P = 0.023) but with no significant change thereafter (P > 0.310). Additionally, by fitting these behavioral RT data with a logarithmic function to model the rate of improvement (9), we found that anodal tDCS significantly increased the rate parameters of RT speeding (F(1,17) = 5.097, P = 0.037).
Consistent with the interpretation that tDCS changed how attention selected the targets in the search arrays, we found that the N2-posterior-contralateral (N2pc) component, an index of the deployment of covert attention to the possible target in a search array (26), showed a pattern that mirrored the single-trial RT effects (F(1,17) = 4.792, P = 0.043) (Fig. 1E; N2pc waveforms are provided in Fig. S1A). However, other ERP components indexing lower level perceptual processing or late-stage response selection during search were unchanged by the tDCS (Fig. S1 C and D and Table S1).
Our findings demonstrate that the brain stimulation changed only the deployment of visual attention to targets in the search arrays and did not change the operation of any other cognitive mechanism we could measure during the visual search task. Thus, by delivering electrical current over the medial-frontal area, we were able to causally accelerate the speed with which subjects tuned their attention to select the task-relevant objects.
To determine whether the tDCS-induced attentional improvements were caused by changes in working memory or long-term memory mechanisms of top-down control, we examined the putative neurophysiological signatures of visual working memory (i.e., the CDA) and long-term memory (i.e., the anterior P1) elicited by the target cues. Given the rapid tuning of attention following tDCS relative to sham, we might expect the flexible working memory system to underlie this effect. Contrary to this intuition, we found that the rapid, one-trial improvement in attentional tuning following medial-frontal tDCS was mirrored by changes in the putative neural index of long-term memory but left the putative neural index of working memory unchanged (Fig. 1 F and G). Fig. 1F shows that the accelerated effects of attentional tuning caused by anodal stimulation were preceded by a rapid increase in negativity of the anterior P1 across same-target trials, mirroring the rapid, single-trial improvement in RT and in the N2pc as the search array was analyzed. This effect was confirmed statistically by a significant stimulation condition × target repetition interaction on the anterior P1 amplitude (F(2,34) = 3.797, P = 0.049), and most dramatically between the first two trials of search (F(1,17) = 5.816, P = 0.027), with no significant pairwise changes in anterior P1 amplitude thereafter (P > 0.707). Logarithmic model fits showed that the rate parameters of the anterior P1 significantly increased after anodal tDCS relative to the more gradual attentional tuning observed in the sham condition (F(1,17) = 5.502, P = 0.031; anterior P1 analyses from the sham condition are described in SI Materials and Methods). Despite these causal changes in anterior P1 activity, neither the amplitude of the CDA (F(2,34) = 0.669, P = 0.437) nor its rate parameters (F(1,17) = 1.183, P = 0.292) significantly differed between stimulation conditions, showing the selectivity of the medial-frontal tDCS effect for the putative neural metric of long-term memory (CDA waveforms are provided in Fig. S1B). We note that the absence of a stimulation-induced CDA increase is not due to ceiling effects: the single target cue gave us ample room to measure such a boost of the CDA, given that, without brain stimulation, this memory load is far from eliciting ceiling amplitude levels for this component (20).
If the better long-term memory representations indexed by the anterior P1 were the source of the improved search performance, then the size of the stimulation-induced boost of the anterior P1 elicited by the cue should be predictive of the search performance that followed a second later. Consistent with this prediction, we found that an individual subject’s anterior P1 amplitude change across the same-target runs following medial-frontal stimulation was highly predictive of the accelerated rate at which that subject searched through the visual search array that followed (r(18) = 0.764, P = 0.0002) (Fig. 1H).
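The rate-parameter analysis referred to repeatedly above can be illustrated with a brief sketch: fit a two-parameter logarithmic curve to mean RT (or component amplitude) as a function of target repetition and compare the fitted rate between sham and anodal conditions. The RT values below are invented placeholders; the published analysis fit individual subjects' data and tested the rate parameters statistically:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(rep, intercept, rate):
    """RT as a function of target repetition: intercept + rate * ln(repetition)."""
    return intercept + rate * np.log(rep)

# Hypothetical mean RTs (ms) across same-target repetitions 1..5 for each condition.
reps = np.arange(1, 6)
rt_sham   = np.array([720.0, 690.0, 668.0, 655.0, 648.0])
rt_anodal = np.array([700.0, 640.0, 636.0, 634.0, 633.0])

(b0_sham, rate_sham), _ = curve_fit(log_model, reps, rt_sham)
(b0_anod, rate_anod), _ = curve_fit(log_model, reps, rt_anodal)

# A more negative rate means faster speeding across repetitions.
print(f"sham rate:   {rate_sham:.1f} ms per log-unit of repetition")
print(f"anodal rate: {rate_anod:.1f} ms per log-unit of repetition")
```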
Thus, the ERPs elicited by the target cues ruled out the working memory explanation of the rapid changes in attentional tuning we observed, and were consistent with the hypothesis that changes in the nature of the long-term memory representations that control attention were the source of this dramatic improvement.
In experiment 2, we replicated the pattern of findings from experiment 1 using a search task in which the targets and distractors were pictures of real-world objects (Fig. 2 and Fig. S2). These results demonstrate the robustness and reliability of the pattern of effects shown in experiment 1. Specifically, brain stimulation resulted in attention being rapidly retuned to the new targets after one trial, as evidenced by RTs hitting the floor by the second trial in a run. Again, this change in RT was mirrored by stimulation changing the anterior P1, and not the CDA, consistent with accounts that posit an important role for long-term memory in the guidance of attention.
Fig. 2. Task and results of experiment 2. (A) The task in experiment 2 was identical to that of experiment 1 with the exception that Landolt-C stimuli were replaced with real-world objects. Mean RTs (B), N2pc amplitudes (C), anterior P1 amplitudes (D), and CDA amplitudes (E) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Error bars are ±1 SEM. Red shading highlights dynamics across trials 1 and 2. Grand average ERP waveforms from the frontal midline electrode (Fz) synchronized to cue onset are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. The measurement window of the anterior P1 is shaded in gray. (F) Relationship between logarithmic rate parameter enhancements for mean anterior P1 amplitude and RT after anodal stimulation relative to sham.
Next, we sought to provide converging evidence for our conclusion that the stimulation was changing subjects’ behavior by changing the nature of subjects’ long-term memory, consistent with previous functional interpretations of the anterior P1. So far, we have drawn conclusions from our analyses across the fairly short runs of same-target trials. However, we next looked at the learning that took place across the entire experimental session, lasting almost 3 h. If our interpretation that the anterior P1 underlies accelerated attentional tuning is correct, then we should see that the anterior P1 is sensitive to the cumulative effects of learning across the entire experimental session and that these long-term effects change following stimulation. To assess the cumulative effects of learning across these long experimental sessions, we examined how behavior, the anterior P1, and the CDA changed across the beginning, middle, and end of experiments 1 and 2 (Fig. 3 and SI Materials and Methods); that is, we averaged the same-target runs together in the first third, second third, and final third of sessions across all of our subjects. Fig. 3 shows the learning we observed across these long sessions. RTs were slowest at the beginning of the experiment, when subjects were faced with a new target, but as subjects accumulated experience with the set of eight possible targets, RTs at the beginning of the same-target runs became progressively faster. This accumulation of experience across the entire session that sped RT was mirrored by systematic changes in the amplitude of the anterior P1.
The anterior P1 became progressively more negative across the experiment, as we would expect if the magnitude of the negativity were indexing the quality (i.e., strength or number) of the long-term memories for these targets that accumulated across the entire experiment. In contrast, the CDA showed no change across the entire experiment, indicating that the role of working memory in updating the target at the beginning of the same-target runs does not change with protracted learning. For example, it is likely that working memory representations were reactivated to help reduce proactive interference from the target representations built up during the previous run of trials, consistent with influential theoretical proposals (27). Our medial-frontal tDCS boosted these learning effects measured with the anterior P1 and search RTs while leaving the CDA unchanged, consistent with our interpretation of the findings across the shorter same-target runs. Thus, this cumulative learning across the entire experimental session allowed us to observe how the dynamics of the memory representations underlying the focusing of attention evolved over the long term. These results lend further support to the hypothesis that contributions from long-term memory are driving the causal boost of attentional tuning we observed following brain stimulation.
Fig. 3. Within-session dynamics of experiments 1 and 2. Mean RT, anterior P1 amplitude, and CDA amplitude as a function of target repetitions binned according to the first third (black), middle third (red), and last third (green) of runs, collapsed across experiments 1 and 2. Logarithmic model fits are shown for sham (dashed line) and anodal (solid line) tDCS conditions. Error bars are ±1 SEM.
To determine whether the effects of experiments 1 and 2 were specific to medial-frontal stimulation, in experiment 3 we stimulated the posterior parietal region in a new group of subjects (order of anodal and sham conditions counterbalanced, n = 18) (Fig. 4A). This region of the dorsal visual stream plays a role in memory (28) and in generating top-down attentional control signals (29), so it provides a useful contrast with our medial-frontal stimulation, which appeared to influence attentional selection by changing the long-term memory representations. We specifically targeted the right parietal region because previous studies show that disrupting activity in right parietal cortex can influence attention (30, 31).
Fig. 4. tDCS model and results of experiment 3. (A) Modeled distribution of current during right parietal anodal tDCS on top and rear views of a 3D reconstruction of the cortical surface. Mean RTs (B), N2pc amplitudes (C), anterior P1 amplitudes (D), and CDA amplitudes (E) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Bar graphs show data collapsed across target repetitions for each stimulation condition based on whether the target color appeared in the left or right visual hemifield. Error bars are ±1 SEM. (F) Mean N1 amplitudes are illustrated as in B–E. The waveforms are search array-locked grand average potentials at lateral occipital sites (OL/OR) contralateral to right (blue) and left (red) hemifield target colors, shown for sham (dashed line) and anodal (solid line) conditions. OL, occipital left; OR, occipital right. *P < 0.05.
We found that, unlike medial-frontal stimulation, right parietal tDCS had no effect on the overall tuning of attention or on the memory representations controlling search performance. Fig. 4 B–E shows the overlap between stimulation conditions for the RTs (no stimulation condition × target repetition interaction: F(2,34) = 0.029, P = 0.955) and the amplitudes of the N2pc (F(2,34) = 0.139, P = 0.807), CDA (F(2,34) = 0.814, P = 0.439), and anterior P1 (F(2,34) = 0.393, P = 0.663) across target repetitions. Because subjects again searched for the same target across the runs of trials in experiment 3, we did observe main effects of target repetition on RTs (F(2,34) = 6.190, P = 0.015) and on the amplitudes of the N2pc (F(2,34) = 4.053, P = 0.045), CDA (F(2,34) = 5.292, P = 0.024), and anterior P1 (F(2,34) = 6.320, P = 0.006). These effects were due to the steady speeding of RTs, declining CDA amplitude, and increasing amplitudes of the anterior P1 and N2pc across same-target trials. The effects of target repetition indicate that the roles played by working memory and long-term memory in tuning attention across trials in the baseline sham condition were unchanged following right parietal stimulation (Fig. 4 B–E and Figs. S3D and S4).
Given the lateralized application of tDCS in experiment 3, we examined the data based on whether the target appeared in the left or right visual field. We found that parietal stimulation caused lateralized, bidirectional effects on search performance. Relative to sham, subjects were faster at searching for targets after anodal stimulation, but only on trials in which the target color appeared contralateral (i.e., in the left visual field) to the location of the stimulating electrode on the head (i.e., over the right hemisphere) (Fig. 4B). This effect was evidenced by a stimulation condition × target color laterality interaction on search RTs (F(1,17) = 12.098, P = 0.003) and a main effect of stimulation condition on contralateral search RTs (F(1,17) = 6.014, P = 0.025). In contrast, RTs were slower when target colors appeared ipsilateral (i.e., in the right visual hemifield) with respect to the location of tDCS (F(1,17) = 4.276, P = 0.054) (Fig. 4B). These results suggest that parietal stimulation facilitated and impeded overall search behavior depending on the location of the target in the visual field.
We found that the lateralized, bidirectional effects of parietal tDCS on search performance were caused by a direct influence on perceptual processing, not by a change in the memory representations controlling attention. The amplitude of the posterior N1 component, a neural index of perceptual processing (32), was significantly modulated by stimulation condition and in a pattern mirroring that of the behavior (stimulation condition × target color laterality interaction: F(1,17) = 10.494, P = 0.005; stimulation condition main effect, contralateral: F(1,17) = 4.755, P = 0.044; stimulation condition main effect, ipsilateral: F(1,17) = 4.573, P = 0.047) (Fig. 4F and Fig. S3A). In contrast, our indices of the memory representations of the targets and of the deployment of attention were not significantly changed by tDCS [i.e., no stimulation condition × target color laterality interaction: N2pc (F(1,17) = 0.041, P = 0.843), CDA (F(1,17) = 0.107, P = 0.748), anterior P1 (F(1,17) = 0.169, P = 0.686)] (Fig. 4 C–E and Fig. S3 B–D).
In sum, our parietal stimulation protocol did not change the nature of the memory representations controlling attention but directly influenced the perceptual processing of the objects in the search array. These observations were evidenced by lateralized changes in the early visual ERPs and in the behavioral responses to the task-relevant items contralateral vs. ipsilateral to the stimulation. Thus, the effects observed in experiments 1 and 2 are not a ubiquitous pattern observed following stimulation of any cognitive control structure. Instead, when we stimulated the posterior parietal region of the visual stream, we observed changes in early visual responses of the brain and similarly spatially mapped patterns of performance.
Our findings from experiments 1 and 2, that stimulation over medial-frontal areas can rapidly improve attentional selection of targets, may seem surprising because the medial-frontal cortex is not commonly thought to be a crucial node in the network of regions that guide attention (29, 33). This region is most frequently discussed as critical for the higher level monitoring of task performance, response conflict, and prediction error (34, 35). However, a variety of studies across species and methods have found connections between regions of medial-frontal cortex and both attention and memory processes. First, human neuroimaging research shows that the cingulate opercular network, including anterior cingulate and presupplementary cortex, is engaged during the implementation of a task set, visuospatial attention, and episodic memory (36–38). Second, studies using animal models show that attentional selectivity in the visual domain appears to reside in dorsomedial areas of prefrontal cortex (39), such as the anterior cingulate gyrus. Third, both the dorsomedial and right dorsolateral prefrontal cortices respond strongly in memory recognition tasks, with specific activity bordering the anterior cingulate at or near Brodmann’s areas 6, 8, and 32 (40), including supplementary and presupplementary motor areas. The right dorsolateral prefrontal cortex, which also appeared to be in the path of our current-flow modeling, has been causally linked to human long-term memory processes (41). Given the set of regions in this path, the specificity of our empirical observations is striking. However, future work is clearly needed to dissect the contributions of the group of medial-frontal and medial-prefrontal regions within the path of the current used here.
Our results present evidence from causal manipulations of the healthy human brain suggesting that the rapid reconfiguration of the top-down control of visual attention can be carried out by long-term memory. This conclusion seems counterintuitive, given that the active storage of objects in working memory can strongly control attention (7, 18, 42) and that the dominant theories of attention focus exclusively on the role of working memory in guiding attention (3–6). The present findings do not suggest that working memory representations do not control attention across the short term; indeed, we observed the neural index of storage of the target in working memory concurrently with the large changes in the putative index of long-term memory. The critical implication of the present findings is that the rapid improvements in attentional control following brain stimulation were most closely related to our ERP measure of long-term memory and not working memory.
These results are surprising to us, given that effects of long-term memory on attentional control are typically observed in tasks in which improvements evolve slowly across protracted training (10, 12–14, 16, 43), or even a lifetime of semantic associations (11). Here, we show that the time course of improvement need not be diagnostic of the type of memory representation involved.
Our results can also be interpreted within theoretical models that take a broader view of top-down control and do not rely on a conceptual dichotomy between the working memory and long-term memory processes that guide attention (44). Neuroimaging research has identified multiple control mechanisms that configure downstream processing consistent with behavioral goals. Most relevant here is the network consisting of the anterior insula (also referred to as the frontal operculum) and dorsal anterior cingulate cortex (also referred to as the medial superior frontal cortex). This network is thought to integrate information over protracted time scales, in an iterative manner, similar to the dynamics and functional properties of the anterior P1. Further, the cingulate opercular network carries various critical control signals, including the selection and maintenance of task goals and the making and monitoring of choices (38, 45, 46). It is possible that our medial-frontal stimulation changed the functioning of this control network, causing the improvements we observed in attentional control.
Finally, our findings provide evidence from causal manipulations of the human brain to support the slowly growing view that the nature of top-down attentional control involves the interplay of different types of memory representations (8, 15, 47–49). Moving forward, we believe that such a view moves theories of attention nearly into register with models of learning, automaticity, and skill acquisition (9, 50–52). Ideally, this perspective will serve to unify, rather than further hyperspecialize, theories of information processing in the brain.  相似文献
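For readers unfamiliar with the lateralized ERP components featured throughout this entry (CDA, N2pc), the core measurement is a contralateral-minus-ipsilateral difference wave averaged over a time window. The sketch below shows that computation on fabricated data; the channel indices, sampling rate, and analysis window are placeholders rather than the study's actual parameters:

```python
import numpy as np

def lateralized_amplitude(erp, contra_ch, ipsi_ch, times, window):
    """Mean contralateral-minus-ipsilateral voltage in a time window.

    erp:    (n_channels, n_times) trial-averaged voltages (e.g., cue-locked for a
            CDA-like measure, search-array-locked for an N2pc-like measure)
    times:  (n_times,) sample times in seconds
    window: (start, end) in seconds; illustrative values only
    """
    mask = (times >= window[0]) & (times <= window[1])
    diff = erp[contra_ch] - erp[ipsi_ch]      # lateralized difference wave
    return diff[mask].mean()

# Toy usage with random numbers standing in for one subject's averaged ERP.
rng = np.random.default_rng(1)
times = np.arange(-0.2, 1.0, 0.004)           # 250 Hz sampling, -200 to 1000 ms
erp = rng.standard_normal((32, times.size))   # 32 channels of fake data
cda_like = lateralized_amplitude(erp, contra_ch=20, ipsi_ch=25,
                                 times=times, window=(0.4, 0.9))
print(f"mean lateralized amplitude: {cda_like:.2f} (arbitrary units)")
```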

20.
Social interactions are fundamental for human behavior, but the quantification of their neural underpinnings remains challenging. Here, we used hyperscanning functional MRI (fMRI) to study information flow between the brains of human dyads during real-time social interaction in a joint attention paradigm. In a hardware setup enabling immersive audiovisual interaction of subjects in linked fMRI scanners, we characterize cross-brain connectivity components that are unique to interacting individuals, identifying information flow between the sender’s and receiver’s temporoparietal junction. We replicate these findings in an independent sample and validate our methods by demonstrating that cross-brain connectivity relates to a key real-world measure of social behavior. Together, our findings support a central role of human-specific cortical areas in the brain dynamics of dyadic interactions and provide an approach for the noninvasive examination of the neural basis of healthy and disturbed human social behavior with minimal a priori assumptions.
Human social interactions have likely shaped brain evolution and are critical for development, health, and society. Defining their neural underpinnings is a key goal of social neuroscience. Interacting dyads, the simplest and most fundamental form of human interaction, have been examined with behavioral setups that used real movement interactions during communication in real time as a proxy (1–4), providing mathematical models representing human interaction, goal sharing, mutual engagement, and coordination. To identify the neural systems supporting these behaviors, neuroimaging would be the tool of choice, but studying dyadic interactions with this method is both experimentally and analytically challenging. Consequently, the neural processes underlying human social interactions remain incompletely understood.
Experimentally, studying dyads with neuroimaging technology that allows only one participant per scanner poses challenges that have been addressed in the literature in one of two ways. First, the audiovisual experiences of human social contact have been simulated using stimuli such as photographs, recorded videos, or computerized avatars in the absence of human interaction (5–7), or, recently, immersive audiovisual linkups have been used with one of the two participants being scanned (8, 9). Second, pioneering neuroimaging experiments have coupled two scanner sites over the Internet, a setup called hyperscanning, enabling subjects to observe higher-level behavioral responses, such as choices made to accept or reject an offer, in real time while in the scanners (10, 11). In the current study, we aimed to combine the advantages of these experimental approaches by enabling two humans to see (and possibly hear) each other in a hyperscanning framework, enabling an immersive social interaction while both participants’ brains are imaged. To do so, we implemented a setup with delay-free data transmission and precisely synchronized data acquisition, in addition to a live video stream provided between scanner sites during the entire session (Fig. 1A).
While real-time video transmission is not an indispensable requirement for the study of all forms of social interaction, it is a naturalistic presentation method for visual social stimuli in the scanner, and it is likely helpful for the study of interactions involving changes in eye gaze and facial expressions, although the advantages of the precise temporal synchronization are partially mitigated by the low temporal resolution of the blood oxygen level-dependent (BOLD) response and the sampling frequency of functional MRI (fMRI) experiments.
Fig. 1. Hardware environment and analysis routine for fMRI hyperscanning. (A) Illustration of the hyperscanning setup as implemented for the present studies. (B) Schematic overview of the analysis routine for the examination of information flow between interacting human brain systems in hyperscanned fMRI data. Letters correspond to the numbering of in-text analysis steps.
Analytically, extracting and testing for information flow in the resulting joint neuroimaging data are not straightforward. In this paper, we describe a general analysis framework for this problem that makes only minimal a priori assumptions. Importantly, using permutation testing, we also aim to address the open question of whether there is anything neurally specific, or even unique, about human dyadic interaction compared with a situation in which no real-time information is exchanged.
In the current paper, we study joint attention (JA), a basic yet fundamental mechanism of social interaction that humans use to coordinate and communicate intentions and information, as well as to guide others’ attention in a nonverbal way, especially through eye gaze (12). JA is of considerable interest for both cognitive and clinical neuroscience because it arises early in development, preceding and shaping the emergence of symbolic communication and higher-order social functions such as representational theory of mind (13, 14). Disturbances of JA have been identified in developmental disorders with prominent social disturbances, such as autism and attention deficit hyperactivity disorder, but also in schizophrenia (see, e.g., refs. 13 and 15).
To investigate JA, we used a paradigm in which information on a target location is given to one subject only (the sender of information), but both subjects (sender and receiver) must respond correctly by indicating the target location on a button response device. Thus, information needs to be transferred from one subject to another nonverbally while fMRI data are acquired, resulting in a flow of information between two interacting brain systems (interaction phase, INT). To determine interaction-based aspects of the fMRI data, control phases without interaction were added to the task protocol (NoINT). We studied a discovery sample (n = 26) to identify the main neural parameters of information flow and, for confirmation, a larger independent replication sample (n = 50). Combined, these data were used to validate the approach and to relate the resulting parameters to socially relevant psychometric measures. Based on the previous neuroimaging literature on JA, we expected to see information flow involving the temporoparietal junction and medial prefrontal cortex. However, to keep methodological assumptions in this new field minimal, we decided not to include this as an a priori hypothesis in our analysis.  相似文献
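A rough sketch of the permutation logic described above, in which connectivity computed for true interacting dyads is compared against a null distribution built from re-paired (non-interacting) sender–receiver combinations. A plain Pearson correlation between two region-of-interest time series stands in for whatever directed information-flow measure is actually used, and all data, dimensions, and labels below are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_brain_corr(sender_ts, receiver_ts):
    """Pearson correlation between one sender ROI time series and one receiver ROI time series."""
    return np.corrcoef(sender_ts, receiver_ts)[0, 1]

# Hypothetical data: one ROI time series per brain (n_dyads x n_timepoints).
n_dyads, n_tp = 26, 300
senders   = rng.standard_normal((n_dyads, n_tp))
receivers = 0.2 * senders + rng.standard_normal((n_dyads, n_tp))  # weak toy coupling

observed = np.mean([cross_brain_corr(senders[i], receivers[i]) for i in range(n_dyads)])

# Permutation null: re-pair senders with receivers from *different* dyads, so any
# residual correlation cannot reflect real-time interaction within a dyad.
n_perm = 2000
null = np.empty(n_perm)
for p in range(n_perm):
    shuffled = rng.permutation(n_dyads)
    while np.any(shuffled == np.arange(n_dyads)):   # forbid true pairings in the null
        shuffled = rng.permutation(n_dyads)
    null[p] = np.mean([cross_brain_corr(senders[i], receivers[shuffled[i]])
                       for i in range(n_dyads)])

p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"mean within-dyad r = {observed:.3f}, permutation p = {p_value:.4f}")
```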
