Similar Documents
 Retrieved 20 similar documents (search time: 32 ms)
1.
Effects of aging on the useful field of view   (Cited by: 3; self-citations: 0, external citations: 3)
Previous research has shown that the useful field of view (UFOV) is a useful tool in predicting driving ability, and the UFOV also seems to decline with age. The goals of the current study were, first, to examine UFOV changes systematically as a function of age (15–84 years) and, second, to determine the effect of dividing attention on the UFOV. Our results show that deterioration in the UFOV begins early in life (by 20 years of age, or younger). This deterioration is best conceptualized as a decrease in the efficiency with which observers can extract information from a cluttered scene, rather than as a shrinking of the field of view per se. The diminished efficiency among elderly observers is exacerbated when conditions require the division of attention between central and peripheral tasks.

2.
Difficult search tasks are known to involve attentional resources, but the spatiotemporal behavior of attention remains unknown. Are multiple search targets processed in sequence or in parallel? We developed an innovative methodology to solve this notoriously difficult problem. Observers performed a difficult search task during which two probes were flashed at varying delays. Performance in reporting probes at each location was taken as a measure of attentional deployment. By solving a second-degree equation, we determined the probability of probe report at the most and least attended probe locations on each trial. Because these values differed significantly, we conclude that attention was focused on one stimulus or subgroup of stimuli at a time, and not divided uniformly among all search stimuli. Furthermore, this deployment was modulated periodically over time at ∼7 Hz. These results provide evidence for a nonuniform spatiotemporal deployment of attention during difficult search.

Visual search tasks (e.g., finding a target embedded among similar-looking distractors) have long been used to investigate the deployment of attention (1–6). Certain tasks are performed “efficiently,” in which case the search time and accuracy are independent of the number of distractors. Other tasks are more difficult, or “inefficient,” characterized by an increase in reaction times (RTs) and/or a decrease in accuracy with the number of distracting elements, a result typically attributed to the need to allocate attention (4–7). For more than 30 y now, since the pioneering study of Treisman and Gelade in 1980 (4), two opposing theories of attention deployment during difficult search have persisted. Attention could either be allocated nonuniformly to the stimuli, such that in some cases it would switch sequentially from one stimulus (or group of stimuli) to another (4, 5), or be divided uniformly to process all of the stimuli in parallel, but with a drop in efficiency for increasing distractor numbers (2, 8–10). To date, neither of these two theories has been unequivocally disproved. Overall performance in the search task itself is not directly informative, because both theories predict an increase in RT with the number of distractors (11, 12). One alternative is to use briefly flashed probes to test for the deployment of attention at a specific location and time. With two probes, it should be possible to differentiate parallel and sequential processing strategies: The strict parallel theory predicts that both probes should receive equal amounts of attention, whereas the sequential theory predicts that one of the probes will receive more attention than the other. Of course, the most attended probe may not be the same one on every trial, but a simple mathematical manipulation, the solution of a quadratic equation, allows us to access this information despite the need to average performance over trials.

In recent years, a second, related debate has arisen in the literature concerning the temporal behavior of attention. It has been proposed that attention samples visual stimuli periodically rather than continuously (13–18). This question is connected to the uniform vs. nonuniform debate in that the nonuniform (or sequential) model of attention processing maps rather naturally onto a periodic sampling of visual information (with the periodicity reflecting the switching between stimuli). No such relation exists for the parallel uniform model, making it more naturally compatible with continuous processing (although, of course, not incompatible with periodic sampling arising for independent reasons).

Consequently, in this study, we asked whether attention processing during a difficult search task is uniform or nonuniform, both in space and in time. We used a difficult (attention-demanding) visual search task consisting of finding a letter T among letter L’s (four stimuli). After a varying delay, we probed two of the four stimulus locations (Fig. 1) and computed performance in reporting both probes correctly (PBOTH) or neither probe correctly (PNONE). Using the mathematical manipulation described in Methods, we determined that attention was not divided equally among the four search item positions, but focused on one stimulus or subgroup of stimuli at a time. Moreover, we found that the deployment of attention was modulated periodically at theta frequency (∼7 Hz). We conclude that in this difficult search task, attention was deployed nonuniformly both in space and in time.

Fig. 1. Experimental procedure. One to 2 s after pressing the space bar, the search array appears for 30–200 ms depending on the randomly chosen SOA for the trial. Observers report the presence or absence of the target stimulus T among distracting L letters. After the variable SOA (30–450 ms relative to search array onset), two probe letters appear randomly for 80 ms at two of the four search array locations. For probe onset SOAs greater than 200 ms, an additional empty screen is presented between the search task and the probe detection task (the fixation point is maintained). In other words, if the SOA was shorter than 200 ms, the interstimulus interval (ISI) was zero; otherwise, the ISI was greater than zero. Masks follow probe stimuli for 200 ms. After mask offset, observers first report the presence or absence of the T among L’s, and then the identity of the two probe stimuli by selecting letters from a list, using the computer mouse. A trial ends when observers click on the end button.
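The "second-degree equation" used to recover per-location report probabilities can be sketched directly. Assuming each probe is reported independently, with probability p1 at the most attended location and p2 at the least attended one, the observed PBOTH = p1·p2 and PNONE = (1−p1)(1−p2) make p1 and p2 the roots of a quadratic. This is a minimal illustrative sketch under that independence assumption, not the authors' code:

```python
import math

def recover_probe_probabilities(p_both, p_none):
    """Recover p1 (most attended) and p2 (least attended) from the
    observed probabilities of reporting both probes or neither.
    Assumes independent reports, so that
        p_both = p1 * p2
        p_none = (1 - p1) * (1 - p2)
    which gives p1 + p2 = 1 + p_both - p_none; p1 and p2 are then
    the roots of x**2 - (p1 + p2) * x + p1 * p2 = 0."""
    s = 1 + p_both - p_none        # p1 + p2
    prod = p_both                  # p1 * p2
    disc = s * s - 4 * prod
    if disc < 0:
        raise ValueError("inconsistent probabilities: no real solution")
    root = math.sqrt(disc)
    return (s + root) / 2, (s - root) / 2
```

If attention were divided uniformly, the two recovered values would coincide (p1 = p2); a significant difference between them is the signature of nonuniform deployment reported above.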

3.
Macaques are often used as a model system for invasive investigations of the neural substrates of cognition. However, 25 million years of evolution separate humans and macaques from their last common ancestor, and this has likely substantially impacted the function of the cortical networks underlying cognitive processes such as attention. We examined the homology of frontoparietal networks underlying attention by comparing functional MRI data from macaques and humans performing the same visual search task. Although there are broad similarities, we found fundamental differences between the species. First, humans have more dorsal attention network areas than macaques, indicating that in the course of evolution the human attention system has expanded relative to that of macaques. Second, potentially homologous areas in the dorsal attention network have markedly different biases toward representing the contralateral hemifield, indicating that the underlying neural architecture of these areas may differ in the most basic of properties, such as receptive field distribution. Third, despite clear evidence of the temporoparietal junction node of the ventral attention network in humans as elicited by this visual search task, we did not find functional evidence of a temporoparietal junction in macaques. None of these differences were the result of differences in training, experimental power, or anatomical variability between the two species. The results of this study indicate that macaque data should be applied to human models of cognition cautiously, and demonstrate how evolution may shape cortical networks.

Selective attention operates in at least two functional modes: stimulus-driven (bottom-up) and goal-directed (top-down) control of attention (1). A recently proposed model by Corbetta et al., based on human neuroimaging and stroke studies, divides the control of attention between two cortical networks that underlie these modes of attention: the dorsal attention network, comprising the human frontal eye fields (FEF) and intraparietal sulcus (IPS), and the ventral attention network, centered on an area at the temporoparietal junction (TPJ), located on the right-hemisphere caudal supramarginal gyrus (Brodmann area 40, or area PFG/PF) and posterior superior temporal gyrus (Brodmann area 22) (2–4). This functionally defined ventral attention network area is referred to as the TPJ by Corbetta et al. (2) and as TPJa by Mars et al. (5). Here, we refer to the functionally defined area as the TPJ, and reserve “temporoparietal junction” for the anatomical region in both species. According to this model, the dorsal attention network is activated when the subject sustains attention on a cued spatial location (6). The TPJ is activated only by the presentation of a behaviorally relevant stimulus that captures attention, with larger activations evoked by stimuli that are unexpected or cause reorienting of attention, and is deactivated when distracting stimuli are presented during sustained attention (6, 7). The conjunction of deactivation during sustained attention and activation during target detection functionally identifies the TPJ in event-related paradigms (6). Damage to the TPJ decreases the ability to detect and orient attention to novel stimuli, especially in the left hemifield, a condition known as visuospatial neglect (3). The dorsal and ventral attention networks interact with each other and with the visual cortex (2). During top-down or goal-directed control of attention, the dorsal attention network is activated, enhancing the selected stimulus in visual cortex, and the TPJ is deactivated, suppressing the orienting of attention to potentially distracting stimuli. When a behaviorally relevant stimulus is presented, however, the TPJ is activated, causing attention to be focused on this stimulus (2). These roles of the TPJ in the control of attention remain open to debate, partly because little is known about the TPJ’s connectivity or neuronal response properties.

The macaque has been used as a model for studying attention with invasive techniques that complement neuroimaging. There are established maps of cortico-cortical connectivity (8) and many electrophysiological studies of how these areas interact (9). As with all model systems, interspecies differences in these cortical systems are likely to exist. For example, the dorsal attention networks are assumed to be homologous between the two species, but basic facts, such as the number of areas within the dorsal network in each species, remain unknown. Furthermore, a ventral attention system has been functionally characterized only in humans, and the underlying architectonics and connections remain largely unknown. The macaque model might be used to examine these features, but in macaques the ventral attention system has not been functionally isolated and an anatomical homolog is unclear. Areas PF/7b or PFG seem most likely to be anatomically homologous to the human TPJ, but functionally these areas are more involved in polysensory integration and motor functions than in the control of attention (4, 10, 11). A recent functional connectivity study (12) found an area in the region of the macaque temporoparietal junction that shares a similar pattern of connectivity and right-hemisphere lateralization with the human TPJ. However, this potential homolog covers multiple areas (7a, temporal parietal occipital caudal, and retroinsular) with no unifying function (13). Moreover, another recent study found the homolog of a human area adjacent to the TPJ involved in social cognition to be located in the mid-superior temporal sulcus of the macaque, far from the expected location at the macaque temporoparietal junction (14). Many details about the anatomy and physiology of the ventral attention network in either species remain unknown (3, 5).

To address the similarity of attention systems, we quantitatively compared humans and macaques using functional MRI (fMRI). Both species attended to and searched through a rapid serial visual presentation (RSVP) stream of images to detect a previously memorized target image. The task was designed to separate activation of visual areas and the dorsal attention network from deactivation of the TPJ during search (which combines sustained covert attention and visual processing of the RSVP stimuli), and activation of the TPJ by target detection (Fig. S1A) (2, 6). We used the same macaque data to report on topographic organization in the lateral intraparietal area (LIP) previously (15).

Fig. S1. Paradigm and behavior. (A) After memorizing a target object, subjects were instructed to fixate on the central gray point while signaling detection of the target with a hand response device. Streams in each trial appeared in one of the locations outlined in yellow (Upper Left) in relation to the gray fixation point; the location was chosen randomly for each trial. (B–D) Comparison of behavioral performance between two monkey subjects (red) and the human subjects (blue). Green line in D represents the inner edge of peripherally presented stimuli.
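The contralateral-hemifield bias compared across potentially homologous areas is commonly summarized with a standard laterality index; the following is a minimal sketch of that convention (illustrative only, not the study's analysis code):

```python
def contralateral_bias_index(contra, ipsi):
    """Standard laterality index for hemifield preference.
    contra / ipsi: mean responses (e.g., fMRI beta weights) to stimuli
    in the contralateral and ipsilateral hemifields.
    Returns +1 for a purely contralateral response, 0 for no hemifield
    preference, and -1 for a purely ipsilateral response."""
    return (contra - ipsi) / (contra + ipsi)
```

Comparing such indices for candidate homologs (e.g., human vs. macaque FEF or IPS areas) is one way a "markedly different bias toward representing the contralateral hemifield" can be quantified.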

4.
Attention alters perception across the visual field. Typically, endogenous (voluntary) and exogenous (involuntary) attention similarly improve performance in many visual tasks, but they have differential effects in some tasks. Extant models of visual attention assume that the effects of these two types of attention are identical and consequently do not explain differences between them. Here, we develop a model of spatial resolution and attention that distinguishes between endogenous and exogenous attention. We focus on texture-based segmentation as a model system because it has revealed a clear dissociation between both attention types. For a texture for which performance peaks at parafoveal locations, endogenous attention improves performance across eccentricity, whereas exogenous attention improves performance where the resolution is low (peripheral locations) but impairs it where the resolution is high (foveal locations) for the scale of the texture. Our model emulates sensory encoding to segment figures from their background and predict behavioral performance. To explain attentional effects, endogenous and exogenous attention require separate operating regimes across visual detail (spatial frequency). Our model reproduces behavioral performance across several experiments and simultaneously resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and foveal impairments by exogenous attention. Overall, we unveil a computational dissociation between each attention type and provide a generalizable framework for predicting their effects on perception across the visual field.

Endogenous and exogenous spatial attention prioritize subsets of visual information and facilitate their processing without concurrent eye movements (1–3). Selection by endogenous attention is goal-driven and adapts to task demands, whereas exogenous attention transiently and automatically orients to salient stimuli (1–3). In most visual tasks, both types of attention typically improve visual perception similarly [e.g., acuity (4–6), visual search (7, 8), perceived contrast (9–11)]. Consequently, models of visual attention do not distinguish between endogenous and exogenous attention (e.g., refs. 12–19). However, stark differences also exist. Each attention type differentially modulates neural responses (20, 21) and fundamental properties of visual processing, including temporal resolution (22, 23), texture sensitivity (24), sensory tuning (25), contrast sensitivity (26), and spatial resolution (27–34).

The effects of endogenous and exogenous attention are dissociable during texture segmentation, a visual task constrained by spatial resolution [reviews (1–3)]. Whereas endogenous attention optimizes spatial resolution to improve the detection of an attended texture (32–34), exogenous attention reflexively enhances resolution even when detrimental to perception (27–31, 34). Extant models of attention do not explain these well-established effects.

Two main hypotheses have been proposed to explain how attention alters spatial resolution. Psychophysical studies ascribe attentional effects to modulations of spatial frequency (SF) sensitivity (30, 33). Neurophysiological (13, 35, 36) and neuroimaging (37, 38) studies bolster the idea that attention modifies the spatial profiles of neural receptive fields (RFs) (2). Both hypotheses provide qualitative predictions of attentional effects but do not specify their underlying neural computations.

Differences between endogenous and exogenous attention are well established in segmentation tasks, which thus provide an ideal model system for uncovering their separate roles in altering perception. Texture-based segmentation is a fundamental process of midlevel vision that isolates regions of local structure to extract figures from their background (39–41). Successful segmentation hinges on the overlap between the visual system’s spatial resolution and the levels of detail (i.e., SF) encompassed by the texture (39, 41, 42). Consequently, the ability to distinguish between adjacent textures varies as resolution declines toward the periphery (43–46). Each attention type differentially alters texture segmentation, demonstrating that their effects shape spatial resolution [reviews (1–3)].

Current models of texture segmentation do not explain performance across eccentricity and the distinct modulations by attention. Conventional models treat segmentation as a feedforward process that encodes the elementary features of an image (e.g., SF and orientation), transforms them to reflect the local structure (e.g., regions of similarly oriented bars), and then pools across space to emphasize texture-defined contours (39, 41, 47). Few of these models account for variations in resolution across eccentricity (46, 48, 49) or for endogenous (but not exogenous) attentional modulations (18, 50). All others postulate that segmentation is a “preattentive” (42) operation whose underlying neural processing is impervious to attention (39, 41, 46–49).

Here, we develop a computational model in which feedforward processing and attentional gain contribute to segmentation performance. We augment a conventional model of texture processing (39, 41, 47). Our model varies with eccentricity and includes contextual modulation within local regions in the stimulus via normalization (51), a canonical neural computation (52). The defining characteristic of normalization is that an individual neuron is (divisively) suppressed by the summed activity of neighboring neurons responsive to different aspects of a stimulus. We model attention as multiplicative gains [attentional gain factors (15)] that vary with eccentricity and SF. Attention shifts sensitivity toward fine or coarse spatial scales depending on the range of SFs enhanced.

Our model is image-computable, which allowed us to reproduce behavior directly from the grayscale images used in psychophysical experiments (6, 26, 27, 29–33). The model explains three signatures of texture segmentation hitherto unexplained within a single computational framework (Fig. 1): 1) the central performance drop (CPD) (27–34, 43–46) (Fig. 1A), that is, the parafoveal advantage in segmentation over the fovea; 2) the improvements in the periphery and impairments at foveal locations induced by exogenous attention (27–32, 34) (Fig. 1B); and 3) the equivalent improvements across eccentricity by endogenous attention (32–34) (Fig. 1C).

Fig. 1. Signatures of texture segmentation. (A) CPD. Shaded region depicts the magnitude of the CPD. Identical axis labels are omitted in B and C. (B) Exogenous attention modulation. Exogenous attention improves segmentation performance in the periphery and impairs it near the fovea. (C) Endogenous attention modulation. Endogenous attention improves segmentation performance across eccentricity.

Whereas our analyses focused on texture segmentation, our model is general and can be applied to other visual phenomena. We show that the model predicts the effects of attention on contrast sensitivity and acuity, i.e., in tasks in which both endogenous and exogenous attention have similar or differential effects on performance. To preview our results, model comparisons revealed that normalization is necessary to elicit the CPD and that separate profiles of gain enhancement across SF (26) generate the effects of exogenous and endogenous attention on texture segmentation. A preferential high-SF enhancement reproduces the impairments by exogenous attention, due to a shift in visual sensitivity toward details too fine to distinguish the target at foveal locations. The transition from impairments to improvements in the periphery results from exogenous attentional gain gradually shifting to lower SFs that are more amenable to target detection. Improvements by endogenous attention result from a uniform enhancement of SFs that encompass the target, optimizing visual sensitivity for the attended stimulus across eccentricity.
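The normalization-with-attentional-gain computation described above can be illustrated with a toy calculation in the spirit of the Reynolds-Heeger normalization model of attention. The paper's image-computable model additionally varies drives and gains with eccentricity and SF; this minimal sketch omits that and shows only the core divisive operation:

```python
def normalized_response(drives, gains, sigma=1.0):
    """Divisive normalization with multiplicative attentional gain.
    Each unit's stimulus drive is scaled by its attentional gain, then
    divided by the pooled (summed) gain-weighted drive of all units in
    its neighborhood plus a semisaturation constant sigma."""
    excitatory = [g * d for g, d in zip(gains, drives)]
    pool = sum(excitatory)
    return [e / (sigma + pool) for e in excitatory]
```

With uniform gains, all units are suppressed equally; boosting the gain of one unit both enhances its own response and, through the larger normalization pool, suppresses its neighbors, which is the mechanism by which a gain profile over SF can shift sensitivity toward finer or coarser scales.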

5.
Chemotherapy is thought to cause cognitive deficits in some breast cancer patients, but the relative effects on older and younger breast cancer patients are unknown, and the effects of chemotherapy on everyday cognitive tasks have not been examined. Thirty-eight female breast cancer survivors (3 to 45 months post-chemotherapy) were compared with 55 age-matched control participants. Participants completed the Useful Field of View (UFOV), a computerized test of visual information processing that has been shown to decline with age and has been used to predict older adults' driving performance. Older chemotherapy patients performed more poorly than controls on the UFOV speed-of-processing component, but not on the other two components; they also performed more poorly than younger chemotherapy patients. On the divided attention and selective attention components of the UFOV, older participants performed more poorly than younger participants, but there were no significant differences between chemotherapy patients and controls. These findings are explained in terms of brain changes thought to be caused by chemotherapy, which might have the greatest impact on older adults, who are already at risk for behavioral slowing.

6.
People with dementia can be overstimulated by too many patterns and designs in one space. The purpose of this research was to determine how the visual perception of elderly people with dementia changes in response to the textures of building materials, and to examine which kinds of texture might cause them to experience visual hallucinations. A total of ten male subjects with mild dementia participated in the experiment. Visual perception was simulated using a highly sensitive LCD projector that showed pictures of building materials on a screen. The Clock Drawing Test (CDT) was administered before and after the simulation to assess changes in the subjects' visual perception. The results show that the subjects' visual perception was altered more by character textures and textures with regular shapes than by the other texture typologies. Some subjects may have experienced visual hallucinations while looking at the textures during the experiment, as they described visual images that did not exist. These data on building materials can serve as a reference for building managers and designers seeking to prevent behavioral problems among elderly people with dementia.

7.
The aim of this study was to determine the prevalence of psoriatic arthritis (PsA) according to the Classification of Psoriatic Arthritis (CASPAR) criteria, the Assessment of SpondyloArthritis international Society (ASAS) peripheral and axial SpA criteria, and the New York criteria for AS. The first 100 patients consecutively attending a psoriasis dermatology clinic were assessed. Demographic and clinical data were collected; all patients were questioned and examined for joint manifestations. Rheumatoid factor and radiographs of the hands, feet, cervical spine, and pelvis (for the sacroiliac joints) were obtained. X-rays were read independently by two experienced observers in blind fashion. Patients with objective joint manifestations, both axial and peripheral, were evaluated for fulfillment of the CASPAR, ASAS peripheral and axial, and New York criteria. Median age was 48 years; 93% of patients had psoriasis vulgaris and 56% had nail involvement. Seventeen patients had peripheral arthritis: nine mono/oligoarticular and eight polyarticular. Median arthritis duration was 8 years. Seventeen percent of patients fulfilled the CASPAR and ASAS peripheral criteria, 6% the New York criteria, and 5% the ASAS axial criteria. Patients who met the CASPAR criteria showed a significantly longer psoriasis duration than those without arthritis (median 16 vs 10 years, p = 0.02) and a higher frequency of nail involvement (88.2 vs 49.4%, p = 0.003). Five patients (29.4%) fulfilled the ASAS axial criteria; all of them had peripheral involvement: mono/oligoarticular in three patients and polyarticular in two. Patients with peripheral and axial involvement presented a significantly higher frequency of erythrodermic psoriasis compared with the other patients (35.3 vs 1.2%, p = 0.0006, and 80 vs 16.7%, p = 0.02). The prevalence of PsA by the CASPAR and ASAS peripheral criteria was 17%. Five percent of patients met the ASAS axial criteria, while 6% met the New York criteria. Of note, a few patients without signs or symptoms of arthritis had radiological changes, both axial and peripheral, precluding a proper classification.
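Prevalence figures such as the 17% above (17 of 100 patients) are usually most informative with a confidence interval attached. As an illustration only (not part of the original study), the Wilson score interval for a binomial proportion can be computed as:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    z = 1.96 gives an approximate 95% interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half
```

For 17/100, this gives roughly an 11-26% interval, a reminder of how wide prevalence estimates from 100-patient samples can be.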

8.
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. 
These neural changes might serve as the neural substrate for the transfer of perceptual learning.

Perceptual learning, an enduring improvement in the performance of a sensory task resulting from practice, has been widely used as a model for studying experience-dependent cortical plasticity in adults (1). However, at present there is no consensus on the nature of the neural mechanisms underlying this type of learning. Perceptual learning is often specific to the physical properties of the trained stimulus, leading to the hypothesis that the underlying neural changes occur in sensory coding areas (2). Electrophysiological and brain imaging studies have shown that visual perceptual learning alters neural response properties in primary visual cortex (3, 4) and extrastriate areas including V4 (5) and MT+ (middle temporal/medial superior temporal cortex) (6), as well as object-selective areas in the inferior temporal cortex (7, 8). An alternative hypothesis proposes that perceptual learning is mediated by downstream cortical areas responsible for attentional allocation and/or decision-making, such as the intraparietal sulcus (IPS) and anterior cingulate cortex (9, 10).

Learning is most beneficial when it enables generalized improvements in performance with other tasks and stimuli. Although specificity is one of the hallmarks of perceptual learning, transfer of learning to untrained stimuli and tasks does occur, to a greater or lesser extent (2). For example, visual perceptual learning of an orientation task involving clear displays (a Gabor patch) also improved performance of an orientation task involving noisy displays (a Gabor patch embedded in a random-noise mask) (11). Transfer of perceptual learning to untrained tasks indicates that the neuronal plasticity accompanying perceptual learning is not restricted to brain circuits that mediate performance of the trained task, and that perceptual training may lead to more widespread and profound plasticity than previously believed. However, this issue has rarely been investigated. Almost all studies concerned with the neural basis of perceptual learning have used the same task and stimuli for training and testing. One exception is a study conducted by Chowdhury and DeAngelis (12). It is known that learning of fine depth discrimination in a clear display can transfer to coarse depth discrimination in a noisy display (13). Chowdhury and DeAngelis (12) examined the effect of fine depth discrimination training on the causal contribution of macaque MT to coarse depth discrimination. MT activity was essential for coarse depth discrimination before training; after training, however, inactivation of MT had no effect on coarse depth discrimination. This result is striking, but the neural substrate of the learning transfer was not revealed.

Here, we performed a transcranial magnetic stimulation (TMS) experiment and a functional magnetic resonance imaging (fMRI) experiment, seeking to identify the neural mechanisms involved in the transfer of learning from coherent motion (i.e., a motion stimulus containing 100% signal) to a task involving noisy motion (i.e., a motion stimulus containing only 40% signal and 60% noise, that is, 40% coherent motion). By testing with stimuli other than the trained stimulus, we uncovered much more profound functional changes in the brain than expected. Before training, V3A and MT+ were the dominant areas for the processing of coherent and noisy motion, respectively. Learning modified their inherent functional specializations, whereby V3A superseded MT+ as the dominant area for the processing of noisy motion after training. This change in functional specialization, involving key areas within the cortical motion-processing network, served as the neural substrate for the transfer of motion perceptual learning.
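The multivariate pattern analysis described is, at its core, a cross-validated decoder over response patterns. Below is a minimal leave-one-out nearest-centroid sketch on synthetic trial patterns; the data, function name, and classifier choice are illustrative assumptions, not the study's actual MVPA pipeline:

```python
def nearest_centroid_loocv(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier.
    patterns: list of feature vectors (e.g., voxel responses per trial)
    labels:   condition label for each pattern (e.g., motion direction)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for i, (test, true_label) in enumerate(zip(patterns, labels)):
        # Build per-class centroids from all trials except the held-out one.
        centroids = {}
        for lab in set(labels):
            train = [p for j, (p, l) in enumerate(zip(patterns, labels))
                     if j != i and l == lab]
            centroids[lab] = [sum(v) / len(v) for v in zip(*train)]
        pred = min(centroids, key=lambda lab: sq_dist(test, centroids[lab]))
        correct += (pred == true_label)
    return correct / len(patterns)
```

Running such a decoder separately on each visual area's voxel patterns, before and after training, is one way a shift in which area decodes noisy motion "most accurately" can be quantified.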

9.
Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by correlating the noise at different points in time with behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. Almost all human visually guided behavior relies on the selective uptake of information, due to sensory and cognitive limitations. On the sensory side, the sampling of visual input by the retinal mosaic of photoreceptors becomes increasingly sparse and irregular away from central vision (1).
In addition, fewer cortical neurons are devoted to the analysis of peripheral visual information (cortical magnification) (2, 3). Humans and other animals with so-called foveated visual systems have evolved gaze-shifting mechanisms to overcome these limitations. Saccadic eye movements serve to rapidly and efficiently deploy gaze to objects and regions of interest in the visual field. Sampling the environment appropriately with gaze is the starting point of adaptive visual-motor behavior (4, 5). Studies have shown that saccadic eye movements are guided by analysis of information in the visual periphery up to 80–100 ms before saccade execution (6–8). However, active vision typically requires humans not only to analyze information in the visual periphery to decide where to fixate next (peripheral selection), but also to analyze the information at the current fixation location (foveal analysis). Not much is known about how foveal analysis and peripheral selection are coordinated and interact. In this regard, we need to know (i) whether and to what extent foveal analysis and peripheral selection are constrained by a common bottleneck or limited capacity resource, and (ii) how time within a fixation is allocated to these two tasks. Capacity limitations are ubiquitous in human visual processing. There is a long-standing debate on the extent to which visual attention may be focused on different locations in the visual field (9–11). If foveal analysis and peripheral selection both require a spatial attentional “spotlight,” the coordination of these two tasks will be constrained by the way in which this spotlight can be configured. For example, the size of the spotlight may vary with the processing difficulty of foveal information, as in tunnel vision (12, 13). Similarly, in both reading (14) and scene-viewing (15), a reduction in the perceptual span has been reported with higher foveal load.
A high foveal processing load can also prevent distraction from irrelevant visual information in the periphery (16). These findings suggest that there may be interactions between foveal analysis and peripheral selection (17, 18), in that the gain on peripheral information processing may vary according to the foveal processing load. A useful way to think of the coordination between foveal analysis and peripheral selection is to picture the temporal profile of information extraction over the course of a fixation period. Fig. 1 shows some schematic profiles, or integration windows, for foveal and peripheral information (shown in black and gray, respectively). Fig. 1 A–C chart the progression in the extent to which the extraction of peripheral visual information is contingent upon the completion of foveal analysis—from completely contingent (Fig. 1A, serial), through partly contingent (Fig. 1B, cascaded), to completely parallel (Fig. 1C). This temporal relation between foveal analysis and peripheral selection is a core assumption of models of eye movement control in reading (14, 19–21) and other visual-motor domains (22, 23). Finally, Fig. 1D demonstrates a hypothetical tradeoff between an increase in foveal load and a decrease in peripheral gain. In this example, the foveal integration window is extended to reflect the higher processing load. The duration of the peripheral window is also extended, but by a smaller amount, and its amplitude is reduced. Note that the accuracy of peripheral selection will be determined by both the amplitude and the duration of the integration window. Fig. 1. Hypothetical temporal weighting functions for foveal analysis and peripheral selection. (A) Strict serial model: peripheral information is analyzed only once foveal processing is complete. (B) A weaker version of the serial model in which peripheral information is processed once some criterion amount of foveal analysis is complete.
(C) Parallel model in which foveal analysis and peripheral selection start together. In A–C, the time window for peripheral selection is shorter than that for foveal analysis, reflecting the primary importance of the latter. (D) Manipulation of foveal load. As foveal processing difficulty is increased, more time is taken to analyze the foveal information. The time window for peripheral selection extends as well, but by a smaller amount. In addition, the gain of peripheral processing is lower, resulting in attenuation of the amplitude of the weighting function. A potentially powerful way to identify the coordination between foveal analysis and peripheral selection is then to estimate these underlying integration windows directly, under conditions of variable foveal processing load. Identifying these windows is far from trivial; it involves determining what information is being processed, from where, and at what point in time during an individual fixation. We have developed a dual-task noise classification approach (24–26) that allows us to identify what information is used by the observer for what “task” over the brief time scale of a single fixation. Using this method, we show that the uptake of information for foveal analysis and peripheral selection proceeds independently and in parallel.
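The noise classification logic can be illustrated with a toy simulation: inject independent noise on every frame of a fixation, record a binary response driven by a hidden temporal integration window, and recover the window's shape by correlating per-frame noise with behavior. All numbers and variable names below are illustrative, not the authors' stimuli or analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 20000, 10

# Hidden temporal integration window (unknown to the analyst):
# strong uptake early in the fixation, tailing off before its end.
true_window = np.array([1.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0])

# Injected per-frame feature noise, plus internal (decision) noise.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_frames))
evidence = noise @ true_window + rng.normal(0.0, 1.0, n_trials)
response = (evidence > 0).astype(float)  # binary behavioral report

# Temporal classification image: correlate each frame's noise
# with the response; the profile recovers the window's shape.
kernel = np.array([np.corrcoef(noise[:, t], response)[0, 1]
                   for t in range(n_frames)])
print(kernel.round(2))
```

Frames that the hidden window weights heavily correlate strongly with the response; frames after uptake has stopped correlate near zero, which is how the "peripheral processing stopped before saccade onset" conclusion can be read off such a kernel.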

10.
In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions, interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for longer than 200 ms. Emotional faces also affected saccade velocity and reduced the pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical–subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together, these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits. An important issue in understanding the processing of significant affective stimuli is the extent to which these stimuli compete with ongoing tasks in normal healthy individuals. It is generally agreed that there exists attentional bias toward emotional faces in humans as well as other primates (1, 2).
However, it is uncertain whether emotional faces trigger attentional capture, which we define as an immediate shift of visual attention at the expense of other stimuli. Affective reactions can be evoked with minimal cognitive processing (3). For instance, presentation of emotional faces under reduced awareness by masking activates the amygdala (4, 5) and produces pupillary (6) and skin conductance responses (7). An immediate response to affectively salient stimuli is thought possible through a direct subcortical pathway via the amygdala, bypassing the primary sensory cortex (8). This activity could then influence allocation of attentional resources in the cortex (9, 10). Consistent with this, emotional faces capture attention in visual search (11–13), even when irrelevant to the task at hand. Based on these findings, one would predict that emotional faces will interfere with a primary attention task. However, shifts of attention to emotional stimuli are likely not obligatory in every circumstance. Some studies indicate that capture only occurs in low perceptual load conditions (14, 15). Functional MRI (fMRI) has shown that the amygdala response to affective stimuli is modulated by task demands (16, 17). Also, capture is not entirely stimulus driven, but may be dependent on overlap between the current attentional set and the stimulus that does the capturing (18, 19). In a larger sense, goals and expectations influence capture (20). These factors are more cognitive in nature and possibly involve cortical processing. It has been proposed that the subcortical affective response depends on cortical processing (21). When cortical resources are fully taken up by a primary task, affective stimuli may not get any processing advantage and therefore may not interfere with the task. In the present study, three monkeys detected a subtle color change in an attended target while faces of conspecifics were presented as distracters (Fig. 1).
Attentional capture of the faces was measured in terms of reduction in sensitivity for detecting the color change. We found that face distracters influenced monkeys’ performance and reaction time (RT) and affected their eye velocity and pupillary dilatation, especially when the faces had a threat expression. Importantly, these influences were dependent on presentation duration of the face images, suggesting that shifts of attention toward the faces were not immediate. Fig. 1. Methods. (A) Illustration of screen events in the task. (B) Timeline of screen events with possible times that image onset and target/distracter change could occur. (C) Examples of facial expressions of one individual in the stimulus set. From left to right: neutral, threat, fear grin, and lip smack. Fear grin and lip smack were combined. Although different viewpoints predict different roles for cortical and subcortical pathways, there seems to be general agreement that preexisting bias affects how attention is allocated. For example, anxiety is often associated with bias toward fear-relevant information (22–26). To manipulate our subjects’ bias toward faces, we administered the hormone oxytocin (OT). It has been shown that inhalation of OT increases attention to eyes (27, 28) and ability to read emotions from facial expressions (29). In monkeys, OT was shown to blunt social vigilance (30). We found that OT reduced interference on our task, indicating a link between oxytocinergic circuits and attentional circuits.
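The "reduction in sensitivity" above is the standard signal-detection index d′, computed from hit and false-alarm rates. A minimal sketch of the arithmetic (the rates below are made-up illustrations, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical numbers: detection sensitivity drops when a
# threat-face distracter is on screen for longer than 200 ms.
print(round(d_prime(0.90, 0.10), 2))  # 2.56 (no distracter)
print(round(d_prime(0.75, 0.15), 2))  # 1.71 (threat-face distracter)
```

In practice, hit and false-alarm rates of exactly 0 or 1 are nudged inward (e.g., by half a trial) before the z-transform, since `inv_cdf` is undefined at the extremes.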

11.
Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but, surprisingly, not from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest and fastest responses, and the responses were not reduced by low spatial frequency filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates’ heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage. Snakes have long been of interest to us above and beyond the attention we give to other wild animals. The attributes of snakes and our relationships with them have been topics of discussion in fields as disparate as religion, philosophy, anthropology, psychology, primatology, and herpetology (1, 2). Ochre and eggshells dated to as early as 75,000 y ago and found with cross-hatched and ladder-shaped lines (3, 4) resemble the dorsal and ventral scale patterns of snakes. As the only natural objects with those characteristics, snakes may have been among the first models used in representational imagery created by modern humans. Our interest in snakes may have originated much further back in time; our primate lineage has had a long and complex evolutionary history with snakes as competitors, predators, and prey (1).
The position of primates as prey of snakes has, in fact, been argued to have constituted strong selection favoring the evolution of the ability to detect snakes quickly as a means of avoiding them, beginning with the earliest primates (2, 5). Across primate species, ages, and (human) cultures, snakes are indeed detected visually more quickly than innocuous stimuli, even in cluttered scenes (6–11). Physiological responses reveal that humans are also able to detect snakes visually even before becoming consciously aware of them (12). Although the visual system must be involved in the preferential ability to detect snakes rapidly and preconsciously or automatically, the neurological basis for this ability has not yet been elucidated, perhaps because an evolutionary perspective is rarely incorporated in neuroscientific studies. Our study helps to fill this interdisciplinary gap by investigating the responses of neurons to snakes and other natural stimuli that may have acted as selective pressures on primates in the past. Here, we identify a mechanism for the visual system’s involvement in rapid snake detection by measuring neuronal responses in the medial and dorsolateral pulvinar to images of snakes, faces of monkeys, hands of monkeys, and geometric shapes in a catarrhine primate, Macaca fuscata. The medial and the dorsal part of the traditionally delimited lateral pulvinar are distinctive in primates, with no homologous structures found in the visual systems of nonprimate mammals (13), and the medial pulvinar appears to be involved in visual attention and fast processing of threatening images (14). Based on this and other indirect evidence, the Snake Detection Theory (2) hypothesized that these primate-specific regions of the pulvinar evolved in part to assist primates in detecting and thus avoiding snakes. If true, then we would expect snake-sensitive neurons to be found in those regions.
Here we present unique neuroscientific evidence in support of the Snake Detection Theory (2).
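The low spatial frequency manipulation mentioned above is commonly implemented as Fourier-domain low-pass filtering of the stimulus images. A sketch under the assumption of a Gaussian amplitude envelope (the cutoff and resolution values are illustrative, not the study's parameters):

```python
import numpy as np

def low_pass(image, cutoff_cpd, pix_per_deg):
    """Gaussian low-pass filter in the Fourier domain.

    `cutoff_cpd` is the standard deviation of the Gaussian envelope
    in cycles per degree; `pix_per_deg` is the display resolution.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pix_per_deg   # cycles/degree along y
    fx = np.fft.fftfreq(w) * pix_per_deg   # cycles/degree along x
    radius2 = fy[:, None] ** 2 + fx[None, :] ** 2
    envelope = np.exp(-radius2 / (2 * cutoff_cpd ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * envelope))

img = np.random.default_rng(0).random((64, 64))
blurred = low_pass(img, cutoff_cpd=1.0, pix_per_deg=32)
print(blurred.std() < img.std())  # True: high frequencies removed
```

Because the envelope equals 1 at zero frequency, the filtered image keeps the original mean luminance while fine detail (high spatial frequencies) is attenuated; the finding is that pulvinar responses to snakes survive this degradation.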

12.
Background/Study Context: Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. Methods: The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Results: Visual search performance did not differ between UFOV impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than unimpaired individuals when searching for a conjunction of features. Conclusion: The results suggest that UFOV decline in normal aging is associated with less efficient conjunction search. This finding suggests that the underlying cause of UFOV decline may be an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines.

13.
Voluntary control of attention promotes intelligent, adaptive behaviors by enabling the selective processing of information that is most relevant for making decisions. Despite extensive research on attention in primates, the capacity for selective attention in nonprimate species has never been quantified. Here we demonstrate selective attention in chickens by applying protocols that have been used to characterize visual spatial attention in primates. Chickens were trained to localize and report the vertical position of a target in the presence of task-relevant distracters. A spatial cue, the location of which varied across individual trials, indicated the horizontal, but not vertical, position of the upcoming target. Spatial cueing improved localization performance: accuracy (d′) increased and reaction times decreased in a space-specific manner. Distracters severely impaired perceptual performance, and this impairment was greatly reduced by spatial cueing. Signal detection analysis with an “indecision” model demonstrated that spatial cueing significantly increased choice certainty in localizing targets. By contrast, error-aversion certainty (certainty of not making an error) remained essentially constant across cueing protocols, target contrasts, and individuals. The results show that chickens shift spatial attention rapidly and dynamically, following principles of stimulus selection that closely parallel those documented in primates. The findings suggest that the mechanisms that control attention have been conserved through evolution, and establish chickens—a highly visual species that is easily trained and amenable to cutting-edge experimental technologies—as an attractive model for linking behavior to neural mechanisms of selective attention.The capacity to select particular locations or stimuli for differential analysis and decision making is essential for any animal to behave intelligently in a complex environment. 
It follows that neural mechanisms that enable this capacity must have appeared early in evolution (1). In humans and nonhuman primates, selective attention enables such adaptive behavior by selecting, from all available information, the information most relevant for making decisions (2, 3). However, little is known about whether the capacity for selective attention exists in nonprimate vertebrate species. Studies that were intended to measure selective attention in nonprimate species have produced inconclusive results for several reasons. First, much of the previous work has inferred the capacity for selective attention based on selective learning or selective reporting of specific cue features, both in birds (4) and in rodents (5). For example, in highly cited work on feature-based attention (4), pigeons were reinforced for pecking on targets that contained combinations of two features (e.g., color and shape). In later trials, when the features were presented individually, they pecked almost exclusively on targets with only one of the two features (e.g., color, ignoring shape). The results were interpreted as indicating that the birds had attended selectively, during training, to only one of the two features. However, a follow-up study provided evidence that questioned this interpretation (6). The controversy highlights serious caveats with using tasks that do not distinguish selection for attention from selective learning of cue features or a bias for subsequent responses. Second, previous studies did not distinguish the effects of attention from those of motor preparation. For example, pigeons were shown to be able to anticipate the location of an upcoming target based either on the statistics of target presentation (7) or on the validity of a spatial cue (8).
However, these studies measured the effects of cueing only in terms of faster reaction times to the cued location, and faster reaction times can result simply from planning a motor response to a target’s location based on the advance information provided by the cue. Third, and most importantly, even studies that measured the effects of attention on behavior in terms of percent correct did not distinguish perceptual (d′) improvements at the cued location from increases in choice (or response) bias toward the cued location (3). A fundamental requirement in the design of cued spatial attention tasks is that spatial cues must convey only information that is orthogonal to the task. Otherwise, perceptual effects could be confounded by cue-induced changes in bias toward the cued location (3). This study measures the effects of top-down spatial attention on perceptual performance in chickens. Extensive research in primates has generated specific experimental protocols for characterizing and quantifying the effects of attention (3). Quantitative metrics that are diagnostic of attention are (i) improvements in perceptual accuracy and (ii) shortening of reaction times. Using these metrics, the benefits of attention have been shown in primates to vary dramatically with the strength of a target stimulus (9, 10), its location relative to the locus of attention (11, 12), and the presence and strength of distracting stimuli (13, 14). In this study, we demonstrate these same benefits of attention in chickens. A priori, the properties of visual attention might be expected to be substantially different between chickens (an afoveate species with laterally positioned eyes) and humans and nonhuman primates (foveate species with frontal eyes).
However, our results document remarkable similarities between chickens and primates regarding the rules that govern selective attention as well as competitive stimulus selection, the core component of attention that determines the information that gains access to working memory (15). The similarities include deleterious effects of bottom-up distracters that increase systematically with distracter strength, perceptual benefits of top-down spatial cueing that diminish the deleterious effects of bottom-up distracters, and improvements in choice certainty with spatial cueing. Our findings are highly relevant to studies that seek to understand the effects of visual attention on sensory processing in a variety of lateral-eyed, afoveate, nonprimate mammalian model species, such as mice and rats. In addition, given the evolutionary distance (>250 million years) and enormous ethological differences between chickens and primates, these similarities suggest that mechanisms for mediating competitive stimulus selection for attention appeared early in vertebrate evolution and that they are conserved across phylogeny. Thus, our findings strongly encourage the application of comparative neuroscience to the study of mechanisms of attention.

14.
Experience-dependent plasticity is a fundamental property of the brain. It is critical for everyday function, is impaired in a range of neurological and psychiatric disorders, and frequently depends on long-term potentiation (LTP). Preclinical studies suggest that augmenting N-methyl-d-aspartate receptor (NMDAR) signaling may promote experience-dependent plasticity; however, a lack of noninvasive methods has limited our ability to test this idea in humans until recently. We examined the effects of enhancing NMDAR signaling using d-cycloserine (DCS) on a recently developed LTP EEG paradigm that uses high-frequency visual stimulation (HFvS) to induce neural potentiation in visual cortex neurons, as well as on three cognitive tasks: a weather prediction task (WPT), an information integration task (IIT), and an n-back task. The WPT and IIT are learning tasks that require practice with feedback to reach optimal performance. The n-back assesses working memory. Healthy adults were randomized to receive DCS (100 mg; n = 32) or placebo (n = 33); groups were similar in IQ and demographic characteristics. Participants who received DCS showed enhanced potentiation of neural responses following repetitive HFvS, as well as enhanced performance on the WPT and IIT. Groups did not differ on the n-back. Augmenting NMDAR signaling using DCS therefore enhanced activity-dependent plasticity in human adults, as demonstrated by lasting enhancement of neural potentiation following repetitive HFvS and accelerated acquisition of two learning tasks. Results highlight the utility of considering cellular mechanisms underlying distinct cognitive functions when investigating potential cognitive enhancers. Experience-dependent neuroplasticity is the capacity of the brain to change in response to environmental input, learning, and use. It is a fundamental property of the brain and is critical for everyday functioning.
It allows us to learn and remember patterns, predict and obtain reward, and refine and accelerate response selection for adaptive behavior (1). During development, experience-dependent plasticity interacts with genetic programming to organize neurons into the structurally and functionally connected circuits that characterize a mature brain. Although this basic circuitry is established by early adulthood, experience-dependent plasticity continues to shape connectivity within these circuits such that important inputs and action outputs are represented by larger and more coordinated populations of neurons. Given that these changes are the primary means through which the adult brain enables new behavior and that such plasticity is impaired in a range of neurological and psychiatric disorders (2), identifying manipulations that can harness experience-dependent plasticity offers exciting possibilities. Here, we tested whether augmenting N-methyl-d-aspartate receptor (NMDAR) activity could enhance experience-dependent plasticity in the adult human brain. The classical mechanism underlying experience-dependent plasticity is long-term potentiation (LTP) or depression (LTD) of synaptic strength. The brain encodes external and internal events through spatiotemporal patterns of activity generated by populations of neurons. Lasting changes in synaptic strength via LTP and LTD shape these patterns of activity and are thought to be the primary cellular mechanism for representing new information in the brain (1, 3). In animals, LTP is identified electrophysiologically as an enduring increase in postsynaptic cellular currents using single-cell or local field recordings and is observed following high-frequency electrical stimulation or new learning. In mature animals, LTP has been observed at subcortical and sensory cortex synapses, including in the amygdala, hippocampus, and striatum, as well as in visual, auditory, and somatosensory cortex (1–6).
Although a lack of noninvasive methods has traditionally limited our ability to investigate LTP in humans, recent research indicates that protocols using high-frequency, repetitive presentation of visual or auditory stimuli provide a naturalistic method for inducing LTP in humans and animals. Studies in rodents demonstrated that changes in neural responses following repetitive sensory stimulation show the cardinal features of synaptic LTP, including persistence (>1 h), input specificity, and NMDAR dependency (7, 8). Furthermore, these LTP-like changes can be measured noninvasively as changes in sensory evoked potentials, which are stimulus-synchronized electroencephalographic (EEG) signals that result from postsynaptic potentials in populations of sensory neurons. High-frequency sensory stimulation has thus been shown to induce lasting potentiation of visual and auditory evoked potentials in human adults (9, 10) and has been used to demonstrate that LTP-like processes are impaired in patients with depression (11), bipolar disorder (12), and schizophrenia (13, 14). Sensory LTP protocols therefore provide a valuable window into the cellular mechanism thought to underlie many forms of experience-dependent plasticity. One potential method for promoting experience-dependent plasticity is to augment NMDAR signaling. The NMDAR is a primary glutamate receptor and is critical for triggering LTP at many synapses in the brain.
This role stems from the receptor’s unique biophysical properties, including that (i) NMDARs are blocked by a magnesium ion at rest such that they are dually voltage and ligand gated and therefore detect coincident presynaptic and postsynaptic activity; (ii) NMDARs are calcium permeable and therefore initiate signaling cascades when activated, leading to structural synaptic changes such as α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor up-regulation and enlarged dendritic spines on which synapses are localized; and (iii) NMDARs have slow excitatory postsynaptic potential (EPSP) decay kinetics that facilitate temporal summation of EPSPs and sustained neural excitation (15, 16). Studies of transgenic and knockout mice showed that blocking NMDARs impairs LTP as well as learning and memory performance. Conversely, enhancing NMDAR activity enhanced LTP and the acquisition and retention of information (17). Given the role of NMDARs in triggering the cellular machinery that supports experience-dependent plasticity, augmenting NMDAR signaling may offer a powerful means to promote LTP and learning in humans. In the current study, we used the NMDAR agonist d-cycloserine (DCS) to examine how augmenting NMDAR signaling affects LTP-like processes and learning in the adult human brain. NMDARs are tetramers composed of two NR1 and two NR2 subunits. Activation requires binding of glutamate to the NR2 subunit and concurrent binding of glycine or d-serine to the NR1 subunit (18). Although direct enhancement of NMDAR signaling via the glutamate site can produce excitotoxicity, indirect stimulation via the glycine site offers a safer method for facilitating activity. DCS is a partial agonist at the glycine site that readily crosses the blood–brain barrier, is approved by the Food and Drug Administration for daily use as an antituberculosis drug, and has few side effects at low doses.
Thus, DCS offers a safe means to augment NMDAR signaling at low doses. Using a double-blind design, we randomized healthy adults to receive DCS or placebo. We examined the effects of augmenting NMDAR signaling on two indices of experience-dependent plasticity: (i) LTP and (ii) incremental learning. Participants completed the visual LTP task using high-frequency visual stimulation (HFvS) to induce potentiation of visual cortex neurons, followed by a weather prediction task (WPT) (19), an information integration task (IIT) (20), and an n-back task. The WPT and IIT are incremental learning tasks in which stimulus–feedback associations are thought to be encoded by LTP at corticostriatal synapses (21, 22). The n-back is a spatial working memory task. Working memory relies on reverberating activity in cortical microcircuits over short delays to maintain information in the absence of stimuli and, thus, does not rely on LTP (23). To facilitate dissociation of the effects of DCS on experience-dependent plasticity versus working memory, the n-back task was designed to be identical to the IIT in stimuli and trial structure. Thus, the only difference participants experienced between the tasks was whether they were asked to learn about the stimuli (i.e., for the IIT) or recall whether stimuli were in the same location on the screen as recently shown stimuli (i.e., for the n-back). To assess potential delayed effects of DCS, participants returned to the laboratory the following day to repeat cognitive testing. No drug or placebo was administered on the second day. Although the idea of using NMDAR agonists to enhance cognition is not new, past studies examining diverse cognitive domains have yielded mixed results (24–36). Difficulty reconciling divergent effects has limited our ability to harness NMDAR agonists as cognitive enhancers.
To our knowledge, this is the first human study to systematically test the hypothesis that increasing NMDAR signaling enhances experience-dependent plasticity, and the first study to combine behavioral measures with assessment of a mechanism thought to underlie experience-dependent plasticity. We hypothesized that participants who received DCS would show (i) enhanced neural potentiation following HFvS on the LTP task; (ii) enhanced performance on the WPT and IIT; and (iii) similar performance on the n-back task, compared with Placebo participants.
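The slow EPSP decay kinetics in point (iii) above can be illustrated with a minimal simulation of linear EPSP summation. The time constants, pulse train, and unit amplitudes below are illustrative assumptions, not values from the study:

```python
import numpy as np

def epsp_train_peak(tau_ms, n_pulses=5, isi_ms=20.0, dt=0.1, t_total=300.0):
    """Peak depolarization from a train of unit EPSPs summing linearly.

    Each pulse adds an EPSP that decays exponentially with time
    constant tau_ms; a slow tau leaves residual depolarization when
    the next pulse arrives, so successive EPSPs pile up.
    """
    t = np.arange(0.0, t_total, dt)
    v = np.zeros_like(t)
    for k in range(n_pulses):
        onset = k * isi_ms
        mask = t >= onset
        v[mask] += np.exp(-(t[mask] - onset) / tau_ms)
    return v.max()

# Slow NMDAR-like decay (~100 ms, assumed) sums far more effectively
# than fast AMPA-like decay (~5 ms, assumed) at a 20-ms interpulse interval.
peak_slow = epsp_train_peak(tau_ms=100.0)
peak_fast = epsp_train_peak(tau_ms=5.0)
```

With the slow time constant, the fifth pulse arrives before earlier EPSPs have decayed, so the summed peak is several times the unitary amplitude; with the fast time constant, each EPSP decays almost completely and the peak barely exceeds one unit.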

15.

Purpose

The surgeons of the future will need advanced laparoscopic skills. The current challenge in surgical education is to teach these skills and to identify factors that may have a positive influence on training curricula. The primary aim of this study was to determine whether fundamental aptitude affects the ability to perform a laparoscopic colectomy.

Methods

A practical laparoscopic colectomy course was held by the National Surgical Training Centre at the Royal College of Surgeons in Ireland. The course consisted of didactics, a warm-up, and the performance of a laparoscopic sigmoid colectomy on the simulator. Objective metrics such as time and motion analysis were recorded. Each candidate had their psychomotor and visual spatial aptitude assessed. The colectomy trays were assessed post procedure by blinded experts for errors.

Results

Ten trainee surgeons who were novices with respect to advanced laparoscopic procedures attended the course. A significant correlation was found between psychomotor and visual spatial aptitude and performance on both the warm-up session and the laparoscopic colectomy (r > 0.7; r = 0.8, p = 0.04). There was also a significant correlation between the number of tray errors and the time taken to perform the laparoscopic colectomy (r = 0.83, p = 0.001).

Conclusion

The results have demonstrated that there is a relationship between aptitude and ability to perform both basic laparoscopic tasks and laparoscopic colectomy on a simulator. The findings suggest that there may be a role for the consideration of an individual’s inherent baseline ability when trying to design and optimise technical teaching curricula for advanced laparoscopic procedures.
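Correlations of the kind reported above reduce to a plain Pearson product-moment coefficient. The sketch below shows the calculation on invented aptitude and tray-error values for ten hypothetical trainees; the numbers are not from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: aptitude scores vs. tray errors for ten trainees
aptitude = [55, 60, 62, 68, 70, 75, 78, 80, 85, 90]
errors = [14, 13, 12, 12, 10, 9, 8, 7, 5, 4]
r = pearson_r(aptitude, errors)  # strongly negative: higher aptitude, fewer errors
```

A significance test on r (as in the reported p-values) would additionally convert r to a t statistic with n − 2 degrees of freedom, e.g. via `scipy.stats.pearsonr`.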

16.
The videofluoroscopic dysphagia scale (VDS) was developed as an objective predictor of the prognosis of dysphagia after stroke. We evaluated the clinical validity of the VDS for various diseases. We reviewed the medical records of 1,995 dysphagic patients (1,222 men and 773 women) who underwent videofluoroscopic studies in Seoul National University Hospital from April 2002 through December 2009. Their American Speech–Language–Hearing Association’s National Outcome Measurement System (ASHA NOMS) swallowing scale, clinical dysphagia scale (CDS), and VDS scores were evaluated on the basis of the clinical and/or videofluoroscopic findings by the consensus of two physiatrists. The correlations between the VDS and the other scales were calculated. The VDS displayed significant correlations with the ASHA NOMS swallowing scale and the CDS in every disease group (p < 0.001 in all groups, including central and peripheral nervous system disorders), and these correlations were more apparent for spinal cord injury, peripheral nervous system disorders, and neurodegenerative diseases (correlation coefficients between the VDS and the ASHA NOMS swallowing scale: −0.603, −0.602, and −0.567, respectively). This study demonstrated that the VDS is applicable to dysphagic patients with numerous etiologies that cause dysphagia.

17.

Background

The aim of this study was to investigate the efficacy of rikkunshito (RKT), a traditional Japanese medicine, combined with a proton pump inhibitor (PPI) in patients with PPI-refractory non-erosive reflux disease (NERD).

Methods

Patients with PPI-refractory NERD (n = 242) were randomly assigned to the RKT group [rabeprazole (10 mg/day) + RKT (7.5 g/t.i.d.) for 8 weeks] or the placebo group (rabeprazole + placebo). After the 4- and 8-week treatments, we assessed symptoms and quality of life (QOL) using the Frequency Scale for the Symptoms of Gastroesophageal Reflux Disease (FSSG), Gastrointestinal Symptom Rating Scale (GSRS), and Short-Form Health Survey-8 (SF-8).

Results

There were no significant differences in FSSG and GSRS score improvement between these groups after the 4- and 8-week treatments. The mental component summary (MCS) scores of the SF-8 improved more in the RKT group (from 45.8 ± 8.1 to 48.5 ± 7.4) than in the placebo group (from 47.7 ± 7.1 to 48.4 ± 7.5) after the 4-week treatment (P < 0.05). The 8-week treatment with RKT was more effective for improvement of the degree of MCS score in patients with a low body mass index (<22) (P < 0.05) and significantly improved the acid-related dysmotility symptoms of FSSG in female and elderly patients (≥65 years).

Conclusion

There were no significant differences between the groups in improvement of GERD symptoms in patients with PPI-refractory NERD. However, RKT may be useful for improving mental QOL in non-obese patients and for improving acid-related dyspeptic symptoms, especially in women and the elderly.
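The group comparison of SF-8 MCS change scores can be sketched as a simple standardized effect size. The per-patient change values below are invented, and the use of Cohen's d is an illustrative choice, not the trial's stated analysis:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical MCS change scores (week 4 minus baseline) per patient
rkt_change = [2.5, 3.1, 2.8, 2.2, 3.4, 2.9]
placebo_change = [0.6, 0.9, 0.5, 0.8, 0.7, 1.0]
d = cohens_d(rkt_change, placebo_change)  # positive: larger improvement with RKT
```

Comparing change scores between arms, rather than raw endpoint scores, is what makes the reported baseline-to-week-4 trajectories directly comparable across groups.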

18.
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. Reading is a cultural activity in which contemporary humans have considerable training. Fluently accessing the sounds and meanings of written words requires very fast and efficient visual recognition of letter strings, at rates exceeding 100 words/min.
Neuroimaging studies have begun to show how learning to read modulates the functioning of the visual system, from early retinotopic areas (1, 2) to extrastriate occipital and temporal cortex (1, 3, 4). In particular, a restricted region of the left occipitotemporal cortex, the visual word form area (VWFA), is robustly activated when orthographic stimuli are presented to literate subjects. This VWFA activation is reproducible across participants and writing systems (5, 6), even when orthographic stimuli are flashed unconsciously (7). Orthographic processing in the VWFA is thought to be very fast, peaking at ∼170–200 ms (8–10), and is colateralized to the dominant hemisphere for language (11, 12). Reading practice enhances activation of the VWFA (1, 13, 14), even in dyslexic children (15). Reading also modulates nonvisual circuits, such as the spoken language network (1, 14, 16, 17). In addition to these positive effects of learning to read, the theory of neuronal recycling (18) proposes that literacy acquisition also may have a negative “unlearning” effect on the visual system, because it invades cortical territories dedicated to other related functions and shifts their processing mode. In particular, learning to read affects a well-established and advantageous mechanism of the primate visual system for invariant recognition of mirror images (mirror invariance), which allows the prompt recognition of images that are identical up to a left–right inversion (19–21). This mirror invariance mechanism interferes with reading, because a reader needs to distinguish between mirror letters, such as “b” and “d,” to access the correct phonology and semantics of the printed words.
Indeed, literacy acquisition is associated with a reduction in mirror invariance (22–24), as well as an enhanced capacity to discriminate mirror images (25). During literacy acquisition, many children initially find it difficult to discriminate mirror letters, but top-down inputs from phonologic, speech production, and motor areas coding for handwriting gestures may carry discriminative information that ultimately helps the visual system to “break its symmetry” (26). Brain imaging and transcranial magnetic stimulation (TMS) studies have shown that in good readers, automatic mirror discrimination is present for letters and words at the VWFA site but is not detected for pictures of objects (27–29), although a small mirror generalization cost can be detected for pictures of faces, houses, or tools using a sensitive same–different behavioral task (23). The main goal of the present work was to assess the influence of the acquisition of reading ability on the successive stages of visual processing, and to evaluate to what extent early visual processing (<200 ms) is already affected. In our previous work (1), we used functional magnetic resonance imaging (fMRI) to demonstrate the profound influence of reading acquisition on the visual system. By scanning a large group of adult subjects with different literacy levels, we detected a modulation of visual activation as a function of reading ability not only in the VWFA, but also in extrastriate and striate cortex, suggesting an effect on early vision; however, given the low temporal resolution of fMRI, the time course of these effects remained unknown.
Literacy acquisition may affect early feedforward processing in the visual cortex, perhaps even including area V1, much like other forms of perceptual learning (30–32); however, it is also possible that the fMRI-detected effects of literacy are related to late top-down interactions with language areas (33). We determined the timing of literacy effects in literate and illiterate adults by recording event-related potentials (ERPs) from essentially the same sample of participants as in our previous fMRI study and with an identical visual paradigm. To the best of our knowledge, this is the first ERP investigation on the impact of reading on visual system function that includes fully illiterate adults. As in our previous study, we also included “ex-illiterate” subjects, who learned to read in adulthood and achieved variable levels of reading fluency. We obtained valid ERP data from 24 literate, 16 ex-illiterate, and 9 illiterate adult subjects. Our visual paradigm consisted of the sequential visual presentation, in separate blocks, of pairs of stimuli from six different categories: strings (pseudowords), false fonts, faces, houses, tools, and checkerboards (Fig. 1 and Methods). The stimuli in each pair were identical, mirror image, or different stimuli from the same category, which allowed us to measure identity and mirror repetition priming. The subjects were simply asked to pay attention and to press a button whenever an odd target picture (a black star) appeared, thereby precluding differences in performance and strategies among subjects of differing literacy levels. Using regression, we evaluated the precise moment at which evoked responses were modulated by reading ability. Fig. 1. Stimuli and procedure. (A) Examples of visual categories used in the experiment. (B) Schematic representation of the experimental design.
(Top) After a fixation cross, two successive stimuli within the same category were displayed with a 400-ms stimulus-onset asynchrony (SOA). The pairs could be exactly the same, a mirror version of each other, or different exemplars (as above). (Middle) ERPs averaged across subjects and conditions. The GFP time course is plotted in green in the lower part of the figure. (Bottom) Scalp maps showing the topographic distribution of P1s and N1s evoked by the first and second stimuli, respectively.
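The repetition-suppression measure described above amounts to comparing mean ERP amplitudes across pair conditions in an early time window. The sketch below uses simulated epochs, an assumed 500-Hz sampling rate, and an assumed 100–150 ms analysis window, purely to illustrate the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)  # epoch from -100 ms to +400 ms

def simulate_epochs(n_trials, amplitude):
    """Simulate epochs with a Gaussian deflection peaking near 125 ms plus noise."""
    component = amplitude * np.exp(-((t - 0.125) ** 2) / (2 * 0.015 ** 2))
    return component + rng.normal(0.0, 1.0, size=(n_trials, t.size))

def window_mean(epochs, lo=0.100, hi=0.150):
    """Baseline-correct each epoch, average to an ERP, and take the window mean."""
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)
    return erp[(t >= lo) & (t <= hi)].mean()

# Repetition suppression: a smaller early response to repeated
# than to unrelated stimuli (component amplitudes are assumptions).
amp_repeated = window_mean(simulate_epochs(100, amplitude=2.0))
amp_unrelated = window_mean(simulate_epochs(100, amplitude=4.0))
suppression = amp_unrelated - amp_repeated
```

In the study itself, a per-subject suppression index of this kind would then be regressed against reading ability, which is how a literacy effect on early vision can be quantified.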

19.
Scientists have long proposed that memory representations control the mechanisms of attention that focus processing on the task-relevant objects in our visual field. Modern theories specifically propose that we rely on working memory to store the object representations that provide top-down control over attentional selection. Here, we show that the tuning of perceptual attention can be sharply accelerated after 20 min of noninvasive brain stimulation over medial-frontal cortex. Contrary to prevailing theories of attention, these improvements did not appear to be caused by changes in the nature of the working memory representations of the search targets. Instead, improvements in attentional tuning were accompanied by changes in an electrophysiological signal hypothesized to index long-term memory. We found that this pattern of effects was reliably observed when we stimulated medial-frontal cortex, but when we stimulated posterior parietal cortex, we found that stimulation directly affected the perceptual processing of the search array elements, not the memory representations providing top-down control. Our findings appear to challenge dominant theories of attention by demonstrating that changes in the storage of target representations in long-term memory may underlie rapid changes in the efficiency with which humans can find targets in arrays of objects. The cognitive and neural mechanisms that tune visual attention to select certain targets are not completely understood despite decades of intensive study (1, 2). Attention can clearly be tuned to certain object features (similar to tuning a radio to a specific station, also known as an attentional set), but how this tuning occurs as we search for certain objects in our environment is still a matter of debate.
The prevailing theoretical view is that working memory representations of target objects provide top-down control of attention as we perform visual search for these objects embedded in arrays of distractors (3–7). However, an alternative view is that long-term memory representations play a critical role in the top-down control of attention, enabling us to guide attention based on the more enduring representations of this memory store (8–16). To distinguish between these competing theoretical perspectives, we used transcranial direct-current stimulation (tDCS) to manipulate activity in the brain causally (17), and combined this causal manipulation of neural activity with electrophysiological measurements that are hypothesized to index the working memory and long-term memory representations that guide visual attention to task-relevant target objects. To determine the nature of the working memory and long-term memory representations that control visual attention during search, we simultaneously measured two separate human event-related potentials (ERPs) (8, 18, 19). The contralateral delay activity (or CDA) of subjects’ ERPs provides a measure of the maintenance of target object representations in visual working memory (20, 21). The CDA is a large negative waveform that is maximal over posterior cortex, contralateral to the position of a remembered item. This large-amplitude lateralized negativity is observed even when nonspatial features are being remembered, and persists as information is held in working memory to perform a task. A separate component, termed the anterior P1, or P170, is hypothesized to measure the build-up of long-term memory representations. The anterior P1 is a positive waveform that is maximal over frontal cortex and becomes increasingly negative as exposures to a stimulus accumulate traces in long-term memory (8, 19, 22).
This component is thought to reflect the accumulation of information that supports successful recognition of a stimulus on the basis of familiarity (23). For example, the anterior P1 amplitude can be used to predict subsequent recognition memory for a stimulus observed hundreds of stimuli in the past (i.e., across minutes to hours of time) (23) (additional information on the critical features of these ERP components is provided in SI Materials and Methods). We used simultaneous measurements of the CDA and anterior P1 to determine the role that working memory and long-term memory representations play in the tuning of attention following brain stimulation. Our tDCS targeted the medial-frontal region in our first experiments (Fig. 1A) because anodal stimulation of this area results in rapid improvement of simple visual discriminations relative to baseline sham conditions (24). If it is possible to induce rapid improvements in the selection of targets among distractors as humans perform search, then the competing theories of visual attention would account for the accelerated tuning of attention in different ways. The theories that propose working memory representations provide top-down control of visual attention predict that the stimulation-induced improvement in visual search will be due to changes in the nature of the visual working memory representations indexed by the CDA component (Fig. 1 B and C). Specifically, the CDA elicited by the target cue presented on each trial should increase in amplitude, relative to the sham condition, to explain the improvement of attentional selection during search. This type of modulation is expected if working memory-driven theories of attention are correct based on previous evidence that the CDA is larger on trials of a short-term memory task when performed correctly compared with incorrect trials (20).
In contrast, theories that propose long-term memory representations rapidly assume control of attention during visual search predict that the stimulation-induced improvement will be due to changes in the long-term memory representations indexed by the anterior P1 elicited by the target cue presented on each trial. Specifically, we should see the anterior P1 exhibit a more negative potential as search improves following stimulation. Fig. 1. tDCS model, task, and results of experiment 1. (A) Modeled distribution of current during frontocentral midline anodal tDCS on top and front views of a 3D reconstruction of the cortical surface. (B) Task-relevant cue (green Landolt C in this example) signaled the shape of the target in the upcoming search array. Subjects searched for the same target across a run of three to seven trials. Central fixation was maintained for the trial duration. (C) Representative anterior P1, CDA, and N2pc from repetition 1 in the sham condition show each component’s distinctive temporal and spatial profile, with analysis windows shaded in gray. Mean RTs (D), N2pc amplitudes (E), anterior P1 amplitudes (F), and CDA amplitudes (G) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Error bars are ±1 SEM. Red shading highlights dynamics across trials 1 and 2. Grand average ERP waveforms from the frontal midline electrode (Fz) synchronized to cue onset are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. The measurement window of the anterior P1 is shaded in gray. (H) Relationship between logarithmic rate parameter enhancements for mean anterior P1 amplitude and RT after anodal stimulation relative to sham. Each subject completed anodal and sham tDCS sessions on different days, with order counterbalanced across subjects (n = 18).
Immediately after 20 min of tDCS over medial-frontal (experiments 1 and 2) or right parietal (experiment 3) regions of the head (see the current flow model for experiment 1 in Fig. 1A, and additional information about stimulation locations in SI Materials and Methods), we recorded subjects’ ERPs while they completed a visual search task. In this search task, the target was cued at the beginning of each trial (Fig. 1 B and C). The task-relevant cue signaled the identity of the target that could appear in the search array presented a second later. In experiments 1 and 3, the targets and distractors were Landolt-C stimuli, and in experiment 2, they were pictures of real-world objects. A task-irrelevant item was presented with each cue to balance the hemispheric visual input so that the lateralized ERPs that elicit the CDA could be unambiguously interpreted (25). The key manipulation in this task was that the target remained the same for three to seven consecutive trials (length of run randomized) before it was changed to a different object. These target repetitions allowed us to observe attentional tuning becoming more precise across trials. We found that anodal medial-frontal tDCS in experiment 1 accelerated the rate of attentional tuning across trials, as evidenced by the speed of behavior and attention-indexing ERPs elicited by the search arrays (Fig. 1 D and E). First, in the baseline sham condition, we observed that subjects became faster at searching for the target across the same-target runs of trials, as shown by reaction time (RT) speeding (F2,34 = 6.031, P = 0.007) (additional analyses of the sham condition and analyses to verify the absence of effects on accuracy are provided in Fig. S1A and SI Materials and Methods). However, following anodal stimulation, subjects’ RTs dramatically increased in speed, such that search RTs reached floor levels within a single trial.
This striking causal aftereffect of anodal tDCS was evidenced by a stimulation condition × target repetition interaction on RTs (F2,34 = 3.735, P = 0.042), with this RT effect being significant between the first two trials of search for a particular Landolt C (F1,17 = 6.204, P = 0.023) but with no significant change thereafter (P > 0.310). Additionally, by fitting these behavioral RT data with a logarithmic function to model the rate of improvement (9), we found that anodal tDCS significantly increased the rate parameters of RT speeding (F1,17 = 5.097, P = 0.037). Consistent with the interpretation that tDCS changed how attention selected the targets in the search arrays, we found that the N2-posterior-contralateral (N2pc) component, an index of the deployment of covert attention to the possible target in a search array (26), showed a pattern that mirrored the single-trial RT effects (F1,17 = 4.792, P = 0.043) (Fig. 1E; N2pc waveforms are provided in Fig. S1A). However, other ERP components indexing lower level perceptual processing or late-stage response selection during search were unchanged by the tDCS (Fig. S1 C and D and Table S1). Our findings demonstrate that the brain stimulation only changed the deployment of visual attention to targets in the search arrays and did not change the operation of any other cognitive mechanism we could measure during the visual search task. Thus, by delivering electrical current over the medial-frontal area, we were able to causally accelerate the speed with which subjects tuned their attention to select the task-relevant objects. To determine whether the tDCS-induced attentional improvements were caused by changes in working memory or long-term memory mechanisms of top-down control, we examined the putative neurophysiological signatures of visual working memory (i.e., the CDA) and long-term memory (i.e., the anterior P1) elicited by the target cues.
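Logarithmic rate fits of the kind applied to the RT data can be sketched as an ordinary least-squares fit of RT against the log of the repetition number, RT(n) ≈ a + b·ln(n), where a steeper (more negative) b means faster improvement. The RT values below are hypothetical, not the study's data:

```python
import numpy as np

def fit_log_rate(rts):
    """Fit RT(n) = a + b*ln(n) across target repetitions n = 1..N.

    Returns (a, b); b is the rate parameter, and a more negative b
    indicates faster RT speeding across same-target trials.
    """
    n = np.arange(1, len(rts) + 1)
    b, a = np.polyfit(np.log(n), np.asarray(rts, dtype=float), 1)
    return a, b

# Hypothetical mean RTs (ms) across five target repetitions
sham_rts = [820.0, 790.0, 770.0, 755.0, 745.0]    # gradual tuning
anodal_rts = [815.0, 700.0, 695.0, 692.0, 690.0]  # floor after one trial
_, b_sham = fit_log_rate(sham_rts)
_, b_anodal = fit_log_rate(anodal_rts)
```

Comparing the fitted b parameters between stimulation conditions is one way to quantify "accelerated tuning" without committing to any particular repetition as the point of improvement.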
Given the rapid tuning of attention following tDCS relative to sham, we might expect the flexible working memory system to underlie this effect. Contrary to this intuition, we found that the rapid, one-trial improvement in attentional tuning following medial-frontal tDCS was mirrored by changes in the putative neural index of long-term memory but left the putative neural index of working memory unchanged (Fig. 1 F and G). Fig. 1F shows that the accelerated effects of attentional tuning caused by anodal stimulation were preceded by a rapid increase in negativity of the anterior P1 across same-target trials, mirroring the rapid, single-trial improvement in RT and the N2pc as the search array was analyzed. This effect was confirmed statistically by a significant stimulation condition × target repetition interaction on the anterior P1 amplitude (F2,34 = 3.797, P = 0.049), and most dramatically between the first two trials of search (F1,17 = 5.816, P = 0.027), with no significant pairwise changes in anterior P1 amplitude thereafter (P > 0.707). Logarithmic model fits showed that the rate parameters of the anterior P1 significantly increased after anodal tDCS relative to the more gradual attentional tuning observed in the sham condition (F1,17 = 5.502, P = 0.031; anterior P1 analyses from the sham condition are described in SI Materials and Methods). Despite these causal changes in anterior P1 activity, neither the amplitude of the CDA (F2,34 = 0.669, P = 0.437) nor its rate parameters (F1,17 = 1.183, P = 0.292) significantly differed between stimulation conditions, showing the selectivity of medial-frontal tDCS on the putative neural metric of long-term memory (CDA waveforms are provided in Fig. S1B). We note that the absence of a stimulation-induced CDA increase is not due to ceiling effects. 
The single target cue gave us ample room to measure such a boost of the CDA, given that without brain stimulation, this memory load is far from eliciting ceiling amplitude levels for this component (20). If the better long-term memory representations indexed by the anterior P1 were the source of the improved search performance, then the size of the stimulation-induced boost of the anterior P1 elicited by the cue should be predictive of the search performance that followed a second later. Consistent with the prediction, we found that an individual subject’s anterior P1 amplitude change across the same-target runs following medial-frontal stimulation was highly predictive of the accelerated rates at which the subjects searched through the visual search array that followed (r18 = 0.764, P = 0.0002) (Fig. 1H). Thus, the ERPs elicited by the target cues ruled out the working memory explanation of the rapid changes in attentional tuning we observed, and were consistent with the hypothesis that changes in the nature of the long-term memory representations that control attention were the source of this dramatic improvement. In experiment 2, we replicated the pattern of findings from experiment 1 using a search task in which the targets and distractors were pictures of real-world objects (Fig. 2 and Fig. S2). These results demonstrate the robustness and reliability of the pattern of effects shown in experiment 1. Specifically, brain stimulation resulted in attention being rapidly retuned to the new targets after one trial, as evidenced by RTs hitting the floor by the second trial in a run. Again, this change in RT was mirrored by stimulation changing the anterior P1, and not the CDA, consistent with accounts that posit an important role for long-term memory in the guidance of attention. Fig. 2. Task and results of experiment 2.
(A) Task in experiment 2 was identical to that of experiment 1 with the exception that Landolt-C stimuli were replaced with real-world objects. Mean RTs (B), N2pc amplitudes (C), anterior P1 amplitudes (D), and CDA amplitudes (E) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Error bars are ±1 SEM. Red shading highlights dynamics across trials 1 and 2. Grand average ERP waveforms from the frontal midline electrode (Fz) synchronized to cue onset are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. The measurement window of the anterior P1 is shaded in gray. (F) Relationship between logarithmic rate parameter enhancements for mean anterior P1 amplitude and RT after anodal stimulation relative to sham. Next, we sought to provide converging evidence for our conclusion that the stimulation was changing subjects’ behavior by changing the nature of subjects’ long-term memory, consistent with previous functional interpretations of the anterior P1. So far, we have drawn conclusions using our analyses across the fairly short runs of same-target trials. However, we next looked at the learning that took place across the entire experimental session, lasting almost 3 h. If our interpretation of the anterior P1 underlying accelerated attentional tuning is correct, then we should see that the anterior P1 is sensitive to the accumulative effects of learning across the entire experimental session and that these long-term effects change following stimulation. To assess the cumulative effects of learning across these long experimental sessions, we examined how behavior, the anterior P1, and the CDA changed across the beginning, middle, and end of experiments 1 and 2 (Fig. 3 and SI Materials and Methods); that is, we averaged the same-target runs together in the first third, second third, and final third of sessions across all of our subjects. Fig.
3 shows the learning we observed across these long sessions. The RTs were slowest at the beginning of the experiment, when faced with a new target, but as subjects accumulated experience with the set of eight possible targets, we saw the RTs at the beginning of the same-target runs become progressively faster. This accumulation of experience across the entire session that sped RT was mirrored by systematic changes in the amplitude of the anterior P1. The anterior P1 became progressively more negative across the experiment, as we would expect if the magnitude of the negativity were indexing the quality (i.e., strength or number) of the long-term memories for these targets that accumulated across the entire experiment. In contrast, the CDA showed no change across the entire experiment, indicating that the role of working memory in updating the target at the beginning of the same-target runs does not change with protracted learning. For example, it is likely that working memory representations were reactivated to help reduce proactive interference from the target representations built up during the previous run of trials, consistent with influential theoretical proposals (27). Our medial-frontal tDCS boosted these learning effects measured with the anterior P1 and search RTs while leaving the CDA unchanged, consistent with our interpretation of the findings across the shorter same-target runs. Thus, this cumulative learning across the entire experimental session allowed us to observe how the dynamics of the memory representations underlying the focusing of attention evolved over the long term. These results lend further support to the hypothesis that contributions from long-term memory are driving the causal boost of attentional tuning we observed following brain stimulation. Fig. 3. Within-session dynamics of experiments 1 and 2.
Mean RT, anterior P1 amplitude, and CDA amplitude as a function of target repetitions binned according to the first third (black), middle third (red), and last third (green) of runs, collapsed across experiments 1 and 2. Logarithmic model fits are shown for sham (dashed line) and anodal (solid line) tDCS conditions. Error bars are ±1 SEM. To determine whether the effects of experiments 1 and 2 were specific to medial-frontal stimulation, in experiment 3, we stimulated the posterior parietal region in a new group of subjects (order of anodal and sham conditions was counterbalanced, n = 18) (Fig. 4A). This region of the dorsal visual stream plays a role in memory (28) and generating top-down attentional control signals (29), so that it provides a useful contrast with our medial-frontal stimulation, which appeared to influence attentional selection by changing the long-term memory representations. We specifically targeted the right parietal region because previous studies show that disrupting activity in right parietal cortex can influence attention (30, 31). Fig. 4. tDCS model and results of experiment 3. (A) Modeled distribution of current during right parietal anodal tDCS on top and rear views of a 3D reconstruction of the cortical surface. Mean RTs (B), N2pc amplitudes (C), anterior P1 amplitudes (D), and CDA amplitudes (E) are shown across target repetitions for sham (dashed line) and anodal (solid line) conditions. Bar graphs show data collapsed across target repetitions for each stimulation condition based on whether the target color appeared in the left or right visual hemifield. Error bars are ±1 SEM. (F) Mean N1 amplitudes are illustrated as in B–E. The waveforms are search array-locked grand average potentials at lateral occipital sites (OL/OR) contralateral to right (blue) and left (red) hemifield target colors shown across sham (dashed line) and anodal (solid line) conditions. OL, occipital left; OR, occipital right.
*P < 0.05.

We found that, unlike medial-frontal stimulation, right parietal tDCS had no effect on the overall tuning of attention or on the memory representations controlling search performance. Fig. 4 B–E shows the overlap between stimulation conditions for the RTs (no stimulation condition × target repetition interaction: F2,34 = 0.029, P = 0.955) and the amplitudes of the N2pc (F2,34 = 0.139, P = 0.807), CDA (F2,34 = 0.814, P = 0.439), and anterior P1 (F2,34 = 0.393, P = 0.663) across target repetitions. Because subjects again searched for the same target across the runs of trials in experiment 3, we did observe main effects of target repetition on RTs (F2,34 = 6.190, P = 0.015) and on the amplitudes of the N2pc (F2,34 = 4.053, P = 0.045), CDA (F2,34 = 5.292, P = 0.024), and anterior P1 (F2,34 = 6.320, P = 0.006). These effects were due to the steady speeding of RTs, declining CDA amplitude, and increasing amplitudes of the anterior P1 and N2pc across same-target trials. The effects of target repetition indicate that the roles played by working memory and long-term memory in tuning attention across trials in the baseline sham condition were unchanged following right parietal stimulation (Fig. 4 B–E and Figs. S3D and S4).

Given the lateralized application of tDCS in experiment 3, we examined the data based on whether the target appeared in the left or right visual field. We found that parietal stimulation caused lateralized, bidirectional effects on search performance. Relative to sham, subjects were faster at searching for targets after anodal stimulation, but only on trials in which the target color appeared contralateral (i.e., in the left visual field) to the location of the stimulating electrode on the head (i.e., over the right hemisphere) (Fig. 4B).
This effect was evidenced by a stimulation condition × target color laterality interaction on search RTs (F1,17 = 12.098, P = 0.003) and a main effect of stimulation condition on contralateral search RTs (F1,17 = 6.014, P = 0.025). In contrast, RTs were slower when target colors appeared ipsilateral (i.e., in the right visual hemifield) to the location of tDCS (F1,17 = 4.276, P = 0.054) (Fig. 4B). These results suggest that parietal stimulation facilitated and impeded overall search behavior depending on the location of the target in the visual field.

We found that the lateralized, bidirectional effects of parietal tDCS on search performance were caused by a direct influence on perceptual processing, not by changes to the memory representations controlling attention. The amplitude of the posterior N1 component, a neural index of perceptual processing (32), was significantly modulated by stimulation condition, in a pattern mirroring that of the behavior (stimulation condition × target color laterality interaction: F1,17 = 10.494, P = 0.005; stimulation condition main effects: contralateral, F1,17 = 4.755, P = 0.044; ipsilateral, F1,17 = 4.573, P = 0.047) (Fig. 4F and Fig. S3A). In contrast, our indices of the memory representations of the targets and of the deployment of attention were not significantly changed by tDCS [i.e., no stimulation condition × target color laterality interaction: N2pc (F1,17 = 0.041, P = 0.843), CDA (F1,17 = 0.107, P = 0.748), anterior P1 (F1,17 = 0.169, P = 0.686)] (Fig. 4 C–E and Fig. S3 B–D).

In sum, our parietal stimulation protocol did not change the nature of the memory representations controlling attention but directly influenced the perceptual processing of the objects in the search array. These observations were evidenced by lateralized changes in the early visual ERPs and in the behavioral responses to the task-relevant items contralateral vs. ipsilateral to the stimulation.
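The 2 × 2 within-subject interaction tests reported above (stimulation condition × target color laterality) can be illustrated with a short sketch. The data below are synthetic stand-ins, not the study's measurements; the sketch uses the fact that in a fully within-subject 2 × 2 design, the interaction F(1, n − 1) equals the squared one-sample t test on each subject's double-difference score:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 18  # subjects, matching experiment 3's sample size

# Hypothetical per-subject mean RTs (ms) for each cell of the
# 2 (stimulation: sham/anodal) x 2 (target side: contra/ipsi) design.
sham_contra = rng.normal(650, 40, n)
sham_ipsi = rng.normal(650, 40, n)
anodal_contra = sham_contra - 25 + rng.normal(0, 15, n)  # faster contralateral
anodal_ipsi = sham_ipsi + 15 + rng.normal(0, 15, n)      # slower ipsilateral

# Per-subject double difference: (anodal - sham) contralateral
# minus (anodal - sham) ipsilateral.
double_diff = (anodal_contra - sham_contra) - (anodal_ipsi - sham_ipsi)

# Interaction F(1, n-1) is the squared one-sample t on the double difference.
t, p = stats.ttest_1samp(double_diff, 0.0)
F = t ** 2
print(f"interaction F(1,{n - 1}) = {F:.2f}, P = {p:.4f}")
```

With opposite-signed effects in the two hemifields, as in the paper's pattern of results, the double difference is large relative to its variability and the interaction is significant.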
Thus, the effects observed in experiments 1 and 2 are not a ubiquitous pattern observed following stimulation of any cognitive control structure. Instead, when we stimulated the posterior parietal region of the visual stream, we observed changes in the early visual responses of the brain and correspondingly lateralized patterns of performance.

Our findings from experiments 1 and 2, that stimulation over medial-frontal areas can rapidly improve attentional selection of targets, may seem surprising because the medial-frontal cortex is not commonly thought to be a crucial node in the network of regions that guide attention (29, 33). This region is most frequently discussed as critical for the higher level monitoring of task performance, response conflict, and prediction error (34, 35). However, a variety of studies across species and methods have found connections between regions of medial-frontal cortex and both attention and memory processes. First, human neuroimaging research shows that the cingulate opercular network, including anterior cingulate and presupplementary cortex, is engaged during the implementation of a task set, visuospatial attention, and episodic memory (36–38). Second, studies using animal models show that attentional selectivity in the visual domain appears to reside in dorsomedial areas of prefrontal cortex (39), such as the anterior cingulate gyrus. Third, both the dorsomedial and right dorsolateral prefrontal cortices respond strongly in memory recognition tasks, with specific activity bordering the anterior cingulate at or near Brodmann’s areas 6, 8, and 32 (40), including supplementary and presupplementary motor areas. The right dorsolateral prefrontal cortex, which also appeared to be in the path of our current-flow modeling, has been causally linked to human long-term memory processes (41). Given the set of regions in this path, the specificity of our empirical observations is striking.
However, future work is clearly needed to dissect the contributions of the group of medial-frontal and medial-prefrontal regions within the path of the current used here.

Our results present evidence from causal manipulations of the healthy human brain suggesting that the rapid reconfiguration of the top-down control of visual attention can be carried out by long-term memory. This conclusion seems counterintuitive, given that the active storage of objects in working memory can strongly control attention (7, 18, 42) and that the dominant theories of attention focus exclusively on the role of working memory in guiding attention (36). The present findings do not suggest that working memory representations do not control attention across the short term; indeed, we observed the neural index of storage of the target in working memory concurrent with the large changes in the putative index of long-term memory. The critical implication of the present findings is that the rapid improvements in attentional control following brain stimulation were most closely related to our ERP measure of long-term memory, not working memory. These results are surprising to us, given that effects of long-term memory on attentional control are typically observed in tasks in which improvements evolve slowly across protracted training (10, 12–14, 16, 43), or even across a lifetime of semantic associations (11). Here, we show that the time course of improvement need not be diagnostic of the type of memory representation involved.

Our results can also be interpreted within theoretical models that take a broader view of top-down control and do not rely on a conceptual dichotomy between working memory and long-term memory processes that guide attention (44). Neuroimaging research has identified multiple control mechanisms that configure downstream processing consistent with behavioral goals.
Most relevant here is the network consisting of the anterior insula (also referred to as the frontal operculum) and the dorsal anterior cingulate cortex (also referred to as the medial superior frontal cortex). This network is thought to integrate information over protracted time scales in an iterative manner, similar to the dynamics and functional properties of the anterior P1. Further, the cingulate opercular network carries various critical control signals, including the selection and maintenance of task goals and the making and monitoring of choices (38, 45, 46). It is possible that our medial-frontal stimulation changed the functioning of this control network, causing the improvements we observed in attentional control.

Finally, our findings provide evidence from causal manipulations of the human brain to support the slowly growing view that top-down attentional control involves the interplay of different types of memory representations (8, 15, 47–49). Moving forward, we believe that such a view brings theories of attention nearly into register with models of learning, automaticity, and skill acquisition (9, 50–52). Ideally, this perspective will serve to unify, rather than further hyperspecialize, theories of information processing in the brain.  相似文献

20.
Assessment of HbA1c has been proposed as a less time-consuming alternative to the 2-h oral glucose tolerance test (OGTT) for detecting pathologies of carbohydrate metabolism. This report aims to assess the predictive accuracy of HbA1c for detecting alterations in glucose disposition early after a pregnancy with gestational diabetes mellitus (GDM). A detailed metabolic characterization was performed in 77 women with previous GDM (pGDM) and 41 controls 3–6 months after delivery: 3-h OGTT and frequently sampled intravenous glucose tolerance test. Follow-up examinations of pGDMs were performed for up to 10 years. HbA1c (venous samples, HPLC) was assessed at baseline as well as during the follow-up period (475 patient contacts). Moderate associations were observed between HbA1c and measurements of plasma glucose during the OGTT at the baseline examination: the strongest correlation was found for fasting plasma glucose (FPG) (r = 0.40, p < 0.001), decreasing after ingestion. No associations were detected between HbA1c and the OGTT dynamics of insulin or C-peptide. Moreover, baseline HbA1c showed only modest correlations with insulin sensitivity (r = −0.25, p = 0.010) and the disposition index (r = −0.26, p = 0.007). A linear model including fasting as well as post-load glucose levels was not improved by adding HbA1c. However, pGDM women with overt diabetes manifestation during the follow-up period showed a more pronounced increase in HbA1c than women who remained normally glucose tolerant or developed prediabetes. These results suggest that HbA1c assessed early after delivery is inferior to the OGTT for detecting early alterations in glucose metabolism; however, an increase in HbA1c levels could serve as an indicator of risk for diabetes manifestation.  相似文献
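The model-comparison step in this abstract (testing whether HbA1c adds information beyond fasting and post-load glucose) can be sketched as a nested ordinary-least-squares comparison. The data below are synthetic stand-ins matching only the study's sample size; variable names and effect sizes are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 118  # 77 pGDM + 41 controls, as in the study

# Synthetic stand-ins for the measured variables (arbitrary units).
fpg = rng.normal(5.0, 0.6, n)                  # fasting plasma glucose
pg120 = fpg + rng.normal(1.5, 1.0, n)          # 2-h post-load glucose
hba1c = 0.4 * fpg + rng.normal(3.3, 0.3, n)    # only weakly tied to FPG
outcome = 0.6 * fpg + 0.3 * pg120 + rng.normal(0, 0.5, n)

def r2(predictors, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

base = r2([fpg, pg120], outcome)          # glucose-only model
full = r2([fpg, pg120, hba1c], outcome)   # glucose + HbA1c
print(f"R^2 glucose-only: {base:.3f}; with HbA1c: {full:.3f}")
```

In-sample R^2 can only rise when a predictor is added, so the relevant question, as in the abstract, is whether the increment is meaningful (e.g., by an F test on the nested models) rather than merely nonzero.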
