Similar Articles
1.
As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.

Contrary to our seamless and unobstructed perception of visual objects, raw sensory data are often partial and impoverished. Thus, our visual system regularly fills in extensive details to create enriched representations of visual objects (1, 2). A growing body of evidence suggests that “filled-in” visual features of an object are represented at early stages of cortical processing where physical input is nonexistent. For example, increased activity in early visual cortex (V1) was found in retinotopic locations corresponding to nonstimulated regions of the visual field during the perception of illusory contours (3, 4) and color filling-in (5). Furthermore, recent functional magnetic resonance imaging (fMRI) studies using multivoxel pattern analysis (MVPA) methods show how regions of V1 lacking stimulus input can contain information regarding objects or scenes presented at other locations in the visual field (6, 7), held in visual working memory (8, 9), or used in mental imagery (10–13).

Although these studies have found evidence for internally generated representations of static stimuli in early cortical processing, the critical question remains of whether and how interpolated visual feature representations are reconstructed in early cortical processing while objects undergo kinetic transformations, a situation that is more prevalent in our day-to-day perception.

To address this question, we examined the phenomenon of long-range apparent motion (AM): when a static stimulus appears at two different locations in succession, a smooth transition of the stimulus across the two locations is perceived (14–16). Previous behavioral studies have shown that subjects perceive illusory representations along the AM trajectory (14, 17) and that these representations can interfere with the perception of physically presented stimuli on the AM path (18–21). In line with this behavioral evidence, it was found that the perception of AM leads to an increased blood oxygen level-dependent (BOLD) response in the region of V1 retinotopically mapped to the AM path (22–25), suggesting the involvement of early cortical processing. This activation increase induced by the illusory motion trace was also confirmed in neurophysiological investigations on ferrets and mice using voltage-sensitive dye (VSD) imaging (26, 27).

Despite these findings, however, a crucial question about the information content of the AM-induced signal remains unsolved: whether and how visual features of an object engaged in AM are reconstructed in early retinotopic cortex.

Using fMRI and a forward-encoding model (28–31), we examined whether content-specific representations of the intermediate state of a dynamic object engaged in apparent rotation could be reconstructed from the large-scale, population-level, feature-tuning responses in the nonstimulated region of early retinotopic cortex representing the AM path. To dissociate signals linked to high-level interpretations of the stimulus (illusory object features interpolated in motion) from those associated with the bottom-up stimulus input (no retinal input on the path) generating the perception of motion, we used rotational AM, which produces intermediate features that are different from the features of the physically present AM-inducing stimuli (transitional AM). We further probed the nature of such AM-induced feature representations by comparing feature-tuning profiles of the AM path in V1 with those evoked when visually imagining the AM stimuli. Our findings suggest that intermediate visual features of dynamic objects, which are not present anywhere in the retinal input, are reconstructed in V1 during kinetic transformations via feedback processing. This result indicates, for the first time to our knowledge, that internally reconstructed representations of dynamic objects in motion are instantiated by retinotopically organized, population-level, feature-tuning responses in V1.
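Forward-encoding analyses of the kind cited above are commonly implemented as an "inverted encoding model": channel-to-voxel weights are estimated from training data and then inverted to recover population tuning for held-out data. The sketch below uses simulated data and illustrative parameter choices (channel count, basis exponent, noise level are assumptions, not values from this study):

```python
import numpy as np

def make_channels(orientations_deg, n_channels=6):
    """Raised-cosine basis functions tiling orientation space (0-180 deg)."""
    centers = np.arange(n_channels) * 180.0 / n_channels
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    # cosine with a 180-deg period, half-wave rectified and sharpened
    return np.maximum(np.cos(2 * d), 0.0) ** 5

rng = np.random.default_rng(0)
n_channels, n_voxels, n_trials = 6, 50, 120

# Simulated training data: voxel responses B = C @ W + noise
train_ori = rng.uniform(0, 180, n_trials)
C_train = make_channels(train_ori)                       # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))         # channels x voxels
B_train = C_train @ W_true + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Stage 1: estimate channel-to-voxel weights by least squares
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]

# Stage 2: invert the model to recover channel responses for held-out data
B_test = make_channels(np.array([60.0])) @ W_true
C_hat = B_test @ W_hat.T @ np.linalg.inv(W_hat @ W_hat.T)

centers = np.arange(n_channels) * 180.0 / n_channels
peak_center = centers[int(np.argmax(C_hat[0]))]          # recovered orientation
```

The recovered channel-response profile peaks at the channel nearest the stimulated orientation, which is the property the study exploits to ask whether the never-presented intermediate orientation appears in V1 responses.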

2.
Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations.

Perception reflects not only the external world but also our internal goals and biases. Even the simplest actions and decisions about visual objects require a complex integration between “top-down” (internally generated) and “bottom-up” (sensory-driven) signals (1). For example, the information used for object categorization depends on top-down signals arising from the spatial (2) or conceptual (3) context in which the object appears, the prior experience of the observers (4, 5), and the specific task (6, 7). Despite such strong behavioral evidence, the neural correlates of this integration remain unclear, both in terms of the cortical regions involved and the extent of the integration within those regions. Here, we investigate the impact of diverse behavioral goals on the neural architecture that supports object processing.

Object recognition is known to depend on the ventral visual pathway, a set of interconnected cortical regions extending from early visual areas (e.g., V1/V2) into the anterior inferotemporal cortex (8). It has been argued that object processing along this pathway can largely be captured in feed-forward hierarchical frameworks without the need for top-down signals (9–12). For example, in the HMAX model (10), the integration of top-down signals is largely constrained to the extrinsic targets of the pathway (8), and in particular the lateral prefrontal cortex (LPFC) (13–17). However, there is strong evidence that top-down signals, such as attention and task, modulate the magnitude of response to simple visual stimuli (e.g., gratings) in early visual areas (18–23) and the response to objects in extrastriate regions (24–30).

Although these prior studies provide evidence of an effect of top-down signals on object processing, they afford only limited insight because they tested only the modulation of overall activity and not the impact of top-down signals on fine-grained object information available in the response. The importance of this distinction between gross modulation and fine-grained information is apparent in functional MRI (fMRI) investigations of working memory, where not all regions that evidence activity modulations contain information about the maintained objects (31, 32). Crucially, quantifying object information allows for a direct test of whether object representations are task-independent (equivalent information within and across tasks) or task-dependent (reduced information across compared with within tasks). Without this test, it remains unclear whether top-down signals, such as task, fundamentally alter the representations of objects or simply scale the response to them.

To investigate the full range of task effects, we presented a broad set of objects in six separate tasks, half of which probed physical properties of the stimulus (e.g., color: red/blue) and half its conceptual properties (e.g., content: man-made/natural). This paradigm overcomes a limitation of previous studies, which often treated task and stimulus as simple dichotomous variables (26–30), making it difficult to generalize beyond the limited range of tasks and objects tested. Furthermore, previous studies often manipulated only whether an object was attended or not, and therefore could not establish how different types of information are extracted from the same attended stimuli. In contrast, by presenting an identical set of object images under multiple tasks, all requiring attention to the images, and extracting the response to each combination of task and object, we were able to directly test the effect of task on object responses.

Our results revealed that task context has a pervasive effect on visual representations throughout the early visual cortex (EVC), the ventral visual pathway, and the LPFC. Task modulated both response magnitude and multivariate response patterns throughout these regions. Critically, responses in the ventral object-selective cortex, as well as in the LPFC, were task-dependent, with reduced object information across tasks compared with within task. In contrast, object information in the EVC was task-independent, despite large task-related activity modulations. Together, these findings demonstrate that top-down signals directly contribute to and constrain visual object representations in the ventral object-selective cortex and the LPFC. Such effects strongly support a recurrent, highly interactive view of visual object processing within the ventral visual pathway that contrasts with many primarily bottom-up frameworks (9, 10).
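The within- versus across-task decoding comparison at the heart of this design can be sketched with synthetic data. Below, a task-dependent region is simulated by warping the object code in one task; the decoder (a simple correlation-based nearest-centroid classifier, one common MVPA choice, not necessarily the one used in the study) loses accuracy when trained in one task and tested in the other. A task-independent region would show no such drop. All sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_objects, n_voxels, n_reps = 4, 30, 20

identity = rng.normal(size=(n_objects, n_voxels))          # shared object code
distortion = 2.0 * rng.normal(size=(n_objects, n_voxels))  # task-specific warp

def simulate(task, noise=0.5):
    """Noisy trial patterns for one task; task 1 warps the object code."""
    proto = identity + (distortion if task == 1 else 0.0)
    X = np.repeat(proto, n_reps, axis=0)
    X = X + noise * rng.normal(size=X.shape)
    y = np.repeat(np.arange(n_objects), n_reps)
    return X, y

def decode(X_train, y_train, X_test, y_test):
    """Correlation-based nearest-centroid decoding accuracy."""
    cent = np.stack([X_train[y_train == k].mean(0) for k in range(n_objects)])
    r = np.corrcoef(np.vstack([X_test, cent]))[:len(X_test), len(X_test):]
    return float((r.argmax(axis=1) == y_test).mean())

X0, y0 = simulate(task=0)
X0b, y0b = simulate(task=0)
X1, y1 = simulate(task=1)

acc_within = decode(X0, y0, X0b, y0b)  # train and test in the same task
acc_across = decode(X0, y0, X1, y1)    # train in one task, test in the other
```

The gap between `acc_within` and `acc_across` is the signature of a task-dependent representation; equivalent accuracies would indicate task independence, as the study reports for the EVC.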

3.
It is unknown whether anatomical specializations in the endbrains of different vertebrates determine the neuronal code used to represent numerical quantity. Therefore, we recorded single-neuron activity from the endbrain of crows trained to judge the number of items in displays. Many neurons were tuned for numerosities irrespective of the physical appearance of the items, and their activity correlated with performance outcome. Comparison of both behavioral and neuronal representations of numerosity revealed that the data are best described by a logarithmically compressed scaling of numerical information, as postulated by the Weber–Fechner law. The behavioral and neuronal numerosity representations in the crow reflect surprisingly well those found in the primate association cortex. This finding suggests that distantly related vertebrates with independently developed endbrains adopted similar neuronal solutions to process quantity.

Birds show elaborate quantification skills (1–3) that are of adaptive value in naturalistic situations like nest parasitism (4), food caching (5), or communication (6). The neuronal correlates of numerosity representations have only been explored in humans (7–9) and nonhuman primates (10–18), and they have been found to reside in the prefrontal and posterior parietal neocortices. In contrast to primates, birds lack a six-layered neocortex. The birds’ lineage diverged from mammals 300 Mya (19), at a time when the neocortex had not yet developed from the pallium of the endbrain. Instead, birds developed different pallial parts as dominant endbrain structures (20, 21) based on convergent evolution, with the nidopallium caudolaterale (NCL) as a high-level association area (22–26). Where and how numerosity is encoded in vertebrates lacking a neocortex is unknown. Here, we show that neurons in the telencephalic NCL of corvid songbirds respond to numerosity and exhibit a specific code for numerical information.
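The logarithmically compressed scaling invoked here is the core of the Weber–Fechner law: perceived magnitude grows with the logarithm of the stimulus, so equal numerical ratios produce equal perceived differences. A minimal worked example (the scaling constant `k` is arbitrary):

```python
import math

def perceived_magnitude(n, k=1.0):
    """Weber-Fechner: sensation grows with the logarithm of the stimulus."""
    return k * math.log(n)

# Equal numerical *ratios* map onto equal perceived *differences*,
# so discriminating 4 vs. 8 items is as easy as discriminating 8 vs. 16.
d_small = perceived_magnitude(8) - perceived_magnitude(4)
d_large = perceived_magnitude(16) - perceived_magnitude(8)
```

On such a compressed scale, neuronal tuning curves for numerosity become symmetric when plotted against log(number), which is the diagnostic the behavioral and neuronal comparisons rely on.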

4.
5.
A series of mono- and dinuclear alkynylplatinum(II) terpyridine complexes containing the hydrophilic oligo(para-phenylene ethynylene) with two 3,6,9-trioxadec-1-yloxy chains was designed and synthesized. The mononuclear alkynylplatinum(II) terpyridine complex was found to display a very strong tendency toward the formation of supramolecular structures. Interestingly, additional end-capping with another platinum(II) terpyridine moiety of various steric bulk at the terminal alkyne would lead to the formation of nanotubes or helical ribbons. These desirable nanostructures were found to be governed by the steric bulk on the platinum(II) terpyridine moieties, which modulates the directional metal−metal interactions and controls the formation of nanotubes or helical ribbons. Detailed analysis of temperature-dependent UV-visible absorption spectra of the nanostructured tubular aggregates also provided insights into the assembly mechanism and showed the role of metal−metal interactions in the cooperative supramolecular polymerization of the amphiphilic platinum(II) complexes.

Square-planar d8 platinum(II) polypyridine complexes have long been known to exhibit intriguing spectroscopic and luminescence properties (1–54) as well as interesting solid-state polymorphism associated with metal−metal and π−π stacking interactions (1–14, 25). Earlier work by our group showed the first example, to our knowledge, of an alkynylplatinum(II) terpyridine system [Pt(tpy)(C≡CR)]+ that incorporates σ-donating and solubilizing alkynyl ligands together with the formation of Pt···Pt interactions to exhibit notable color changes and luminescence enhancements on solvent composition change (25) and polyelectrolyte addition (26). This approach has provided access to alkynylplatinum(II) terpyridine and other related cyclometalated platinum(II) complexes with functionalities that can self-assemble into metallogels (27–31), liquid crystals (32, 33), and other molecular architectures, such as hairpin conformations (34), helices (35–38), nanostructures (39–45), and molecular tweezers (46, 47), as well as having a wide range of applications in molecular recognition (48–52), biomolecular labeling (48–52), and materials science (53, 54). Recently, metal-containing amphiphiles have also emerged as building blocks for supramolecular architectures (42–44, 55–59). Their self-assembly has been found to yield molecular architectures of unprecedented complexity through multiple noncovalent interactions on the introduction of external stimuli (42–44, 55–59).

Helical architecture is one of the most exciting self-assembled morphologies because of its unique functional and topological properties (60–69). Helical ribbons composed of amphiphiles, such as diacetylenic lipids, glutamates, and peptide-based amphiphiles, are often precursors for the growth of tubular structures on an increase in the width or the merging of the edges of the ribbons (64, 65). Recently, the control of nanotube formation vs. helical nanostructures has attracted considerable interest; it can be achieved through a fine interplay of the amphiphilic properties of the molecules (66), the choice of counteranions (67, 68), or the pH of the media (69), which govern the self-assembly of the molecules into the desired aggregates of helical ribbons or nanotube scaffolds. However, precise control of supramolecular morphology between helical ribbons and nanotubes remains challenging, particularly for polycyclic aromatics in the field of molecular assembly (64–69).

Oligo(para-phenylene ethynylene)s (OPEs) with solely π−π stacking interactions are well recognized to self-assemble into supramolecular systems of various nanostructures but rarely result in the formation of tubular scaffolds (70–73). In view of the rich photophysical properties of square-planar d8 platinum(II) systems and their propensity toward the formation of directional Pt···Pt interactions in distinctive morphologies (27–31, 39–45), it is anticipated that such directional and noncovalent metal−metal interactions might be capable of directing or dictating molecular ordering and alignment to give desirable nanostructures of helical ribbons or nanotubes in a precise and controllable manner.

Herein, we report the design and synthesis of mono- and dinuclear alkynylplatinum(II) terpyridine complexes containing hydrophilic OPEs with two 3,6,9-trioxadec-1-yloxy chains. The mononuclear alkynylplatinum(II) terpyridine complex, with its amphiphilic properties, is found to show a strong tendency toward the formation of supramolecular structures on diffusion of diethyl ether into dichloromethane or dimethyl sulfoxide (DMSO) solution. Interestingly, additional end-capping with another platinum(II) terpyridine moiety of various steric bulk at the terminal alkyne leads to nanotubes or helical ribbons in the self-assembly process. To the best of our knowledge, this finding represents the first example of the utilization of steric bulk to modulate the formation of directional metal−metal interactions and thereby precisely control the formation of nanotubes or helical ribbons in the self-assembly process. Application of the nucleation–elongation model to this assembly process by UV-visible (UV-vis) absorption spectroscopic studies has elucidated the nature of the molecular self-assembly and, more importantly, revealed the role of metal−metal interactions in the formation of these two types of nanostructures.
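Nucleation–elongation analyses of temperature-dependent UV-vis data of this kind are often fitted with the van der Schoot cooperative-polymerization model, in which the aggregated fraction in the elongation regime depends on the elongation enthalpy and elongation temperature. The sketch below implements that elongation-regime expression with illustrative parameter values (T_e and ΔH_e here are NOT fitted values from this study):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def elongation_fraction(T, T_e, dH_e):
    """Aggregated fraction in the elongation regime of the van der Schoot
    nucleation-elongation model. dH_e (J/mol) is the elongation enthalpy,
    negative for exothermic growth; T_e (K) is the elongation temperature."""
    if T >= T_e:
        return 0.0
    return 1.0 - math.exp(-dH_e * (T - T_e) / (R * T_e ** 2))

# Illustrative parameters only:
T_e, dH_e = 330.0, -50e3   # K, J/mol
phi_cold = elongation_fraction(300.0, T_e, dH_e)   # well below T_e
phi_warm = elongation_fraction(320.0, T_e, dH_e)   # just below T_e
```

Cooling below T_e drives the aggregated fraction toward 1, reproducing the sharp, non-sigmoidal melting curves that are taken as the signature of cooperative (metal−metal-interaction-assisted) supramolecular polymerization.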

6.
Phasic dopamine transmission is posited to act as a critical teaching signal that updates the stored (or “cached”) values assigned to reward-predictive stimuli and actions. It is widely hypothesized that these cached values determine the selection among multiple courses of action, a premise that has provided a foundation for contemporary theories of decision making. In the current work we used fast-scan cyclic voltammetry to probe dopamine-associated cached values from cue-evoked dopamine release in the nucleus accumbens of rats performing cost–benefit decision-making paradigms to evaluate critically the relationship between dopamine-associated cached values and preferences. By manipulating the amount of effort required to obtain rewards of different sizes, we were able to bias rats toward preferring an option yielding a high-value reward in some sessions and toward instead preferring an option yielding a low-value reward in others. Therefore, this approach permitted the investigation of dopamine-associated cached values in a context in which reward magnitude and subjective preference were dissociated. We observed greater cue-evoked mesolimbic dopamine release to options yielding the high-value reward even when rats preferred the option yielding the low-value reward. This result identifies a clear mismatch between the ordinal utility of the available options and the rank ordering of their cached values, thereby providing robust evidence that dopamine-associated cached values cannot be the sole determinant of choices in simple economic decision making.

In contemporary theories of economic decision making, values are assigned to reward-predictive states in which animals can take action to obtain rewards, and these state–action values are stored (“cached”) for the purpose of guiding future choices based upon their rank order (1–5). It is believed that these cached values are represented as synaptic weights within corticostriatal circuitry, reflected in the activity of subpopulations of striatal projection neurons (6–9), and are updated by dopamine-dependent synaptic plasticity (10–12). Indeed, a wealth of evidence suggests that the phasic activity of dopamine neurons reports instances in which the current reward, or the expectation of future reward, differs from prior expectations (13–24). This pattern of activity resembles the prediction-error term from temporal-difference reinforcement-learning algorithms, which is considered the critical teaching signal for updating cached values. A notable feature of models that integrate dopamine transmission into this computational framework is that the cached value of an action is explicitly read out by the phasic dopamine response to the unexpected presentation of a cue that designates the transition into a state in which that action yields reward. Therefore, cue-evoked dopamine signaling provides a neural representation of the cached values of available actions, and if these cached values serve as the basis for action selection, then cue-evoked dopamine responses should be rank ordered in a manner that is consistent with animals’ behavioral preferences.

Numerous studies that recorded cue-evoked dopamine signaling have reported correlations with the expected utility (subjective value) of actions (24–36). For example, risk-preferring rats demonstrated greater cue-evoked dopamine release for a risky option than for a certain option with equivalent objective expected value (reward magnitude times probability), whereas risk-averse rats showed greater dopamine release for the certain than for the risky option (30). Likewise, the cached values reported by dopamine neurons in macaque monkeys accounted for individual monkeys’ subjective flavor and risk preferences, with each attribute weighted according to its influence on behavioral preferences (31, 32). These observations, which are consistent across measures of dopamine neuronal activity and dopamine release, reinforce the prevailing notion that dopamine-associated cached values could be the primary determinant of decision making (2–5, 17, 28–32), because the cue-evoked dopamine responses were rank ordered according to the animals’ subjective preferences. However, there have been some reports that other economic attributes, such as effortful response costs (35–38) or the overt aversiveness of an outcome (39), are represented inconsistently by cue-evoked dopamine responses. For example, Gan et al. (35) showed that independent manipulations of two different dimensions (reward magnitude and effort) that had equivalent effects on behavior did not have equivalent effects on dopamine release. Paralleling these findings, a recent report reached a similar conclusion: dopamine transmission preferentially encodes an appetitive dimension but is relatively insensitive to aversiveness (39).

Because these cue-evoked dopamine signals represent cached values that are purported to determine action selection, their differential encoding of economic dimensions has potentially problematic implications in the context of decision making. Namely, extrapolating from these studies (35–39), one might infer that when a decision involves a tradeoff between these economic dimensions, the rank order of the dopamine-associated cached values for the available options would not consistently reflect the ordinal utility of those options, and therefore these cached values could not, on their own, be the basis of choices. However, this counterintuitive prediction was not tested explicitly by any of these previous studies; it thus remains a provocative notion that merits direct examination, because it runs contrary to the prevailing hypothesis described above, which is fundamental to contemporary theories of decision making.

Therefore, we investigated interactions between dimensions that have previously been shown, during independent manipulations, to be weakly or strongly incorporated into these cached values. Specifically, we increased the amount of effort required to obtain a large reward so that animals instead preferred a low-effort option yielding a smaller reward, and we used fast-scan cyclic voltammetry to record cue-evoked mesolimbic dopamine release as a neurochemical proxy for each option’s cached value. These conditions permitted us to test whether the cached values reported via cue-evoked dopamine indeed align with animals’ subjective preferences across these mixed cost–benefit attributes.
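The temporal-difference prediction-error account referenced above can be made concrete in a few lines: the error δ = r + γV(s′) − V(s) plays the role of the phasic dopamine signal, and repeated cue–reward pairings transfer it from the reward to the cue as the cached value converges. A minimal sketch (learning rate and discount are arbitrary choices):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
    """One temporal-difference update; delta plays the role of the
    phasic dopamine prediction-error signal."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

V = {}
# Pair cue 'A' with a reward of 1.0: the error is maximal on the first
# pairing and shrinks as the cached value V['A'] converges on the reward.
first_delta = td_update(V, 'A', 1.0, 'end')
for _ in range(500):
    td_update(V, 'A', 1.0, 'end')
```

Under this scheme, the cue-evoked response reads out V for the cued option; the study's point is that when effort costs reverse preferences, these cached values no longer rank order with choice.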

7.
High-level visual categories (e.g., faces, bodies, scenes, and objects) have separable neural representations across the visual cortex. Here, we show that this division of neural resources affects the ability to simultaneously process multiple items. In a behavioral task, we found that performance was superior when items were drawn from different categories (e.g., two faces/two scenes) compared to when items were drawn from one category (e.g., four faces). The magnitude of this mixed-category benefit depended on which stimulus categories were paired together (e.g., faces and scenes showed a greater behavioral benefit than objects and scenes). Using functional neuroimaging (i.e., functional MRI), we showed that the size of the mixed-category benefit was predicted by the amount of separation between neural response patterns, particularly within occipitotemporal cortex. These results suggest that the ability to process multiple items at once is limited by the extent to which those items are represented by separate neural populations.

An influential idea in neuroscience is that there is an intrinsic relationship between cognitive capacity and neural organization. For example, seminal cognitive models claim there are distinct resources devoted to perceiving and remembering auditory and visual information (1, 2). This cognitive distinction is reflected in the separate cortical regions devoted to processing sensory information from each modality (3). Similarly, within the domain of vision, when items are placed near each other, they interfere more than when they are spaced farther apart (4, 5). These behavioral effects have been linked to receptive fields and the retinotopic organization of early visual areas, in which items that are farther apart activate more separable neural populations (6–8). Thus, there are multiple cognitive domains in which it has been proposed that capacity limitations in behavior are intrinsically driven by competition for representation at the neural level (4, 7–10).

However, in the realm of high-level vision, evidence linking neural organization to behavioral capacities is sparse, although neural findings suggest there may be opportunities for such a link. For example, results from functional MRI (fMRI) and single-unit recording have found distinct clusters of neurons that selectively respond to categories such as faces, bodies, scenes, and objects (11, 12). These categories also elicit distinctive activation patterns across the ventral stream as measured with fMRI (13, 14). Together, these results raise the interesting possibility that there are partially separate cognitive resources available for processing different object categories.

In contrast, many prominent theories of visual cognition do not consider the possibility that different categories are processed by different representational mechanisms. For example, most models of attention and working memory assume or imply that these processes are limited by content-independent mechanisms, such as the number of items that can be represented (15–18), the amount of information that can be processed (19–21), or the degree of spatial interference between items (4, 22–24). Similarly, classical accounts of object recognition are intended to apply equally to all object categories (25, 26). These approaches implicitly assume that visual cognition is limited by mechanisms that do not depend on any major distinctions between objects.

Here, we examined (i) how high-level visual categories (faces, bodies, scenes, and objects) compete for representational resources in a change-detection task, and (ii) whether this competition is related to the separation of neural patterns across the cortex. To estimate the degree of competition between different categories, participants performed a task that required encoding multiple items at once from the same category (e.g., four faces) or different categories (e.g., two faces and two scenes). Any benefit in behavioral performance for mixed-category conditions relative to same-category conditions would suggest that different object categories draw on partially separable representational resources. To relate these behavioral measures to neural organization, we used fMRI to measure the neural responses to these categories individually and quantified the extent to which these categories activate different cortical regions.

Overall, we found evidence for separate representational resources for different object categories: performance with mixed-category displays was systematically better than performance with same-category displays. Critically, we also observed that the size of this mixed-category benefit was correlated with the degree to which items elicited distinct neural patterns, particularly within occipitotemporal cortex. These results support the view that a key limit on simultaneously processing multiple high-level items is the extent to which those items are represented by nonoverlapping neural channels within occipitotemporal cortex.
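The brain–behavior link described above boils down to two measurements per category pair: a neural pattern-separation score (often 1 minus the Pearson correlation between multivoxel patterns) and a behavioral mixed-category benefit, which are then correlated across pairs. The sketch below runs that pipeline on purely synthetic data; the patterns are random and the "benefit" values are fabricated to track separation, so only the analysis steps, not the numbers, reflect the study:

```python
import numpy as np

def separation(p1, p2):
    """1 - Pearson r between two category patterns (higher = more distinct)."""
    return 1.0 - np.corrcoef(p1, p2)[0, 1]

rng = np.random.default_rng(2)
cats = ["face", "body", "scene", "object"]
patterns = {c: rng.normal(size=100) for c in cats}   # fake voxel patterns

pairs = [("face", "scene"), ("face", "body"),
         ("body", "object"), ("scene", "object")]
sep = np.array([separation(patterns[a], patterns[b]) for a, b in pairs])

# Fabricated behavioral benefits, constructed here to track separation;
# in the study these come from change-detection performance.
benefit = 0.02 + 0.05 * sep
r = np.corrcoef(sep, benefit)[0, 1]
```

A positive `r` across category pairs is the study's key result: pairs with more separable occipitotemporal patterns show a larger mixed-category behavioral benefit.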

8.
Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (–12, –9, –6, –2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca’s area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca’s area exhibited effective phoneme categorization when SNR ≥ –6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.The perception and identification of speech signals have traditionally been attributed to the superior temporal cortices (13). 
However, the speech motor system (SMS)—the premotor cortex (PMC) and the posterior inferior frontal gyrus (IFG), including Broca’s area—that traditionally supports speech production is also implicated in speech perception tasks as revealed by functional magnetic resonance imaging (fMRI) (4–8), magnetoencephalography (9), electrocorticography in patients (10), and transcranial magnetic stimulation (TMS) (11, 12). Although there is little doubt about these redundant representations, contentious debate remains about the role of the SMS in speech perception. The idea of action-based (articulatory) representations of speech tokens was proposed long ago in the motor theory of speech perception (13) and has been revived recently with the discovery of “mirror neurons” (14). However, empirical evidence does not support a strong version of the motor theory (15). Instead, current theories of speech processing posit that the SMS may implement a sensorimotor integration function to facilitate speech perception (2, 16–18). Specifically, the SMS generates internal models that predict sensory consequences of articulatory gestures under consideration, and such forward predictions are matched with acoustic representations in sensorimotor interface areas located in the left posterior superior temporal gyrus (pSTG) and/or left inferior parietal lobule (IPL) to constrain perception (17, 18). Forward sensorimotor mapping may sharpen the perceptual acuity of the sensory system to the expected inputs via a top–down gain allocation mechanism (16), which, we assume, would be especially useful for disambiguating phonological information under adverse listening conditions. 
However, the assumption that the SMS is more robust than the auditory cortex in phonological processing in noise, such that successful forward mapping during speech perception can be achieved, has not yet been substantiated. In addition, there is a debate about whether the motor function is (11) or is not (16) essential for speech perception. Studies using TMS have found that stimulation of PMC impaired phonetic discrimination in noise (11) but had no effect on phoneme identification under optimal listening conditions (16), suggesting a circumstantial recruitment of the SMS in speech perception. Moreover, neuroimaging studies have shown elevated activity in the SMS as speech intelligibility decreases (5, 17–21). For instance, there was greater activation in the PMC or Broca’s area when participants listened to distorted relative to clear speech (19), or nonnative rather than native speech (17, 18). Activity in the left IFG increased as temporal compression of the speech signals increased until comprehension failed at the most compressed levels (20). For speech-in-noise perception, stronger activation in the left PMC and IFG was observed at lower signal-to-noise ratios (SNRs) (21), and bilateral IFG activity was positively correlated with SNR-modulated reaction time (RT) (5). Those findings have given rise to the hypothesis that the SMS contributes to speech-in-noise perception in an adaptive and task-specific manner. Presumably, under optimal listening conditions (i.e., no background noise), speech perception emerges primarily from acoustic representations within the auditory system with little or no support from the SMS. In contrast, the SMS would play a greater role in speech perception when the speech signal is impoverished under adverse listening conditions. However, there is likely a limit in the extent to which the SMS can compensate for poor SNR. 
That is, in some cases, information from articulatory commands fails to generate plausible predictions regarding the speech signals. Thus, the forward mapping may adaptively change with SNR in a linear pattern or in a convex pattern, in which forward-mapping efficiency peaks at a certain SNR and decreases as the SNR rises or falls from that point. However, the SNR conditions under which the SMS can successfully compensate for perception of impoverished speech signals by such a forward mapping mechanism are unknown. In the current fMRI study, 16 young participants identified English phoneme tokens (/ba/, /ma/, /da/, and /ta/) masked by broadband noise at multiple SNR levels (–12, –9, –6, –2, 8 dB, and no noise) via button press. A subvocal production task was also included at the end of scanning in which participants were instructed to repetitively and silently pronounce the four phonemes. Univariate General Linear Model (GLM) analysis and multivariate pattern analysis (MVPA) (22–25) were combined to investigate the recruitment [mean blood oxygenation level-dependent (BOLD) activation] and phoneme discriminability (spatial distribution of activity) of the SMS during speech-in-noise perception. MVPA compares the distributed activity patterns evoked by different stimuli/conditions across voxels and reveals the within-subject consistency of the activation patterns. It is robust to individual anatomical variability, is sensitive to small differences in activation, and provides a powerful tool for examining the processes underlying speech categorization (25). 
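The correlation-based MVPA logic described above can be illustrated with a toy example: a region discriminates phonemes when patterns for the same phoneme are more similar across independent runs than patterns for different phonemes. The 4-voxel patterns and the within-minus-between index below are assumptions for illustration, not the study's data or exact classifier.

```python
# Toy illustration of correlation-based MVPA: phoneme information is
# present when within-phoneme pattern correlations (across runs) exceed
# between-phoneme correlations. Patterns are invented 4-voxel vectors.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical voxel patterns for two phonemes in two independent runs.
run1 = {"ba": [1.0, 0.2, 0.1, 0.8], "da": [0.1, 0.9, 0.7, 0.2]}
run2 = {"ba": [0.9, 0.3, 0.2, 0.7], "da": [0.2, 1.0, 0.6, 0.1]}

def discrimination_index(a, b):
    """Mean within-phoneme minus mean between-phoneme correlation;
    a positive value indicates phoneme-discriminative patterns."""
    phonemes = list(a)
    within = sum(pearson_r(a[p], b[p]) for p in phonemes) / len(phonemes)
    cross = [(p, q) for p in phonemes for q in phonemes if p != q]
    between = sum(pearson_r(a[p], b[q]) for p, q in cross) / len(cross)
    return within - between
```

In this toy case the index is clearly positive; in the study, the analogous quantity stays positive in the SMS down to SNR ≥ –6 dB but collapses in auditory regions once noise is added.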
We predicted that (i) because the dorsal auditory stream (i.e., IFG, PMC, pSTG, and IPL) supporting sensorimotor integration is activated as a result of task-related speech perception (5, 17–21) and phonological working memory processes (26–28), the mean BOLD activity in those regions would negatively correlate with SNR-manipulated accuracy (increasing activity with increasing difficulty), supporting the compensatory recruitment of the SMS under adverse listening conditions; (ii) to implement effective forward sensorimotor mapping, the SMS would exhibit stronger multivoxel phoneme discrimination than auditory regions under noisy listening conditions; and (iii) when SNR decreases, the difference in phoneme discriminability between the SMS and auditory regions may increase linearly, or increase first and then decrease at a certain SNR level because of failed forward prediction processes under extensive noise interference. That is, the efficiency of the forward mapping would adaptively change with SNR in a linear or a convex pattern, respectively.
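Prediction iii distinguishes a linear from a convex dependence of the SMS advantage on SNR. A minimal sketch, with both functional forms and all parameter values assumed purely for illustration:

```python
# Two candidate shapes for how the SMS-minus-auditory discriminability
# difference might vary with SNR (prediction iii). Functional forms and
# parameters are illustrative assumptions, not fits to the study's data.

SNRS = [-12, -9, -6, -2, 8]  # dB levels used in the study

def linear_pattern(snr, slope=-0.02, intercept=0.1):
    """Difference grows steadily as SNR decreases."""
    return intercept + slope * snr

def convex_pattern(snr, peak_snr=-6, peak=0.3, width=8.0):
    """Difference peaks at an intermediate SNR and falls off when noise
    is either very weak or so strong that forward prediction fails."""
    return peak * max(0.0, 1 - abs(snr - peak_snr) / width)

linear_vals = [linear_pattern(s) for s in SNRS]
convex_vals = [convex_pattern(s) for s in SNRS]
```

The linear shape is largest at the lowest SNR, while the convex shape peaks at an intermediate SNR (here assumed at –6 dB) and vanishes when noise is negligible.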

9.
In humans, spontaneous movements are often preceded by early brain signals. One such signal is the readiness potential (RP) that gradually arises within the last second preceding a movement. An important question is whether people are able to cancel movements after the elicitation of such RPs, and, if so, until which point in time. Here, subjects played a game where they tried to press a button to earn points in a challenge with a brain–computer interface (BCI) that had been trained to detect their RPs in real time and to emit stop signals. Our data suggest that subjects can still veto a movement even after the onset of the RP. Cancellation of movements was possible if stop signals occurred earlier than 200 ms before movement onset, thus constituting a point of no return. It has been repeatedly shown that spontaneous movements are preceded by early brain signals (1–8). As early as a second before a simple voluntary movement, a so-called readiness potential (RP) is observed over motor-related brain regions (1–3, 5). The RP was found to precede the self-reported time of the “‘decision’ to act” (ref. 3, p. 623). Similar preparatory signals have been observed using invasive electrophysiology (8, 9) and functional MRI (7, 10), and have been demonstrated also for choices between multiple response options (6, 7, 10), for abstract decisions (10), for perceptual choices (11), and for value-based decisions (12). To date, the exact nature and causal role of such early signals in decision making is debated (12–20). One important question is whether a person can still exert a veto by inhibiting the movement after onset of the RP (13, 18, 21, 22). One possibility is that the onset of the RP triggers a causal chain of events that unfolds in time and cannot be cancelled. The onset of the RP in this case would be akin to tipping the first stone in a row of dominoes. If there is no chance of intervening, the dominoes will gradually fall one-by-one until the last one is reached. 
This has been termed a ballistic stage of processing (23, 24). A different possibility is that participants can still terminate the process, akin to taking out a domino at some later stage in the chain and thus preventing the process from completing. Here, we directly tested this in a real-time experiment that required subjects to terminate their decision to move once an RP had been detected by a brain–computer interface (BCI) (25–31).
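The stop-signal rule implied by the results reduces to a single threshold. A minimal sketch, where the 200-ms value comes from the abstract and everything else is illustrative:

```python
# Minimal sketch of the veto rule suggested by the results: a movement
# can still be cancelled only if the BCI's stop signal arrives earlier
# than ~200 ms before movement onset. The threshold is the value reported
# in the abstract; the function itself is an illustrative simplification.

POINT_OF_NO_RETURN_MS = 200  # stop signals later than this fail

def movement_cancelled(stop_signal_ms_before_onset):
    """True if a veto is still possible for a stop signal arriving the
    given number of milliseconds before movement onset."""
    return stop_signal_ms_before_onset > POINT_OF_NO_RETURN_MS
```

In the domino analogy, a stop signal at 500 ms before onset removes a domino early enough to halt the cascade, while one at 150 ms arrives after the ballistic stage has begun.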

10.
Protein toxins from tarantula venom alter the activity of diverse ion channel proteins, including voltage-, stretch-, and ligand-activated cation channels. Although tarantula toxins have been shown to partition into membranes, and the membrane is thought to play an important role in their activity, the structural interactions between these toxins and lipid membranes are poorly understood. Here, we use solid-state NMR and neutron diffraction to investigate the interactions between a voltage sensor toxin (VSTx1) and lipid membranes, with the goal of localizing the toxin in the membrane and determining its influence on membrane structure. Our results demonstrate that VSTx1 localizes to the headgroup region of lipid membranes and produces a thinning of the bilayer. The toxin orients such that many basic residues are in the aqueous phase, all three Trp residues adopt interfacial positions, and several hydrophobic residues are within the membrane interior. One remarkable feature of this preferred orientation is that the surface of the toxin that mediates binding to voltage sensors is ideally positioned within the lipid bilayer to favor complex formation between the toxin and the voltage sensor. Protein toxins from venomous organisms have been invaluable tools for studying the ion channel proteins they target. For example, in the case of voltage-activated potassium (Kv) channels, pore-blocking scorpion toxins were used to identify the pore-forming region of the channel (1, 2), and gating modifier tarantula toxins that bind to S1–S4 voltage-sensing domains have helped to identify structural motifs that move at the protein–lipid interface (3–5). 
In many instances, these toxin–channel interactions are highly specific, allowing them to be used in target validation and drug development (6–8). Tarantula toxins are a particularly interesting class of protein toxins that have been found to target all three families of voltage-activated cation channels (3, 9–12), stretch-activated cation channels (13–15), as well as ligand-gated ion channels as diverse as acid-sensing ion channels (ASIC) (16–21) and transient receptor potential (TRP) channels (22, 23). The tarantula toxins targeting these ion channels belong to the inhibitor cystine knot (ICK) family of venom toxins that are stabilized by three disulfide bonds at the core of the molecule (16, 17, 24–31). Although conventional tarantula toxins vary in length from 30 to 40 aa and contain one ICK motif, the recently discovered double-knot toxin (DkTx) that specifically targets TRPV1 channels contains two separable lobes, each containing its own ICK motif (22, 23). One unifying feature of all tarantula toxins studied thus far is that they act on ion channels by modifying the gating properties of the channel. The best studied of these are the tarantula toxins targeting voltage-activated cation channels, where the toxins bind to the S3b–S4 voltage sensor paddle motif (5, 32–36), a helix-turn-helix motif within S1–S4 voltage-sensing domains that moves in response to changes in membrane voltage (37–41). Toxins binding to S3b–S4 motifs can influence voltage sensor activation, opening and closing of the pore, or the process of inactivation (4, 5, 36, 42–46). The tarantula toxin PcTx1 can promote opening of ASIC channels at neutral pH (16, 18), and DkTx opens TRPV1 in the absence of other stimuli (22, 23), suggesting that these toxins stabilize open states of their target channels. For many of these tarantula toxins, the lipid membrane plays a key role in the mechanism of inhibition. 
Strong membrane partitioning has been demonstrated for a range of toxins targeting S1–S4 domains in voltage-activated channels (27, 44, 47–50), and for GsMTx4 (14, 50), a tarantula toxin that inhibits opening of stretch-activated cation channels in astrocytes, as well as the cloned stretch-activated Piezo1 channel (13, 15). In experiments on stretch-activated channels, both the d- and l-enantiomers of GsMTx4 are active (14, 50), implying that the toxin may not bind directly to the channel. In addition, both forms of the toxin alter the conductance and lifetimes of gramicidin channels (14), suggesting that the toxin inhibits stretch-activated channels by perturbing the interface between the membrane and the channel. In the case of Kv channels, the S1–S4 domains are embedded in the lipid bilayer and interact intimately with lipids (48, 51, 52), and changes in lipid composition can dramatically alter gating of the channel (48, 53–56). In one study on the gating of the Kv2.1/Kv1.2 paddle chimera (53), the tarantula toxin VSTx1 was proposed to inhibit Kv channels by modifying the forces acting between the channel and the membrane. Although these studies implicate a key role for the membrane in the activity of Kv and stretch-activated channels, and for the action of tarantula toxins, the influence of the toxin on membrane structure and dynamics has not been directly examined. The goal of the present study was to localize a tarantula toxin in membranes using structural approaches and to investigate the influence of the toxin on the structure of the lipid bilayer.

11.
Mutations that lead to Huntington’s disease (HD) result in increased transmission at glutamatergic corticostriatal synapses at early presymptomatic stages, changes that have been postulated to set the stage for the pathological changes and symptoms observed at later ages. Based on this, pharmacological interventions that reverse excessive corticostriatal transmission may provide a novel approach for reducing early physiological changes and motor symptoms observed in HD. We report that activation of the M4 subtype of muscarinic acetylcholine receptor reduces transmission at corticostriatal synapses and that this effect is dramatically enhanced in presymptomatic YAC128 HD and BACHD relative to wild-type mice. Furthermore, chronic administration of a novel highly selective M4 positive allosteric modulator (PAM) beginning at presymptomatic ages improves motor and synaptic deficits in 5-mo-old YAC128 mice. These data raise the exciting possibility that selective M4 PAMs could provide a therapeutic strategy for the treatment of HD. Huntington’s disease (HD) is a rare and fatal neurodegenerative disease caused by an expansion of a CAG triplet repeat in Htt, the gene that encodes for the protein huntingtin (1, 2). HD is characterized by a prediagnostic phase that includes subtle changes in personality, cognition, and motor function, followed by a more severe symptomatic stage initially characterized by hyperkinesia (chorea), motor incoordination, deterioration of cognitive abilities, and psychiatric symptoms. At later stages of disease progression, patients experience dystonia, rigidity, and bradykinesia, and ultimately death (3–7). 
The cortex and striatum are the most severely affected brain regions in HD and, interestingly, an increasing number of reports suggest that alterations in cortical and striatal physiology are present in prediagnostic individuals and in young HD mice (6–16). Striatal spiny projection neurons (SPNs) receive large glutamatergic inputs from the cortex and thalamus, as well as dopaminergic innervation from the substantia nigra. In the healthy striatum, the interplay of these neurotransmitters coordinates the activity of SPNs and striatal interneurons, regulating motor planning and execution as well as cognition and motivation (17, 18). Htt mutations lead to an early increase in striatal glutamatergic transmission, which begins during the asymptomatic phase of HD (12–14) and could contribute to synaptic changes observed in later stages of HD (19, 20). Based on this, pharmacological agents that reduce excitatory transmission in the striatum could reduce or prevent the progression of alterations in striatal synaptic function and behavior observed in symptomatic stages of HD. Muscarinic acetylcholine receptors (mAChRs), particularly M4, can inhibit transmission at corticostriatal synapses (21–25). Therefore, it is possible that selective activation of specific mAChR subtypes could normalize excessive corticostriatal transmission in HD. Interestingly, previous studies also suggest that HD is associated with alterations of striatal cholinergic markers, including mAChRs (26–29). We now provide exciting new evidence that M4-mediated control of corticostriatal transmission is increased in young asymptomatic HD mice and that M4 positive allosteric modulators (PAMs) may represent a new treatment strategy for normalizing early changes in corticostriatal transmission and reducing the progression of HD.

12.
13.
Background and objectives: Natriuretic peptides have been suggested to be of value in risk stratification in dialysis patients. Data in patients on peritoneal dialysis remain limited. Design, setting, participants, & measurements: Patients of the ADEMEX trial (ADEquacy of peritoneal dialysis in MEXico) were randomized to a control group [standard 4 × 2 L continuous ambulatory peritoneal dialysis (CAPD); n = 484] and an intervention group (CAPD with a target creatinine clearance ≥60 L/wk/1.73 m2; n = 481). Natriuretic peptides were measured at baseline and correlated with other parameters as well as evaluated for effects on patient outcomes. Results: Control group and intervention group were comparable at baseline with respect to all measured parameters. Baseline values of natriuretic peptides were elevated and correlated significantly with levels of residual renal function but not with body size or diabetes. Baseline values of the N-terminal fragment of B-type natriuretic peptide (NT-proBNP), but not proANP(1–30), proANP(31–67), or proANP(1–98), were independently highly predictive of overall survival and cardiovascular mortality. Volume removal was also significantly correlated with patient survival. Conclusions: NT-proBNP has significant predictive value for survival of CAPD patients and may be of value in guiding risk stratification and potentially targeted therapeutic interventions. Plasma levels of cardiac natriuretic peptides are elevated in patients with chronic kidney disease, owing to impairment of renal function, hypertension, hypervolemia, and/or concomitant heart disease (1–7). Atrial natriuretic peptide (ANP) and particularly brain natriuretic peptide (BNP) levels are linked independently to left ventricular mass (3–5,8–16) and function (3,6–17) and predict total and cardiovascular mortality (1,3,8,10,12,18) as well as cardiac events (12,19). 
ANP and BNP decrease significantly during hemodialysis treatment but increase again during the interdialytic interval (1,2,4,6,7,14,17,20–23). Levels in patients on peritoneal dialysis (PD) have been found to be lower than in patients on hemodialysis (11,24–26), but the correlations with left ventricular function and structure are maintained in both types of dialysis modalities (11,15,27,28). The high mortality of patients on peritoneal dialysis and the failure of dialytic interventions to alter this mortality (29,30) necessitate renewed attention to novel methods of stratification and identification of patients at highest risk to be targeted for specific interventions. Cardiac natriuretic peptides are increasingly considered to fulfill this role in nonrenal patients. Evaluations of cardiac natriuretic peptides in patients on PD have been limited by small numbers (3,9,11,12,15,24–26), and only one study examined correlations between natriuretic peptide levels and outcomes (12). The PD population enrolled in the ADEMEX trial offered us the opportunity to evaluate cardiac natriuretic peptides and their value in predicting outcomes in the largest clinical trial ever performed on PD (29,30). It is hoped that such an evaluation would identify patients at risk even in the absence of overt clinical disease and hence facilitate or encourage interventions with salutary outcomes.

14.
The multifunctional AMP-activated protein kinase (AMPK) is an evolutionarily conserved energy sensor that plays an important role in cell proliferation, growth, and survival. It remains unclear whether AMPK functions as a tumor suppressor or a contextual oncogene, because although active AMPK inhibits mammalian target of rapamycin (mTOR) and lipogenesis, two crucial arms of cancer growth, AMPK also ensures viability by metabolic reprogramming in cancer cells. AMPK activation by two indirect AMPK agonists, AICAR and metformin (now in over 50 clinical trials on cancer), has been correlated with reduced cancer cell proliferation and viability. Surprisingly, we found that compared with normal tissue, AMPK is constitutively activated in both human and mouse gliomas. Therefore, we questioned whether the antiproliferative actions of AICAR and metformin are AMPK independent. Both AMPK agonists inhibited proliferation, but through unique AMPK-independent mechanisms, and both reduced tumor growth in vivo independent of AMPK. Importantly, A769662, a direct AMPK activator, had no effect on proliferation, uncoupling high AMPK activity from inhibition of proliferation. Metformin directly inhibited mTOR by enhancing PRAS40’s association with RAPTOR, whereas AICAR blocked the cell cycle through proteasomal degradation of the G2M phosphatase cdc25c. Together, our results suggest that although AICAR and metformin are potent AMPK-independent antiproliferative agents, physiological AMPK activation in glioma may be a response mechanism to metabolic stress and anticancer agents. AMP-activated protein kinase (AMPK) is a molecular hub for cellular metabolic control (1–4). It is a heterotrimer of catalytic α, regulatory β, and γ subunits. The rising AMP:ATP ratio during energy stress leads to AMP-dependent phosphorylation of the catalytic α subunits. This activates AMPK, which then phosphorylates numerous substrates to restore energy homeostasis. 
It phosphorylates acetyl CoA carboxylase (ACCα) to inhibit fatty acid (FA) synthesis (5) and TSC2 and RAPTOR (6, 7) to inhibit mammalian target of rapamycin (mTOR)C1. Because fatty acid synthesis and mTORC1 activity are essential for cell proliferation and growth (8), AMPK activation with the two indirect AMPK agonists AICAR and metformin has been correlated with suppression of cell proliferation and growth (9–11). AICAR is metabolized to an AMP mimetic, ZMP, which activates AMPK (12). Although AICAR does inhibit proliferation (11–15), it also causes AMPK-independent cellular and metabolic effects (12, 16), including inhibition of glucokinase, glycogen phosphorylase, and nucleotide biosynthesis (17, 18). Whether AICAR requires AMPK to suppress proliferation is questionable because although both AICAR and 2-deoxyglucose activated AMPK, only AICAR inhibited proliferation of trisomic mouse fibroblasts (11). Moreover, although AICAR strongly increases glucose uptake through AMPK activation in muscle cells, it reduced fluorodeoxyglucose-PET signals and inhibited glioma growth in vivo (9), suggesting that reduced PET signals could be due to its AMPK-independent antiglioma action. The antiproliferative mechanisms of metformin also remain unclear. It is argued that because metformin inhibits mitochondrial respiration (19), it induces an energy crisis (metabolic stress), leading to AMPK activation, mTOR inhibition, and suppression of proliferation (20). However, Dykens et al. (21) showed that net cellular ATP is not affected by metformin. Other suggested mechanisms include disruption of cross-talk between GPCRs and insulin receptors (22), inhibition of the ErbB2/IGF1 receptor (23), and mTOR inhibition by blocking RAG function (24). In vivo, metformin and the direct AMPK agonist A769662 delayed onset but not progression of lymphoma in Pten+/−;LKB1+/− mice (25) (LKB1 is the upstream kinase that activates AMPK). 
Moreover, these experiments were not conducted on AMPK-deficient animals, making it unclear whether the drug effects were AMPK dependent. Contrary to these results, metformin prevented tumorigenesis without activating AMPK in lung tumors (26), and in fact, LKB1-deficient lung tumors were actually more responsive to the metformin analog phenformin (27). The latter results suggest that the LKB1–AMPK pathway protects cancer cells from antiproliferative agents and may support tumorigenesis. In line with the above idea, genetic studies showed a procancer role of AMPK in the in vivo growth of H-RAS–transformed fibroblasts and astrocytic tumors, in pancreatic cancer, and in a subtype of renal cell carcinoma (28–31). Additional genetic studies also underscore the requirement of AMPK in cancer cell metabolic programming (32, 33), cell division (34–37), migration (38), protection against stress, and anticancer therapy (39–41). However, in Myc-driven mouse lymphoma, AMPK was shown to function as a tumor suppressor (42), suggesting a context-dependent role of AMPK in cancer. To definitively determine whether AMPK is necessary for the antiproliferative actions of AICAR and metformin, we conducted a comprehensive pharmacogenetic study in glioma. First, we found that gliomas express constitutively active AMPK, and that AICAR and metformin inhibit proliferation by distinct, AMPK-independent mechanisms. Second, A769662, a direct AMPK activator (43), showed no antiproliferative effects. Therefore, many agents that inhibit proliferation with concomitant AMPK activation may not require AMPK for their action. Instead, AMPK activation could be a response mechanism to counter stress induced by anticancer agents.

15.
Cognition presents evolutionary research with one of its greatest challenges. Cognitive evolution has been explained at the proximate level by shifts in absolute and relative brain volume and at the ultimate level by differences in social and dietary complexity. However, no study has integrated the experimental and phylogenetic approach at the scale required to rigorously test these explanations. Instead, previous research has largely relied on various measures of brain size as proxies for cognitive abilities. We experimentally evaluated these major evolutionary explanations by quantitatively comparing the cognitive performance of 567 individuals representing 36 species on two problem-solving tasks measuring self-control. Phylogenetic analysis revealed that absolute brain volume best predicted performance across species and accounted for considerably more variance than brain volume controlling for body mass. This result corroborates recent advances in evolutionary neurobiology and illustrates the cognitive consequences of cortical reorganization through increases in brain volume. Within primates, dietary breadth but not social group size was a strong predictor of species differences in self-control. Our results implicate robust evolutionary relationships between dietary breadth, absolute brain volume, and self-control. These findings provide a significant first step toward quantifying the primate cognitive phenome and explaining the process of cognitive evolution. Since Darwin, understanding the evolution of cognition has been widely regarded as one of the greatest challenges for evolutionary research (1). Although researchers have identified surprising cognitive flexibility in a range of species (2–40) and potentially derived features of human psychology (41–61), we know much less about the major forces shaping cognitive evolution (62–71). 
With the notable exception of Bitterman’s landmark studies conducted several decades ago (63, 72–74), most research comparing cognition across species has been limited to small taxonomic samples (70, 75). With limited comparable experimental data on how cognition varies across species, previous research has largely relied on proxies for cognition (e.g., brain size) or metaanalyses when testing hypotheses about cognitive evolution (76–92). The lack of cognitive data collected with similar methods across large samples of species precludes meaningful species comparisons that can reveal the major forces shaping cognitive evolution across species, including humans (48, 70, 89, 93–98). To address these challenges we measured cognitive skills for self-control in 36 species of mammals and birds (Fig. 1 and Tables S1–S4) tested using the same experimental procedures, and evaluated the leading hypotheses for the neuroanatomical underpinnings and ecological drivers of variance in animal cognition. At the proximate level, both absolute (77, 99–107) and relative brain size (108–112) have been proposed as mechanisms supporting cognitive evolution. Evolutionary increases in brain size (both absolute and relative) and cortical reorganization are hallmarks of the human lineage and are believed to index commensurate changes in cognitive abilities (52, 105, 113–115). Further, given the high metabolic costs of brain tissue (116–121) and remarkable variance in brain size across species (108, 122), it is expected that the energetic costs of large brains are offset by the advantages of improved cognition. The cortical reorganization hypothesis suggests that selection for absolutely larger brains—and concomitant cortical reorganization—was the predominant mechanism supporting cognitive evolution (77, 91, 100–106, 120). In contrast, the encephalization hypothesis argues that an increase in brain volume relative to body size was of primary importance (108, 110, 111, 123). 
Both of these hypotheses have received support through analyses aggregating data from published studies of primate cognition and reports of “intelligent” behavior in nature—both of which correlate with measures of brain size (76, 77, 84, 92, 110, 124).

Fig. 1. A phylogeny of the species included in this study. Branch lengths are proportional to time except where long branches have been truncated by parallel diagonal lines (split between mammals and birds ∼292 Mya).

With respect to selective pressures, both social and dietary complexities have been proposed as ultimate causes of cognitive evolution. The social intelligence hypothesis proposes that increased social complexity (frequently indexed by social group size) was the major selective pressure in primate cognitive evolution (6, 44, 48, 50, 87, 115, 120, 125–141). This hypothesis is supported by studies showing a positive correlation between a species’ typical group size and the neocortex ratio (80, 81, 85–87, 129, 142–145), cognitive differences between closely related species with different group sizes (130, 137, 146, 147), and evidence for cognitive convergence between highly social species (26, 31, 148–150). The foraging hypothesis posits that dietary complexity, indexed by field reports of dietary breadth and reliance on fruit (a spatiotemporally distributed resource), was the primary driver of primate cognitive evolution (151–154). 
This hypothesis is supported by studies linking diet quality and brain size in primates (79, 81, 86, 142, 155), and experimental studies documenting species differences in cognition that relate to feeding ecology (94, 156–166). Although each of these hypotheses has received empirical support, a comparison of the relative contributions of the different proximate and ultimate explanations requires (i) a cognitive dataset covering a large number of species tested using comparable experimental procedures; (ii) cognitive tasks that allow valid measurement across a range of species with differing morphology, perception, and temperament; (iii) a representative sample within each species to obtain accurate estimates of species-typical cognition; (iv) phylogenetic comparative methods appropriate for testing evolutionary hypotheses; and (v) unprecedented collaboration to collect these data from populations of animals around the world (70). Here, we present, to our knowledge, the first large-scale collaborative dataset and comparative analysis of this kind, focusing on the evolution of self-control. We chose to measure self-control—the ability to inhibit a prepotent but ultimately counterproductive behavior—because it is a crucial and well-studied component of executive function and is involved in diverse decision-making processes (167–169). For example, animals require self-control when avoiding feeding or mating in view of a higher-ranking individual, sharing food with kin, or searching for food in a new area rather than a previously rewarding foraging site. In humans, self-control has been linked to health, economic, social, and academic achievement, and is known to be heritable (170–172). In song sparrows, a study using one of the tasks reported here found a correlation between self-control and song repertoire size, a predictor of fitness in this species (173). 
In primates, performance on a series of nonsocial self-control tasks was related to variability in social systems (174), illustrating the potential link between these skills and socioecology. Thus, tasks that quantify self-control are ideal for comparison across taxa given its robust behavioral correlates, heritable basis, and potential impact on reproductive success.

In this study we tested subjects on two previously implemented self-control tasks. In the A-not-B task (27 species, n = 344), subjects were first familiarized with finding food in one location (container A) for three consecutive trials. In the test trial, subjects initially saw the food hidden in the same location (container A), but it was then moved to a new location (container B) before they were allowed to search (Movie S1). In the cylinder task (32 species, n = 439), subjects were first familiarized with finding a piece of food hidden inside an opaque cylinder. In the following 10 test trials, a transparent cylinder was substituted for the opaque cylinder. To successfully retrieve the food, subjects needed to inhibit the impulse to reach for the food directly (bumping into the cylinder) in favor of the detour response they had used during the familiarization phase (Movie S2).

Thus, the test trials in both tasks required subjects to inhibit a prepotent motor response (searching in the previously rewarded location or reaching directly for the visible food), but the nature of the correct response varied between tasks. Specifically, in the A-not-B task subjects were required to inhibit the response that was previously successful (searching in location A), whereas in the cylinder task subjects were required to perform the same response as in the familiarization trials (detour response), but in the context of novel task demands (visible food directly in front of the subject).
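A comparative dataset like this ultimately reduces each species to a species-typical score: the proportion of correct responses per subject, averaged across subjects. The sketch below illustrates that aggregation for the cylinder task; all subject names and trial outcomes are invented for illustration and are not the study's data.

```python
# Hypothetical aggregation of cylinder-task results into a species-typical
# score. All subjects and outcomes below are invented, not the study's data.
from statistics import mean

# trials[subject] = outcomes of the 10 test trials; True = correct detour,
# False = prepotent direct reach (bumping into the transparent cylinder)
trials = {
    "subj_1": [True, True, False, True, True, True, True, True, False, True],
    "subj_2": [False, True, True, True, False, True, True, True, True, True],
    "subj_3": [True] * 10,
}

# Subject score = proportion of correct detours (bools count as 0/1);
# species score = mean over subjects, the unit entering comparative analysis
subject_scores = {s: mean(t) for s, t in trials.items()}
species_score = mean(subject_scores.values())
print(f"species-typical cylinder score: {species_score:.2f}")
```

The same two-level aggregation (trials within subjects, subjects within species) applies to the A-not-B task, with a single test trial per subject.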

16.
Antiretroviral therapy (ART) reduces the infectiousness of HIV-infected persons, but only after testing, linkage to care, and successful viral suppression. Thus, a large proportion of HIV transmission during a period of high infectiousness in the first few months after infection (“early transmission”) is perceived as a threat to the impact of HIV “treatment-as-prevention” strategies. We created a mathematical model of a heterosexual HIV epidemic to investigate how the proportion of early transmission affects the impact of ART on reducing HIV incidence. The model includes stages of HIV infection, flexible sexual mixing, and changes in risk behavior over the epidemic. The model was calibrated to HIV prevalence data from South Africa using a Bayesian framework. Immediately after ART was introduced, more early transmission was associated with a smaller reduction in HIV incidence rate—consistent with the concern that a large amount of early transmission reduces the impact of treatment on incidence. However, the proportion of early transmission was not strongly related to the long-term reduction in incidence. This was because more early transmission resulted in a shorter generation time, in which case lower values for the basic reproductive number (R0) are consistent with the observed epidemic growth, and R0 was negatively correlated with long-term intervention impact. The fraction of early transmission depends on biological factors, behavioral patterns, and epidemic stage, and alone does not predict long-term intervention impacts. However, early transmission may be an important determinant of the outcome of short-term trials and evaluations of programs.

Recent studies have confirmed that effective antiretroviral therapy (ART) reduces the transmission of HIV among stable heterosexual couples (1–3). This finding has generated interest in understanding the population-level impact of HIV treatment on reducing the rate of new HIV infections in generalized epidemic settings (4).
Research, including mathematical modeling (5–10), implementation research (11), and major randomized controlled trials (12–14), is focused on how ART provision might be expanded strategically to maximize its public health benefits (15, 16).

One concern is that if a large fraction of HIV transmission occurs shortly after a person becomes infected, before the person can be diagnosed and initiated on ART, this will limit the potential impact of HIV treatment on reducing HIV incidence (9, 17, 18). Data suggest that persons are more infectious during a short period of “early infection” after becoming infected with HIV (19–22), although there is debate about the extent, duration, and determinants of elevated infectiousness (18, 23). The amount of transmission that occurs will also depend on patterns of sexual behavior and sexual networks (17, 24–27). There have been estimates of the contribution of early infection to transmission from mathematical models (7, 17, 21, 24–26) and phylogenetic analyses (28–31), but these vary widely, from 5% to above 50% (23).

In this study, we use a mathematical model to quantify how the proportion of transmission that comes from persons who have been infected recently affects the impact of treatment scale-up on HIV incidence. The model is calibrated to longitudinal HIV prevalence data from South Africa using a Bayesian framework.
Thus, the model accounts not only for the early epidemic growth rate highlighted in previous research (5, 9, 18), but also for the heterogeneity and sexual behavior change needed to explain the peak and decline in HIV incidence observed in sub-Saharan African HIV epidemics (32, 33).

The model calibration allows uncertainty about factors that determine the amount of early transmission, including the relative infectiousness during early infection, heterogeneity in propensity for sexual risk behavior, assortativity in sexual partner selection, reduction in risk propensity over the life course, and population-wide reductions in risk behavior in response to the epidemic (32, 33). This results in multiple combinations of parameter values that are consistent with the observed epidemic but vary in the amount of early transmission. We simulated the impact of a treatment intervention and report how the proportion of early transmission correlates with the reduction in HIV incidence from the intervention over the short and long term.
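The key step in the argument above—that more early transmission implies a shorter generation time and hence a lower R0 consistent with the same observed epidemic growth—can be sketched with the standard textbook approximation R0 ≈ exp(rT) for exponential growth at rate r with mean generation time T. This is a simplified illustration of the reasoning, not the paper's calibrated model, and all numeric values below are hypothetical.

```python
# Illustration of the abstract's reasoning (not the paper's model): holding the
# observed epidemic growth rate r fixed, the basic reproductive number implied
# by that growth falls as the mean generation time T shortens, which is what
# happens when a larger fraction of transmission occurs early in infection.
# Uses the simple approximation R0 = exp(r * T); all numbers are hypothetical.
import math

r = 0.3  # observed exponential growth rate per year (hypothetical)

for early_fraction, T in [(0.1, 6.0), (0.3, 4.5), (0.5, 3.0)]:
    R0 = math.exp(r * T)  # R0 consistent with growth r at generation time T
    print(f"early transmission {early_fraction:.0%}: T = {T} y, implied R0 = {R0:.2f}")
```

Because a lower R0 is easier for an intervention to push below the epidemic threshold, the scenarios with the most early transmission need not have the worst long-term intervention impact, which is the abstract's central point.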

17.
Embryonic stem cell-based therapies exhibit great potential for the treatment of Parkinson’s disease (PD) because they can significantly rescue PD-like behaviors. However, whether the transplanted cells themselves release dopamine in vivo remains elusive. We and others have recently induced human embryonic stem cells into primitive neural stem cells (pNSCs) that are self-renewable for massive/transplantable production and can efficiently differentiate into dopamine-like neurons (pNSC–DAn) in culture. Here, we showed that after the striatal transplantation of pNSC–DAn, (i) pNSC–DAn retained tyrosine hydroxylase expression and reduced PD-like asymmetric rotation; (ii) depolarization-evoked dopamine release and reuptake were significantly rescued in the striatum both in vitro (brain slices) and in vivo, as determined jointly by microdialysis-based HPLC and electrochemical carbon fiber electrodes; and (iii) the rescued dopamine was released directly from the grafted pNSC–DAn (and not from injured original cells). Thus, pNSC–DAn grafts release and reuptake dopamine in the striatum in vivo and alleviate PD symptoms in rats, providing proof-of-concept for human clinical translation.

Parkinson’s disease (PD) is a chronic progressive neurodegenerative disorder characterized by the specific loss of dopaminergic neurons in the substantia nigra pars compacta and their projecting axons, resulting in loss of dopamine (DA) release in the striatum (1). During the last two decades, cell-replacement therapy has proven, at least experimentally, to be a potential treatment for PD patients (2–7) and in animal models (8–15). The basic principle of cell therapy is to restore DA release by transplanting new DA-like cells. Until recently, obtaining enough transplantable cells was a major bottleneck in the practicability of cell therapy for PD.
One possible source is embryonic stem cells (ESCs), which can develop infinitely into self-renewable pluripotent cells with the potential to generate any type of cell, including DA neurons (DAns) (16, 17).

Recently, several groups including us have introduced rapid and efficient ways to generate primitive neural stem cells (pNSCs) from human ESCs using small-molecule inhibitors under chemically defined conditions (12, 18, 19). These cells are nonpolarized neuroepithelia and retain plasticity upon treatment with neuronal developmental morphogens. Importantly, pNSCs differentiate into DAns (pNSC–DAn) with high efficiency (∼65%) after patterning by sonic hedgehog (SHH) and fibroblast growth factor 8 (FGF8) in vitro, providing an immediate and renewable source of DAns for PD treatment. Moreover, striatal transplantation of human ESC-derived DA-like neurons, including pNSC–DAn, can relieve the motor defects in a PD rat model (11–13, 15, 19–23). Before attempting clinical translation of pNSC–DAn, however, there are two fundamental open questions. (i) Can pNSC–DAn functionally restore the striatal DA levels in vivo? (ii) What cells release the restored DA: pNSC–DAn themselves, or resident neurons/cells repaired by the transplants?

Regarding question 1, a recent study using nafion-coated carbon fiber electrodes (CFEs) reported that the amperometric current is rescued in vivo by ESC (pNSC–DAn-like) therapy (19). Both norepinephrine (NE) and serotonin are present in the striatum (24, 25). However, CFE amperometry/chronoamperometry alone cannot distinguish DA from other monoamines in vivo, such as NE and serotonin (Fig. S1) (see also refs. 26–28). Considering that the compounds released from grafted ESC-derived cells are unknown, the work of Kirkeby et al. was unable to determine whether DA or other monoamines are responsible for the restored amperometric signal.
Thus, the key question of whether pNSC–DAn can rescue DA release needs to be reexamined to establish the identity of the restored amperometric signal in vivo.

Regarding question 2, many studies have proposed that DA is probably released from the grafted cells (8, 12, 13, 20), whereas others have proposed that the grafted stem cells might restore striatal DA levels by rescuing injured original cells (29, 30). Thus, whether the grafted cells are actually capable of synthesizing and releasing DA in vivo must be investigated to determine the future cellular targets (residual cells versus pNSC–DAn) of treatment.

To address these two mechanistic questions, advanced in vivo methods of DA identification and DA recording at high spatiotemporal resolution are required. Currently, microdialysis-based HPLC (31–33) and CFE amperometric recordings (34, 35) have been used independently by different laboratories to assess evoked DA release from the striatum in vivo. The major advantage of microdialysis-based HPLC is that it identifies the substances secreted in the cell-grafted striatum (33), but its spatiotemporal resolution is too low to distinguish the DA release site (residual cells or pNSC–DAn). In contrast, the major advantage of CFE-based amperometry is its very high temporal (ms) and spatial (μm) resolution, making it possible to distinguish the DA release site (residual cells or pNSC–DAn) in cultured cells, brain slices, and in vivo (34–39), but it is unable to distinguish between low-level endogenous oxidizable substances (DA versus serotonin and NE) in vivo.

In the present study, we developed a challenging experimental paradigm combining the two in vivo methods, microdialysis-based HPLC and CFE amperometry, to identify the evoked substance as DA and its release site as pNSC–DAn in the striatum of PD rats.

18.
Research links psychosocial stress to premature telomere shortening and accelerated human aging; however, this association has only been demonstrated in so-called “WEIRD” societies (Western, educated, industrialized, rich, and democratic), where stress is typically lower and life expectancies longer. By contrast, we examine stress and telomere shortening in a non-Western setting among a highly stressed population with overall lower life expectancies: poor indigenous people—the Sahariya—who were displaced (between 1998 and 2002) from their ancestral homes in a central Indian wildlife sanctuary. In this setting, we examined adult populations in two representative villages, one relocated to accommodate the introduction of Asiatic lions into the sanctuary (n = 24 individuals), and the other newly isolated in the sanctuary buffer zone after their previous neighbors were moved (n = 22). Our research strategy combined physical stress measures via the salivary analytes cortisol and α-amylase with self-assessments of psychosomatic stress, ethnographic observations, and telomere length assessment [telomere–fluorescence in situ hybridization (TEL-FISH) coupled with 3D imaging of buccal cell nuclei], providing high-resolution data amenable to multilevel statistical analysis. Consistent with expectations, we found significant associations between each of our stress measures—the two salivary analytes and the psychosomatic symptom survey—and telomere length, after adjusting for relevant behavioral, health, and demographic traits. As the first study (to our knowledge) to link stress to telomere length in a non-WEIRD population, our research strengthens the case for stress-induced telomere shortening as a pancultural biomarker of compromised health and aging.

Psychosocial stress is associated with elevated risk for a range of human diseases and curtailment of human life expectancy (1–16).
Telomeres—repetitive and stabilizing features of chromosomal termini that cap and protect them—have also been shown to be associated with aging and disease (17–19). Telomere length erodes normally with cell division and generally with aging, triggering cellular senescence once telomere length falls below a threshold, contributing to tissue degeneration and organ decline with longevity (20–25). Given evidence that stress can elevate the risk of human mortality, and that premature telomere shortening can serve as a proxy for increased risk of disease and mortality, it is reasonable to posit that stress is also associated with telomere shortening (23, 26, 27). Indeed, research provides evidence of telomere shortening in a range of stress-inducing life situations, including among primary caregivers of chronically ill children and Alzheimer’s patients (28, 29), children spending more time in orphanages or experiencing other forms of neglect and adversity (30–32), women suffering from intimate partner violence (33), patients suffering from stress-related mood disorders (34, 35), and individuals of low socioeconomic status (SES) (36). [However, not all research demonstrates the expected associations between stress and telomere length, as in studies failing to identify links between low SES and telomere shortening (27, 37).]

Of note, studies relating stress and telomere maintenance typically unfold in what have been termed “WEIRD” societies (38, 39)—i.e., within Western, educated, industrialized, rich, and democratic populations, a small slice of total humanity—potentially limiting our understanding of the generalizability of the association between biopsychosocial stressors and telomere maintenance. Populations in WEIRD societies generally enjoy higher life expectancies (40–42) and are relatively insulated from the traumas, stressors, and political coercions common throughout the developing world.
Insofar as telomere length proxies for life expectancy, and psychosocial well-being increases with economic development (43), studies involving WEIRD populations can be construed as potentially sampling high on the telomere-length dimension and low on the stress dimension.

By contrast, we assessed associations between stress and telomere length in the context of tumultuous life changes experienced by an indigenous population—the Sahariya—displaced from their ancestral homes in a central Indian wildlife sanctuary. There, we examined adult heads of household in two representative villages, one relocated from their forest homes in the ecological core of the sanctuary to accommodate the future introduction of Asiatic lions into the sanctuary, and the other newly isolated in the sanctuary’s buffer zone after their previous neighbors were relocated, with this second group of villagers also facing more restricted forest access in this buffer zone (44). Consistent with prior research on the lasting psychosomatic costs of displacement, dispossession, and loss of homeland, a phenomenon found mainly in the developing world, our sample of villagers presented with levels of psychosomatic suffering that approximate those of populations receiving psychiatric care (45, 46). Developing and underdeveloped societies are estimated to host 80% of refugees and displaced peoples worldwide (46). Indeed, adult Sahariya villagers expressed deep uncertainty about the future, particularly with respect to the welfare of their children, and wished to return to their predisplacement lives.

Although displacement of human populations is an intrinsically compelling phenomenon, with an estimated 51.2 million internally displaced peoples worldwide (46, 47), our investigation of the relationship between stress and telomere length in a non-WEIRD setting is also motivated by other scientific reasons.
Specifically, our population of indigenous Sahariya potentially demonstrates compromised telomere maintenance (given measurably lower life expectancies in rural parts of central India) while being higher on the stress dimension (given the high levels of reported psychosomatic suffering and disease burden), providing valuable information on a unique and understudied population, as well as advancing efforts to more fully characterize statistical associations between stress and telomere maintenance.

It is also important to recognize that such real-life studies are often characterized by methodological limitations. For example, research linking life stress and telomere shortening typically relies on psychosocial distress scales or comparisons between presumed or perceived stressed and nonstressed control groups, rather than physical measures such as salivary stress analytes (like cortisol): e.g., orphans vs. nonorphans, abused vs. nonabused women, low- vs. high-SES individuals, and the mentally disordered vs. the mentally healthy (19, 28, 30–35, 55). In vitro research demonstrates that the stress hormone cortisol can interfere with maintenance of telomere length by reducing telomerase activity (56). It has also been shown that oxidative stress preferentially damages telomeric DNA compared with other genomic regions, and that antioxidants can delay “replicative senescence” (57–59). However, in vivo research linking self-reported levels of experienced psychosocial stress, stress biomarkers (i.e., catecholamines and glucocorticoids), and telomere maintenance is scarce (60, 61), severely limiting our understanding of potential connections and hypothesized mechanisms, such as inflammation and oxidative stress (60, 62), in naturally occurring contexts.
Furthermore, current population-based stress and telomere studies typically rely on quantitative real-time PCR methodology, which evaluates all of a cell’s DNA to assess an average telomere length across many different cell types (19, 28, 63). Although certainly representing progress in the field, such an approach is still limited by a lack of specificity, not only with regard to cell type, but also in the evaluation of particular populations of telomeres (e.g., distributions of the shortest and/or longest).

In contrast to previous studies, here we combined physical measures of the psychobiology of the stress response (salivary cortisol and α-amylase), collected in these Indian village contexts using minimally invasive field-appropriate techniques (64–72), with Sahariya self-assessments of psychosomatic stress, ethnographic observations, and high-resolution telomere length measurement. The value of microscopy-based techniques for evaluating telomere length on a cell-by-cell basis has been effectively demonstrated (73–77). We developed an innovative approach to the assessment of telomere length that combined telomere–fluorescence in situ hybridization (TEL-FISH) (73), using a modified protocol to collect and process the obtained samples (78), with 3D reconstruction of individual cell nuclei to facilitate analysis of all visible telomere signals in the entire extension of sampled cells, rather than just a fraction (presumably the longest) of them (75). Our approach allowed us to evaluate the length of hundreds of thousands of individual telomeres in a single class of putative buccal basal stem or progenitor cells, greatly improving the specificity and quantification of telomere length, including the important ability to define distributions of the shortest telomeres (76, 77).
Such a strategy afforded us particularly high-resolution data amenable to multilevel statistical analysis.

To summarize, with innovative methodologies and strategies that mapped community and individual variation in life stress in a unique non-Western setting, our research explores the case for stress-related telomere shortening as a pancultural biomarker of compromised health and aging.
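The advantage of cell-by-cell telomere measurement over a PCR-derived average is that it yields a per-individual distribution, from which lower-tail summaries (the shortest telomeres most relevant to senescence) can be computed. The sketch below illustrates that kind of summary; the signal intensities and the 120-unit cutoff are invented for illustration, not TEL-FISH data from the study.

```python
# Hypothetical per-individual summary of cell-by-cell telomere measurements:
# not just a mean length, but the lower tail of the distribution (the
# shortest telomeres). All intensity values and cutoffs are invented.
from statistics import mean, quantiles

# per-telomere fluorescence intensities (arbitrary units) for one individual
telomere_signals = [210, 185, 340, 95, 160, 275, 120, 400, 230, 150,
                    310, 90, 205, 175, 260, 140, 330, 110, 245, 190]

summary = {
    "mean": mean(telomere_signals),
    "p10": quantiles(telomere_signals, n=10)[0],        # shortest-decile cutoff
    "frac_below_120": mean(s < 120 for s in telomere_signals),  # short fraction
}
print(summary)
```

Summaries like these, computed per cell nucleus and per individual, are the kind of nested measurements that multilevel statistical models are designed to handle.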

19.
A series of discrete decanuclear gold(I) μ3-sulfido complexes with alkyl chains of various lengths on the aminodiphosphine ligands, [Au10{Ph2PN(CnH2n+1)PPh2}4(μ3-S)4](ClO4)2, has been synthesized and characterized. These complexes have been shown to form supramolecular nanoaggregate assemblies upon solvent modulation. The photoluminescence (PL) colors of the nanoaggregates can be switched from green to yellow to red by varying the solvent systems from which they are formed. The PL color variation was investigated and correlated with the nanostructured morphological transformation from the spherical shape to the cube, as observed by transmission electron microscopy and scanning electron microscopy. Such variations in PL colors have not been observed in the analogous complexes with short alkyl chains, suggesting that the long alkyl chains play a key role in governing the supramolecular nanoaggregate assembly and the emission properties of the decanuclear gold(I) sulfido complexes. The long hydrophobic alkyl chains are believed to induce the formation of supramolecular nanoaggregate assemblies with different morphologies and packing densities under different solvent systems, leading to a change in the extent of Au(I)–Au(I) interactions, rigidity, and emission properties.

Gold(I) complexes are one of the fascinating classes of complexes that reveal photophysical properties highly sensitive to the nuclearity of the metal centers and the metal–metal distances (1–59). In a certain sense, they bear an analogy or resemblance to the interesting classes of metal nanoparticles (NPs) (60–69) and quantum dots (QDs) (70–76), in that the properties of these nanostructured materials also show a strong dependence on their sizes and shapes.
Interestingly, while the optical and spectroscopic properties of metal NPs and QDs show a strong dependence on interparticle distances, those of polynuclear gold(I) complexes are known to depend mainly on the nuclearity and the internuclear separations of gold(I) centers within the individual molecular complexes or clusters; the influence of intermolecular interactions between discrete polynuclear molecular complexes remains relatively less explored (34–38), and that of polynuclear gold(I) clusters has not been reported. Moreover, while studies on polynuclear gold(I) complexes or clusters are known (34–54), less is explored of their hierarchical assembly and nanostructures, as well as the influence of intercluster aggregation on the optical properties (34–38). Among the gold(I) complexes, polynuclear gold(I) chalcogenido complexes represent an important and interesting class (44–51). While the directed supramolecular assembly of discrete Au12 (52), Au16 (53), Au18 (51), and Au36 (54) metallomacrocycles, as well as trinuclear gold(I) columnar stacks (34–38), has been reported, there have been no corresponding studies on the supramolecular hierarchical assembly of polynuclear gold(I) chalcogenido clusters.

Based on our interest and experience in the study of gold(I) chalcogenido clusters (44–46, 51), it is believed that nanoaggregates with interesting luminescence properties and morphology could be prepared by the judicious design of gold(I) chalcogenido clusters.
As demonstrated by our previous studies on the aggregation behavior of square-planar platinum(II) complexes (77–80), in which enhancing the solubility of the metal complexes via the introduction of solubilizing groups on the ligands, together with fine control of the balance between solvophobicity and solvophilicity of the complexes, had a crucial influence on the factors governing supramolecular assembly and the formation of aggregates (80), the introduction of long alkyl chains as solubilizing groups in the gold(I) sulfido clusters may serve as an effective way to enhance the solubility of the gold(I) clusters for the construction of supramolecular assemblies of novel luminescent nanoaggregates.

Herein, we report the preparation and tunable spectroscopic properties of a series of decanuclear gold(I) μ3-sulfido complexes with alkyl chains of different lengths on the aminodiphosphine ligands, [Au10{Ph2PN(CnH2n+1)PPh2}4(μ3-S)4](ClO4)2 [n = 8 (1), 12 (2), 14 (3), 18 (4)], and their supramolecular assembly to form nanoaggregates. The emission colors of the nanoaggregates of 2−4 can be switched from green to yellow to red by varying the solvent systems from which they are formed. These results have been compared with those of their short alkyl chain-containing counterparts, 1 and the related [Au10{Ph2PN(C3H7)PPh2}4(μ3-S)4](ClO4)2 (45). The present work demonstrates that polynuclear gold(I) chalcogenides, with the introduction of appropriate functional groups, can serve as building blocks for the construction of novel hierarchical nanostructured materials with environment-responsive properties, and it represents a rare example in which nanoaggregates have been assembled from discrete molecular metal clusters as building blocks.
