Similar Documents
20 similar documents found.
1.
A degraded, black-and-white image of an object, which appears meaningless on first presentation, is easily identified after a single exposure to the original, intact image. This striking example of perceptual learning reflects a rapid (one-trial) change in performance, but the kind of learning that is involved is not known. We asked whether this learning depends on conscious (hippocampus-dependent) memory for the images that have been presented or on an unconscious (hippocampus-independent) change in the perception of images, independently of the ability to remember them. We tested five memory-impaired patients with hippocampal lesions or larger medial temporal lobe (MTL) lesions. In comparison to volunteers, the patients were fully intact at perceptual learning, and their improvement persisted without decrement from 1 d to more than 5 mo. Yet, the patients were impaired at remembering the test format and, even after 1 d, were impaired at remembering the images themselves. To compare perceptual learning and remembering directly, at 7 d after seeing degraded images and their solutions, patients and volunteers took either a naming test or a recognition memory test with these images. The patients improved as much as the volunteers at identifying the degraded images but were severely impaired at remembering them. Notably, the patient with the most severe memory impairment and the largest MTL lesions performed worse than the other patients on the memory tests but was the best at perceptual learning. The findings show that one-trial, long-lasting perceptual learning relies on hippocampus-independent (nondeclarative) memory, independent of any requirement to consciously remember.

A striking visual effect can be demonstrated by using a grayscale image of an object that has been degraded to a low-resolution, black-and-white image (1, 2). Such an image is difficult to identify (Fig. 1) but can be readily recognized after a single exposure to the original, intact image (Fig. 2) (3–6). Neuroimaging studies have found regions of the neocortex, including high-level visual areas and the medial parietal cortex, which exhibited a different pattern of activity when a degraded image was successfully identified (after seeing the intact image) than when the same degraded image was first presented and not identified (4, 5, 7). This phenomenon reflects a rapid change in performance based on experience, in this case one-trial learning, but the kind of learning that is involved is unclear.

Fig. 1. A sample degraded image. Most people cannot identify what is depicted. See Fig. 2.

Fig. 2. An intact version of the image in Fig. 1. When the intact version is presented just once directly after presentation of the degraded version, the ability to later identify the degraded image is greatly improved, even after many months. Reprinted from ref. 42, which is licensed under CC BY 4.0.

One possibility is that successful identification of degraded images reflects conscious memory of having recently seen degraded images followed by their intact counterparts. When individuals see degraded images after seeing their “solutions,” they may remember what is represented in the images, at least for a time. In one study, performance declined sharply from 15 min to 1 d after the solutions were presented and then declined more gradually to a lower level after 21 d (3). Alternatively, the phenomenon might reflect a more automatic change in perception not under conscious control (8). Once the intact image is presented, the object in the degraded image may be perceived directly, independently of whether it is remembered as having been presented. By this account, successful identification of degraded images is reminiscent of the phenomenon of priming, whereby perceptual identification of words and objects is facilitated by single encounters with the same or related stimuli (9–11). Some forms of priming persist for quite a long time (weeks or months) (12–14).

These two possibilities describe the distinction between declarative and nondeclarative memory (15, 16). Declarative memory affords the capacity for recollection of facts and events and depends on the integrity of the hippocampus and related medial temporal lobe structures (17, 18). Nondeclarative memory refers to a collection of unconscious memory abilities including skills, habits, and priming, which are expressed through performance rather than recollection and are supported by other brain systems (19–21). Does one-trial learning of degraded images reflect declarative or nondeclarative memory? How long does it last? In an early report that implies the operation of nondeclarative memory, two patients with traumatic amnesia improved the time needed to identify hidden images from 1 d to the next, but could not recognize which images they had seen (22). Yet, another amnesic patient reportedly failed such a task (23).
The matter has not been studied in patients with medial temporal lobe (MTL) damage. To determine whether declarative (hippocampus-dependent) or nondeclarative (hippocampus-independent) memory supports the one-trial learning of degraded images, we tested five patients with bilateral hippocampal lesions or larger MTL lesions who have severely impaired declarative memory. The patients were fully intact at perceptual learning, and performance persisted undiminished from 1 d to more than 5 mo. At the same time, the patients were severely impaired at remembering both the structure of the test and the images themselves.
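For readers unfamiliar with this class of stimuli, below is a minimal sketch of a generic blur-and-threshold ("Mooney-style") transformation; the filename and parameter values are placeholders, and this illustrates the general technique, not the authors' stimulus pipeline:

```python
# Generic two-tone degradation of a grayscale photograph: reduce resolution,
# smooth fine detail, then binarize. Filename and parameters are placeholders.
from PIL import Image, ImageFilter

img = Image.open("object_photo.jpg").convert("L")      # load as grayscale
img = img.resize((img.width // 4, img.height // 4))    # lower the resolution
img = img.filter(ImageFilter.GaussianBlur(radius=2))   # remove fine detail
degraded = img.point(lambda p: 255 if p > 128 else 0)  # two-tone threshold
degraded.save("object_degraded.png")
```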

2.
The puzzling sex ratio behavior of Melittobia wasps has long posed one of the greatest questions in the field of sex allocation. Laboratory experiments have found that, in contrast to the predictions of theory and the behavior of numerous other organisms, Melittobia females do not produce less female-biased offspring sex ratios when more females lay eggs on a patch. We solve this puzzle by showing that, in nature, females of Melittobia australica have a sophisticated sex ratio behavior, in which their strategy also depends on whether they have dispersed from the patch where they emerged. When females have not dispersed, they lay eggs with close relatives, which keeps local mate competition high even with multiple females, and therefore, they are selected to produce consistently female-biased sex ratios. Laboratory experiments mimic these conditions. In contrast, when females disperse, they interact with nonrelatives, and thus adjust their sex ratio depending on the number of females laying eggs. Consequently, females appear to use dispersal status as an indirect cue of relatedness and whether they should adjust their sex ratio in response to the number of females laying eggs on the patch.

Sex allocation has produced many of the greatest success stories in the study of social behaviors (1–4). Time and time again, relatively simple theory has explained variation in how individuals allocate resources to male and female reproduction. Hamilton’s local mate competition (LMC) theory predicts that when n diploid females lay eggs on a patch and the offspring mate before the females disperse, the evolutionarily stable proportion of male offspring (sex ratio) is (n − 1)/2n (Fig. 1) (5). A female-biased sex ratio is favored to reduce competition between sons (brothers) for mates and to provide more mates (daughters) for those sons (6–8). Consistent with this prediction, females of >40 species produce female-biased sex ratios and reduce this female bias when multiple females lay eggs on the same patch (higher n; Fig. 1) (9). The fit of data to theory is so good that the sex ratio under LMC has been exploited as a “model trait” to study the factors that can constrain “perfect adaptation” (4, 10–13).

Fig. 1. LMC. The sex ratio (proportion of sons) is plotted versus the number of females laying eggs on a patch. The bright green dashed line shows the LMC theory prediction for the haplodiploid species (5, 39). A more female-biased sex ratio is favored in haplodiploids because inbreeding increases the relative relatedness of mothers to their daughters (7, 32). Females of many species adjust their offspring sex ratio as predicted by theory, such as the parasitoid Nasonia vitripennis (green diamonds) (82). In contrast, the females of several Melittobia species, such as M. australica, continue to produce extremely female-biased sex ratios, irrespective of the number of females laying eggs on a patch (blue squares) (15).

In stark contrast, the sex ratio behavior of Melittobia wasps has long been seen as one of the greatest problems for the field of sex allocation (3, 4, 14–21). The life cycle of Melittobia wasps matches the assumptions of Hamilton’s LMC theory (5, 15, 19, 21). Females lay eggs in the larvae or pupae of solitary wasps and bees, and then after emergence, female offspring mate with the short-winged males, who do not disperse. However, laboratory experiments on four Melittobia species have found that females lay extremely female-biased sex ratios (1 to 5% males) and that these extremely female-biased sex ratios change little with increasing number of females laying eggs on a patch (higher n; Fig. 1) (15, 17–20, 22). A number of hypotheses to explain this lack of sex ratio adjustment have been investigated and rejected, including sex ratio distorters, sex differential mortality, asymmetrical male competition, and reciprocal cooperation (15–18, 20, 22–26).

We tested whether Melittobia’s unusual sex ratio behavior can be explained by females being related to the other females laying eggs on the same patch. After mating, some females disperse to find new patches, while some may stay at the natal patch to lay eggs on previously unexploited hosts (Fig. 2). If females do not disperse, they can be related to the other females laying eggs on the same host (27–31). If females laying eggs on a host are related, this increases the extent to which relatives are competing for mates and so can favor an even more female-biased sex ratio (28, 32–35). Although most parasitoid species appear unable to directly assess relatedness, dispersal behavior could provide an indirect cue of whether females are with close relatives (36–38).
Consequently, we predict that when females do not disperse and so are more likely to be with closer relatives, they should maintain extremely female-biased sex ratios, even when multiple females lay eggs on a patch (28, 35).

Fig. 2. Host nest and dispersal manners of Melittobia. (A) Photograph of the prepupae of the leaf-cutter bee C. sculpturalis nested in a bamboo cane and (B) a diagram showing two ways that Melittobia females find new hosts. The mothers of C. sculpturalis build nursing nests with pine resin consisting of individual cells in which their offspring develop. If Melittobia wasps parasitize a host in a cell, female offspring that mate with males inside the cell find a different host on the same patch (bamboo cane) or disperse by flying to other patches.

We tested whether the sex ratio of Melittobia australica can be explained by dispersal status in a natural population. We examined how the sex ratio produced by females varies with the number of females laying eggs on a patch and whether or not they have dispersed before laying eggs. To match our data to the predictions of theory, we developed a mathematical model tailored to the unique population structure of Melittobia, where dispersal can be a cue of relatedness. We then conducted a laboratory experiment to test whether Melittobia females are able to directly assess their relatedness to other females and adjust their sex ratio behavior accordingly. Our results suggest that females are adjusting their sex ratio in response to both the number of females laying eggs on a patch and their relatedness to the other females. However, relatedness is assessed indirectly by whether or not they have dispersed. Consequently, the solution to the puzzling behavior reflects a more-refined sex ratio strategy.
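Hamilton's diploid prediction quoted above is simple enough to tabulate directly; the sketch below evaluates (n − 1)/2n for a few foundress numbers (illustrative values only; the haplodiploid curve in Fig. 1 lies below these):

```python
# Hamilton's LMC prediction for diploids, as quoted above: with n foundresses
# laying eggs on a patch, the evolutionarily stable proportion of sons is
# (n - 1) / (2n). Foundress numbers below are illustrative.

def lmc_sex_ratio(n: int) -> float:
    """Evolutionarily stable proportion of male offspring for n diploid foundresses."""
    if n < 1:
        raise ValueError("need at least one foundress")
    return (n - 1) / (2 * n)

for n in (1, 2, 4, 8, 16):
    print(f"n = {n:2d}: predicted proportion of sons = {lmc_sex_ratio(n):.3f}")
# Output rises from 0.000 (n = 1, all daughters) toward 0.5 as n grows;
# the Melittobia puzzle is that laboratory females do not show this rise.
```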

3.
Electrophysiological studies in rodents show that active navigation enhances hippocampal theta oscillations (4–12 Hz), providing a temporal framework for stimulus-related neural codes. Here we show that active learning promotes a similar phase coding regime in humans, although in a lower frequency range (3–8 Hz). We analyzed intracranial electroencephalography (iEEG) from epilepsy patients who studied images under either volitional or passive learning conditions. Active learning increased memory performance and hippocampal theta oscillations and promoted a more accurate reactivation of stimulus-specific information during memory retrieval. Representational signals were clustered to opposite phases of the theta cycle during encoding and retrieval. Critically, during active but not passive learning, the temporal structure of intracycle reactivations in theta reflected the semantic similarity of stimuli, segregating conceptually similar items into more distant theta phases. Taken together, these results demonstrate a multilayered mechanism by which active learning improves memory via a phylogenetically old phase coding scheme.

Volitionally controlled—or “active”—learning has become a crucial topic in education, psychology, and neuroscience (1, 2). Behavioral studies show that memory benefits from voluntary action (3–5), putatively through a distinct modulation of attention, motivation, and cognitive control (2, 6). While these functions depend on widespread frontoparietal networks (7), a critical role of the hippocampus in coordinating volitional learning has been demonstrated in both humans (8) and rodents (9) (for a review see ref. 10). However, the mechanisms by which volition improves learning and memory are not well understood. Rodent recordings suggest that hippocampal theta oscillations (usually occurring between 4 and 12 Hz) might play a critical role, because they increase during voluntary movement (11) and active sensing (12). Consistently, human studies have shown volition-related theta power increases, although in a lower frequency range (typically between 3 and 8 Hz), during navigation in virtual (13, 14) and physical (15, 16) environments. It is believed that theta oscillations facilitate mnemonic processing by providing a temporal framework for the organization of stimulus-related neural codes (17). This is observed in the phenomenon of phase precession, where spatial locations represented by place cells in the rodent hippocampus are sequentially reactivated at distinct phases of theta oscillations (18). A similar phase coding mechanism underlies the representation of possible future scenarios in rats performing a spatial decision-making task, with early and late hippocampal theta phases representing current and prospective scenarios, respectively (19). It has been proposed (17) that these forms of neural phase coding support a range of cognitive processes, including multi-item working memory (20), episodic memory (21, 22), and mental time travel (23). In humans, this proposal has received empirical support from phase-amplitude coupling studies looking at the relationship between the amplitude of high-frequency activity and the phase of activity at a lower frequency, in particular theta (24–26). However, these analyses are agnostic to the specific content that is coupled to the theta phase and thus do not reflect “phase coding” in the narrower sense. Recent studies used multivariate analysis techniques to identify stimulus-specific representational signals at the high temporal resolution provided by human intracranial electroencephalography data (iEEG, see refs. 27, 28 for review). These analyses demonstrated the relevance of theta oscillations for hippocampal reinstatement of item-context associations (29), for the orchestration of content-specific representations of goal locations (30), and for word-object associations (31). However, it is unclear whether this mechanism is recruited when learning is volitionally controlled.

Building on these empirical findings and methodological advances, we aimed to elucidate whether the improved memory performance typically observed in human active learning paradigms can be traced back to a hippocampal theta phase code. In particular, we hypothesized that during active learning, this theta phase code organizes and structures stimulus-specific memory representations. We analyzed electrophysiological activity from the hippocampus and widespread neocortical regions in epilepsy patients (n = 13, age = 33.5 ± 9.32) implanted with iEEG electrodes (total number of electrodes = 392; Fig. 1F) who performed a virtual reality (VR)-based navigation and memory task.
Subjects navigated in a square virtual arena (Fig. 1A) and were asked to remember images of specific objects presented at distinct spatial locations indicated by red “boxes” located on the ground (Fig. 1B). Images were only visible when participants visited the red boxes and were hidden otherwise. Navigation occurred under two conditions: active (A) and passive (P) (Fig. 1B). In the active condition, participants could freely control their movements in visiting the stimulus sites, while in the passive condition, they were exposed to the navigation path and order of image presentation generated by another participant (yoked design; Fig. 1 C and D). At the end of the experiment, recognition memory for both the actively and passively learned items was tested (Fig. 1E). We predicted that active learning would enhance memory by promoting hippocampal theta phase coding of stimulus-specific memory representations.

Fig. 1. Experimental procedure, electrode implantation, and behavioral results. (A) Participants studied images presented at specific locations, indicated by red boxes located on the ground, in a square virtual environment (here shown from a bird’s eye perspective). (B) Stimulus presentation during the encoding phase of the experiment as seen by a participant. (C) Schematic timeline showing the main blocks of the experiment (A = active, P = passive, counterbalanced). (D) Detailed timeline of an example encoding block. Participants freely determined the timings and materials of study in the active condition and were exposed to the trajectory of a different subject in the passive condition. (E) Timeline of the experiment at retrieval. (F) All electrodes included in the analyses (n = 392, MNI space), color coded by participant identity. (G) Receiver operating characteristic (ROC) curves for each subject (gray) and grand average (red). (H) Proportion of correct items for all stimuli as a function of confidence. (I) Proportion of remembered items (Left) and of high-confidence remembered items (Right) for active and passive conditions. *P < 0.05; ***P < 0.001.
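Phase coding analyses of this kind hinge on assigning each representational event an instantaneous theta phase. Below is a minimal, generic sketch of that step (band-pass filter plus Hilbert transform, on synthetic data); this is not the authors' analysis code, and the sampling rate is an assumption:

```python
# Estimate instantaneous theta phase from a toy iEEG-like trace:
# band-pass in the human theta range quoted above (3-8 Hz), then take
# the angle of the analytic signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)

b, a = butter(3, [3.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
theta = filtfilt(b, a, lfp)                   # zero-phase band-pass
phase = np.angle(hilbert(theta))              # radians in [-pi, pi)

# An item reactivation detected at sample i is then assigned phase[i],
# allowing tests of whether encoding and retrieval cluster at opposite phases.
print(np.round(phase[:5], 2))
```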

4.
5.
Coordination of behavior for cooperative performances often relies on linkages mediated by sensory cues exchanged between participants. How neurophysiological responses to sensory information affect motor programs to coordinate behavior between individuals is not known. We investigated how plain-tailed wrens (Pheugopedius euophrys) use acoustic feedback to coordinate extraordinary duet performances in which females and males rapidly take turns singing. We made simultaneous neurophysiological recordings in a song control area “HVC” in pairs of singing wrens at a field site in Ecuador. HVC is a premotor area that integrates auditory feedback and is necessary for song production. We found that spiking activity of HVC neurons in each sex increased for production of its own syllables. In contrast, hearing sensory feedback produced by the bird’s partner decreased HVC activity during duet singing, potentially coordinating HVC premotor activity in each bird through inhibition. When birds sang alone, HVC neurons in females but not males were inhibited by hearing the partner bird. When birds were anesthetized with urethane, which antagonizes GABAergic (γ-aminobutyric acid) transmission, HVC neurons were excited rather than inhibited, suggesting a role for GABA in the coordination of duet singing. These data suggest that HVC integrates information across partners during duets and that rapid turn taking may be mediated, in part, by inhibition.

Animals routinely rely on sensory feedback for the control of their own behavior. In cooperative performances, such sensory feedback can include cues produced by other participants (1–8). For example, in interactive vocal communication, including human speech, individuals take turns vocalizing. This “turn taking” is a consequence of each participant responding to auditory cues from a partner (4–6, 9, 10). The role of such “heterogenous” (other-generated) feedback in the control of vocal turn taking and other cooperative performances is largely unknown.

Plain-tailed wrens (Pheugopedius euophrys) are neotropical songbirds that cooperate to produce extraordinary duet performances but also sing by themselves (Fig. 1A) (4, 10, 11). Singing in plain-tailed wrens is performed by both females and males and used for territorial defense and other functions, including mate guarding and attraction (1, 11–16). During duets, female and male plain-tailed wrens take turns, alternating syllables at a rate of between 2 and 5 Hz (Fig. 1A) (4, 11).

Fig. 1. Neural control of solo and duet singing in plain-tailed wrens. (A) Spectrogram of a singing bout that included male solo syllables (blue line, top) followed by a duet. Solo syllables for both sexes (only male solo syllables are shown here) are sung at lower amplitudes than syllables produced in duets. Note that the smeared appearance of wren syllables in spectrograms reflects the acoustic structure of plain-tailed wren singing. (B and C) Each bird has a motor system that is used to produce song and sensory systems that mediate feedback. (B) During solo singing, the bird hears its own song, which is known as autogenous feedback (orange). (C) During duet singing, each bird hears both its own singing and the singing of its partner, known as heterogenous feedback (green). The key difference between solo and duet singing is heterogenous feedback that couples the neural systems of the two birds. This coupling results in changes in syllable amplitude and timing in both birds.

There is a categorical difference between solo and duet singing. In solo singing, the singing bird receives only autogenous (hearing its own vocalization) feedback (Fig. 1B). The partner may hear the solo song if it is nearby, a heterogenous (other-generated) cue. In duet singing, birds receive both heterogenous and autogenous feedback as they alternate syllable production (Fig. 1C). Participants use heterogenous feedback during duet singing for precise timing of syllable production (4, 11). For example, when a male temporarily stops participating in a duet, the duration of intersyllable intervals between female syllables increases (4), showing an effect of heterogenous feedback on the timing of syllable production.

How does the brain of each wren integrate heterogenous acoustic cues to coordinate the precise timing of syllable production between individuals during duet performances? To address this question, we examined neurophysiological activity in HVC, a nucleus in the nidopallium [an analogue of mammalian cortex (17, 18)]. HVC is necessary for song learning, production, and timing in species of songbirds that do not perform duets (19–24). Neurons in HVC are active during singing and respond to playback of the bird’s own learned song (25–27).
In addition, recent work has shown that HVC is also involved in vocal turn taking (19).

To examine the role of heterogenous feedback in the control of duet performances, we compared neurophysiological activity in HVC when female or male wrens sang solo syllables with syllables sung during duets. Neurophysiological recordings were made in awake and anesthetized pairs of wrens at the Yanayacu Biological Station and Center for Creative Studies on the slopes of the Antisana volcano in Ecuador. We found that heterogenous cues inhibited HVC activity during duet performances in both females and males, but inhibition was only observed in females during solo singing.
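To see how reciprocal inhibition alone can produce rapid alternation of the kind described above, consider a toy "half-center" model: two units inhibit each other and slowly adapt. This is purely illustrative dynamics, not a model fit to the wren data; all parameters are invented:

```python
# Two mutually inhibiting units with slow adaptation alternate activity,
# a classic half-center oscillator. Unit 0 stands in for one partner's
# premotor drive and unit 1 for the other's; parameters are invented.
import numpy as np

dt, steps = 0.001, 6000           # 6 s of simulated time
x = np.array([0.6, 0.4])          # unit activities
adapt = np.zeros(2)               # slow adaptation variables
w_inh, tau_x, tau_a, g_a = 2.5, 0.05, 0.5, 2.0

leader = []
for _ in range(steps):
    drive = 1.0 - w_inh * x[::-1] - g_a * adapt   # input minus partner inhibition
    x += (dt / tau_x) * (-x + np.maximum(drive, 0.0))
    adapt += (dt / tau_a) * (-adapt + x)
    leader.append(int(x[0] > x[1]))

print("turn switches in 6 s:", int(np.abs(np.diff(leader)).sum()))
```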

6.
Cells are exposed to changes in extracellular stimulus concentration that vary as a function of rate. However, how cells integrate information conveyed from stimulation rate along with concentration remains poorly understood. Here, we examined how varying the rate of stress application alters budding yeast mitogen-activated protein kinase (MAPK) signaling and cell behavior at the single-cell level. We show that signaling depends on a rate threshold that operates in conjunction with stimulus concentration to determine the timing of MAPK signaling during rate-varying stimulus treatments. We also discovered that the stimulation rate threshold and stimulation rate-dependent cell survival are sensitive to changes in the expression levels of the Ptp2 phosphatase, but not of another phosphatase that similarly regulates osmostress signaling during switch-like treatments. Our results demonstrate that stimulation rate is a regulated determinant of cell behavior and provide a paradigm to guide the dissection of major stimulation rate-dependent mechanisms in other systems.

All cells employ signal transduction pathways to respond to physiologically relevant changes in extracellular stressors, nutrient levels, hormones, morphogens, and other stimuli that vary as functions of both concentration and rate in healthy and diseased states (1–7). Switch-like “instantaneous” changes in the concentrations of stimuli in the extracellular environment have been widely used to show that the strength of signaling and overall cellular response are dependent on the stimulus concentration, which in many cases needs to exceed a certain threshold (8, 9). Previous studies have shown that the rate of stimulation can also influence signaling output in a variety of pathways (10–17) and that stimulation profiles of varying rates can be used to probe underlying signaling pathway circuitry (4, 18, 19). However, it is still not clear how cells integrate information conveyed by changes in both the stimulation rate and concentration in determining signaling output. It is also not clear if cells require stimulation gradients to exceed a certain rate in order to commence signaling.

Recent investigations have demonstrated that stimulation rate can be a determining factor in signal transduction. In contrast to switch-like perturbations, which trigger a broad set of stress-response pathways, slow stimulation rates activate a specific response to the stress applied in Bacillus subtilis cells (10). Meanwhile, shallow morphogen gradient stimulation fails to activate developmental pathways in mouse myoblast cells in culture, even when concentrations sufficient for activation during pulsed treatment are delivered (12). These observations raise the possibility that stimulation profiles must exceed a set minimum rate or rate threshold to achieve signaling activation. Although such rate thresholds would help cells decide if and how to respond to dynamic changes in stimulus concentration, the possibility of signaling regulation by a rate threshold has never been directly investigated in any system. Further, no study has experimentally examined how stimulation rate requirements impact cell phenotype or how cells molecularly regulate the stimulation rate required for signaling activation. As such, the biological significance of any existing rate threshold regulation of signaling remains unknown.

The budding yeast Saccharomyces cerevisiae high osmolarity glycerol (HOG) pathway provides an ideal model system for addressing these issues (Fig. 1A). The evolutionarily conserved mitogen-activated protein kinase (MAPK) Hog1 serves as the central signaling mediator of this pathway (20–22). It is well established that instantaneous increases in osmotic stress concentration induce Hog1 phosphorylation, activation, and translocation to the nucleus (18, 21, 23–30). Activated Hog1 governs the majority of the cellular osmoadaptation response that enables cells to survive (23, 31, 32). Multiple apparently redundant MAPK phosphatases dephosphorylate and inactivate Hog1, which, along with the termination of upstream signaling after adaptation, results in its return to the cytosol (Fig. 1A) (23, 25, 26, 33–39). Because of this behavior, time-lapse analysis of Hog1 nuclear enrichment in single cells has proven an excellent and sensitive way to monitor signaling responses to dynamic stimulation patterns in real time (18, 27–30, 40, 41).
Further, such assays have been readily combined with traditional growth and molecular genetic approaches to link observed signaling responses with cell behavior and signaling pathway architecture (27–29).

Fig. 1. Hog1 signaling and cell survival are sensitive to the rate of preconditioning osmotic stress application. (A) Schematic of the budding yeast HOG response. (B) Preconditioning protection assay workflow indicating the first stress treatments to a final concentration of 0.4 M NaCl (Left), high-stress exposure (Middle), and colony formation readout (Right). (C) High-stress survival as a function of each first treatment relative to the untreated first stress condition. Bars and errors are means and SD from three biological replicates. *Statistically significant by Kolmogorov–Smirnov test (P < 0.05). NS = not significant. (D) Treatment concentration over time. (E) Treatment rate over time for quadratic and pulse treatments. The rate for the pulse is briefly infinite (blue vertical line) before it drops to 0. (F) Hog1 nuclear localization during the treatments depicted in D and E. (Inset) Localization pattern in the quadratic-treated sample. Lines represent means and shaded error represents the SD from three to four biological replicates.

Here, we use systematically designed osmotic stress treatments imposed at varying rates of increase to show that a rate threshold condition regulates yeast high-stress survival and Hog1 MAPK signaling. We demonstrate that only stimulus profiles that satisfy both this rate threshold condition and a concentration threshold condition result in robust signaling. We go on to show that the protein tyrosine phosphatase Ptp2, but not the related Ptp3 phosphatase, serves as a major rate threshold regulator. By expressing PTP2 under the control of a series of different enhancer–promoter DNA constructs, we demonstrate that changes in the level of Ptp2 expression can alter the stimulation rate required for signaling induction and survival. These findings establish rate thresholds as a critical and regulated component of signaling biology akin to concentration thresholds.
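As a toy illustration of the dual-threshold idea, the sketch below declares signaling "on" only when both the concentration and its rate of increase exceed fixed thresholds; the threshold values and ramp profiles are invented for illustration and are not measurements from this study:

```python
# Signaling commences only when BOTH a concentration threshold and a rate
# threshold are satisfied. All numbers are invented for illustration.
import numpy as np

CONC_THRESHOLD = 0.2    # M NaCl (assumed)
RATE_THRESHOLD = 0.01   # M/min (assumed)

def signaling_onset(times, conc):
    """First time at which concentration and rate thresholds are both met."""
    rate = np.gradient(conc, times)
    ok = (conc >= CONC_THRESHOLD) & (rate >= RATE_THRESHOLD)
    return times[np.argmax(ok)] if ok.any() else None

times = np.linspace(0.0, 60.0, 601)              # minutes
fast_ramp = np.clip(0.4 * times / 30.0, 0, 0.4)  # reaches 0.4 M by 30 min
slow_ramp = 0.4 * times / 240.0                  # shallow ramp, ~0.1 M at 60 min

print("fast ramp onset (min):", signaling_onset(times, fast_ramp))  # ~15
print("slow ramp onset (min):", signaling_onset(times, slow_ramp))  # None
```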

7.
8.
9.
We assembled a complete reference genome of Eumaeus atala, an aposematic cycad-eating hairstreak butterfly that suffered near extinction in the United States in the last century. Based on an analysis of genomic sequences of Eumaeus and 19 representative genera, the closest relatives of Eumaeus are Theorema and Mithras. We report natural history information for Eumaeus, Theorema, and Mithras. Using genomic sequences for each species of Eumaeus, Theorema, and Mithras (and three outgroups), we trace the evolution of cycad feeding, coloration, gregarious behavior, and other traits. The switch to feeding on cycads and to conspicuous coloration was accompanied by little genomic change. Soon after its origin, Eumaeus split into two fast evolving lineages, instead of forming a clump of close relatives in the phylogenetic tree. Significant overlap of the fast evolving proteins in both clades indicates parallel evolution. The functions of the fast evolving proteins suggest that the caterpillars developed tolerance to cycad toxins with a range of mechanisms including autophagy of damaged cells, removal of cell debris by macrophages, and more active cell proliferation.

The genus Eumaeus Hübner (Lycaenidae, Theclinae) arguably contains the most aposematically colored caterpillars and butterflies among the ∼4,000 Lycaenidae in the world (1–6). The brilliant red and gold gregarious caterpillars (Fig. 1) sequester cycasin from the leaves of their cycad food plants (Zamiaceae), which deters predators (3–9). Other secondary metabolites in cycads (e.g., 10, 11) may also deter predators. Eumaeus adults have a bright orange-red abdomen and an orange-red hindwing spot (except for one species) (Fig. 2). Blue and green iridescent markings are especially conspicuous on a black ground color. Eumaeus adults are among the largest lycaenids and have more rounded wings and a slower, more gliding flight than most Theclinae (1). Cycads are among the most primitive extant seed-plants (9), and the “plethora of aposematic attributes suggests a very ancient association between Eumaeus and the cycad host plants” (3).

Fig. 1. Caterpillars and pupae of Theorema eumenia (Top) and Eumaeus godartii (Bottom) in Costa Rica. Clockwise from Upper Left, second or third instar (length, ∼13 mm), fourth (final) instar (∼20 mm), pupa (∼18 mm), pupa (∼24 mm), fourth (final) instar (∼27 mm), second or third instar (∼20 mm). (Images from authors W.H. and D.H.J.)

Fig. 2. Adult wing uppersides and undersides. Eumaeus childrenae (two Upper Left images), E. atala (two Upper Right images), Theorema eumenia (two Lower Left images), and Mithras nautes (two Lower Right images). Scale bar, 1 cm.

Eumaeus has been classified as a separate family (12–14), a genus in the Riodinidae (15, 16), or a monotypic subfamily or tribe of the Lycaenidae (17–20). Alternatively, others called it a typical member of the Neotropical Lycaenidae (21, 22). The evolutionary question behind this discordant taxonomic history is whether Eumaeus is a phylogenetically isolated lineage long associated with cycads (3) or an embedded clade in which a recent food plant shift to cycads resulted in the rapid evolution of aposematism. Recent molecular evidence for a limited number of taxa suggested the latter (23). To answer this question definitively, we analyzed genomic sequences of Eumaeus and its relatives.

To trace the evolution of cycad feeding, we report the caterpillar food plants of the genera most closely related to Eumaeus and illustrate their immature stages (Fig. 1 and SI Appendix). This natural history information combined with analyses of genome sequences is the foundation for investigating the subsequent evolutionary impact on the Eumaeus genome of the switch to eating cycads.

10.
Inflammatory pathologies caused by phagocytes lead to numerous debilitating conditions, including chronic pain and blindness due to age-related macular degeneration. Many members of the sialic acid-binding immunoglobulin-like lectin (Siglec) family are immunoinhibitory receptors whose agonism is an attractive approach for antiinflammatory therapy. Here, we show that synthetic lipid-conjugated glycopolypeptides can insert into cell membranes and engage Siglec receptors in cis, leading to inhibitory signaling. Specifically, we construct a cis-binding agonist of Siglec-9 and show that it modulates mitogen-activated protein kinase (MAPK) signaling in reporter cell lines, immortalized macrophage and microglial cell lines, and primary human macrophages. Thus, these cis-binding agonists of Siglecs present a method for therapeutic suppression of immune cell reactivity.

Sialic acid-binding immunoglobulin (Ig)-like lectins (Siglecs) are a family of immune checkpoint receptors found on all classes of immune cells (1–5). Siglecs bind various sialoglycan ligands and deliver signals to the immune cells that report on whether the target is healthy or damaged, “self” or “nonself.” Of the 14 human Siglecs, 9 contain cytosolic inhibitory signaling domains. Accordingly, engagement of these inhibitory Siglecs by sialoglycans suppresses the activity of the immune cell, leading to an antiinflammatory effect. In this regard, inhibitory Siglecs have functional parallels with the T cell checkpoint receptors CTLA-4 and PD-1 (6–9). As with these clinically established targets for cancer immune therapy, there has been a recent surge of interest in antagonizing Siglecs to potentiate immune cell reactivity toward cancer (10). Conversely, engagement of Siglecs with agonist antibodies can suppress immune cell reactivity in the context of antiinflammatory therapy. This approach has been explored to achieve B cell suppression in lupus patients by agonism of CD22 (Siglec-2) (11, 12), and to deplete eosinophils for treatment of eosinophilic gastroenteritis by agonism of Siglec-8 (13). Similarly, a CD24 fusion protein has been investigated clinically as a Siglec-10 agonist for both graft-versus-host disease and viral infection (14, 15).

Traditionally, Siglec ligands have been studied as functioning in trans, that is, on an adjacent cell (16–18), or as soluble clustering agents (9, 19). In contrast to these mechanisms of action, a growing body of work suggests that cis ligands for Siglecs (i.e., sialoglycans that reside on the same cell membrane) cluster these receptors and maintain a basal level of inhibitory signaling that increases the threshold for immune cell activation. Both Bassik and coworkers (20) and Wyss-Coray and coworkers (21) have linked the depletion of cis Siglec ligands with increased activity of macrophages and microglia, and other studies have shown that a metabolic blockade of sialic acid renders phagocytes more prone to activation (22).

Synthetic ligands are a promising class of Siglec agonists (17, 23, 24). Many examples rely on clustering architectures (e.g., sialopolymers, nanoparticles, liposomes) to induce their effect (19, 23–26). Indeed, we have previously used glycopolymers to study the effects of Siglec engagement in trans on natural killer (NK) cell activity (16). We and other researchers have employed glycopolymers (16, 23), glycan-remodeling enzymes (27, 28), chemical inhibitors of glycan biosynthesis (22), and mucin overexpression constructs (29, 30) to modulate the cell-surface levels of Siglec ligands. However, current approaches lack specificity for a given Siglec.

We hypothesized that Siglec-specific cis-binding sialoglycans displayed on immune cell surfaces could dampen immune cell activity with potential therapeutic applications. Here we test this notion with the synthesis of membrane-tethered cis-binding agonists of Siglec-9 (Fig. 1). Macrophages and microglia widely express Siglec-9 and are responsible for numerous pathologies including age-related inflammation (31), macular degeneration (32), neural inflammation (33), and chronic obstructive pulmonary disease (34). We designed and developed a lipid-linked glycopolypeptide scaffold bearing glycans that are selective Siglec-9 ligands (pS9L-lipid).
We show that pS9L-lipid inserts into macrophage membranes, binds Siglec-9 specifically and in cis, and induces Siglec-9 signaling to suppress macrophage activity. By contrast, a lipid-free soluble analog (pS9L-sol) binds Siglec-9 but does not agonize Siglec-9 or modulate macrophage activity. Membrane-tethered glycopolypeptides are thus a potential therapeutic modality for inhibiting phagocyte activity.

Fig. 1. Lipid-tethered glycopolypeptides cluster and agonize Siglecs in cis on effector cells. (A) Immune cells express activating receptors that stimulate inflammatory signaling. (B) Clustering of Siglec-9 by cis-binding agonists stimulates inhibitory signaling that quenches activation.

11.
12.
Earth’s largest biotic crisis occurred during the Permo–Triassic Transition (PTT). On land, this event witnessed a turnover from synapsid- to archosauromorph-dominated assemblages and a restructuring of terrestrial ecosystems. However, understanding extinction patterns has been limited by a lack of high-precision fossil occurrence data to resolve events on submillion-year timescales. We analyzed a unique database of 588 fossil tetrapod specimens from South Africa’s Karoo Basin, spanning ∼4 My, and 13 stratigraphic bin intervals averaging 300,000 y each. Using sample-standardized methods, we characterized faunal assemblage dynamics during the PTT. High regional extinction rates occurred through a protracted interval of ∼1 Ma, initially co-occurring with low origination rates. This resulted in declining diversity up to the acme of extinction near the Daptocephalus–Lystrosaurus declivis Assemblage Zone boundary. Regional origination rates increased abruptly above this boundary, co-occurring with high extinction rates to drive rapid turnover and an assemblage of short-lived species symptomatic of ecosystem instability. The “disaster taxon” Lystrosaurus shows a long-term trend of increasing abundance initiated in the latest Permian. Lystrosaurus comprised 54% of all specimens by the onset of mass extinction and 70% in the extinction aftermath. This early Lystrosaurus abundance suggests its expansion was facilitated by environmental changes rather than by ecological opportunity following the extinctions of other species as commonly assumed for disaster taxa. Our findings conservatively place the Karoo extinction interval closer in time to, but not coeval with, the more rapid marine event and reveal key differences between the PTT extinctions on land and in the oceans.

Mass extinctions are major perturbations of the biosphere resulting from a wide range of different causes including glaciations and sea level fall (1), large igneous provinces (2), and bolide impacts (3, 4). These events caused permanent changes to Earth’s ecosystems, altering the evolutionary trajectory of life (5). However, links between the broad causal factors of mass extinctions and the biological and ecological disturbances that lead to species extinctions have been difficult to characterize. This is because ecological disturbances unfold on timescales much shorter than the typical resolution of paleontological studies (6), particularly in the terrestrial record (6–8). Coarse-resolution studies have demonstrated key mass extinction phenomena including high extinction rates and lineage turnover (7, 9), changes in species richness (10), ecosystem instability (11), and the occurrence of disaster taxa (12). However, finer time resolutions are central to determining the association and relative timings of these effects, their potential causal factors, and their interrelationships. Achieving these goals represents a key advance in understanding the ecological mechanisms of mass extinctions.

The end-Permian mass extinction (ca. 251.9 Ma) was Earth’s largest biotic crisis as measured by taxon last occurrences (13–15). Large outpourings from Siberian Trap volcanism (2) are the likely trigger of calamitous climatic changes, including a runaway greenhouse effect and ocean acidification, which had profound consequences for life on land and in the oceans (16–18). An estimated 81% of marine species (19) and 89% of tetrapod genera became extinct as established Permian ecosystems gave way to those of the Triassic. In the ocean, this included the complete extinction of reef-forming tabulate and rugose corals (20, 21) and significant losses in previously diverse ammonoid, brachiopod, and crinoid families (22). On land, many nonmammalian synapsids became extinct (16), and the glossopterid-dominated floras of Gondwana also disappeared (23). Stratigraphic sequences document a global “coral gap” and “coal gap” (24, 25), suggesting reef and forest ecosystems were rare or absent for up to 5 My after the event (26). Continuous fossil-bearing deposits documenting patterns of turnover across the Permian–Triassic transition (PTT) on land (27) and in the oceans (28) are geographically widespread (29, 30), including marine and continental successions that are known from China (31, 32) and India (33). Continental successions are known from Russia (34), Australia (35), Antarctica (36), and South Africa’s Karoo Basin (Fig. 1 and refs. 37–40), the latter providing arguably the most densely sampled and taxonomically scrutinized (41–43) continental record of the PTT. The main extinction has been proposed to occur at the boundary between two biostratigraphic zones with distinctive faunal assemblages, the Daptocephalus and Lystrosaurus declivis assemblage zones (Fig. 1), which marks the traditional placement of the Permian–Triassic geologic boundary [(37) but see ref. 44]. Considerable research has attempted to understand the anatomy of the PTT in South Africa (38, 39, 45–52) and to place it in the context of biodiversity changes across southern Gondwana (53, 54) and globally (29, 31, 32, 44, 47, 55).

Fig. 1. Map of South Africa depicting the distribution of the four tetrapod fossil assemblage zones (Cistecephalus, Daptocephalus, Lystrosaurus declivis, Cynognathus) and our two study sites where fossils were collected in this study (sites A and B). Regional lithostratigraphy and biostratigraphy within the study interval are shown alongside isotope dilution–thermal ionization mass spectrometry dates retrieved by Rubidge et al., Botha et al., and Gastaldo et al. (37, 44, 80). The traditional (dashed red line) and associated PTB hypotheses for the Karoo Basin (37, 44) are also shown. Although traditionally associated with the PTB, the Daptocephalus–Lystrosaurus declivis Assemblage Zone boundary is defined by first appearances of co-occurring tetrapod assemblages, so its position relative to the three PTB hypotheses is unchanged. The Ripplemead member (*) has yet to be formalized by the South African Committee for Stratigraphy.

Decades of research have demonstrated the richness of South Africa’s Karoo Basin fossil record, resulting in hundreds of stratigraphically well-documented tetrapod fossils across the PTT (37, 39, 56). This wealth of data has been used qualitatively to identify three extinction phases and an apparent early postextinction recovery phase (39, 45, 51). Furthermore, studies of Karoo community structure and function have elucidated the potential role of the extinction and subsequent recovery in breaking the incumbency of previously dominant clades, including synapsids (11, 57). Nevertheless, understanding patterns of faunal turnover and recovery during the PTT has been limited by the scarcity of quantitative investigations. Previous quantitative studies used coarsely sampled data (i.e., assemblage zone scale, 2 to 3 Ma time intervals) to identify low species richness immediately after the main extinction, potentially associated with multiple “boom and bust” cycles of primary productivity based on δ13C variation during the first 5 My of the Triassic (41, 58). However, many details of faunal dynamics in this interval remain unknown. Here, we investigate the dynamics of this major tetrapod extinction at an unprecedented time resolution (on the order of hundreds of thousands of years), using sample-standardized methods to quantify multiple aspects of regional change across the Cistecephalus, Daptocephalus, and Lystrosaurus declivis assemblage zones.
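The phrase "sample-standardized" above refers to estimating per-bin diversity from equal specimen quotas so that uneven sampling cannot masquerade as diversity change. Below is a generic rarefaction sketch; the taxon counts are invented, and this is a textbook illustration, not the study's exact protocol:

```python
# Rarefied richness: mean taxon count among a fixed quota of specimens
# drawn at random from each stratigraphic bin. Occurrence data are invented.
import random

def rarefied_richness(specimens, quota, trials=1000, seed=1):
    """Mean number of distinct taxa in `quota` specimens drawn without replacement."""
    rng = random.Random(seed)
    if len(specimens) < quota:
        return None  # bin too poorly sampled to standardize
    return sum(len(set(rng.sample(specimens, quota))) for _ in range(trials)) / trials

bins = {
    "late Permian bin":   ["Daptocephalus"] * 20 + ["Theriognathus"] * 12
                          + ["Moschorhinus"] * 8 + ["Dicynodon"] * 5,
    "early Triassic bin": ["Lystrosaurus"] * 35 + ["Moschorhinus"] * 4
                          + ["Proterosuchus"] * 3,
}
for name, occurrences in bins.items():
    print(name, rarefied_richness(occurrences, quota=30))
```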

13.
Whole-brain resting-state functional MRI (rs-fMRI) during 2 wk of upper-limb casting revealed that disused motor regions became more strongly connected to the cingulo-opercular network (CON), an executive control network that includes regions of the dorsal anterior cingulate cortex (dACC) and insula. Disuse-driven increases in functional connectivity (FC) were specific to the CON and somatomotor networks and did not involve any other networks, such as the salience, frontoparietal, or default mode networks. Censoring and modeling analyses showed that FC increases during casting were mediated by large, spontaneous activity pulses that appeared in the disused motor regions and CON control regions. During limb constraint, disused motor circuits appear to enter a standby mode characterized by spontaneous activity pulses and strengthened connectivity to CON executive control regions.

Disuse is a powerful paradigm for inducing plasticity that has uncovered key organizing principles of the human brain (1–4). Monocular deprivation—prolonged covering of one eye—revealed that multiple afferent inputs can compete for representational territory in the primary visual cortex (1). Similar competition between afferents also shapes the somatomotor system. Manipulations such as peripheral nerve deafferentation, whisker trimming, and limb constraint all drive plasticity in the primary somatosensory and motor cortex (2–4). Most plasticity studies to date have used focal techniques, such as microelectrode recordings, to study local changes in brain function. As a result, little is known about how behavior and experience shape the brain-wide functional networks that support complex cognitive operations (5).

The brain is composed of networks of regions that cooperate to perform specific cognitive functions (5–8). These functional networks show synchronized spontaneous activity while the brain is at rest, a phenomenon known as resting-state functional connectivity (FC) (9–11). FC can be measured noninvasively in humans using resting-state functional MRI (rs-fMRI) and has been used to parse the brain into canonical functional networks (12, 13), including visual, auditory, and somatomotor networks (14, 15); ventral and dorsal attention networks (8, 16); a default mode network with roles in internally directed cognition and episodic memory (7, 11); a salience network thought to assess the homeostatic relevance of external stimuli (17); a frontoparietal control network supporting error processing and moment-to-moment adjustments in behavior (18–20); and a cingulo-opercular control network (CON), which maintains executive control during goal-directed behavior (18, 19, 21). Each functional network likely carries out a variety of additional functions.

A more recent advance in human neuroscience has been the recognition of individual variability in network organization (22–25). Most early rs-fMRI studies examined central tendencies in network organization using group-averaged FC measurements (10, 12, 13). Recent work has demonstrated that functional networks can be identified in an individual-specific manner if sufficient rs-fMRI data are acquired, an approach termed precision functional mapping (PFM) (22, 23, 26–30). PFM respects the unique functional anatomy of each person and avoids averaging together functionally distinct brain regions across individuals.

We recently demonstrated that PFM can be used to follow the time course of disuse-driven plasticity in the human brain (31). Three adult participants (Nico, Ashley, and Omar) were scanned at the same time of day for 42 to 64 consecutive days (30 min of rs-fMRI per day) before, during, and after 2 wk of dominant upper-extremity casting (Fig. 1 A and B). Casting caused persistent disuse of the dominant upper extremity during daily behaviors and led to a marked loss of strength and fine motor skill in all participants. During casting, the upper-extremity regions of the left primary somatomotor cortex (L-SM1ue) and right cerebellum (R-Cblmue) functionally disconnected from the remainder of the somatomotor network. Disused motor circuits also exhibited large, spontaneous pulses of activity (Fig. 1C). Disuse pulses did not occur prior to casting, started to occur frequently within 1 to 2 d of casting, and quickly waned after cast removal.

Fig. 1. Experimental design and spontaneous activity pulses. (A) Three participants (Nico, Ashley, and Omar) wore casts covering the entire dominant upper extremity for 2 wk. (B) Participants were scanned every day for 42 to 64 consecutive days before, during, and after casting. All scans included 30 min of resting-state functional MRI. (C) During the Cast period, disused somatomotor circuits exhibited large pulses of spontaneous activity. (C, Left) Whole-brain ANOVA showing which brain regions contained disuse-driven pulses. (C, Right) Time courses of all pulses recorded from the disused primary somatomotor cortex.

Somatomotor circuits do not function in isolation. Action selection and motor control are thought to be governed by complex interactions between the somatomotor network and control networks, including the CON (18). Prior studies of disuse-driven plasticity, including our own, have focused solely on somatomotor circuits. Here, we leveraged the whole-brain coverage of rs-fMRI and the statistical power of PFM to examine disuse-driven plasticity throughout the human brain.
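Resting-state FC as used above reduces, at its core, to correlating spontaneous activity time courses between regions. A minimal sketch on synthetic data follows (region names and scan parameters are placeholders, not the study's processing pipeline):

```python
# FC between two regions = Pearson correlation of their resting BOLD
# time courses. Synthetic signals stand in for real rs-fMRI data.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 900                               # e.g., 30 min at a 2-s TR (assumed)
shared = rng.standard_normal(n_frames)       # common spontaneous fluctuation
roi_a = shared + 0.8 * rng.standard_normal(n_frames)   # e.g., disused L-SM1ue
roi_b = shared + 0.8 * rng.standard_normal(n_frames)   # e.g., a CON region

fc = np.corrcoef(roi_a, roi_b)[0, 1]
print(f"FC (Pearson r) = {fc:.2f}")
# Disuse-driven FC increases correspond to this r rising across days of
# casting; censoring analyses ask whether the rise survives when frames
# containing spontaneous activity pulses are excluded.
```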

14.
Development has often been viewed as a constraining force on morphological adaptation, but its precise influence, especially on evolutionary rates, is poorly understood. Placental mammals provide a classic example of adaptive radiation, but the debate around rate and drivers of early placental evolution remains contentious. A hallmark of early dental evolution in many placental lineages was a transition from a triangular upper molar to a more complex upper molar with a rectangular cusp pattern better specialized for crushing. To examine how development influenced this transition, we simulated dental evolution on “landscapes” built from different parameters of a computational model of tooth morphogenesis. Among the parameters examined, we find that increases in the number of enamel knots, the developmental precursors of the tooth cusps, were primarily influenced by increased self-regulation of the molecular activator (activation), whereas the pattern of knots resulted from changes in both activation and biases in tooth bud growth. In simulations, increased activation facilitated accelerated evolutionary increases in knot number, creating a lateral knot arrangement that evolved at least ten times on placental upper molars. Relatively small increases in activation, superimposed on an ancestral tritubercular molar growth pattern, could recreate key changes leading to a rectangular upper molar cusp pattern. Tinkering with tooth bud geometry varied the way cusps initiated along the posterolingual molar margin, suggesting that small spatial variations in ancestral molar growth may have influenced how placental lineages acquired a hypocone cusp. We suggest that development could have enabled relatively fast higher-level divergence of the placental molar dentition.

Whether developmental processes bias or constrain morphological adaptation is a long-standing question in evolutionary biology (1–4). Many of the distinctive features of a species derive from pattern formation processes that establish the position and number of anatomical structures (5). If developmental processes like pattern formation are biased toward generating only particular kinds of variation, adaptive radiations may often be directed along developmental–genetic “lines of least resistance” (2, 4, 6, 7). Generally, the evolutionary consequences of this developmental bias have been considered largely in terms of how it might influence the pattern of character evolution (e.g., refs. 1, 2, 8–10). But development could also influence evolutionary rates by controlling how much variation is accessible to natural selection in a given generation (11).

For mammals, the dentition is often the only morphological system linking living and extinct species (12). Correspondingly, tooth morphology plays a crucial role in elucidating evolutionary relationships, time calibrating phylogenetic trees, and reconstructing adaptive responses to past environmental change (e.g., refs. 13–15). One of the most pervasive features of dental evolution among mammals is an increase in the complexity of the tooth occlusal surface, primarily through the addition of new tooth cusps (16, 17). These increases in tooth complexity are functionally and ecologically significant because they enable more efficient mechanical breakdown of lower-quality foods like plant leaves (18).

Placental mammals are the most diverse extant mammalian group, comprising more than 6,000 living species spread across 19 extant orders, and this taxonomic diversity is reflected in their range of tooth shapes and dietary ecologies (12). Many extant placental orders, especially those with omnivorous or herbivorous ecologies (e.g., artiodactyls, proboscideans, rodents, and primates), convergently evolved a rectangular upper molar cusp pattern from a placental ancestor with a more triangular cusp pattern (19–21). This resulted from separate additions in each lineage of a novel posterolingual cusp, the “hypocone” [sensu (19)], to the tritubercular upper molar (Fig. 1), either through modification of a posterolingual cingulum (“true” hypocone) or another posterolingual structure, like a metaconule (pseudohypocone) (19). The fossil record suggests that many of the basic steps in the origin of this rectangular cusp pattern occurred during an enigmatic early diversification window associated with the divergence and early radiation of several placental orders (20, 21; Fig. 1). However, there remains debate about the rate and pattern of early placental divergence (22–24). On the one hand, most molecular phylogenies suggest that higher-level placental divergence occurred largely during the Late Cretaceous (25, 26), whereas other molecular phylogenies and paleontological analyses suggest more rapid divergence near the Cretaceous–Paleogene (K–Pg) boundary (21, 24, 27–29). Most studies agree that ecological opportunity created in the aftermath of the K–Pg extinction probably played an important role in ecomorphological diversification within the placental orders (30, 31). But exactly how early placentals acquired the innovations needed to capitalize on ecological opportunity remains unclear.
Dental innovations, especially those that facilitated increases in tooth complexity, may have been important because they would have promoted expansion into plant-based dietary ecologies left largely vacant after the K–Pg extinction event (32).

Fig. 1. Placental mammal lineages separately evolved complex upper molar teeth with a rectangular cusp pattern composed of two lateral pairs of cusps from a common ancestor with a simpler, triangular cusp pattern. Many early relatives of the extant placental orders, such as Eritherium, possessed a hypocone cusp and a more rectangular primary cusp pattern. Examples of complex upper molars are the following: Proboscidea, the gomphothere Anancus; Rodentia, the wood mouse Apodemus; and Artiodactyla, the suid Nyanzachoerus.

Mammalian tooth cusps form primarily during the “cap” and “bell” stages of dental development, when signaling centers called enamel knots establish the future sites of cusp formation within the inner dental epithelium (33, 34). The enamel knots secrete molecules that promote proliferation and changes in cell–cell adhesion, which facilitate invagination of the dental epithelium into an underlying layer of mesenchymal cells (34, 35). Although a range of genes are involved in tooth cusp patterning (36–38), the basic dynamics can be effectively modeled using reaction–diffusion models with just three diffusible morphogens: an activator, an inhibitor, and a growth factor (39–41). Candidate activator genes in mammalian tooth development include Bmp4, Activin A, Fgf20, and Wnt genes, whereas potential inhibitors include Shh and Sostdc1, and Fgf4 and Bmp2 have been hypothesized to act as growth factors (38, 40–43). In computer models of tooth development, activator molecules up-regulated in the underlying mesenchyme stimulate differentiation of overlying epithelium into nondividing enamel knot cells. These in turn secrete molecules that inhibit further differentiation of epithelium into knot cells, while also promoting the cell proliferation that creates the topographic relief of the cusp (40). Although many molecular, cellular, and physical processes have the potential to influence cusp formation, and thereby tooth complexity (35, 37), parameters that control the strength and conductance of the activator and inhibitor signals, the core components of the reaction–diffusion cusp patterning mechanism (39, 40), are likely to be especially important.

Here, we integrate a previous computer model of tooth morphogenesis, ToothMaker (41), with simulations of trait evolution and data from the fossil record (Fig. 2) to examine the developmental origins of tooth complexity in placental mammals. Specifically, we ask the following: 1) What developmental processes can influence how many cusps form? 2) How might these developmental processes influence the evolution of tooth cusp number, especially rates? And 3) what developmental changes may have been important in the origins of the fourth upper molar cusp, the hypocone, in placental mammal evolution?

Fig. 2. Workflow for simulations of tooth complexity evolution. (A) Tooth shape is varied for five signaling and growth parameters in ToothMaker. (B) From an ancestral state, each parameter is varied in 2.5% increments up to a maximum of ±50% of the ancestral state. (C) Tooth complexity and enamel knot (EK) pattern were quantified for each parameter combination. Tooth complexity was measured using cusp/EK number and orientation patch count (OPC).
ToothMaker-simulated teeth and placental upper second molars were classified into categories based on EK/cusp pattern. (D) The parameter space was populated with the pattern and tooth complexity data to build a developmental landscape. (E) Tooth complexity evolution was simulated on each developmental landscape. (F) The resulting diversity and pattern of tooth complexity were compared with placental mammal molar diversity.
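To make the activator–inhibitor logic concrete, below is a minimal one-dimensional Gierer–Meinhardt-type reaction–diffusion sketch that counts activator peaks, a rough stand-in for enamel knot number. This is not ToothMaker: the grid, the `act` self-activation parameter, and all rate constants are illustrative assumptions, and the values may need tuning to sit inside the patterning (Turing-instability) regime.

```python
import numpy as np

def knot_count(act, n=200, steps=6000, dt=0.01, dx=1.0,
               Da=2.0, Dh=40.0, mu_a=1.0, mu_h=2.0, rho=1.0, seed=0):
    """Toy 1D activator-inhibitor model (Gierer-Meinhardt-type).

    act : strength of activator self-activation (autocatalysis),
          loosely analogous to ToothMaker's activation parameter.
    Returns the number of activator peaks, a crude proxy for enamel
    knot number. All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    a = 1.0 + 0.01 * rng.standard_normal(n)   # activator + small noise
    h = 1.0 + 0.01 * rng.standard_normal(n)   # inhibitor
    for _ in range(steps):
        lap_a = (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2
        lap_h = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
        a = a + dt * (act * a * a / h - mu_a * a + rho + Da * lap_a)
        h = h + dt * (a * a - mu_h * h + Dh * lap_h)
    # count local maxima that clearly rise above the mean activator level
    peaks = (a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > 1.2 * a.mean())
    return int(peaks.sum())

# Sweeping self-activation probes how stronger activation pushes the
# system across the patterning threshold, increasing peak number.
for act in (0.3, 0.6, 0.9, 1.2):
    print(act, knot_count(act))
```

In this caricature, low `act` leaves the field uniform (no "knots"), while stronger self-activation destabilizes the uniform state and produces a periodic array of peaks; the full model additionally couples patterning to tissue growth, which is where the growth biases discussed above enter.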

15.
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed “nonvarying phase motion,” for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.

James Gibson first remarked that the instantaneous motion of points on the retina (Fig. 1A) can be formally described as a two-dimensional (2D) field of velocity vectors called the “optic flow field” (or “optic flow”) (1). Such optic flow, caused by an observer’s movement relative to the environment, conveys information about self-motion and the structure of the visual scene (1–15). When an observer translates in a given direction along a straight path, the optic flow field radiates from a point in the image with zero velocity, or singularity, called the focus of expansion (Fig. 1B). It is well known that under such conditions, one can accurately estimate one’s “heading” (i.e., instantaneous direction of translation in retinocentric coordinates) by simply locating the focus of expansion (SI Appendix). However, if there is angular rotation in addition to translation (by moving along a curved path or by a head or eye movement), the singularity in the optic flow field will be displaced such that it no longer corresponds to the true heading (Fig. 1 C and D). In this case, if one estimates heading by locating the singularity, the estimate will be biased away from the true heading. This is known as the rotation problem (1–4).

Fig. 1. Projective geometry, the rotation problem, time-varying optic flow, and the optic acceleration hypothesis. (A) Viewer-centered coordinate frame and perspective projection. Because of motion between the viewpoint and the scene, a 3D surface point traverses a path in 3D space. Under perspective projection, the 3D path of this point projects onto a 2D path in the image plane (retina), the temporal derivative of which is called image velocity. The 2D velocities associated with all visible points define a dense 2D vector field called the optic flow field. (B–D) Illustration of the rotation problem. (B) Optic flow for pure translation (1.5-m/s translation speed, 0° heading, i.e., heading in the direction of gaze). Optic flow singularity (red circle) corresponds to heading (purple circle). (C) Pure rotation, for illustrative purposes only and not corresponding to any experimental condition (2°/s rightward rotation). (D) Translation + rotation (1.5-m/s translation speed, 0° heading, 2°/s rightward rotation). Optic flow singularity (red circle) is displaced away from heading (purple circle). (E) Three frames from a video depicting movement along a circular path with the line of sight initially perpendicular to a single fronto-parallel plane composed of black dots. (F) Time-varying evolution of optic flow. The first optic flow field reflects image motion between the first and second frames of the video. The second optic flow field reflects image motion between the second and third frames of the video. For this special case (circular path), the optic flow field evolves (and the optic flow singularity drifts) only due to the changing depth of the environment relative to the viewpoint. (G) Illustration of the optic acceleration hypothesis. Optic acceleration is the derivative of optic flow over time (here, approximated as the difference between the second and first optic flow fields). The singularity of the optic acceleration field corresponds to the heading direction. Acceleration vectors are autoscaled for visibility.

Computer vision researchers and vision scientists have developed a variety of algorithms that accurately and precisely extract observer translation and rotation from optic flow, thereby solving the rotation problem.
Nearly all of these rely on instantaneous optic flow (i.e., a single optic flow field) (4, 9, 16–25), with few exceptions (26–29). However, it is unknown whether these algorithms are commensurate with the neural computations underlying heading perception.

The consensus in the experimental literature is that human observers can estimate heading (30, 31) from instantaneous optic flow, in the absence of additional information (5, 10, 15, 32–34). Even so, there are reports of systematic biases in heading perception (11); the visual consequences of rotation (eye, head, and body) can bias heading judgments (10, 15, 35–37), with the amount of bias typically proportional to the magnitude of rotation. Other visual factors, such as stereo cues (38, 39), depth structure (8, 10, 40–43), and field of view (FOV) (33, 42–44), can modulate the strength of these biases. Errors in heading judgments have been reported to be greater when eye (35–37, 45, 46) or head movements (37) are simulated rather than real, which has been taken to mean that observers require extraretinal information, although there is also evidence to the contrary (10, 15, 33, 40, 41, 44, 47–50). Regardless, to date no one has tested whether heading perception (even with these biases) is based on instantaneous optic flow or on the information available in how the optic flow field evolves over time. Some have suggested that heading estimates rely on information accumulated over time (32, 44, 51), but no one has investigated the role of time-varying optic flow without confounding it with stimulus duration (i.e., the duration of evidence accumulation).

In this study, we employed an image processing technique that ensured that only a single optic flow field was available to observers, even though the stimulus was presented for an extended period of time. We called this condition “nonvarying phase motion” or “nonvarying”: The phases of two component gratings comprising each stationary stimulus patch shifted over time at a constant rate, causing a percept of motion in the absence of veridical movement (52). Phase motion also eliminated other cues that might otherwise have been used for heading judgments, including image point trajectories (15, 32) and their spatial compositions (i.e., looming) (53, 54). For nonvarying phase motion, observers exhibited large biases in heading judgments in the presence of rotation. A second condition, “time-varying phase motion” or “time-varying,” included acceleration by varying the velocity of phase motion over time to match the evolution of a sequence of optic flow fields. Doing so allowed observers to compensate for the confounding effect of rotation on optic flow, making heading perception nearly veridical. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow.
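The rotation problem itself is easy to reproduce numerically. The sketch below evaluates the standard perspective-projection motion field (Longuet-Higgins–Prazdny-style equations, in one common sign convention) for combined translation and rotation over a fronto-parallel plane, then locates the flow singularity as the minimum-speed point. The focal length, viewing distance, and field of view are assumptions chosen for illustration; the 1.5 m/s and 2°/s values echo Fig. 1, and none of this reproduces the actual phase-motion stimuli.

```python
import numpy as np

f = 1.0                           # focal length (image units), assumed
Z0 = 10.0                         # distance to the dot plane (m), assumed
T = np.array([0.0, 0.0, 1.5])     # translation: 0 deg heading, 1.5 m/s
W = np.radians([0.0, 2.0, 0.0])   # rotation: 2 deg/s rightward (yaw)

x, y = np.meshgrid(np.linspace(-0.5, 0.5, 401),
                   np.linspace(-0.5, 0.5, 401))

# translational component (scaled by inverse depth) + rotational component
u = (-f * T[0] + x * T[2]) / Z0 \
    + (x * y / f) * W[0] - (f + x**2 / f) * W[1] + y * W[2]
v = (-f * T[1] + y * T[2]) / Z0 \
    + (f + y**2 / f) * W[0] - (x * y / f) * W[1] - x * W[2]

speed = np.hypot(u, v)
i, j = np.unravel_index(speed.argmin(), speed.shape)
print("heading (x, y):   (0.000, 0.000)")   # gaze direction
print(f"flow singularity: ({x[i, j]:+.3f}, {y[i, j]:+.3f})")
# With W = 0 the singularity coincides with the heading; with 2 deg/s of
# yaw it is displaced in x (here by roughly f * W_y * Z0 / T_z), which is
# exactly the bias a singularity-locating strategy would inherit.
```

Differencing two such fields computed at successive times (as depth Z0 changes along the path) approximates the optic acceleration field the authors hypothesize the visual system exploits.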

16.
17.
18.
Dendritic, i.e., tree-like, river networks are ubiquitous features of Earth’s landscapes; however, how and why river networks organize themselves into this form is incompletely understood. A branching pattern has been argued to be an optimal state. Therefore, we should expect models of river evolution to drastically reorganize (suboptimal) purely nondendritic networks into (more optimal) dendritic networks. To date, physically based models of river basin evolution have been incapable of achieving this result without substantial allogenic forcing. Here, we present a model that does indeed accomplish massive drainage reorganization. The key feature in our model is basin-wide lateral incision of bedrock channels. The addition of this submodel allows channels to migrate laterally, which generates river capture events and drainage migration. An important factor in the model that dictates the rate and frequency of drainage network reorganization is the ratio of two parameters, the lateral and vertical rock erodibility constants. In addition, our model differs from others in that its simulations approach a dynamic steady state. At a dynamic steady state, drainage networks persistently reorganize instead of approaching a stable configuration. Our model results suggest that lateral bedrock incision processes can drive major drainage reorganization and explain the apparent long-lived transience of landscapes on Earth.

What should a drainage network look like? Fig. 1A shows a single channel winding its way through the catchment so as to have access to water and sediment from unchannelized zones in the same manner as the dendritic (tree-like) network of Fig. 1B. It appears straightforward that the dendritic pattern is a model for nature, and the single channel is not. Dendritic drainage networks are called such because of their similarity to branching trees, and their patterns are “characterized by irregular branching in all directions” (1) with “tributaries joining at acute angles” (2). Drainage networks can also take on other forms in nature, such as parallel, pinnate, rectangular, and trellis (2). However, drainage networks in their most basic form, without topographic, lithologic, and tectonic constraints, should tend toward a dendritic form (2). In addition, drainage networks that take a branching, tree-like form have been argued to be “optimal channel networks” that minimize total energy dissipation (3, 4). Therefore, we would expect models that simulate river network formation, known as landscape evolution models (LEMs), to massively reorganize the nondendritic pattern of Fig. 1A when given it as an initial condition and to approach the dendritic steady state of Fig. 1B. To date, no numerical LEM has shown the ability to do this. Here, we present a LEM that can indeed accomplish such a reorganization. A corollary of this ability is that landscapes approach a dynamic, rather than static, steady state.

Fig. 1. Schematic diagram of a nondendritic and a dendritic drainage network. This figure shows the Wolman Run Basin in Baltimore County, MD, (A) drained by a single channel winding across the topography and (B) drained by a dendritic network of channels. Both networks have similar drainage densities (53, 54), but there is a stark difference between their stream ordering (53–56). This figure invites discussion as to how a drainage system might evolve from the configuration of A to that of B.

There is indeed debate as to whether landscapes tend toward an equilibrium that is frozen or highly dynamic (5). Hack (6) hypothesized that erosional landscapes attain a steady state where “all elements of the topography are downwasting at the same rate.” This hypothesis has been tested in numerical models and small-scale experiments. Researchers found that numerical LEMs create static topographies (7, 8). In this state, erosion and uplift are in balance at all locations in the landscape, resulting in landscapes that are dissected by stable drainage networks in geometric equilibrium (9). A landscape has achieved geometric equilibrium in planform when a proxy for steady-state river elevation, named χ (10), has equal values across all drainage divides. In contrast, experimental landscapes (7, 11) develop drainage networks that persistently reorganize. Recent research on field landscapes suggests that drainage divides migrate until reaching geometric equilibrium (9), but other field-based research suggests that landscapes may never attain geometric equilibrium (12).

The dynamism of the equilibrium state determines the persistence of initial conditions in experimental and model landscapes. It is important to understand initial condition effects (13) to better constrain uncertainty in LEM predictions.
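As a concrete aside, the χ proxy mentioned above is just an along-channel integral of inverse drainage area raised to a concavity exponent. A minimal sketch, with an invented area profile and a typical concavity of 0.45, might look like this:

```python
import numpy as np

def chi(x, A, A0=1.0, theta=0.45):
    """Chi proxy for steady-state river elevation:
    chi(x) = integral from base level to x of (A0 / A)**theta dx'.
    x: upstream distance from the outlet (m); A: drainage area along the
    profile (m^2); A0: reference area; theta: concavity (m/n). All
    values here are generic assumptions, not tied to any landscape."""
    integrand = (A0 / A) ** theta
    return np.concatenate([[0.0],
                           np.cumsum(np.diff(x) * integrand[1:])])

x = np.linspace(0.0, 1e4, 200)            # distance upstream of outlet
A = 1e6 * (1.0 - x / 1.2e4) ** 2 + 1e3    # invented Hack-like area decay
print(chi(x, A)[-1])                      # chi at the channel head
```

Geometric equilibrium, in this framing, is reached when χ integrated up to a divide from opposite sides takes equal values.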
Kwang and Parker (7) demonstrated that numerical LEMs exhibit “extreme memory,” where small topographic perturbations in initial conditions are amplified and preserved during a landscape’s evolution (Fig. 2A). Extreme memory in the numerical models is closely related to the feasible optimality phenomenon found in research on optimal channel networks (4). These researchers suggest that nature’s search for the most “stable” river network configuration is “myopic” and unable to find configurations that completely ignore their initial condition. In contrast to numerical models, experimental landscapes (7, 11) reach a highly dynamic state where all traces of initial surface conditions are erased by drainage network reorganization. It has been hypothesized that lateral erosion processes are responsible for drainage network reorganization in landscapes (7, 14); these processes are not included in most LEMs.

Fig. 2. A comparison of LEM-woLE (A) and LEM-wLE (B). Both models use the same initial condition, i.e., an initially flat topography with an embedded sinusoidal channel (1.27 m deep) without added topographic perturbations. Without perturbations, the landscape produces angular tributaries that are attached to the main sinusoidal channel (compare with SI Appendix, Fig. S7). Here, LEM-wLE quickly shreds the signal of the initial condition over time, removing the angular tributaries. After 10 RUs of erosion, the sinusoidal signal is mostly erased. After 100 RUs, the drainage network continues to reorganize itself (i.e., a dynamic steady state), as shown in Movie S1.

Most widely used LEMs simulate incision into bedrock solely in the vertical direction. However, there is growing recognition that bedrock channels also shape the landscape by incising laterally (15, 16). Lateral migration into bedrock is important for the creation of strath terraces (17, 18) and the morphology of wide bedrock valleys (19–21). Recently, Langston and Tucker (22) developed a formulation for lateral bedrock erosion in LEMs. Here, we implement their submodel to explore the long-term behavior of LEMs that incorporate lateral erosion.

The LEM submodel of Langston and Tucker (22) allows channels to migrate laterally. By including this autogenic mechanism, we hypothesize that lateral bedrock erosion creates instabilities that 1) shred (23) the memory of initial conditions, such as the unrealistic configuration of Fig. 1A, and 2) produce landscapes that achieve a statistical steady state instead of a static one. By incorporating the lateral incision component (22) into a LEM, we aim to answer the following: 1) What controls the rate of decay of signals from initial conditions? 2) What are the frequency and magnitude of drainage reorganization in an equilibrium landscape? 3) What roles do model boundary conditions play in landscape reorganization?
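To show the moving parts at a cartoon level, here is a toy grid LEM: D8 flow routing, detachment-limited vertical stream-power incision (E = Kv · A^m · S), uniform uplift, and a deliberately crude lateral term that lowers a random cross-stream neighbor of each channel cell at a rate set by a lateral-to-vertical ratio. All parameter values are assumptions for illustration, and the lateral rule is far simpler than the Langston and Tucker (22) submodel the authors actually implement.

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def lem_step(z, dt=1e3, U=1e-3, Kv=2e-5, lat_ratio=0.5, m=0.5, rng=None):
    """One step of a toy LEM. lat_ratio plays the role of the
    lateral/vertical erodibility ratio highlighted in the text.
    Illustrative only; parameters are invented."""
    n = z.shape[0]
    z = z + U * dt
    z[0, :] = 0.0                                 # fixed base-level edge
    rec = {}                                      # cell -> (receiver, slope)
    for i in range(n):
        for j in range(n):
            best, tgt = 0.0, None
            for di, dj in OFFSETS:                # steepest-descent (D8)
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    s = (z[i, j] - z[a, b]) / np.hypot(di, dj)
                    if s > best:
                        best, tgt = s, (a, b)
            if tgt is not None:
                rec[(i, j)] = (tgt, best)
    area = np.ones((n, n))                        # drainage area, cell units
    for idx in np.argsort(z, axis=None)[::-1]:    # accumulate high -> low
        i, j = divmod(idx, n)
        if (i, j) in rec:
            (a, b), _ = rec[(i, j)]
            area[a, b] += area[i, j]
    for (i, j), ((a, b), slope) in rec.items():
        E = Kv * area[i, j] ** m * slope          # vertical stream power
        z[i, j] = max(z[i, j] - E * dt, z[a, b])  # don't incise below receiver
        if lat_ratio > 0 and rng is not None:     # crude lateral erosion:
            di, dj = OFFSETS[rng.integers(8)]     # attack a random neighbor
            a2, b2 = i + di, j + dj
            if 0 <= a2 < n and 0 <= b2 < n and z[a2, b2] > z[i, j]:
                z[a2, b2] -= lat_ratio * E * dt
    return z

rng = np.random.default_rng(1)
z = rng.random((48, 48))
for _ in range(300):
    z = lem_step(z, lat_ratio=0.5, rng=rng)
```

In this toy setup, lat_ratio = 0 would be expected to let the network freeze into a static configuration, while nonzero values keep flow paths shifting, which is the qualitative contrast between static and dynamic steady states that the paper formalizes.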

19.
Parallel adaptation provides valuable insight into the predictability of evolutionary change through replicated natural experiments. A steadily increasing number of studies have demonstrated genomic parallelism, yet the magnitude of this parallelism varies depending on whether populations, species, or genera are compared. This led us to hypothesize that the magnitude of genomic parallelism scales with genetic divergence between lineages, but whether this is the case and the underlying evolutionary processes remain unknown. Here, we resequenced seven parallel lineages of two Arabidopsis species, which repeatedly adapted to challenging alpine environments. By combining genome-wide divergence scans with model-based approaches, we detected a suite of 151 genes that show parallel signatures of positive selection associated with alpine colonization, involved in response to cold, high radiation, short season, herbivores, and pathogens. We complemented these parallel candidates with published gene lists from five additional alpine Brassicaceae and tested our hypothesis on a broad scale spanning ∼0.02 to 18 My of divergence. Indeed, we found quantitatively variable genomic parallelism whose extent significantly decreased with increasing divergence between the compared lineages. We further modeled parallel evolution over the Arabidopsis candidate genes and showed that a decreasing probability of repeated selection on the same standing or introgressed alleles drives the observed pattern of divergence-dependent parallelism. We therefore conclude that genetic divergence between populations, species, and genera, affecting the pool of shared variants, is an important factor in the predictability of genome evolution.

Evolution is driven by a complex interplay of deterministic and stochastic forces whose relative importance is a matter of debate (1). Because evolution is largely a historical process, we have limited ability to experimentally test the predictability of evolution in its full complexity (i.e., in natural environments) (2). Distinct lineages that independently adapted to similar conditions by similar phenotypes (termed “parallel,” considered synonymous with “convergent” here) can provide invaluable insights into the issue (3, 4). An improved understanding of the probability of parallel evolution in nature may inform on constraints on evolutionary change and provide insights relevant for predicting the evolution of pathogens (5–7), pests (8, 9), or species in human-polluted environments (10, 11). Although the past few decades have seen an increasing body of work supporting the parallel emergence of traits by the same genes and even alleles, we know surprisingly little about what makes parallel evolution more likely and, by extension, what factors underlie evolutionary predictability (1, 12).

A wealth of literature describes the probability of “genetic” parallelism, showing why certain genes are involved in parallel adaptation more often than others (13). There is theoretical and empirical evidence that pleiotropic constraints, the availability of beneficial mutations, and position in the regulatory network all affect the degree of parallelism at the level of a single locus (3, 13–18). In contrast, we know little about the causes underlying “genomic” parallelism (i.e., what fraction of the genome is reused in adaptation and why). Individual case studies demonstrate large variation in genomic parallelism, ranging from an absence of any parallelism (19), through similarity in functional pathways but not genes (20, 21) and reuse of a limited number of genes (22–24), to abundant parallelism at both gene and functional levels (25, 26). Yet, there is little consensus about what determines variation in the degree of gene reuse (the fraction of genes that repeatedly emerge as selection candidates) across investigated systems (1).

Divergence (the term used here to describe both intra- and interspecific genetic differentiation) between the compared instances of parallelism emerges as a potential driver of the variation in gene reuse (14, 27, 28). Phenotype-oriented meta-analyses suggest that both phenotypic convergence (28) and the genetic parallelism underlying phenotypic traits (14) decrease with increasing time to the common ancestor. Although a similar targeted multiscale comparison is lacking at the genomic level, our brief review of published studies (29 cases, Dataset S1) suggests that gene reuse also tends to scale with divergence (Fig. 1A and SI Appendix, Fig. S1). Moreover, allele reuse (repeated sweep of the same haplotype that is shared among populations either via gene flow or from standing genetic variation) frequently underlies parallel adaptation between closely related lineages (29–32), while parallelism from independent de novo mutations at the same locus dominates between distantly related taxa (13). Similarly, previous studies reported a decreasing probability of hemiplasy (apparent convergence resulting from gene tree discordance) with divergence in phylogeny-based studies (33, 34).
This suggests that the degree of allele reuse may be the primary factor underlying the hypothesized divergence dependency of parallel genome evolution, possibly reflecting weak hybridization barriers, widespread ancestral polymorphism between closely related lineages (35), or ecological factors (lower niche differentiation and geographical proximity) (36, 37). However, the generally restricted focus of individual studies of genomic parallelism on a single level of divergence does not lend itself to a unified comparison across divergence scales. Although the ages of compared lineages affect a variety of evolutionary–ecological processes such as diversification rates, community structure, or niche conservatism (37), the hypothesis that genomic parallelism scales with divergence has not yet been systematically tested, and the underlying evolutionary processes remain poorly understood.

Fig. 1. Hypotheses regarding relationships between genomic parallelism and divergence and the Arabidopsis system used to address these hypotheses. (A) Based on our literature review, we propose that genetically closer lineages adapt to a similar challenge more frequently by gene reuse, sampling suitable variants from the shared pool (allele reuse), which makes their adaptive evolution more predictable. The color ramp symbolizes rising divergence between the lineages (∼0.02 to 18 Mya in this study); the symbols denote different divergence levels tested here using resequenced genomes of 22 Arabidopsis populations (circles) and a meta-analysis of candidates in Brassicaceae (asterisks). (B) Spatial arrangement of lineages of varying divergence (neutral FST; bins only aid visualization; all tests were performed on a continuous scale) encompassing parallel alpine colonization within the two Arabidopsis outcrossers from central Europe: A. arenosa (diploid: aVT; autotetraploid: aNT, aZT, aRD, and aFG) and A. halleri (diploid: hNT and hFG). Note that only two of the ten between-species pairs (dark green) are shown, to aid visibility. The color scale corresponds to the left part of the color ramp used in A. (C) Photos of representative alpine and foothill habitats. (D) Representative phenotypes of originally foothill and alpine populations grown in a common garden, demonstrating phenotypic convergence. Scale bar corresponds to 4 cm. (E) Morphological differentiation among 223 A. arenosa individuals originating from foothill (black) and alpine (gray) populations from four regions after two generations in a common garden. Principal component analysis was run using 16 morphological traits taken from ref. 45.

Here, we aimed to test this hypothesis and investigate whether allele reuse is a major factor underlying the relationship. We analyzed replicated instances of adaptation to a challenging alpine environment, spanning a range of divergence from populations to tribes within the plant family Brassicaceae (38–43) (Fig. 1A). First, we took advantage of a unique, naturally multireplicated setup in the plant model genus Arabidopsis that has so far been neglected from a genomic perspective (Fig. 1B). Two predominantly foothill-dwelling Arabidopsis outcrossers (A. arenosa, A. halleri) exhibit scattered, morphologically distinct alpine occurrences at rocky outcrops above the timberline (Fig. 1C). These alpine forms are separated from the widespread foothill population by a distribution gap spanning at least 500 m of elevation.
Previous genetic and phenotypic investigations, together with the follow-up analyses presented here, showed that the scattered alpine forms of both species represent independent alpine colonizations in each mountain range, followed by parallel phenotypic differentiation (Fig. 1 D and E) (44–46). Thus, we sequenced genomes from seven alpine and adjacent foothill population pairs, covering all European lineages encompassing the alpine ecotype. We discovered a suite of 151 genes from multiple functional pathways relevant to alpine stress that were repeatedly differentiated between foothill and alpine populations. This points toward a polygenic, multifactorial basis of parallel alpine adaptation.

We took advantage of this set of well-defined parallel selection candidates and tested whether the degree of gene reuse decreases with increasing divergence between the compared lineages (Fig. 1A). By extending our analysis to five additional alpine Brassicaceae species, we further tested whether there are limits to gene reuse above the species level. Finally, we inquired into the possible underlying evolutionary processes by estimating the extent of allele reuse using a dedicated modeling approach. Overall, our empirical analysis provides perspective on the ongoing discussion about variability in the reported magnitude of parallel genome evolution and identifies allele reuse as an important evolutionary process shaping the extent of genomic parallelism between populations, species, and genera.
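A minimal sketch of the kind of scaling test described here: compute gene reuse for each pair of lineages (quantified below as a Jaccard overlap of selection-candidate sets, one of several reasonable choices) and ask whether it declines with neutral divergence. The lineage labels follow Fig. 1B, but the gene sets and FST values are invented placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations

# Placeholder candidate-gene sets per lineage and pairwise neutral
# divergence (FST); contents invented purely for illustration.
candidates = {
    "aVT": {"g01", "g02", "g05", "g09", "g11"},
    "aNT": {"g02", "g05", "g07", "g11"},
    "hNT": {"g05", "g13"},
}
fst = {("aNT", "aVT"): 0.05,   # within-species pair
       ("aNT", "hNT"): 0.40,   # between-species pairs
       ("aVT", "hNT"): 0.45}

reuse, div = [], []
for a, b in combinations(sorted(candidates), 2):
    shared = candidates[a] & candidates[b]
    pooled = candidates[a] | candidates[b]
    reuse.append(len(shared) / len(pooled))   # Jaccard index = gene reuse
    div.append(fst[(a, b)])

# Divergence-dependent parallelism predicts a negative slope.
slope, intercept = np.polyfit(div, reuse, 1)
print(f"gene reuse vs. divergence slope: {slope:.2f}")
```

In a real analysis one would additionally test whether each pairwise overlap exceeds the random expectation (e.g., with a hypergeometric test over the total number of assayed genes) before interpreting the trend.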

20.
Here we report complex supramolecular tessellations achieved by the directed self-assembly of amphiphilic platinum(II) complexes. Despite their twofold symmetry, these geometrically simple molecules exhibit a complicated columnar structural hierarchy. A possible key to this increase in order is a topological transition into circular trimers, which are noncovalently interlocked by metal···metal and π–π interactions, thereby allowing cofacial stacking in a prismatic assembly. Another key is the immiscibility of the tailored hydrophobic and hydrophilic sidechains. Their phase separation leads to the formation of columnar crystalline nanostructures homogeneously oriented on the substrate, featuring an unusual geometry analogous to a rhombitrihexagonal Archimedean tiling. Furthermore, symmetry lowering of regular motifs by design results in an orthorhombic lattice obtained by the coassembly of two different platinum(II) amphiphiles. These findings illustrate the potential of supramolecular engineering in creating complex self-assembled architectures of soft materials.

Tessellation in two dimensions (2D) is a very old topic in geometry concerning how one or more shapes can be periodically arranged to fill a Euclidean plane without any gaps. Tessellation principles have been extensively applied in decorative art since early times. In the natural sciences, there has been growing attention to creating ordered structures with increasingly complex architectures inspired by semi-regular Archimedean tilings (ATs) and quasicrystalline textures, on account of their intriguing physical properties (1–5) and biological functions (6). Recent advances in this regard have been achieved in various fields of supramolecular science, including the programmable self-assembly of DNA molecules (7), coordination-driven assembly (8–10), supramolecular interfacial engineering (11–13), crystallization of organic polygons (14, 15), colloidal particle superlattices (16), and other soft-matter systems (17–20). Moreover, tessellation in 2D can overcome topological frustration to generate complex semi- or non-regular patterns from geometrically simple motifs. As exemplified by the self-templating assembly of spherical soft microparticles (21), a vast array of 2D micropatterns encoding non-regular tilings, such as rectangular, rhomboidal, hexagonal, and herringbone superlattices, was obtained by a layer-by-layer strategy at a liquid–liquid interface. Tessellation principles have also been extended to the self-assembly of giant molecules in three dimensions (3D). Superlattices with high space-group symmetry (Im3̄m, Pm3̄n, and P4₂/mnm) were reported in dendrimers and dendritic polymers by Percec and coworkers (22–24). Recently, Cheng and coworkers identified highly ordered Frank–Kasper phases in giant amphiphiles containing molecular nanoparticles (25–28). Despite such advances in the field of soft matter, an understanding of how structural ordering in supramolecular materials is influenced by the geometric factors of the constituent molecules has so far remained elusive.

In light of these developments and the desire to explore such supramolecular systems, square-planar platinum(II) (PtII) polypyridine complexes may serve as ideal candidates for model studies, not only because of their intriguing spectroscopic and luminescence properties (29, 30) but also because of their propensity to form supramolecular polymers or oligomers via noncovalent Pt···Pt and π–π interactions (31–39). Although rod-shaped and lamellar structures are the most commonly observed in the self-assembly of planar PtII complexes (34–39), 2D-ordered nanostructures, such as hexagonally packed columns (31, 40) and honeycomb-like networks (41–43), were only recently demonstrated, by our group.

Herein, we report the serendipitous discovery of a C2h-symmetric PtII amphiphile (Fig. 1A) that can hierarchically self-assemble into a 3D-ordered nanostructure with hexagonal geometry. Interestingly, this structurally anisotropic molecule possibly undergoes a topological transition and interlocks to form circular trimers via noncovalent Pt···Pt and π–π interactions (Fig. 1B). The resultant triangular motif is architecturally stabilized and preorganized for one-dimensional (1D) prismatic assembly (Fig. 1C). Together with the phase separation of the tailored hydrophobic and hydrophilic sidechains, an unusual and unique 3D hexagonal lattice is formed (Fig. 1D), in which the Pt centers adopt a rare rhombitrihexagonal AT-like order.
Finally, the nanoarchitecture develops in a hierarchical manner on the substrate owing to homogeneous nucleation (Fig. 1E).

Fig. 1. Hierarchical self-assembly of a PtII amphiphile into hexagonal ordering. (A) Space-filling (CPK) model of a C2h-symmetric PtII amphiphile (1). All of the hydrogen atoms and counterions are omitted for clarity. (B) CPK representations of possible models of regular triangular, tetragonal, pentagonal, and hexagonal motifs formed with Pt···Pt and π–π stacking. These motifs possess a hydrophilic core (red) of varying diameter wrapped by a hydrophobic shell comprising long alkyl chains (gray). (C) CPK representation of a 1D prismatic structure consisting of circular trimers with long-range Pt···Pt and π–π stacking. (D) CPK representation of a 3D columnar lattice constructed from the prismatic assemblies adopting a rare rhombitrihexagonal AT-like order. With the assistance of the phase separation, the hydrophobic domain serves as a discrete column associated with six prismatic neighbors. (E) Schematic representation of the nanoarchitecture with homogeneous orientation.
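For readers unfamiliar with the geometry invoked here, the rhombitrihexagonal tiling is the 3.4.6.4 Archimedean tiling: every vertex touches a triangle, a square, a hexagon, and a square. The sketch below generates its vertex set as regular hexagons of edge length s centered on a triangular lattice with spacing s(1 + √3), which leaves exactly square and triangular gaps between neighboring hexagons. This is a purely geometric illustration; the dimensions have nothing to do with the actual lattice constants of the reported assemblies.

```python
import numpy as np

s = 1.0                            # hexagon edge length (arbitrary units)
a = s * (1 + np.sqrt(3))           # lattice spacing that leaves square gaps
a1 = a * np.array([1.0, 0.0])      # triangular-lattice basis vectors
a2 = a * np.array([0.5, np.sqrt(3) / 2])

# one hexagon per lattice point, flat sides facing the six neighbors
angles = np.radians(30 + 60 * np.arange(6))
hexagon = s * np.stack([np.cos(angles), np.sin(angles)], axis=1)

vertices = []
for i in range(-3, 4):
    for j in range(-3, 4):
        center = i * a1 + j * a2
        vertices.extend(center + hexagon)   # each vertex belongs to one hexagon
vertices = np.array(vertices)
print(vertices.shape)   # 49 hexagons x 6 vertices; no duplicates in 3.4.6.4
```

The spacing works because the gap between two facing hexagon edges is a − s√3 = s, i.e., exactly one square edge, which is the same closure condition the columnar lattice in Fig. 1D satisfies in an AT-like fashion.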
