Similar Articles
1.
Words categorize the semantic fields they refer to in ways that maximize communication accuracy while minimizing complexity. Focusing on the well-studied color domain, we show that artificial neural networks trained with deep-learning techniques to play a discrimination game develop communication systems whose distribution on the accuracy/complexity plane closely matches that of human languages. The observed variation among emergent color-naming systems is explained by different degrees of discriminative need, of the sort that might also characterize different human communities. Like human languages, emergent systems show a preference for relatively low-complexity solutions, even at the cost of imperfect communication. We demonstrate next that the nature of the emergent systems crucially depends on communication being discrete (as is human word usage). When continuous message passing is allowed, emergent systems become more complex and eventually less efficient. Our study suggests that efficient semantic categorization is a general property of discrete communication systems, not limited to human language. It suggests moreover that it is exactly the discrete nature of such systems that, acting as a bottleneck, pushes them toward low complexity and optimal efficiency.

Words partition our world into semantic categories. Converging evidence indicates that, while these categories differ widely across languages, they are shaped by universal constraints (1–3). In particular, it has been suggested that semantic categorization evolves to support efficient communication (4). Humans develop naming systems to talk about their experience under two competing pressures: “accuracy maximization” (words should encode precise information about their referents) and “complexity avoidance” (preventing unwieldy languages). At one extreme, a maximally accurate system would have a different term for each perceptual or mental experience. At the other, a maximally simple system would use only one term to refer to all experiences, completely hindering communication. Actual human naming systems are efficient in the sense that they optimize the accuracy/complexity trade-off. More generally, since the foundational work of Zipf (5), a similar trade-off between precision and simplicity has been observed in many areas of language (6).

Zaslavsky et al. (7) formalized the measurement of naming-system efficiency within the general information-theoretic framework of the Information Bottleneck (IB) (8) (see also the closely related rate-distortion theory framework in ref. 9). A system is deemed efficient if it reaches the maximum possible accuracy for a given complexity. In the IB framework, both accuracy and complexity are computed in a communication model where an idealized Speaker aims to communicate a meaning to an idealized Listener. Accuracy is then inversely related to the cost of a misinterpreted meaning, while complexity measures the quantity of information needed to convey the meaning. The IB efficiency of a system is effectively visualized in plots (see Fig. 3). The black curve in Fig. 3 represents the theoretical limit: no system of a given complexity (horizontal axis) can have accuracy (vertical axis) above the curve. Hence, according to IB, a system is optimal if it lies on the curve. Equipped with this framework, Zaslavsky et al. (7) demonstrated that color-naming systems (4, 10, 11) are notably close to the theoretical limit and hence efficient in a quantifiable way.

Fig. 3. Human (blue circles) and NN (orange circles) color-naming systems on the information plane. English (light blue circle) is not in WCS, but it is approximated relying on Zaslavsky et al. (SI Appendix, figure S7 in ref. 7). The IB curve (black line) defines the theoretical limit on accuracy given complexity. All color-naming systems achieve near-optimal efficiency.

IB theory is agnostic about where on the theoretical-limit curve a system should lie. Degenerate systems lying at the extremes of the curve, expressing each referent with a different term or all referents with a single term, are also efficient according to this theory. However, such systems are not attested. Instead, real color-naming systems approximate a small range of possible optimal solutions, avoiding the extremes, and in particular high-complexity trade-offs (7). This avoidance of complexity extremes has been observed more broadly in studies of categorization and naming across many semantic domains (4, 12–14).

We study the efficiency of color naming from a different perspective. We compare natural language systems with those emerging from the interaction of modern neural networks (NNs) faced with a color-communication task.
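For reference, the trade-off just described has a standard formalization (given here in generic IB notation; this is the textbook form of the objective, not a quotation from ref. 7): the Speaker's encoder $q(w \mid m)$ is chosen to minimize

$$\mathcal{F}_\beta \;=\; \underbrace{I(M;W)}_{\text{complexity}} \;-\; \beta\,\underbrace{I(W;U)}_{\text{accuracy}},$$

where $M$ is the Speaker's intended meaning, $W$ the word used, $U$ the referent, and $\beta \ge 1$ sets the relative weight of accuracy. A naming system is IB-optimal if no other encoder attains higher $I(W;U)$ at the same $I(M;W)$.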
Artificial NNs trained with deep-learning methods (15) have recently been used to study human (neuro)cognition in many fields (e.g., refs. 16–19), including color naming (20, 21). Traditional simulations in cognitive science are specifically designed to assess how certain factors of interest affect system behavior by developing ad hoc models, an approach illustrated by Baronchelli et al. (22) and Loreto et al. (23) in the domain of color naming, and applied by Carr et al. (24) to the study of complexity/accuracy trade-offs in semantic categorization. Deep networks, however, are high-performance general-purpose learners, independently developed for engineering purposes, with no claims of cognitive plausibility concerning their architecture or learning process. In this respect, they might be best seen as complex “animal models” (25, 26). The main interest lies in whether the emergent behavior of these powerful mechanisms mirrors nontrivial properties of human behavior (27). If it does, we can entertain the intriguing hypothesis that the specific converging human and deep-network patterns we observe have common roots. We can moreover directly intervene on the artificial organisms (more easily than we can on humans) in order to causally assess how different components affect their emergent behavior.

Specifically, we show that, when two deep learning-trained NNs play a simple color-discrimination game, they develop naming systems that closely match the distribution of human languages on the IB plane, showing both efficiency maximization and complexity control (Fig. 3). These human-like systems emerge without any ad hoc constraints favoring efficiency or limiting complexity being imposed on the training procedure. Having observed the systematic emergence of efficiency and complexity reduction in the NN systems, we proceed to test the hypothesis that these properties crucially depend on the bottleneck imposed by the discrete communication channel. Indeed, as we let NNs exchange increasingly continuous messages, their naming systems become more complex and, eventually, no longer efficient. Varying the degree of color-discrimination granularity required to play the game affects the complexity of the emergent systems, but not their efficiency, and only within the range of attested human variation. NN capacity affects the complexity of the system only as a function of the discreteness of communication.

The emergence of efficient and reasonably simple semantic categorization is not specific to human language but might generally arise in cognitive devices exchanging discrete messages about their world. Discreteness of communication plays a central role in the emergence of efficient and low-complexity naming systems among our artificial agents, raising intriguing questions about the role of discreteness in human language.

3.
Coordination of behavior for cooperative performances often relies on linkages mediated by sensory cues exchanged between participants. How neurophysiological responses to sensory information affect motor programs to coordinate behavior between individuals is not known. We investigated how plain-tailed wrens (Pheugopedius euophrys) use acoustic feedback to coordinate extraordinary duet performances in which females and males rapidly take turns singing. We made simultaneous neurophysiological recordings in a song control area “HVC” in pairs of singing wrens at a field site in Ecuador. HVC is a premotor area that integrates auditory feedback and is necessary for song production. We found that spiking activity of HVC neurons in each sex increased for production of its own syllables. In contrast, hearing sensory feedback produced by the bird’s partner decreased HVC activity during duet singing, potentially coordinating HVC premotor activity in each bird through inhibition. When birds sang alone, HVC neurons in females but not males were inhibited by hearing the partner bird. When birds were anesthetized with urethane, which antagonizes GABAergic (γ-aminobutyric acid) transmission, HVC neurons were excited rather than inhibited, suggesting a role for GABA in the coordination of duet singing. These data suggest that HVC integrates information across partners during duets and that rapid turn taking may be mediated, in part, by inhibition.

Animals routinely rely on sensory feedback for the control of their own behavior. In cooperative performances, such sensory feedback can include cues produced by other participants (1–8). For example, in interactive vocal communication, including human speech, individuals take turns vocalizing. This “turn taking” is a consequence of each participant responding to auditory cues from a partner (4–6, 9, 10). The role of such “heterogenous” (other-generated) feedback in the control of vocal turn taking and other cooperative performances is largely unknown.

Plain-tailed wrens (Pheugopedius euophrys) are neotropical songbirds that cooperate to produce extraordinary duet performances but also sing by themselves (Fig. 1A) (4, 10, 11). Singing in plain-tailed wrens is performed by both females and males and used for territorial defense and other functions, including mate guarding and attraction (1, 11–16). During duets, female and male plain-tailed wrens take turns, alternating syllables at a rate of between 2 and 5 Hz (Fig. 1A) (4, 11).

Fig. 1. Neural control of solo and duet singing in plain-tailed wrens. (A) Spectrogram of a singing bout that included male solo syllables (blue line, top) followed by a duet. Solo syllables for both sexes (only male solo syllables are shown here) are sung at lower amplitudes than syllables produced in duets. Note that the smeared appearance of wren syllables in spectrograms reflects the acoustic structure of plain-tailed wren singing. (B and C) Each bird has a motor system that is used to produce song and sensory systems that mediate feedback. (B) During solo singing, the bird hears its own song, which is known as autogenous feedback (orange). (C) During duet singing, each bird hears both its own singing and the singing of its partner, known as heterogenous feedback (green). The key difference between solo and duet singing is heterogenous feedback that couples the neural systems of the two birds. This coupling results in changes in syllable amplitude and timing in both birds.

There is a categorical difference between solo and duet singing. In solo singing, the singing bird receives only autogenous feedback (hearing its own vocalization) (Fig. 1B). The partner may hear the solo song if it is nearby, a heterogenous (other-generated) cue. In duet singing, birds receive both heterogenous and autogenous feedback as they alternate syllable production (Fig. 1C). Participants use heterogenous feedback during duet singing for precise timing of syllable production (4, 11). For example, when a male temporarily stops participating in a duet, the duration of intersyllable intervals between female syllables increases (4), showing an effect of heterogenous feedback on the timing of syllable production.

How does the brain of each wren integrate heterogenous acoustic cues to coordinate the precise timing of syllable production between individuals during duet performances? To address this question, we examined neurophysiological activity in HVC, a nucleus in the nidopallium [an analogue of mammalian cortex (17, 18)]. HVC is necessary for song learning, production, and timing in species of songbirds that do not perform duets (19–24). Neurons in HVC are active during singing and respond to playback of the bird’s own learned song (25–27).
In addition, recent work has shown that HVC is also involved in vocal turn taking (19).

To examine the role of heterogenous feedback in the control of duet performances, we compared neurophysiological activity in HVC during solo syllables sung by female or male wrens with activity during syllables sung in duets. Neurophysiological recordings were made in awake and anesthetized pairs of wrens at the Yanayacu Biological Station and Center for Creative Studies on the slopes of the Antisana volcano in Ecuador. We found that heterogenous cues inhibited HVC activity during duet performances in both females and males, but inhibition was observed only in females during solo singing.

4.
A degraded, black-and-white image of an object, which appears meaningless on first presentation, is easily identified after a single exposure to the original, intact image. This striking example of perceptual learning reflects a rapid (one-trial) change in performance, but the kind of learning that is involved is not known. We asked whether this learning depends on conscious (hippocampus-dependent) memory for the images that have been presented or on an unconscious (hippocampus-independent) change in the perception of images, independently of the ability to remember them. We tested five memory-impaired patients with hippocampal lesions or larger medial temporal lobe (MTL) lesions. In comparison to volunteers, the patients were fully intact at perceptual learning, and their improvement persisted without decrement from 1 d to more than 5 mo. Yet, the patients were impaired at remembering the test format and, even after 1 d, were impaired at remembering the images themselves. To compare perceptual learning and remembering directly, at 7 d after seeing degraded images and their solutions, patients and volunteers took either a naming test or a recognition memory test with these images. The patients improved as much as the volunteers at identifying the degraded images but were severely impaired at remembering them. Notably, the patient with the most severe memory impairment and the largest MTL lesions performed worse than the other patients on the memory tests but was the best at perceptual learning. The findings show that one-trial, long-lasting perceptual learning relies on hippocampus-independent (nondeclarative) memory, independent of any requirement to consciously remember.

A striking visual effect can be demonstrated by using a grayscale image of an object that has been degraded to a low-resolution, black-and-white image (1, 2). Such an image is difficult to identify (Fig. 1) but can be readily recognized after a single exposure to the original, intact image (Fig. 2) (3–6). Neuroimaging studies have found regions of the neocortex, including high-level visual areas and the medial parietal cortex, that exhibited a different pattern of activity when a degraded image was successfully identified (after seeing the intact image) than when the same degraded image was first presented and not identified (4, 5, 7). This phenomenon reflects a rapid change in performance based on experience, in this case one-trial learning, but the kind of learning that is involved is unclear.

Fig. 1. A sample degraded image. Most people cannot identify what is depicted. See Fig. 2.

Fig. 2. An intact version of the image in Fig. 1. When the intact version is presented just once directly after presentation of the degraded version, the ability to later identify the degraded image is greatly improved, even after many months. Reprinted from ref. 42, which is licensed under CC BY 4.0.

One possibility is that successful identification of degraded images reflects conscious memory of having recently seen degraded images followed by their intact counterparts. When individuals see degraded images after seeing their “solutions,” they may remember what is represented in the images, at least for a time. In one study, performance declined sharply from 15 min to 1 d after the solutions were presented and then declined more gradually to a lower level after 21 d (3). Alternatively, the phenomenon might reflect a more automatic change in perception not under conscious control (8). Once the intact image is presented, the object in the degraded image may be perceived directly, independently of whether it is remembered as having been presented. By this account, successful identification of degraded images is reminiscent of the phenomenon of priming, whereby perceptual identification of words and objects is facilitated by single encounters with the same or related stimuli (9–11). Some forms of priming persist for quite a long time (weeks or months) (12–14).

These two possibilities describe the distinction between declarative and nondeclarative memory (15, 16). Declarative memory affords the capacity for recollection of facts and events and depends on the integrity of the hippocampus and related medial temporal lobe structures (17, 18). Nondeclarative memory refers to a collection of unconscious memory abilities including skills, habits, and priming, which are expressed through performance rather than recollection and are supported by other brain systems (19–21). Does one-trial learning of degraded images reflect declarative or nondeclarative memory? How long does it last? In an early report that implies the operation of nondeclarative memory, two patients with traumatic amnesia improved the time needed to identify hidden images from 1 d to the next, but could not recognize which images they had seen (22). However, another amnesic patient reportedly failed such a task (23).
The matter has not been studied in patients with medial temporal lobe (MTL) damage.

To determine whether declarative (hippocampus-dependent) or nondeclarative (hippocampus-independent) memory supports the one-trial learning of degraded images, we tested five patients with bilateral hippocampal lesions or larger MTL lesions who have severely impaired declarative memory. The patients were fully intact at perceptual learning, and performance persisted undiminished from 1 d to more than 5 mo. At the same time, the patients were severely impaired at remembering both the structure of the test and the images themselves.

5.
The puzzling sex ratio behavior of Melittobia wasps has long posed one of the greatest questions in the field of sex allocation. Laboratory experiments have found that, in contrast to the predictions of theory and the behavior of numerous other organisms, Melittobia females do not produce less female-biased offspring sex ratios when more females lay eggs on a patch. We solve this puzzle by showing that, in nature, females of Melittobia australica have a sophisticated sex ratio behavior, in which their strategy also depends on whether they have dispersed from the patch where they emerged. When females have not dispersed, they lay eggs with close relatives, which keeps local mate competition high even with multiple females, and therefore, they are selected to produce consistently female-biased sex ratios. Laboratory experiments mimic these conditions. In contrast, when females disperse, they interact with nonrelatives, and thus adjust their sex ratio depending on the number of females laying eggs. Consequently, females appear to use dispersal status as an indirect cue of relatedness and whether they should adjust their sex ratio in response to the number of females laying eggs on the patch.

Sex allocation has produced many of the greatest success stories in the study of social behaviors (1–4). Time and time again, relatively simple theory has explained variation in how individuals allocate resources to male and female reproduction. Hamilton’s local mate competition (LMC) theory predicts that when n diploid females lay eggs on a patch and the offspring mate before the females disperse, the evolutionarily stable proportion of male offspring (sex ratio) is (n − 1)/2n (Fig. 1) (5). A female-biased sex ratio is favored to reduce competition between sons (brothers) for mates and to provide more mates (daughters) for those sons (6–8). Consistent with this prediction, females of >40 species produce female-biased sex ratios and reduce this female bias when multiple females lay eggs on the same patch (higher n; Fig. 1) (9). The fit of data to theory is so good that the sex ratio under LMC has been exploited as a “model trait” to study the factors that can constrain “perfect adaptation” (4, 10–13).

Fig. 1. LMC. The sex ratio (proportion of sons) is plotted versus the number of females laying eggs on a patch. The bright green dashed line shows the LMC theory prediction for haplodiploid species (5, 39). A more female-biased sex ratio is favored in haplodiploids because inbreeding increases the relative relatedness of mothers to their daughters (7, 32). Females of many species adjust their offspring sex ratio as predicted by theory, such as the parasitoid Nasonia vitripennis (green diamonds) (82). In contrast, the females of several Melittobia species, such as M. australica, continue to produce extremely female-biased sex ratios, irrespective of the number of females laying eggs on a patch (blue squares) (15).

In stark contrast, the sex ratio behavior of Melittobia wasps has long been seen as one of the greatest problems for the field of sex allocation (3, 4, 14–21). The life cycle of Melittobia wasps matches the assumptions of Hamilton’s LMC theory (5, 15, 19, 21). Females lay eggs in the larvae or pupae of solitary wasps and bees, and then, after emergence, female offspring mate with the short-winged males, who do not disperse. However, laboratory experiments on four Melittobia species have found that females lay extremely female-biased sex ratios (1 to 5% males) and that these extremely female-biased sex ratios change little with increasing numbers of females laying eggs on a patch (higher n; Fig. 1) (15, 17–20, 22). A number of hypotheses to explain this lack of sex ratio adjustment have been investigated and rejected, including sex ratio distorters, sex-differential mortality, asymmetrical male competition, and reciprocal cooperation (15–18, 20, 22–26).

We tested whether Melittobia’s unusual sex ratio behavior can be explained by females being related to the other females laying eggs on the same patch. After mating, some females disperse to find new patches, while some may stay at the natal patch to lay eggs on previously unexploited hosts (Fig. 2). If females do not disperse, they can be related to the other females laying eggs on the same host (27–31). If females laying eggs on a host are related, this increases the extent to which relatives are competing for mates and so can favor an even more female-biased sex ratio (28, 32–35). Although most parasitoid species appear unable to directly assess relatedness, dispersal behavior could provide an indirect cue of whether females are with close relatives (36–38).
Consequently, we predict that when females do not disperse and so are more likely to be with close relatives, they should maintain extremely female-biased sex ratios, even when multiple females lay eggs on a patch (28, 35).

Fig. 2. Host nest and dispersal manners of Melittobia. (A) Photograph of the prepupae of the leaf-cutter bee C. sculpturalis nested in a bamboo cane and (B) a diagram showing two ways that Melittobia females find new hosts. The mothers of C. sculpturalis build nursing nests with pine resin consisting of individual cells in which their offspring develop. If Melittobia wasps parasitize a host in a cell, female offspring that mate with males inside the cell find a different host on the same patch (bamboo cane) or disperse by flying to other patches.

We tested whether the sex ratio of Melittobia australica can be explained by dispersal status in a natural population. We examined how the sex ratio produced by females varies with the number of females laying eggs on a patch and whether or not they have dispersed before laying eggs. To match our data to the predictions of theory, we developed a mathematical model tailored to the unique population structure of Melittobia, where dispersal can be a cue of relatedness. We then conducted a laboratory experiment to test whether Melittobia females are able to directly assess their relatedness to other females and adjust their sex ratio behavior accordingly. Our results suggest that females adjust their sex ratio in response to both the number of females laying eggs on a patch and their relatedness to the other females. However, relatedness is assessed indirectly by whether or not they have dispersed. Consequently, the solution to this puzzling behavior reflects a more refined sex ratio strategy.
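As a concrete illustration of the theory discussed above, the diploid LMC prediction quoted earlier, (n − 1)/2n, can be tabulated directly. This is a worked check of the published formula only; the haplodiploid prediction plotted in Fig. 1, which is more female-biased, is not reproduced here:

```python
# Hamilton's local mate competition (LMC) prediction for diploids:
# with n foundress females laying eggs on a patch and offspring mating
# before dispersal, the evolutionarily stable proportion of sons is
# (n - 1) / (2n). Formula as quoted in the text; illustration only.

def lmc_sex_ratio(n: int) -> float:
    """Predicted proportion of male offspring for n diploid foundresses."""
    if n < 1:
        raise ValueError("need at least one foundress")
    return (n - 1) / (2 * n)

for n in (1, 2, 4, 8, 16):
    print(f"n = {n:2d} foundresses -> predicted proportion of sons = {lmc_sex_ratio(n):.3f}")

# n = 1 gives 0.0 (daughters only); the ratio climbs toward 0.5 as n grows.
# Melittobia conspicuously fails to follow this climb, staying at 1-5% males.
```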

8.
The relative warmth of mid-to-late Pleistocene interglacials on Greenland has remained unknown, leading to debates about the regional climate forcing that caused past retreat of the Greenland Ice Sheet (GrIS). We analyze the hydrogen isotopic composition of terrestrial biomarkers in Labrador Sea sediments through interglacials of the past 600,000 y to infer millennial-scale summer warmth on southern Greenland. Here, we reconstruct exceptionally warm summers in Marine Isotope Stage (MIS) 5e, concurrent with strong Northern Hemisphere summer insolation. In contrast, “superinterglacial” MIS11 demonstrated only moderate warmth, sustained throughout a prolonged interval of elevated atmospheric carbon dioxide. Strong inferred GrIS retreat during MIS11 relative to MIS5e suggests an indirect relationship between maximum summer temperature and cumulative interglacial mass loss, indicating strong GrIS sensitivity to duration of regional warmth and elevated atmospheric carbon dioxide.

The Greenland Ice Sheet (GrIS) is projected to contribute between +5 and +33 cm to global sea level by 2100 CE under continued strong anthropogenic forcing (1). Significant uncertainty in projections results, in part, from a lack of constraints on the regional terrestrial climate changes causing past large-scale ice sheet mass loss (2, 3). Extensive retreat of the GrIS likely occurred most recently during Marine Isotope Stage (MIS) 11 (ca. 425 to 375 thousand years before present [ka]), indicated by evidence of coniferous forest cover in southern Greenland coincident with a cessation in the delivery of glacially eroded silts to the Labrador Sea (4, 5) (Figs. 1 and 2). Curiously, Northern Hemisphere summer insolation and atmospheric carbon dioxide (CO2) forcing were lower during MIS11 than during other Pleistocene interglacials through which continental-scale ice persisted on Greenland. For example, the Last Interglacial (MIS5e) (ca. 130 to 115 ka) was associated with stronger Northern Hemisphere summer insolation and briefly higher atmospheric CO2 concentrations (6, 7). Yet basal sections of seven ice cores contain ice deposited during MIS5e (8), suggesting ice was present on much of the island within this stage.

Fig. 1. Map of study region. Location of Eirik Drift core sites (black point), including Ocean Drilling Program Site 646 used in this study. Dotted lines separate silt provenances as in Fig. 2H (5, 12). White is the modern glacier extent. Solid lines are the modern schematic surface ocean currents: the West Greenland Current (WGC), Baffin Island Current (BIC), and Irminger Current (IC). The dashed line is the Deep Western Boundary Undercurrent (WBUC). Points are the Greenland ice cores (27), with ice dated to peak (dark red) or late (light red) MIS5e (28–31) and Holocene δ²H_C28 records (yellow) (17, 24). The inset map shows Lake El’Gygytgyn (Lake E) (9), Arctic Ocean core HLY-06 (10), and the Faroe Islands (FI) (25).

Fig. 2. Interglacial records from MIS13 to MIS1. Datasets are plotted on their own age scales and not synchronized, except those from the same sites. Formally defined MIS and peak substages (22) (as in Fig. 4) are shaded. (A) June 21st insolation at 65°N (6). (B) Atmospheric CO2 concentration (7). (C) Site 646 δ²H_C28 (this study). Analytical error is smaller than point size (SI Appendix). (D) Site 646 δ²H_C28 as an anomaly relative to the last millennium. (E) Global benthic (blue) and Site 646 planktic foraminifera δ¹⁸O (black) (4, 22). (F) Stable carbon isotopes (δ¹³C, ‰ VPDB [Vienna Pee Dee Belemnite]) of Cibicidoides wuellerstorfi from U1305 (34). (G) SSTs from U1305 (summer: black and red error envelope; winter: black and blue error envelope) (11) and Site 646 (summer: red; winter: blue) (4). (H) Provenance of MD99-2227 silts as in Fig. 1 (5, 12). (I) Mean temperature of the warmest month (MTWM) from Lake El’Gygytgyn (9). (J) Site 646 pollen concentrations (4).

Sparse paleoclimate evidence suggests that Arctic climate responded nonlinearly to global-scale forcings during past interglacials. For example, MIS11 was one of a few Pleistocene “superinterglacials” identified in the eastern Arctic, with inferred summer air temperatures 4 to 5 °C higher than during the current interglacial (MIS1, the Holocene, 11.7 to 0 ka) (9) (Fig. 2I). Outstanding Arctic warmth during MIS11 is supported by ostracod assemblages in the Arctic Ocean, indicating summer sea surface temperatures (SSTs) 8 to 10 °C higher than modern (10).
Yet regional Arctic temperatures likely differed; summer Labrador SSTs were cooler during MIS11 than MIS1 or MIS5e (4, 11) (Fig. 2G). Terrestrial climate on Greenland, where summer air temperature directly influences ice sheet mass balance, remains unconstrained by geologic evidence throughout most Pleistocene interglacials older than MIS5e, including MIS11.

9.
We assembled a complete reference genome of Eumaeus atala, an aposematic cycad-eating hairstreak butterfly that suffered near extinction in the United States in the last century. Based on an analysis of genomic sequences of Eumaeus and 19 representative genera, the closest relatives of Eumaeus are Theorema and Mithras. We report natural history information for Eumaeus, Theorema, and Mithras. Using genomic sequences for each species of Eumaeus, Theorema, and Mithras (and three outgroups), we trace the evolution of cycad feeding, coloration, gregarious behavior, and other traits. The switch to feeding on cycads and to conspicuous coloration was accompanied by little genomic change. Soon after its origin, Eumaeus split into two fast evolving lineages, instead of forming a clump of close relatives in the phylogenetic tree. Significant overlap of the fast evolving proteins in both clades indicates parallel evolution. The functions of the fast evolving proteins suggest that the caterpillars developed tolerance to cycad toxins with a range of mechanisms including autophagy of damaged cells, removal of cell debris by macrophages, and more active cell proliferation.

The genus Eumaeus Hübner (Lycaenidae, Theclinae) arguably contains the most aposematically colored caterpillars and butterflies among the ∼4,000 Lycaenidae in the world (1–6). The brilliant red and gold gregarious caterpillars (Fig. 1) sequester cycasin from the leaves of their cycad food plants (Zamiaceae), which deters predators (3–9). Other secondary metabolites in cycads (e.g., refs. 10 and 11) may also deter predators. Eumaeus adults have a bright orange-red abdomen and an orange-red hindwing spot (except for one species) (Fig. 2). Blue and green iridescent markings are especially conspicuous on a black ground color. Eumaeus adults are among the largest lycaenids and have more rounded wings and a slower, more gliding flight than most Theclinae (1). Cycads are among the most primitive extant seed plants (9), and the “plethora of aposematic attributes suggests a very ancient association between Eumaeus and the cycad host plants” (3).

Fig. 1. Caterpillars and pupae of Theorema eumenia (Top) and Eumaeus godartii (Bottom) in Costa Rica. Clockwise from Upper Left: second or third instar (length, ∼13 mm), fourth (final) instar (∼20 mm), pupa (∼18 mm), pupa (∼24 mm), fourth (final) instar (∼27 mm), second or third instar (∼20 mm). (Images from authors W.H. and D.H.J.)

Fig. 2. Adult wing uppersides and undersides. Eumaeus childrenae (two Upper Left images), E. atala (two Upper Right images), Theorema eumenia (two Lower Left images), and Mithras nautes (two Lower Right images). Scale bar, 1 cm.

Eumaeus has been classified as a separate family (12–14), a genus in the Riodinidae (15, 16), or a monotypic subfamily or tribe of the Lycaenidae (17–20). Alternatively, others have called it a typical member of the Neotropical Lycaenidae (21, 22). The evolutionary question behind this discordant taxonomic history is whether Eumaeus is a phylogenetically isolated lineage long associated with cycads (3) or an embedded clade in which a recent food plant shift to cycads resulted in the rapid evolution of aposematism. Recent molecular evidence for a limited number of taxa suggested the latter (23). To answer this question definitively, we analyzed genomic sequences of Eumaeus and its relatives.

To trace the evolution of cycad feeding, we report the caterpillar food plants of the genera most closely related to Eumaeus and illustrate their immature stages (Fig. 1 and SI Appendix). This natural history information, combined with analyses of genome sequences, is the foundation for investigating the subsequent evolutionary impact on the Eumaeus genome of the switch to eating cycads.

11.
Despite their desirable attributes, boronic acids have had a minimal impact in biological contexts. A significant problem has been their oxidative instability. At physiological pH, phenylboronic acid and its boronate esters are oxidized by reactive oxygen species at rates comparable to those of thiols. After considering the mechanism and kinetics of the oxidation reaction, we reasoned that diminishing electron density on boron could enhance oxidative stability. We found that a boralactone, in which a carboxyl group serves as an intramolecular ligand for the boron, increases stability by 10⁴-fold. Computational analyses revealed that the resistance to oxidation arises from diminished stabilization of the p orbital of boron that develops in the rate-limiting transition state of the oxidation reaction. Like simple boronic acids and boronate esters, a boralactone binds covalently and reversibly to 1,2-diols such as those in saccharides. The kinetic stability of its complexes is, however, at least 20-fold greater. A boralactone also binds covalently to a serine side chain in a protein. These attributes confer unprecedented utility upon boralactones in the realms of chemical biology and medicinal chemistry.

The modern pharmacopeia is composed of only a handful of elements. Built on hydrocarbon scaffolds (1), nearly all drugs contain nitrogen and oxygen, and many contain fluorine and sulfur (2). A surprising omission from this list is the fifth element in the periodic table, boron (3, 4). Since bortezomib received regulatory approval in 2003, only four additional boron-containing drugs have demonstrated clinical utility (Fig. 1A). Each is a boronic acid or ester.

Fig. 1. (A) Food and Drug Administration–approved pharmaceuticals containing a boronic acid. (B) Putative mechanism for the oxidative deboronation of a boronic acid by hydrogen peroxide (30).

Bortezomib is a boronic acid, and ixazomib citrate hydrolyzes to one in aqueous solution (5). Other boron-containing drugs feature cyclic esters. The cyclic ester formed spontaneously from 2-hydroxymethylphenylboronic acid (2-HMPBA) is known as “benzoxaborole” and has received much attention due to its enhanced affinity for saccharides at physiological pH (6–8). This scaffold is present in the antifungal drug tavaborole and the antidermatitis drug crisaborole (9). Vaborbactam, which contains an analogous six-membered ring, is an efficacious β-lactamase inhibitor (10). Neuropathy has been associated with the use of bortezomib but not other boronic acids, which have minimal toxicity (11).

The boron atom in a boronic acid (or ester) is isoelectronic with the carbon atom of a carbocation. Both are sp² hybridized, have an empty p orbital, and adopt a trigonal planar geometry. In contrast to a carbocation, however, the weak Lewis acidity of a boronic acid allows for the reversible formation of covalent bonds. This attribute has enabled boronic acids to achieve extraordinary utility in synthetic organic chemistry and molecular recognition (12–22). Boronic acids are, however, susceptible to oxidative damage. That deficiency is readily controllable in a chemistry laboratory but not in a physiological environment.

In a boronic acid, the empty p orbital of boron is prone to attack by nucleophilic species such as the oxygen atom of a reactive oxygen species (ROS). The subsequent migration of carbon from boron to that oxygen leads to a labile boric ester, which undergoes rapid hydrolysis (Fig. 1B). This oxidative deboronation converts the boronic acid into an alcohol and boric acid (23, 24).

We sought a means to increase the utility of boron in biological contexts by deterring the oxidation of boronic acids. The rate-limiting step in the oxidation of a boronic acid is likely to be the migration of carbon from boron to oxygen: a 1,2-shift (Fig. 1B). In that step, the boron becomes more electron deficient. We reasoned that depriving the boron of electron density might slow the 1,2-shift. A subtle means to do so would be to replace the alkoxide of a boronate ester with a carboxylate group. We find that the ensuing mixed anhydrides between a boronic acid and a carboxylic acid are remarkable in their chemical attributes and biological utility.
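Summarizing the pathway just described as an overall scheme (our rendering of the narrative around Fig. 1B, not the authors' drawn mechanism):

$$\mathrm{R{-}B(OH)_2} \;\xrightarrow{\ \mathrm{H_2O_2}\ }\; \mathrm{R{-}O{-}B(OH)_2} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \mathrm{R{-}OH} \;+\; \mathrm{B(OH)_3}$$

The first arrow bundles the nucleophilic attack on boron's empty p orbital with the rate-limiting 1,2-shift of carbon from boron to oxygen; the second is hydrolysis of the labile boric ester to the alcohol and boric acid.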

13.
Development has often been viewed as a constraining force on morphological adaptation, but its precise influence, especially on evolutionary rates, is poorly understood. Placental mammals provide a classic example of adaptive radiation, but the debate around rate and drivers of early placental evolution remains contentious. A hallmark of early dental evolution in many placental lineages was a transition from a triangular upper molar to a more complex upper molar with a rectangular cusp pattern better specialized for crushing. To examine how development influenced this transition, we simulated dental evolution on “landscapes” built from different parameters of a computational model of tooth morphogenesis. Among the parameters examined, we find that increases in the number of enamel knots, the developmental precursors of the tooth cusps, were primarily influenced by increased self-regulation of the molecular activator (activation), whereas the pattern of knots resulted from changes in both activation and biases in tooth bud growth. In simulations, increased activation facilitated accelerated evolutionary increases in knot number, creating a lateral knot arrangement that evolved at least ten times on placental upper molars. Relatively small increases in activation, superimposed on an ancestral tritubercular molar growth pattern, could recreate key changes leading to a rectangular upper molar cusp pattern. Tinkering with tooth bud geometry varied the way cusps initiated along the posterolingual molar margin, suggesting that small spatial variations in ancestral molar growth may have influenced how placental lineages acquired a hypocone cusp. We suggest that development could have enabled relatively fast higher-level divergence of the placental molar dentition.

Whether developmental processes bias or constrain morphological adaptation is a long-standing question in evolutionary biology (1–4). Many of the distinctive features of a species derive from pattern formation processes that establish the position and number of anatomical structures (5). If developmental processes like pattern formation are biased toward generating only particular kinds of variation, adaptive radiations may often be directed along developmental–genetic “lines of least resistance” (2, 4, 6, 7). Generally, the evolutionary consequences of this developmental bias have been considered largely in terms of how it might influence the pattern of character evolution (e.g., refs. 1, 2, 8–10). But development could also influence evolutionary rates by controlling how much variation is accessible to natural selection in a given generation (11).

For mammals, the dentition is often the only morphological system linking living and extinct species (12). Correspondingly, tooth morphology plays a crucial role in elucidating evolutionary relationships, time calibrating phylogenetic trees, and reconstructing adaptive responses to past environmental change (e.g., refs. 13–15). One of the most pervasive features of dental evolution among mammals is an increase in the complexity of the tooth occlusal surface, primarily through the addition of new tooth cusps (16, 17). These increases in tooth complexity are functionally and ecologically significant because they enable more efficient mechanical breakdown of lower-quality foods like plant leaves (18).

Placental mammals are the most diverse extant mammalian group, comprising more than 6,000 living species spread across 19 extant orders, and this taxonomic diversity is reflected in their range of tooth shapes and dietary ecologies (12). Many extant placental orders, especially those with omnivorous or herbivorous ecologies (e.g., artiodactyls, proboscideans, rodents, and primates), convergently evolved a rectangular upper molar cusp pattern from a placental ancestor with a more triangular cusp pattern (19–21). This resulted from separate additions in each lineage of a novel posterolingual cusp, the “hypocone” [sensu (19)], to the tritubercular upper molar (Fig. 1), either through modification of a posterolingual cingulum (“true” hypocone) or another posterolingual structure, like a metaconule (pseudohypocone) (19). The fossil record suggests that many of the basic steps in the origin of this rectangular cusp pattern occurred during an enigmatic early diversification window associated with the divergence and early radiation of several placental orders (20, 21; Fig. 1). However, there remains debate about the rate and pattern of early placental divergence (22–24). On the one hand, most molecular phylogenies suggest that higher-level placental divergence occurred largely during the Late Cretaceous (25, 26), whereas other molecular phylogenies and paleontological analyses suggest more rapid divergence near the Cretaceous–Paleogene (K–Pg) boundary (21, 24, 27–29). Most studies agree that ecological opportunity created in the aftermath of the K–Pg extinction probably played an important role in ecomorphological diversification within the placental orders (30, 31). But exactly how early placentals acquired the innovations needed to capitalize on ecological opportunity remains unclear.
Dental innovations, especially those that facilitated increases in tooth complexity, may have been important because they would have promoted expansion into plant-based dietary ecologies left largely vacant after the K–Pg extinction event (32).

Fig. 1. Placental mammal lineages separately evolved complex upper molar teeth with a rectangular cusp pattern composed of two lateral pairs of cusps from a common ancestor with a simpler, triangular cusp pattern. Many early relatives of the extant placental orders, such as Eritherium, possessed a hypocone cusp and a more rectangular primary cusp pattern. Examples of complex upper molars are the following: Proboscidea, the gomphothere Anancus; Rodentia, the wood mouse Apodemus; and Artiodactyla, the suid Nyanzachoerus.

Mammalian tooth cusps form primarily during the “cap” and “bell” stages of dental development, when signaling centers called enamel knots establish the future sites of cusp formation within the inner dental epithelium (33, 34). The enamel knots secrete molecules that promote proliferation and changes in cell–cell adhesion, which facilitates invagination of the dental epithelium into an underlying layer of mesenchymal cells (34, 35). Although a range of genes are involved in tooth cusp patterning (36–38), the basic dynamics can be effectively modeled using reaction–diffusion models with just three diffusible morphogens: an activator, an inhibitor, and a growth factor (39–41). Candidate activator genes in mammalian tooth development include Bmp4, Activin A, Fgf20, and Wnt genes, whereas potential inhibitors include Shh and Sostdc, and Fgf4 and Bmp2 have been hypothesized to act as growth factors (38, 40–43). In computer models of tooth development, activator molecules up-regulated in the underlying mesenchyme stimulate differentiation of the overlying epithelium into nondividing enamel knot cells. These in turn secrete molecules that inhibit further differentiation of epithelium into knot cells, while also promoting the cell proliferation that creates the topographic relief of the cusp (40). Although many molecular, cellular, and physical processes have the potential to influence cusp formation, and thereby tooth complexity (35, 37), the parameters that control the strength and conductance of the activator and inhibitor signals, the core components of the reaction–diffusion cusp patterning mechanism (39, 40), are likely to be especially important.

Here, we integrate a previous computer model of tooth morphogenesis, called ToothMaker (41), with simulations of trait evolution and data from the fossil record (Fig. 2) to examine the developmental origins of tooth complexity in placental mammals. Specifically, we ask the following: 1) What developmental processes can influence how many cusps form? 2) How might these developmental processes influence the evolution of tooth cusp number, especially rates? And 3) what developmental changes may have been important in the origins of the fourth upper molar cusp, the hypocone, in placental mammal evolution?

Fig. 2. Workflow for simulations of tooth complexity evolution. (A) Tooth shape is varied for five signaling and growth parameters in ToothMaker. (B) From an ancestral state, each parameter is varied in 2.5% increments up to a maximum of ±50% of the ancestral state. (C) Tooth complexity and enamel knot (EK) pattern were quantified for each parameter combination. Tooth complexity was measured using cusp number/EK number and OPC.
ToothMaker and placental upper second molars were classified into categories based on EK/cusp pattern. (D) The parameter space was populated with pattern and tooth complexity datums to build a developmental landscape. (E) Tooth complexity evolution was simulated on each developmental landscape. (F) The resulting diversity and pattern of tooth complexity were compared with placental mammal molar diversity.
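To make the activator–inhibitor logic described above concrete, here is a minimal one-dimensional reaction–diffusion sketch in the Gierer–Meinhardt style. This is not ToothMaker, and none of the parameter names or values come from the paper; it only illustrates how strengthening activator self-regulation (the `act` term) tends to increase the number of activator peaks, the analogue of enamel knots:

```python
import numpy as np

# Minimal 1-D activator-inhibitor (Gierer-Meinhardt-style) model.
# An illustrative stand-in for the activator/inhibitor core of cusp-
# patterning models such as ToothMaker; all values are assumptions.
rng = np.random.default_rng(0)
n, dx, dt, steps = 100, 1.0, 0.01, 50_000
Da, Di = 0.05, 1.0               # inhibitor must diffuse faster than activator
act, mu_a, mu_i = 1.8, 0.1, 0.2  # 'act' = activator self-regulation strength

a = 3.6 * (1 + 0.01 * rng.standard_normal(n))  # activator near steady state
h = 64.8 * np.ones(n)                          # inhibitor at steady state

def lap(u):
    """1-D Laplacian with zero-flux (reflecting) boundaries."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2] + up[2:] - 2 * u) / dx**2

for _ in range(steps):
    da = Da * lap(a) + act * a * a / h - mu_a * a  # self-activation, inhibition
    dh = Di * lap(h) + a * a - mu_i * h            # inhibitor driven by activator
    a, h = a + dt * da, h + dt * dh

# Interior peaks of the activator mark would-be enamel knots; raising
# 'act' tends to yield more peaks, i.e., more knots and thus more cusps.
peaks = np.sum((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]) & (a[1:-1] > a.mean()))
print(f"activator peaks (proto-enamel-knots): {peaks}")
```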

14.
Dendritic, i.e., tree-like, river networks are ubiquitous features on Earth’s landscapes; however, how and why river networks organize themselves into this form are incompletely understood. A branching pattern has been argued to be an optimal state. Therefore, we should expect models of river evolution to drastically reorganize (suboptimal) purely nondendritic networks into (more optimal) dendritic networks. To date, current physically based models of river basin evolution are incapable of achieving this result without substantial allogenic forcing. Here, we present a model that does indeed accomplish massive drainage reorganization. The key feature in our model is basin-wide lateral incision of bedrock channels. The addition of this submodel allows for channels to laterally migrate, which generates river capture events and drainage migration. An important factor in the model that dictates the rate and frequency of drainage network reorganization is the ratio of two parameters, the lateral and vertical rock erodibility constants. In addition, our model is unique from others because its simulations approach a dynamic steady state. At a dynamic steady state, drainage networks persistently reorganize instead of approaching a stable configuration. Our model results suggest that lateral bedrock incision processes can drive major drainage reorganization and explain apparent long-lived transience in landscapes on Earth.

What should a drainage network look like? Fig. 1A shows a single channel, winding its way through the catchment so as to have access to water and sediment from unchannelized zones in the same manner as the dendritic (tree-like) network of Fig. 1B. It appears straightforward that the dendritic pattern is a model for nature, and the single channel is not. Dendritic drainage networks are called such because of their similarity to branching trees, and their patterns are “characterized by irregular branching in all directions” (1) with “tributaries joining at acute angles” (2). Drainage networks can also take on other forms in nature, such as parallel, pinnate, rectangular, and trellis (2). However, drainage networks in their most basic form, without topographic, lithologic, and tectonic constraints, should tend toward a dendritic form (2). In addition, drainage networks that take a branching, tree-like form have been argued to be “optimal channel networks” that minimize total energy dissipation (3, 4). Therefore, we would expect models simulating river network formation, named landscape evolution models (LEMs), that use the nondendritic pattern of Fig. 1A as an initial condition to massively reorganize and approach the dendritic steady state of Fig. 1B. To date, no numerical LEM has shown the ability to do this. Here, we present a LEM that can indeed accomplish such a reorganization. A corollary of this ability is the result that landscapes approach a dynamic, rather than static, steady state.

Fig. 1. Schematic diagram of a nondendritic and a dendritic drainage network. This figure shows the Wolman Run Basin in Baltimore County, MD (A) drained by a single channel winding across the topography and (B) drained by a dendritic network of channels. Both networks have similar drainage densities (53, 54), but there is a stark difference between their stream ordering (53–56). This figure invites discussion as to how a drainage system might evolve from the configuration of A to that of B.

There is indeed debate as to whether landscapes tend toward an equilibrium that is frozen or highly dynamic (5). Hack (6) hypothesized that erosional landscapes attain a steady state where “all elements of the topography are downwasting at the same rate.” This hypothesis has been tested in numerical models and small-scale experiments. Researchers found that numerical LEMs create static topographies (7, 8). In this state, erosion and uplift are in balance at all locations in the landscape, resulting in landscapes that are dissected by stable drainage networks in geometric equilibrium (9). The landscape has achieved geometric equilibrium in planform when a proxy for steady-state river elevation, named χ (10), has equal values across all drainage divides. In contrast, experimental landscapes (7, 11) develop drainage networks that persistently reorganize. Recent research on field landscapes suggests that drainage divides migrate until reaching geometric equilibrium (9), but other field-based research suggests that landscapes may never attain geometric equilibrium (12).

The dynamism of the equilibrium state determines the persistence of initial conditions in experimental and model landscapes. It is important to understand initial condition effects (13) to better constrain uncertainty in LEM predictions.
Kwang and Parker (7) demonstrate that numerical LEMs exhibit “extreme memory,” where small topographic perturbations in initial conditions are amplified and preserved during a landscape’s evolution (Fig. 2A). Extreme memory in the numerical models is closely related to the feasible-optimality phenomenon found within the research on optimal channel networks (4). These researchers suggest that nature’s search for the most “stable” river network configuration is “myopic” and unable to find configurations that completely ignore their initial condition. In contrast to numerical models, experimental landscapes (7, 11) reach a highly dynamic state where all traces of initial surface conditions are erased by drainage network reorganization. It has been hypothesized that lateral erosion processes are responsible for drainage network reorganization in landscapes (7, 14); these processes are not included in most LEMs.

Fig. 2. A comparison of LEM-woLE (A) and LEM-wLE (B). Both models utilize the same initial condition, i.e., an initially flat topography with an embedded sinusoidal channel (1.27 m deep) without added topographic perturbations. Without perturbations, the landscape produces angular tributaries that are attached to the main sinusoidal channel (compare with SI Appendix, Fig. S7). Here, LEM-wLE quickly shreds the signal of the initial condition over time, removing the angular tributaries. By the time 10 RUs have eroded, the sinusoidal signal is mostly erased. After 100 RUs, the drainage network continues to reorganize itself (i.e., dynamic steady state), as shown in Movie S1.

Most widely used LEMs simulate incision into bedrock solely in the vertical direction. However, there is growing recognition that bedrock channels also shape the landscape by incising laterally (15, 16). Lateral migration into bedrock is important for the creation of strath terraces (17, 18) and the morphology of wide bedrock valleys (19–21). Recently, Langston and Tucker (22) developed a formulation for lateral bedrock erosion in LEMs. Here, we implement their submodel to explore the long-term behavior of LEMs that incorporate lateral erosion.

The LEM submodel of Langston and Tucker (22) allows channels to migrate laterally. By including this autogenic mechanism, we hypothesize that lateral bedrock erosion creates instabilities that 1) shred (23) the memory of initial conditions, such as the unrealistic configuration of Fig. 1A, and 2) produce landscapes that achieve a statistical steady state instead of a static one. By incorporating the lateral incision component (22) into a LEM, we aim to answer the following: 1) What controls the rate of decay of signals from initial conditions? 2) What are the frequency and magnitude of drainage reorganization in an equilibrium landscape? 3) What roles do model boundary conditions play in landscape reorganization?
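The role played by the ratio of lateral to vertical erodibility can be sketched with the standard detachment-limited stream-power law. The node update below is a schematic under assumed names and values (Kv, Kl, m, n are illustrative), not the Langston and Tucker formulation itself:

```python
# Schematic stream-power erosion with a lateral term.
# Vertical incision: E_v = Kv * A**m * S**n  (classic stream-power law)
# Lateral incision:  E_l = Kl * A**m * S**n  applied to an adjacent bank node.
# The ratio Kl/Kv is the knob the text identifies as controlling how often
# and how fast drainage networks reorganize. Illustration only.

Kv = 1.0e-5   # vertical erodibility constant (units folded in; assumed value)
Kl = 5.0e-6   # lateral erodibility constant (assumed value)
m, n = 0.5, 1.0
dt = 100.0    # time step [yr]

def erode_step(z_channel, z_bank, area, slope):
    """Lower a channel node vertically and undercut its bank laterally."""
    e_vert = Kv * area**m * slope**n
    e_lat = Kl * area**m * slope**n
    return z_channel - e_vert * dt, z_bank - e_lat * dt

z, z_bank = 100.0, 101.0   # elevations [m] of channel node and adjacent bank
A, S = 1.0e6, 0.01         # drainage area [m^2] and local slope [-]
for _ in range(10):
    z, z_bank = erode_step(z, z_bank, A, S)
print(f"channel: {z:.2f} m, bank: {z_bank:.2f} m (Kl/Kv = {Kl/Kv:.1f})")
```

A larger Kl/Kv lowers banks faster relative to channel deepening, which is the ingredient that lets channels migrate, capture neighbors, and keep the network reorganizing.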

15.
Domestic dogs have experienced population bottlenecks, recent inbreeding, and strong artificial selection. These processes have simplified the genetic architecture of complex traits, allowed deleterious variation to persist, and increased both identity-by-descent (IBD) segments and runs of homozygosity (ROH). As such, dogs provide an excellent model for examining how these evolutionary processes influence disease. We assembled a dataset containing 4,414 breed dogs, 327 village dogs, and 380 wolves genotyped at 117,288 markers and data for clinical and morphological phenotypes. Breed dogs have an enrichment of IBD and ROH, relative to both village dogs and wolves, and we use these patterns to show that breed dogs have experienced differing severities of bottlenecks in their recent past. We then found that ROH burden is associated with phenotypes in breed dogs, such as lymphoma. We next test the prediction that breeds with greater ROH have more disease alleles reported in the Online Mendelian Inheritance in Animals (OMIA). Surprisingly, the number of causal variants identified correlates with the popularity of that breed rather than the ROH or IBD burden, suggesting an ascertainment bias in OMIA. Lastly, we use the distribution of ROH across the genome to identify genes with depletions of ROH as potential hotspots for inbreeding depression and find multiple exons where ROH are never observed. Our results suggest that inbreeding has played a large role in shaping genetic and phenotypic variation in dogs and that future work on understudied breeds may reveal new disease-causing variation.

The unique demographic and selective history of dogs has enabled the persistence of deleterious variation, simplified the genetic architecture of complex traits, and caused an increase in both runs of homozygosity (ROH) and identity-by-descent (IBD) segments within breeds (1–6). Specifically, the average F_ROH was ∼0.3 in dogs (7), compared to 0.005 in humans, computed from the 1000 Genomes populations (8). The large amount of the genome in ROH in dogs, combined with a wealth of genetic variation and phenotypic data (2, 5, 7, 9–11), allows us to test how ROH and IBD influence complex traits and fitness (Fig. 1). Furthermore, many of the deleterious alleles within dogs likely arose relatively recently within a breed, and dogs tend to share similar disease pathways and genes with humans (4, 12, 13), increasing their relevance for complex traits in humans.

Fig. 1. Potential mechanisms for associations between ROH and phenotypes that depend on recessive mutations. If a recessive deleterious mutation is nonlethal (blue), it may lead to ROH correlating with disease, while lethal (red) recessive mutations will cause a depletion of ROH.

Despite IBD segments and ROH being ubiquitous in genomes, the extent to which they affect the architecture of complex traits as well as reproductive fitness has remained elusive. Given that ROH are formed by inheritance of the same ancestral chromosome from both parents, there is a much higher probability of the individual becoming homozygous for a deleterious recessive variant (8, 14), leading to a reduction in fitness. This prediction was verified in recent work in nonhuman mammals showing that populations suffering from inbreeding depression tend to have an increase in ROH (15, 16). ROH in human populations are enriched for deleterious variants (8, 14, 17). However, the extent to which ROH impact phenotypes remains unclear. For example, several studies have associated an increase in ROH with complex traits in humans (18–23), though some associations remain controversial (24–28). Determining how ROH and IBD influence complex traits and fitness could provide a mechanism for differences in complex-trait architecture across populations that vary in their burden of IBD and ROH.

Here, we use IBD segments and ROH from 4,741 breed dogs and village dogs, and 380 wolves, to determine the recent demographic history of dogs and wolves and to establish a connection between recent inbreeding and deleterious variation associated with both disease and inbreeding depression. This comprehensive dataset contains genotype data from 172 breeds of dog, village dogs from 30 countries, and gray wolves from British Columbia, North America, and Europe. We test for an association between the burden of ROH and case-control status for a variety of complex traits. Remarkably, we also find that the number of disease-associated causal variants identified in a breed is positively correlated with breed popularity rather than with the burden of IBD or ROH in the genome, suggesting that ascertainment biases also exist in databases of dog disease mutations and that many breeds of dog are understudied. Lastly, we identify multiple loci that may be associated with inbreeding depression by examining localized depletions of ROH across dog genomes.
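For readers unfamiliar with the statistic, F_ROH is simply the fraction of the genome falling inside ROH. A minimal sketch, with a hypothetical segment list and genome length (not data from the paper):

```python
# F_ROH = (total length of ROH segments) / (genome length considered).
# Minimal sketch; the segment list and the 2.2-Gb genome length below
# are hypothetical values for illustration only.

def f_roh(roh_segments, genome_length_bp):
    """roh_segments: iterable of non-overlapping (start, end) intervals in bp."""
    covered = sum(end - start for start, end in roh_segments)
    return covered / genome_length_bp

segments = [(0, 400_000_000), (900_000_000, 1_160_000_000)]
print(f"F_ROH = {f_roh(segments, 2_200_000_000):.3f}")  # 0.300, like an average breed dog
```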

16.
Future terrestrial and interplanetary travel will require high-speed flight and reentry in planetary atmospheres by robust, controllable means. This hinges, in large part, on reliable propulsion systems for hypersonic and supersonic flight. Given the availability of fuels as propellants, we will likely rely on some form of chemical or nuclear propulsion, which means exploiting exothermic reactions and therefore combustion waves. Such waves may be deflagrations, which are subsonic reaction waves, or detonations, which are ultrahigh-speed supersonic reaction waves. Detonations are an extremely efficient, highly energetic mode of reaction, generally associated with intense blast explosions and supernovas. Detonation-based propulsion systems are now of considerable interest because of their potential to deliver greater propulsive power than deflagration-based systems. Understanding the ignition, propagation, and stability of detonation waves is critical to harnessing their propulsive potential and depends on our ability to study them in a laboratory setting. Here we present a unique experimental configuration, a hypersonic high-enthalpy reaction facility that produces a detonation fixed in space, which is crucial for controlling and harnessing the reaction power. A standing oblique detonation wave, stabilized on a ramp, is created in a hypersonic flow of hydrogen and air. Flow diagnostics, such as high-speed shadowgraph and chemiluminescence imaging, show detonation initiation and stabilization and are corroborated by comparison to simulations. This breakthrough in experimental analysis opens a possible pathway to develop and integrate ultrahigh-speed detonation technology enabling hypersonic propulsion and advanced power systems.

Achieving high-speed flight at supersonic and hypersonic speeds is now a national priority and an international focus. Reaching this goal requires highly energetic propulsion modes to drive the vehicles (1). One set of new concepts, detonation-based engines, could play an important role in making space exploration and intercontinental travel as routine as intercity travel is today (2).

Detonation-based propulsion is a transformational technology for maintaining the technological superiority of high-speed propulsion and power systems (3). Existing systems in this class include gas turbine engines, afterburning jet engines, ramjets, scramjets, and ram accelerators. Detonation is an innovative scheme for hypersonic propulsion that considerably increases thermodynamic cycle efficiencies (10 to 20%) compared to traditional deflagration-based cycles (4, 5). Even in applications with no additional thermodynamic benefit, detonation-based cycles have been shown to provide enhanced combustion efficiency, as in ram rotating detonation engines (6). Research advances in ultrahigh-speed detonation systems will help realize this technological advantage over existing propulsion and power systems.

A detonation is a supersonic combustion wave consisting of a shock wave driven by energy release from closely coupled chemical reactions. These waves travel at many times the speed of sound, reaching Mach 5 in the case of a hydrogen–air fuel mixture. An engine operating with a Mach 5 flow path corresponds to a vehicle flight Mach number of 6 to 17 (7–9). Such speeds would allow a roughly half-hour flight from New York to London, about one-fifth of the average time the legendary Concorde took on the same route. The idea of using detonation waves for propulsion and energy generation is not new (3), although implementing the concept has been difficult. Three main categories of detonation engine concepts have received significant research attention: pulse detonation engines (5, 10–12), rotating detonation engines (13–15), and standing and oblique detonation wave engines (ODWE) (3, 7, 16–18). The ODWE is of particular interest here for its theoretical ability to propel hypersonic aircraft to the speeds needed for spaceplanes and other reusable space launch vehicles. Fig. 1 shows a conceptual hypersonic vehicle powered by an ODWE and illustrates its relation to the experimental and computational results of this study. The challenge in developing these engine concepts is finding reliable mechanisms for detonation initiation and robust wave stabilization under the high-speed, high-enthalpy conditions such engines would experience.

[Fig. 1: Schematic of the oblique detonation engine concept. The experimental and computational ODW domains are highlighted along with their location in the engine flow path.]

Laboratory experiments and numerical simulations have demonstrated a number of modes of detonation initiation, and numerical simulations have elucidated important underlying concepts in their stabilization (19–25). Despite these advances, the problem is compounded by the historical difficulty of achieving a stabilized detonation in an experimental facility that produces realistic flight conditions and can be adapted for use in an actual engine.
Previous experimental studies were unable to sustain a stabilized oblique detonation wave (ODW) for an extended period because they relied on shock/expansion tubes or projectiles (7, 22, 26–28); such facilities have run times on the order of microseconds to milliseconds. Another major difficulty in stabilizing the detonation wave is upstream wave propagation through the boundary layer, which leads to unstart; recent experiments have shown deflagration-to-detonation transition in a hypersonic flow and an unstable detonation that propagated upstream (24). Several numerical studies have shown potentially steady ODWs but lack experimental verification (21, 23, 29, 30). These gaps leave uncertainty about the stability of ODWs that must be addressed through experiments capable of creating the appropriate conditions and maintaining them for an extended period.

This paper reports results from a study demonstrating experimentally controlled detonation initiation and stabilization in a hypersonic flow, under conditions similar to the proposed flight conditions for these vehicle concepts, with an active run time of several seconds. The experimental results capture the stabilized detonation, as shown in the shadowgraph and chemiluminescence images, and are further confirmed and explained by theory and numerical simulations of the system. A 30° ramp in the high-enthalpy hypersonic reaction facility is used to ignite and stabilize an ODW, shown schematically in Fig. 2A. The shock-laden, high-Mach-number flow induces a temperature rise that ignites and stabilizes a detonation in the incoming hydrogen–air mixture. The combination of matching the flow Mach number to the Chapman–Jouguet (M_CJ) condition and low boundary-layer fueling results in the stabilized detonation. Static pressure measurements confirm a pressure rise induced by the detonation wave. High-fidelity computational fluid dynamics simulations provide additional detailed insight into the detonation initiation and stabilization process.

[Fig. 2: (A) The HyperReact facility. (B) Nonreacting flow field and (C) stabilized ODW.]
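To make the M_CJ condition concrete: for a one-γ perfect gas with specific heat release q, the Rankine–Hugoniot relations with a sonic burned state give the Chapman–Jouguet Mach number M_CJ² = H + √(H² − 1), with H = 1 + (γ² − 1)q/(γRT₁). A rough sketch; γ, R, T₁, and the effective q (reduced below the ideal hydrogen–air value to mimic dissociation losses) are illustrative assumptions, not the paper's values:

```python
# Rough sketch: Chapman-Jouguet (CJ) detonation Mach number from a one-gamma
# perfect-gas model. gamma, R, T1, and the *effective* heat release q are
# illustrative assumptions for hydrogen-air, not values from the paper.
import math

def m_cj(q, gamma=1.4, R=397.0, T1=300.0):
    """CJ Mach number from M^2 = H + sqrt(H^2 - 1),
    with H = 1 + (gamma^2 - 1) * q / (gamma * R * T1)."""
    H = 1.0 + (gamma**2 - 1.0) * q / (gamma * R * T1)
    return math.sqrt(H + math.sqrt(H**2 - 1.0))

# An effective q of ~1.9 MJ/kg (below the ideal ~3.4 MJ/kg, mimicking
# dissociation losses) reproduces the roughly Mach 5 hydrogen-air
# detonation speed quoted in the text above.
print(f"M_CJ = {m_cj(1.9e6):.2f}")  # ~4.9
```

With these inputs the sketch returns M_CJ ≈ 4.9, consistent with the approximately Mach 5 hydrogen–air detonation speed cited above.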
