Similar articles
1.
Tubulin-targeted chemotherapy has proven to be a successful and broad-spectrum strategy against solid and liquid malignancies. Therefore, new ways to modulate this essential protein could lead to new antitumoral pharmacological approaches. Currently known tubulin agents bind to six distinct sites on α/β-tubulin, promoting either microtubule stabilization or depolymerization. We have discovered a seventh binding site at the tubulin intradimer interface where a novel microtubule-destabilizing cyclodepsipeptide, termed gatorbulin-1 (GB1), binds. GB1 has a unique chemotype produced by a marine cyanobacterium. We elucidated this dual chemical and mechanistic novelty through multidimensional characterization, starting with bioactivity-guided natural product isolation and multinuclei NMR-based structure determination, which revealed the modified pentapeptide with a functionally critical hydroxamate group, followed by validation through total synthesis. We investigated the pharmacology using isogenic cancer cell screening, cellular profiling, and complementary phenotypic assays, and unveiled the underlying molecular mechanism by in vitro biochemical studies and high-resolution structural determination of the α/β-tubulin−GB1 complex.

Microtubules are polarized polymers consisting of α/β-tubulin heterodimers involved in cellular structure, motility, proliferation, and intracellular trafficking (1). Pharmacological targeting of tubulin dynamics at different sites (Fig. 1A) has been a validated strategy for cancer therapy for decades and has mostly been linked to the antimitotic effects of these compounds, although increasing evidence has emerged for the importance of nonmitotic effects (1). Natural products targeting tubulin, in particular, have yielded a wealth of chemically diverse agents and provided the basis for several Food and Drug Administration-approved drugs, for both cancer and other pathologies, either alone or as antibody−drug conjugates (ADCs), including paclitaxel, vincristine, maytansine, eribulin, and colchicine (Fig. 1 B and C). Compounds can be classified by which of the six known binding sites they occupy, and, although they all target tubulin, they show distinct pharmacological effects. Therefore, there is persistent interest in the identification of novel microtubule-targeting agents. Two α/β-tubulin binding sites are associated with microtubule stabilization (taxane and laulimalide/peloruside sites; Fig. 1 A and B), while binding to the four other sites causes microtubule destabilization (vinca, maytansine, colchicine, and pironetin sites; Fig. 1 A and C) (1).

Fig. 1. Binding sites and structures of microtubule-targeting agents. (A) Tubulin heterodimer (α-tubulin in gray and β-tubulin in white) in ribbon representation, with the six known binding sites highlighted and representative ligands shown in sphere representation: maytansine (PDB ID code 4tv8, violet); epothilone (PDB ID code 4o4i, orange); peloruside (PDB ID code 4o4j, red); colchicine (PDB ID code 4o2b, dark blue); pironetin (PDB ID code 5fnv, cyan); and vinblastine (PDB ID code 4eb6, light blue).
The gatorbulin binding site has also been included (PDB ID code 7alr, teal). (B and C) Representative compounds targeting tubulin binding sites. (B) Microtubule-stabilizing agents. (C) Microtubule-destabilizing agents, including the structure of GB1 (1a).

Our investigation of marine cyanobacteria as a source of potential anticancer agents has previously yielded the modified peptides dolastatin 10 (Fig. 1C) and dolastatin 15 (2–4), targeting the vinca site (5, 6). Three ADCs with a dolastatin 10 analog (monomethyl auristatin E) as the cytotoxic payload are approved for the treatment of various lymphomas and refractory bladder cancer, while dolastatin 15-based ADCs have advanced to clinical trials (4). We identified both dolastatins 10 and 15 as indirect hypoxia-inducible factor (HIF) inhibitors based on differential cytotoxicity against a panel of isogenic HCT116 colorectal cancer cells (4, 7), indicating that HIF inhibition is functionally relevant to the mechanisms of action of these compounds. HIF is activated in solid tumors and promotes metastasis, and targeted screening early in the drug discovery process could provide a rapid indication of the requisite selectivity for cancer treatment (8–10). Using the same isogenic screening system, we have now identified an antiproliferative agent that also possesses preferential activity against oncogenic KRAS-containing and HIF-1α−containing HCT116 cells and is a microtubule-destabilizing cyclodepsipeptide. We named the compound gatorbulin-1 (GB1, 1a; Fig. 1C), in analogy to eribulin (Eisai Research Institute), to symbolically represent the discovery of its unique chemical structure and pharmacological potential at the University of Florida and by global Gator Nation partners. Here we report the bioassay-guided isolation, structure determination, synthesis, preliminary structure−activity relationships, mechanism of action, target identification, and binding mode elucidation.
Our studies revealed that GB1 represents a unique chemical scaffold that targets a previously uncharacterized binding site near the colchicine site and displays distinct pharmacology (Fig. 1A).

3.
Coordination of behavior for cooperative performances often relies on linkages mediated by sensory cues exchanged between participants. How neurophysiological responses to sensory information affect motor programs to coordinate behavior between individuals is not known. We investigated how plain-tailed wrens (Pheugopedius euophrys) use acoustic feedback to coordinate extraordinary duet performances in which females and males rapidly take turns singing. We made simultaneous neurophysiological recordings in a song control area “HVC” in pairs of singing wrens at a field site in Ecuador. HVC is a premotor area that integrates auditory feedback and is necessary for song production. We found that spiking activity of HVC neurons in each sex increased for production of its own syllables. In contrast, hearing sensory feedback produced by the bird’s partner decreased HVC activity during duet singing, potentially coordinating HVC premotor activity in each bird through inhibition. When birds sang alone, HVC neurons in females but not males were inhibited by hearing the partner bird. When birds were anesthetized with urethane, which antagonizes GABAergic (γ-aminobutyric acid) transmission, HVC neurons were excited rather than inhibited, suggesting a role for GABA in the coordination of duet singing. These data suggest that HVC integrates information across partners during duets and that rapid turn taking may be mediated, in part, by inhibition.

Animals routinely rely on sensory feedback for the control of their own behavior. In cooperative performances, such sensory feedback can include cues produced by other participants (1–8). For example, in interactive vocal communication, including human speech, individuals take turns vocalizing. This “turn taking” is a consequence of each participant responding to auditory cues from a partner (4–6, 9, 10). The role of such “heterogenous” (other-generated) feedback in the control of vocal turn taking and other cooperative performances is largely unknown.

Plain-tailed wrens (Pheugopedius euophrys) are neotropical songbirds that cooperate to produce extraordinary duet performances but also sing by themselves (Fig. 1A) (4, 10, 11). Singing in plain-tailed wrens is performed by both females and males and is used for territorial defense and other functions, including mate guarding and attraction (1, 11–16). During duets, female and male plain-tailed wrens take turns, alternating syllables at a rate of between 2 and 5 Hz (Fig. 1A) (4, 11).

Fig. 1. Neural control of solo and duet singing in plain-tailed wrens. (A) Spectrogram of a singing bout that included male solo syllables (blue line, top) followed by a duet. Solo syllables for both sexes (only male solo syllables are shown here) are sung at lower amplitudes than syllables produced in duets. Note that the smeared appearance of wren syllables in spectrograms reflects the acoustic structure of plain-tailed wren singing. (B and C) Each bird has a motor system that is used to produce song and sensory systems that mediate feedback. (B) During solo singing, the bird hears its own song, which is known as autogenous feedback (orange). (C) During duet singing, each bird hears both its own singing and the singing of its partner, known as heterogenous feedback (green). The key difference between solo and duet singing is heterogenous feedback that couples the neural systems of the two birds.
This coupling results in changes in syllable amplitude and timing in both birds.

There is a categorical difference between solo and duet singing. In solo singing, the singing bird receives only autogenous feedback (hearing its own vocalization) (Fig. 1B). The partner may hear the solo song if it is nearby, a heterogenous (other-generated) cue. In duet singing, birds receive both heterogenous and autogenous feedback as they alternate syllable production (Fig. 1C). Participants use heterogenous feedback during duet singing for precise timing of syllable production (4, 11). For example, when a male temporarily stops participating in a duet, the duration of intersyllable intervals between female syllables increases (4), showing an effect of heterogenous feedback on the timing of syllable production.

How does the brain of each wren integrate heterogenous acoustic cues to coordinate the precise timing of syllable production between individuals during duet performances? To address this question, we examined neurophysiological activity in HVC, a nucleus in the nidopallium [an analogue of mammalian cortex (17, 18)]. HVC is necessary for song learning, production, and timing in species of songbirds that do not perform duets (19–24). Neurons in HVC are active during singing and respond to playback of the bird’s own learned song (25–27). In addition, recent work has shown that HVC is also involved in vocal turn taking (19).

To examine the role of heterogenous feedback in the control of duet performances, we compared neurophysiological activity in HVC when female or male wrens sang solo syllables with activity when they sang syllables during duets. Neurophysiological recordings were made in awake and anesthetized pairs of wrens at the Yanayacu Biological Station and Center for Creative Studies on the slopes of the Antisana volcano in Ecuador.
We found that heterogenous cues inhibited HVC activity during duet performances in both females and males, but inhibition was only observed in females during solo singing.
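The duet-timing effects described above (syllable alternation at 2 to 5 Hz, and lengthened female intersyllable intervals when the male drops out) can be quantified directly from syllable onset times. A minimal sketch, using invented onset times rather than recorded data, of how such intervals and rates might be computed:

```python
# Hypothetical sketch: quantifying duet timing from syllable onset times.
# The onset times below are illustrative, not measured wren data.
def intersyllable_intervals(onsets):
    """Intervals between successive syllable onsets, in seconds."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def syllable_rate_hz(onsets):
    """Mean syllable rate implied by the onsets (reciprocal of mean interval)."""
    intervals = intersyllable_intervals(onsets)
    return 1.0 / (sum(intervals) / len(intervals))

# Duet: partners alternate every 0.25 s, i.e. 4 Hz, within the 2-5 Hz range.
duet_onsets = [0.0, 0.25, 0.5, 0.75, 1.0]
print(round(syllable_rate_hz(duet_onsets), 2))  # 4.0

# If the male stops, the female's own intersyllable intervals lengthen,
# and the effective rate drops.
female_alone_onsets = [0.0, 0.6, 1.2]
print(round(syllable_rate_hz(female_alone_onsets), 2))
```

This only illustrates the interval arithmetic; the study itself works from spectrograms and neurophysiological recordings, not from precomputed onset lists.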

4.
We report paleomagnetic data showing that an intraoceanic Trans-Tethyan subduction zone existed south of the Eurasian continent and north of the Indian subcontinent until at least Paleocene time. This system was active between 66 and 62 Ma at a paleolatitude of 8.1 ± 5.6 °N, placing it 600–2,300 km south of the contemporaneous Eurasian margin. The first ophiolite obductions onto the northern Indian margin also occurred at this time, demonstrating that collision was a multistage process involving at least two subduction systems. Collisional events began with collision of India and the Trans-Tethyan subduction zone in Late Cretaceous to Early Paleocene time, followed by the collision of India (plus Trans-Tethyan ophiolites) with Eurasia in mid-Eocene time. These data constrain the total postcollisional convergence across the India–Eurasia convergent zone to 1,350–2,150 km and limit the north–south extent of northwestern Greater India to <900 km. These results have broad implications for how collisional processes may affect plate reconfigurations, global climate, and biodiversity.

Classically, the India–Eurasia collision has been considered to be a single-stage event that occurred at 50–55 million years ago (Ma) (1, 2). However, plate reconstructions show thousands of kilometers of separation between India and Eurasia at the inferred time of collision (3, 4). Accordingly, the northern extent of Greater India was thought to have protruded up to 2,000 km relative to present-day India (5, 6) (Fig. 1). Others have suggested that the India–Eurasia collision was a multistage process that involved an east–west trending Trans-Tethyan subduction zone (TTSZ) situated south of the Eurasian margin (7–9) (Fig. 1). Jagoutz et al. (9) concluded that collision between India and the TTSZ occurred at 50–55 Ma, and the final continental collision occurred between the TTSZ and Eurasia at 40 Ma (9, 10). This model reconciles the amount of convergence between India and Eurasia with the observed shortening across the India–Eurasia collision system through the addition of the Kshiroda oceanic plate. Additionally, the presence of two subduction systems can explain the rapid India–Eurasia convergence rates (up to 16 cm a−1) that existed between 135 and 50 Ma (9), as well as variations in global climate in the Cenozoic (11).

Fig. 1. The first panel is an overview map of the tectonic structure of the Karakoram–Himalaya–Tibet orogenic system. Blue represents India, red represents Eurasia, and the Kohistan–Ladakh arc (KLA) is shown in gray. The different shades of blue highlight the deformed margin of the Indian plate that has been uplifted to form the Himalayan belt, and the zones of darker red within the Eurasian plate highlight the Eurasian continental arc batholith. Thick black lines denote the suture zones which separate Indian and Eurasian terranes. The tectonic summary panels illustrate the two conflicting collision models and their differing predictions of the location of the Kohistan–Ladakh arc.
India is shown in blue, Eurasia is shown in red, and the other nearby continents are shown in gray. Active plate boundaries are shown with black lines, and recently extinct boundaries are shown with gray lines. Subduction zones are shown with triangular tick marks.

While the existence of the TTSZ in the Cretaceous is not disputed, the two conflicting collision models make distinct predictions about its paleolatitude in Late Cretaceous to Paleocene time; these can be tested using paleomagnetism. In the single-stage collision model, the TTSZ amalgamated with the Eurasian margin prior to ∼80 Ma (12) at a latitude of ≥20 °N (13, 14). In contrast, in the multistage model, the TTSZ remained near the equator at ≤10 °N, significantly south of Eurasia, until collision with India (9) (Fig. 1).

No undisputed paleomagnetic constraints on the location of the TTSZ are available in the central Himalaya (15–17). Westerweel et al. (18) showed that the Burma Terrane, in the eastern Himalaya, was part of the TTSZ and was located near the equator at ∼95 Ma, but they do not constrain the location of the TTSZ in the time period between 50 and 80 Ma, which is required to test the two collision hypotheses. In the western Himalaya, India and Eurasia are separated by the Bela, Khost, and Muslimbagh ophiolites and the 60,000 km2 intraoceanic Kohistan–Ladakh arc (19, 20) (Fig. 1). These were obducted onto India in the Late Cretaceous to Early Paleocene (19), prior to the closure of the Eocene to Oligocene Katawaz sedimentary basin (20) (Fig. 1). The Kohistan–Ladakh arc contacts the Eurasian Karakoram terrane in the north along the Shyok suture and the Indian plate in the south along the Indus suture (21) (Fig. 1). Previous paleomagnetic studies suggest that the Kohistan–Ladakh arc formed as part of the TTSZ near the equator in the early Cretaceous but provide no information on its location after 80 Ma (22–25).
While pioneering, these studies lack robust age constraints, do not appropriately average paleosecular variation of the geodynamo, and do not demonstrate that the measured magnetizations have not been reset during a subsequent metamorphic episode.
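Paleolatitudes such as the 8.1 ± 5.6 °N quoted above are derived from measured magnetic inclinations via the standard geocentric axial dipole relation, tan(I) = 2 tan(λ). A minimal sketch of that conversion; the 15.9° inclination below is an illustrative input chosen to reproduce the reported paleolatitude, not a value taken from the study:

```python
import math

# Geocentric axial dipole (GAD) relation used in paleomagnetism:
# tan(I) = 2 * tan(lat), so lat = atan(tan(I) / 2).
# Inclination and latitude are both in degrees.
def paleolatitude_deg(inclination_deg):
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

# An illustrative mean inclination of ~15.9 deg maps to ~8.1 deg N,
# matching the paleolatitude reported for the TTSZ at 66-62 Ma.
print(round(paleolatitude_deg(15.9), 1))  # 8.1
```

In practice the published estimate also folds in site-mean statistics and the ±5.6° uncertainty, which this one-line conversion does not capture.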

5.
Correlating the structures and properties of a polymer to its monomer sequence is key to understanding how its higher hierarchy structures are formed and how its macroscopic material properties emerge. Carbohydrate polymers, such as cellulose and chitin, are the most abundant materials found in nature whose structures and properties have been characterized only at the submicrometer level. Here, by imaging single-cellulose chains at the nanoscale, we determine the structure and local flexibility of cellulose as a function of its sequence (primary structure) and conformation (secondary structure). Changing the primary structure by chemical substitutions and geometrical variations in the secondary structure allow the chain flexibility to be engineered at the single-linkage level. Tuning local flexibility opens opportunities for the bottom-up design of carbohydrate materials.

Natural polymers adopt a multitude of three-dimensional structures that enable a wide range of functions (1). Polynucleotides store and transfer genetic information; polypeptides function as catalysts and structural materials; and polysaccharides play important roles in cellular structure (2–6), recognition (5), and energy storage (7). The properties of these polymers depend on their structures at various hierarchies: sequence (primary structure), local conformation (secondary structure), and global conformation (tertiary structure).

Automated solid-phase techniques provide access to these polymers with full sequence control (8–12). The correlation between the sequence, the higher hierarchy structures, and the resulting properties is relatively well established for polynucleotides (13, 14) and polypeptides (15, 16), while comparatively little is known for polysaccharides (17). Unlike polypeptides and polynucleotides, polysaccharides are based on monosaccharide building blocks that can form multiple linkages with different configurations (e.g., α- or β-linkages), leading to extremely diverse linear or branched polymers. This complexity is exacerbated by the flexibility of polysaccharides, which renders structural characterization by ensemble-averaged techniques challenging (17). Imaging single-polysaccharide molecules using atomic force microscopy has revealed the morphology and properties of polysaccharides at the mesoscopic, submicrometer scale (18–22).
However, imaging at such length scales precludes the observation of the individual monosaccharide subunits required to correlate the polysaccharide sequence to its molecular structure and flexibility, the key determinants of its macroscopic functions and properties (23).

Imaging polysaccharides at subnanometer resolution by combining scanning tunnelling microscopy (STM) and electrospray ion-beam deposition (ES-IBD) (24, 25) allows for the observation of their monosaccharide subunits to reveal their connectivity (26–28) and conformation space (29). Here, we use this technique to correlate the local flexibility of an oligosaccharide chain to its sequence and conformation, the lowest two structural hierarchies. By examining the local freedom of the chain as a function of its primary and secondary structures, we address how low-hierarchy structural motifs affect local oligosaccharide flexibility, an insight critical to the bottom-up design of carbohydrate materials (30).

We elucidate the origin of local flexibility in cellulose, the most abundant polymer in nature, composed of glucose (Glc) units linked by β-1,4–linkages (31–33). Unveiling what affects the flexibility of cellulose chains is important because it gives rise to amorphous domains in cellulose materials (34–37) that change the mechanical performance and the enzyme digestibility of cellulose (38). Cellohexaose, a Glc hexasaccharide (Fig. 1A), was used as a model for a single-cellulose chain as it has been shown to resemble the cellulose polymer behavior (12). Modified analogs prepared by Automated Glycan Assembly (AGA) (11, 12) were designed to manipulate particular intramolecular interactions responsible for cellulose flexibility. Cellohexaose, ionized as a singly deprotonated ion in the gas phase ([M-H]−1), was deposited on a Cu(100) surface held at 120 K by ES-IBD (24) (Materials and Methods).
The ions were landed with 5-eV energy, well suited to access diverse conformation states of the molecule without inducing any chemical change in the molecule (29). The resulting cellohexaose, observed in various conformation states, allowed its mechanical flexibility (defined by the variance in the geometrical bending between two residues) to be quantified for every intermonomer linkage. The observed dependence of local flexibility on the oligosaccharide sequence and conformation thus exemplifies how primary and secondary structures tune the local mechanical flexibility of a carbohydrate polymer.

Fig. 1. STM images of cellohexaose (AAAAAA) and its analogs (AXAAXA). Structures and STM images of cellohexaose (A) and its substituted analogs (B–E). Cellohexaose contains six Glcs (labeled as A; colored black) linked via β-1,4–glycosidic bonds. The cellohexaose analogs contain two substituted Glcs, as the second and the fifth residues from the nonreducing end, that have a single methoxy (–OCH3) at C(3) (labeled as B; colored red), two methoxy groups at C(3) and C(6) (labeled as C; colored green), a single carboxymethoxy (–OCH2COOH) at C(3) (labeled as D; colored blue), and a single fluorine (–F) at C(3) (labeled as F; colored purple).

The effect of the primary structure on the chain flexibility was explored using sequence-defined cellohexaose analogs (Fig. 1). Cellohexaose, AAAAAA (Fig. 1A), was compared with its substituted analogs, ABAABA, ACAACA, ADAADA, and AFAAFA (written from the nonreducing end) (Fig. 1 B–E), where A is Glc, B is Glc methylated at OH(3), C is Glc methylated at OH(3) and OH(6), D is Glc carboxymethylated at OH(3), and F is Glc deoxyfluorinated at C(3). These substitutions are designed to alter the intramolecular hydrogen bonding between the first and the second as well as between the fourth and fifth Glc units (Fig. 1). These functional groups also affect the local steric environment (i.e., the bulky carboxymethyl group) (Fig. 1D) and the local electronic properties (i.e., the electronegative fluorine group) (Fig. 1E). When compared with the unsubstituted parent cellohexaose, these modified cellohexaoses exhibit different aggregation behavior and are more water soluble (12).

All cellohexaose derivatives adsorbed on the surface were imaged with STM at 11 K (Fig. 1). The oligosaccharides were deposited as singly deprotonated species and were computed to adsorb on the surface via a single covalent RO–Cu bond, except for ADAADA, which was deposited as a doubly deprotonated species and was computed to adsorb on the surface via two covalent RCOO–Cu bonds (R = sugar chain). All cellohexaoses appear as chains containing six protrusions corresponding to the six constituent Glcs. The unmodified cellohexaose chains (Fig. 1A) mainly adopt a straight geometry, while the substituted cellohexaoses (Fig. 1 B–E) adopt both straight- and bent-chain geometries. Chemical substitution thus increases the geometrical freedom of the cellulose chain, consistent with the reported macroscopic properties (12).

Large-chain bending between neighboring Glc units is observed exclusively for the substituted cellohexaoses (Fig. 1). The large, localized bending reveals the substitution site and allows for the nonreducing and the reducing ends of the chain to be identified. These chains are understood to bend along the surface plane via the glycosidic linkage without significant tilting of the pyranose ring, which remains parallel to the surface (illustrated in SI Appendix, Fig. S1), as indicated by the ∼2.0-Å height of every Glc (29).

The bending angle measured for AA and AX linkages (Fig. 2; Materials and Methods has analysis details) shows that, while both AA and AX prefer the straight, unbent geometry, AX displays a greater variation of bending angles than AA. The AX angular distribution is consistently ∼10° wider than that for AA, indicating that AX has a greater conformational freedom than AA.
This increased bending flexibility results from the absence of the intramolecular hydrogen bonding between OH(3) and O(5) of the neighboring residue. Methylation of OH(6), in addition to methylation of OH(3), results in similar flexibility (Fig. 2 B and C), suggesting the greater importance of OH(3) in determining the bending flexibility. Steric effects were found to be negligible, since AD displayed similar flexibility to other, less bulky AX linkages.

Fig. 2. Bending flexibility of the AA linkage and substituted AX linkages. Chain bending (Fig. 1) is quantified as an angle formed between two neighboring Glcs (Materials and Methods). The results are given in A for AA, in B for AB, in C for AC, in D for AD, and in E for AF, showing that AX (where X = B, C, D, F) has a higher conformational freedom than AA. The angle distributions (bin size: 10°) are fitted with a Gaussian (solid line) shown with its half-width at half-maximum. The computed potential energy curves are shown with the half-width at 0.4 eV and fitted with a parabola to estimate the stiffness (k; in millielectronvolts per degree²).

Density functional theory (DFT) calculations support the observations, showing that substitution of OH(3) decreases the linkage stiffness by up to ∼40% (Fig. 2). Replacing OH(3) with other functional groups weakens the interglucose interactions by replacing the OH(3)··O(5) hydrogen bond with weak van der Waals interactions. The similar flexibility between AB and AC linkages is attributed to the similar strength of the interglucose OH(2)··OH(6) hydrogen bond in AB (Fig. 2B) and the OH(2)··OMe(6) hydrogen bond in AC (Fig. 2C). The negligible steric effect in AD is attributed to the positional and rotational freedom of the bulky moiety that prevents any “steric clashes” and diminishes the contribution of steric repulsion in the potential energy curve.
Comparing the potential landscape in the gas phase and on the surface shows that the stiffness of the adsorbed cellohexaoses is primarily dictated by their intramolecular interactions instead of molecule–surface interactions (SI Appendix, Fig. S2). Primary structure alteration by chemical substitution modifies the interglucose hydrogen bonds and enables chain flexibility to be locally engineered at the single-linkage level.

We subsequently investigate how molecular conformation (secondary structure) affects the local bending flexibility. We define the local secondary structure as the geometry formed between two Glcs, here exemplified by the local twisting of the chain (Fig. 3). The global secondary structure is defined as the overall geometry formed by all Glcs in the chain, here exemplified by the linear and cyclic topologies of the chain (Fig. 4).

Fig. 3. Bending flexibility of untwisted and twisted AA linkages. (A) STM image of a cellohexaose containing two types of AA linkages: untwisted (HH and VV) and twisted (HV and VH; from the nonreducing end). The measured bending angles and the computed potential curve are given in B for HH, in C for HV, and in D for VV, showing that the twisted linkage (HV) is more flexible than the untwisted ones (HH and VV). In the molecular structures, interunit hydrogen bonds are given as dotted blue lines, and the pyranose rings are colored red for the horizontal ring (H) and green for the vertical ring (V). The angle distributions (bin size: 10°) are fitted with a Gaussian distribution (solid line) labeled with its peak and half-width at half-maximum. The computed potential curves are labeled with the half-width at 0.4 eV and fitted with a parabola to estimate the stiffness (k; in millielectronvolts per degree²).

Fig. 4. Bending flexibility of the AA linkage in linear (LIN) and cyclic (CYC) chains.
STM image, measured bending angle distribution, and computed potential of the AA linkage are given in A for a linear cellohexaose conformer and in B for a cyclic cellohexaose conformer, showing that chain flexibility is reduced in conformations with cyclic topology. The same data are given in C for α-cyclodextrin, which is locked in a conformation with cyclic topology. The measured angles (bin size: 10°) are each fitted with a Gaussian distribution (solid line) labeled with its peak and half-width at half-maximum. The computed potentials are each labeled with the half-width at 0.4 eV and fitted with a parabola to estimate the stiffness (k; in millielectronvolts per degree²).

The effect of local secondary structure on chain flexibility is exemplified by the bending flexibility of twisted and untwisted linkages in a cellohexaose chain (Fig. 3A). The untwisted and twisted linkages are present because the Glc units are observed in two geometries, H or V (Fig. 3), distinguished by their heights (h). H (h ∼ 2.0 Å) is a Glc with its pyranose ring parallel to the surface, while V (h ∼ 2.5 Å) has its ring perpendicular to the surface (29). These lead to HH and VV as untwisted linkages and HV and VH (written from the nonreducing end) as twisted linkages.

The twisted linkage is more flexible than the untwisted one, as shown by the unimodal bending angles for the untwisted linkages (HH and VV in Fig. 3 B and D, respectively) and the multimodal distribution for the twisted linkage (HV in Fig. 3C). DFT calculations attribute the increased bending flexibility to the reduction of accessible interunit hydrogen bonds from two to one. Linkage twisting increases the distance between the hydrogen-bonded pair, which weakens the interaction between Glc units and increases the flexibility at the twisting point.
The increase in local chain flexibility conferred by chain twisting shows how local secondary structures affect chain flexibility.

The effect of the global secondary structure on the local chain flexibility was examined by comparing the local bending flexibility of cellohexaose chains possessing different topologies. Cellohexaose can adopt either a linear (Figs. 3A and 4A) or a cyclic topology (Fig. 4B), the latter characterized by the presence of a circular, head-to-tail hydrogen bond network (29). The cyclic conformation of cellohexaose is enabled by head-to-tail chain folding arising from the 60° chain bending of the VV linkage. The VV segment in the cyclic chain is stiffer than in the linear chain, since the bending angle distribution for the cyclic chain is 6° narrower than that for the linear chain. The observation is corroborated by DFT calculations that show that the VV linkage in the cyclic chain is about three times stiffer than that in the linear chain.

To characterize the degree of chain stiffening due to the linear-to-cyclic chain folding, we compare the flexibility of the cyclic cellohexaose and α-cyclodextrin (an α-1,4–linked hexaglucose covalently locked in the cyclic conformation). The α-cyclodextrin provides the referential local flexibility for a cyclic oligosaccharide chain. Strikingly, the local flexibility in α-cyclodextrin was found to be identical to that in the cyclic cellohexaose, as evidenced by the similar widths of the bending angle distributions and the computed potentials (Fig. 4 B and C). The similar stiffness indicates that the folding-induced stiffening in cellohexaose is a general topological effect unaffected by the type of interactions that give rise to the cyclic conformation (noncovalent hydrogen bonds in cellohexaose vs. a covalent bond in α-cyclodextrin). The folding-induced stiffening is the result of the creation of a circular spring network that restricts the motion of the Glc units and reduces their conformational freedom.
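The stiffness values discussed above (k, in millielectronvolts per square degree) are obtained by fitting a parabola to a computed bending potential. A hedged sketch of that fit on synthetic curves; the 0.5 and 1.5 meV/deg² values are invented to illustrate the roughly threefold linear-to-cyclic stiffening, and are not the paper's DFT results:

```python
import numpy as np

# Estimate linkage stiffness k by fitting E(theta) ~ k * theta**2
# to a potential-energy curve (angles in degrees, energies in meV).
def stiffness_mev_per_deg2(angles_deg, energies_mev):
    # np.polyfit returns [a, b, c] for a*x**2 + b*x + c; a is the stiffness.
    a, _, _ = np.polyfit(angles_deg, energies_mev, 2)
    return a

angles = np.linspace(-30.0, 30.0, 61)
linear_curve = 0.5 * angles**2   # softer linkage: assumed k = 0.5 meV/deg^2
cyclic_curve = 1.5 * angles**2   # ~3x stiffer, as found for the cyclic chain

k_lin = stiffness_mev_per_deg2(angles, linear_curve)
k_cyc = stiffness_mev_per_deg2(angles, cyclic_curve)
print(round(k_cyc / k_lin, 2))  # 3.0
```

On real DFT curves the parabola is fitted only near the minimum (the paper quotes the half-width at 0.4 eV), since the potential is harmonic only for small bending angles.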
The folding-induced stiffening reported here provides a mechanism by which carbohydrate structures can be made rigid. The dependence of the local chain flexibility on the chain topology shows how global secondary structures modify local flexibility.

Using cellulose as an example, we have quantified the local flexibility of a carbohydrate polymer and identified structural factors that determine its flexibility. Modification of the carbohydrate primary structure by chemical substitution alters the mechanical flexibility at the single-linkage level. Changing the secondary structure by chain twisting and folding provides additional means to modify the flexibility of each linkage. Control of these structural variables enables tuning of polysaccharide flexibility at every linkage as a basis for designing and engineering carbohydrate materials (30). Our general approach to identifying the structural factors that affect the flexibility of specific molecular degrees of freedom in a supramolecular system should aid the design of materials and molecular machines (39) and the understanding of biomolecular dynamics.

6.
7.
Heavy monsoon rainfall ravaged a large swath of East Asia in summer 2020. Severe flooding of the Yangtze River displaced millions of residents in the midst of a historic public health crisis. This extreme rainy season was not anticipated from El Niño conditions. Using observations and model experiments, we show that the record strong Indian Ocean Dipole event in 2019 is an important contributor to the extreme Yangtze flooding of 2020. This Indian Ocean mode and a weak El Niño in the Pacific excite downwelling oceanic Rossby waves that propagate slowly westward south of the equator. At a mooring in the Southwest Indian Ocean, the thermocline deepens by a record 70 m in late 2019. The deepened thermocline helps sustain the Indian Ocean warming through the 2020 summer. The Indian Ocean warming forces an anomalous anticyclone in the lower troposphere over the Indo-Northwest Pacific region and intensifies the upper-level westerly jet over East Asia, leading to heavy summer rainfall in the Yangtze Basin. These coupled ocean-atmosphere processes beyond the equatorial Pacific provide predictability. Indeed, dynamic models initialized with observed ocean state predicted the heavy summer rainfall in the Yangtze Basin as early as April 2020.

Summer is the rainy season for East Asia. A northeastward-slanted rain band—called Mei-yu in China and Baiu in Japan—extends from the Yangtze River valley of China to the east of Japan during early summer (early June to mid-July). The Yangtze is the longest river in Asia, flowing from the eastern Tibetan Plateau and exiting into the ocean at Shanghai. Approximately one-third of the population of China lives in the river basin. The Mei-yu rain band displays marked interannual variability with great socioeconomic impacts on the densely populated region, including agricultural production, water availability, food security, and economies (1–5).

During June through July 2020, the Mei-yu rain band intensified markedly, with rainfall exceeding the 1981 to 2010 mean of ∼300 mm by ∼300 mm over the Yangtze River valley (Fig. 1A). This corresponds to an excess of up to 4 SDs (Fig. 1B). By 12 July 2020, the Yangtze floods had caused 141 deaths, flattened 28,000 homes, and affected 3.53 million hectares of crops, with a direct economic loss of 82.23 billion yuan (11.76 billion US dollars) (6). South of the Mei-yu rain band, negative rainfall anomalies (−60 mm/month) extended over a broad region from the Bay of Bengal to the tropical Northwest Pacific (Fig. 1A). This meridional dipole of rainfall anomalies is known as the recurrent Pacific–Japan pattern (7).

Fig. 1. Atmospheric dynamics of the Yangtze flooding of 2020. June through July averaged anomalies of (A) rainfall (shading, mm/month), SLP (contours at ±0.3, ±0.6, ±1.2, and ±1.8 hPa), and 850 hPa wind (vector, displayed with speed > 0.3 m/s); (C) column-integrated moisture transport (vector, displayed with magnitude > 15 kg ⋅ m−1 ⋅ s−1) and 500 hPa omega (shading, Pa/s, with negative values for ascending motions); and (D) 500 hPa horizontal temperature advection (shading, K/s) and wind (vector, displayed with speed > 0.5 m/s). Blue solid curves denote the Yangtze and Yellow Rivers.
Black dashed curves (2,000 m isoline of topography) denote the Tibetan Plateau and surrounding mountains. SLP anomalies over and north of the Tibetan Plateau are masked out for clarity. (B) June through July averaged rainfall anomalies (mm/month) over the Yangtze River Valley (26° to 33°N, 105° to 122°E) during 1979 to 2020, with one SD of 35 mm/month. Major El Niño events are marked.

In June through July 2020, an anomalous anticyclone with depressed rainfall dominates the lower troposphere over the tropical and subtropical Northwest Pacific through the South China Sea (Fig. 1A). The easterly wind anomalies on the south flank of the anomalous anticyclone extend into the North Indian Ocean, while the anomalous southwesterlies on the northwest flank transport water vapor from the south to feed the enhanced Mei-yu rain band (Fig. 1C). In the mid-troposphere, the westerlies intensify over midlatitude East Asia, and the anomalous mid-tropospheric warm advection from Tibet (Fig. 1D) adiabatically induces upward motions (Fig. 1C) that enhance Mei-yu rainfall. The resultant anomalous diabatic heating reinforces the anomalous vertical motion, forming a positive feedback (8–12). This is consistent with the empirical relationship, known to Chinese forecasters, between the 500 hPa geopotential height and the Mei-yu rain band.

On the interannual timescale, El Niño-Southern Oscillation (ENSO) has been identified as the dominant forcing of Mei-yu rainfall variability (3–5, 13). Mei-yu rainfall in the Yangtze Basin tends to increase (decrease) in post-El Niño (La Niña) summers. A Northwest Pacific anomalous anticyclone often develops rapidly during an El Niño winter (14), interacting with local sea surface temperature (SST) (15, 16) and modulated by the background annual cycle (17). The anomalous anticyclone cools the tropical Northwest Pacific on its southeastern flank by strengthening the northeast trade winds and surface evaporation.
The ocean cooling suppresses atmospheric convection, reinforcing the anomalous anticyclone through a Rossby wave response. El Niño also causes the tropical Indian Ocean to warm. The Indo-western Pacific Ocean capacitor refers to the following interbasin positive feedback in summer between the Indian Ocean warming and the Northwest Pacific anomalous anticyclone. The tropical Indian Ocean warming excites a Matsuno-Gill-type (18, 19) response in tropospheric temperature, with a Kelvin wave response that penetrates eastward and induces northeasterly surface wind anomalies in the tropical Northwest Pacific. The resultant Ekman divergence suppresses convection and induces the anomalous anticyclone (20). The anomalous anticyclone in turn feeds back to the North Indian Ocean warming by weakening the background southwest monsoon and suppressing surface evaporation (21–23).

The above ocean–atmosphere coupling processes work well for major El Niño events and provide predictability for Mei-yu rainfall over East Asia (Fig. 1B). A robust Northwest Pacific anomalous anticyclone developed during the summers of 1998 and 2016, each following a major El Niño. A strong anomalous anticyclone and excessive Mei-yu rainfall were not expected for the 2020 summer, however, since in the 2019/20 winter (November to January) the Niño3.4 index was marginal at only 0.5 °C (SI Appendix, Fig. S1), as compared to 2.4 °C and 2.6 °C in the 1997/98 and 2015/16 winters, respectively. SST anomalies in the equatorial central Pacific (Niño4) were positive and nearly constant in magnitude from May 2018 to May 2020 (SI Appendix, Fig. S1), but the Northwest Pacific anomalous anticyclone did not develop in the 2019 summer. What, then, caused the pronounced anomalous anticyclone during the 2020 summer? Was it due to unpredictable atmospheric internal dynamics as in August 2016 (24, 25), or did some predictable SST anomalies play a role?
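The quoted anomaly magnitudes are internally consistent, as a quick check shows (a sketch; the exact region and averaging window used in the paper may differ): a ∼300 mm excess spread over June through July, against the 35 mm/month SD, gives roughly 4 SDs.

```python
excess_total_mm = 300.0   # June-July rainfall excess over the 1981-2010 mean
months = 2.0              # June through July
sd_mm_per_month = 35.0    # one SD of Yangtze Valley rainfall anomalies

excess_per_month = excess_total_mm / months
n_sd = excess_per_month / sd_mm_per_month
print(f"{excess_per_month:.0f} mm/month ~= {n_sd:.1f} SD")
```

This matches the "excess of up to 4 SDs" stated in the text.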

8.
Mechanical metamaterials are artificial composites that exhibit a wide range of advanced functionalities such as negative Poisson’s ratio, shape shifting, topological protection, multistability, extreme strength-to-density ratio, and enhanced energy dissipation. In particular, flexible metamaterials often harness zero-energy deformation modes. To date, such flexible metamaterials have a single property, for example, a single shape change, or are pluripotent, that is, they can have many different responses, but typically require complex actuation protocols. Here, we introduce a class of oligomodal metamaterials that encode a few distinct properties that can be selectively controlled under uniaxial compression. To demonstrate this concept, we introduce a combinatorial design space containing various families of metamaterials. These families include monomodal (i.e., with a single zero-energy deformation mode); oligomodal (i.e., with a constant number of zero-energy deformation modes); and plurimodal (i.e., with many zero-energy deformation modes), whose number increases with system size. We then confirm the multifunctional nature of oligomodal metamaterials using both boundary textures and viscoelasticity. In particular, we realize a metamaterial that has a negative (positive) Poisson’s ratio for low (high) compression rate over a finite range of strains. The ability of our oligomodal metamaterials to host multiple mechanical responses within a single structure paves the way toward multifunctional materials and devices.

Flexible metamaterials use carefully designed arrangements of deformable building blocks to achieve unusual and tunable mechanical functionalities (1). Such mechanical responses rely on on-demand deformation pathways that cost a relatively low amount of elastic energy. A useful and widely applicable paradigm for the design of such pathways leverages the limit in which their elastic energy is zero—these pathways then become mechanisms or zero-energy modes. Flexible metamaterials based on this principle are, so far, either monomodal (Fig. 1A) or plurimodal (Fig. 1C). On one hand, monomodal metamaterials feature a single zero-energy mode and a single functionality (2–8), which is typically robust and easy to control with a simple actuation protocol, that is, a protocol that requires a single actuator, for example, uniaxial compression. On the other hand, plurimodal metamaterials feature a large number of zero-energy modes, which increases with system size (9, 10). The presence of these multiple zero modes offers multiple possible functionalities in principle, but they are hard to control in practice; that is, they require complex actuation protocols—protocols that require more than one actuator—for successful execution (9). The challenge we address here is whether it is possible to find a middle ground between monomodal and plurimodal metamaterials. In other words, can we design and create metamaterials that have more than one zero-energy mode, but not a number that grows with system size? For convenience and clarity, we term such metamaterials oligomodal (Fig. 1B). Could oligomodal metamaterials be actuated in a robust fashion with a simple actuation protocol (Fig. 1B)? Could oligomodal metamaterials host distinct mechanical properties within a single structure?

Fig. 1. Oligomodal materials. (A) Monomodal materials have a single zero-energy mode, hence a single property, which can be obtained via a simple actuation protocol.
(B) Oligomodal materials have a small but fixed number of zero-energy modes larger than one, hence a few distinct properties, that can be selected with a simple actuation protocol, for example, uniaxial compression. (C) Plurimodal materials have a large number of zero-energy modes that grows with system size, and hence are kinematically able to host a large number of properties, but they often require complex actuation protocols, for example, multiaxial loading.
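The distinction between the three families is ultimately a scaling statement about how the zero-energy mode count depends on system size, which can be summarized in a toy counting function. The specific counts below (one, two, and one per cell) are illustrative assumptions, not values from the paper:

```python
def zero_mode_count(family: str, n_cells: int) -> int:
    """Toy zero-energy mode counts for the three metamaterial classes.
    Monomodal: one mode; oligomodal: a few, independent of size;
    plurimodal: grows with system size (here, one per unit cell)."""
    if family == "monomodal":
        return 1
    if family == "oligomodal":
        return 2          # "a few", fixed regardless of n_cells
    if family == "plurimodal":
        return n_cells    # extensive: scales with system size
    raise ValueError(f"unknown family: {family}")

for n in (4, 16, 64):
    counts = [zero_mode_count(f, n) for f in ("monomodal", "oligomodal", "plurimodal")]
    print(f"n_cells = {n:2d} -> modes (mono, oligo, pluri) = {counts}")
```

Only the plurimodal column changes as the system grows, which is why its modes become hard to address with a single actuator.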

9.
Dendritic, i.e., tree-like, river networks are ubiquitous features on Earth’s landscapes; however, how and why river networks organize themselves into this form are incompletely understood. A branching pattern has been argued to be an optimal state. Therefore, we should expect models of river evolution to drastically reorganize (suboptimal) purely nondendritic networks into (more optimal) dendritic networks. To date, physically based models of river basin evolution have been incapable of achieving this result without substantial allogenic forcing. Here, we present a model that does indeed accomplish massive drainage reorganization. The key feature in our model is basin-wide lateral incision of bedrock channels. The addition of this submodel allows channels to migrate laterally, which generates river capture events and drainage migration. An important factor in the model that dictates the rate and frequency of drainage network reorganization is the ratio of two parameters, the lateral and vertical rock erodibility constants. In addition, our model is unique in that its simulations approach a dynamic steady state. At a dynamic steady state, drainage networks persistently reorganize instead of approaching a stable configuration. Our model results suggest that lateral bedrock incision processes can drive major drainage reorganization and explain apparent long-lived transience in landscapes on Earth.

What should a drainage network look like? Fig. 1A shows a single channel, winding its way through the catchment so as to have access to water and sediment from unchannelized zones in the same manner as the dendritic (tree-like) network of Fig. 1B. It appears straightforward that the dendritic pattern is a model for nature, and the single channel is not. Dendritic drainage networks are called such because of their similarity to branching trees, and their patterns are “characterized by irregular branching in all directions” (1) with “tributaries joining at acute angles” (2). Drainage networks can also take on other forms in nature, such as parallel, pinnate, rectangular, and trellis (2). However, drainage networks in their most basic form, without topographic, lithologic, and tectonic constraints, should tend toward a dendritic form (2). In addition, drainage networks that take a branching, tree-like form have been argued to be “optimal channel networks” that minimize total energy dissipation (3, 4). Therefore, we would expect models simulating river network formation, known as landscape evolution models (LEMs), that use the nondendritic pattern of Fig. 1A as an initial condition to reorganize massively and approach the dendritic steady state of Fig. 1B. To date, no numerical LEM has shown the ability to do this. Here, we present a LEM that can indeed accomplish such a reorganization. A corollary of this ability is that landscapes approach a dynamic, rather than static, steady state.

Fig. 1. Schematic diagram of a nondendritic and a dendritic drainage network. This figure shows the Wolman Run Basin in Baltimore County, MD, (A) drained by a single channel winding across the topography and (B) drained by a dendritic network of channels. Both networks have similar drainage densities (53, 54), but there is a stark difference between their stream ordering (53–56).
This figure invites discussion as to how a drainage system might evolve from the configuration of A to that of B.

There is indeed debate as to whether landscapes tend toward an equilibrium that is frozen or highly dynamic (5). Hack (6) hypothesized that erosional landscapes attain a steady state where “all elements of the topography are downwasting at the same rate.” This hypothesis has been tested in numerical models and small-scale experiments. Researchers found that numerical LEMs create static topographies (7, 8). In this state, erosion and uplift are in balance at all locations in the landscape, resulting in landscapes that are dissected by stable drainage networks in geometric equilibrium (9). The landscape has achieved geometric equilibrium in planform when a proxy for steady-state river elevation, named χ (10), has equal values across all drainage divides. In contrast, experimental landscapes (7, 11) develop drainage networks that persistently reorganize. Recent research on field landscapes suggests that drainage divides migrate until reaching geometric equilibrium (9), but other field-based research suggests that landscapes may never attain geometric equilibrium (12).

The dynamism of the equilibrium state determines the persistence of initial conditions in experimental and model landscapes. It is important to understand initial condition effects (13) to better constrain uncertainty in LEM predictions. Kwang and Parker (7) demonstrate that numerical LEMs exhibit “extreme memory,” where small topographic perturbations in initial conditions are amplified and preserved during a landscape’s evolution (Fig. 2A). Extreme memory in the numerical models is closely related to the feasible optimality phenomenon found within the research on optimal channel networks (4). These researchers suggest that nature’s search for the most “stable” river network configuration is “myopic” and unable to find configurations that completely ignore their initial condition.
In contrast to numerical models, experimental landscapes (7, 11) reach a highly dynamic state where all traces of initial surface conditions are erased by drainage network reorganization. It has been hypothesized that lateral erosion processes are responsible for drainage network reorganization in landscapes (7, 14); these processes are not included in most LEMs.

Fig. 2. A comparison of LEM-woLE (A) and LEM-wLE (B). Both models utilize the same initial condition, i.e., an initially flat topography with an embedded sinusoidal channel (1.27 m deep) without added topographic perturbations. Without perturbations, the landscape produces angular tributaries that are attached to the main sinusoidal channel (compare with SI Appendix, Fig. S7). Here, LEM-wLE quickly shreds the signal of the initial condition over time, removing the angular tributaries. By 10 RUs of erosion, the sinusoidal signal is mostly erased. After 100 RUs, the drainage network continues to reorganize itself (i.e., dynamic steady state). The landscape continues to reorganize, as shown in Movie S1.

Most widely used LEMs simulate incision into bedrock solely in the vertical direction. However, there is growing recognition that bedrock channels also shape the landscape by incising laterally (15, 16). Lateral migration into bedrock is important for the creation of strath terraces (17, 18) and the morphology of wide bedrock valleys (19–21). Recently, Langston and Tucker (22) developed a formulation for lateral bedrock erosion in LEMs. Here, we implement their submodel to explore the long-term behavior of LEMs that incorporate lateral erosion.

The LEM submodel of Langston and Tucker (22) allows channels to migrate laterally. By including this autogenic mechanism, we hypothesize that lateral bedrock erosion creates instabilities that 1) shred (23) the memory of initial conditions such as the unrealistic configurations of Fig.
1A and 2) produce landscapes that achieve a statistical steady state instead of a static one. By incorporating the lateral incision component (22) into a LEM, we aim to answer the following: 1) What controls the rate of decay of signals from initial conditions? 2) What are the frequency and magnitude of drainage reorganization in an equilibrium landscape? 3) What roles do model boundary conditions play in landscape reorganization?
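For readers unfamiliar with LEMs, vertical incision is conventionally modeled with the detachment-limited stream-power law E = K A^m S^n. The sketch below scales a lateral incision rate by the erodibility ratio K_lat/K_vert, the parameter ratio the text identifies as controlling reorganization. This is a simplification for illustration, not the Langston and Tucker (22) submodel itself, and all parameter values are hypothetical:

```python
def vertical_incision(K_vert, area, slope, m=0.5, n=1.0):
    """Detachment-limited stream-power incision rate, E_v = K_v * A^m * S^n."""
    return K_vert * area**m * slope**n

def lateral_incision(K_lat, K_vert, area, slope, m=0.5, n=1.0):
    """Lateral incision assumed proportional to the vertical stream power,
    scaled by the erodibility ratio K_lat/K_vert (an assumption here)."""
    return (K_lat / K_vert) * vertical_incision(K_vert, area, slope, m, n)

A = 1.0e6    # upstream drainage area, m^2 (hypothetical)
S = 0.05     # local channel slope (hypothetical)
K_v = 1.0e-5  # vertical erodibility constant (hypothetical units)

for ratio in (0.1, 1.0, 10.0):   # K_lat/K_vert controls reorganization rate
    E_lat = lateral_incision(ratio * K_v, K_v, A, S)
    print(f"K_lat/K_vert = {ratio:4.1f} -> lateral incision rate {E_lat:.2e}")
```

Under this scaling, a larger K_lat/K_vert shifts erosion from channel deepening toward bank migration, which is the mechanism the model invokes for river capture and drainage migration.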

10.
Despite their desirable attributes, boronic acids have had a minimal impact in biological contexts. A significant problem has been their oxidative instability. At physiological pH, phenylboronic acid and its boronate esters are oxidized by reactive oxygen species at rates comparable to those of thiols. After considering the mechanism and kinetics of the oxidation reaction, we reasoned that diminishing electron density on boron could enhance oxidative stability. We found that a boralactone, in which a carboxyl group serves as an intramolecular ligand for the boron, increases stability 10^4-fold. Computational analyses revealed that the resistance to oxidation arises from diminished stabilization of the p orbital of boron that develops in the rate-limiting transition state of the oxidation reaction. Like simple boronic acids and boronate esters, a boralactone binds covalently and reversibly to 1,2-diols such as those in saccharides. The kinetic stability of its complexes is, however, at least 20-fold greater. A boralactone also binds covalently to a serine side chain in a protein. These attributes confer unprecedented utility upon boralactones in the realms of chemical biology and medicinal chemistry.

The modern pharmacopeia is composed of only a handful of elements. Built on hydrocarbon scaffolds (1), nearly all drugs contain nitrogen and oxygen, and many contain fluorine and sulfur (2). A surprising omission from this list is the fifth element in the periodic table, boron (3, 4). Since bortezomib received regulatory approval in 2003, only four additional boron-containing drugs have demonstrated clinical utility (Fig. 1A). Each is a boronic acid or ester.

Fig. 1. (A) Food and Drug Administration–approved pharmaceuticals containing a boronic acid. (B) Putative mechanism for the oxidative deboronation of a boronic acid by hydrogen peroxide (30).

Bortezomib is a boronic acid, and ixazomib citrate hydrolyzes to one in aqueous solution (5). Other boron-containing drugs feature cyclic esters. The cyclic ester formed spontaneously from 2-hydroxymethylphenylboronic acid (2-HMPBA) is known as “benzoxaborole” and has received much attention due to its enhanced affinity for saccharides at physiological pH (6–8). This scaffold is present in the antifungal drug tavaborole and the antidermatitis drug crisaborole (9). Vaborbactam, which contains an analogous six-membered ring, is an efficacious β-lactamase inhibitor (10). Neuropathy has been associated with the use of bortezomib but not other boronic acids, which have minimal toxicity (11).

The boron atom in a boronic acid (or ester) is isoelectronic with the carbon atom of a carbocation. Both are sp2 hybridized, have an empty p orbital, and adopt a trigonal planar geometry. In contrast to a carbocation, however, the weak Lewis acidity of a boronic acid allows for the reversible formation of covalent bonds. This attribute has enabled boronic acids to achieve extraordinary utility in synthetic organic chemistry and molecular recognition (12–22). Boronic acids are, however, susceptible to oxidative damage.
That deficiency is readily controllable in a chemistry laboratory but not in a physiological environment.

In a boronic acid, the empty p orbital of boron is prone to attack by nucleophilic species such as the oxygen atom of a reactive oxygen species (ROS). The subsequent migration of carbon from boron to that oxygen leads to a labile boric ester, which undergoes rapid hydrolysis (Fig. 1B). This oxidative deboronation converts the boronic acid into an alcohol and boric acid (23, 24).

We sought a means to increase the utility of boron in biological contexts by deterring the oxidation of boronic acids. The rate-limiting step in the oxidation of a boronic acid is likely to be the migration of carbon from boron to oxygen: a 1,2-shift (Fig. 1B). In that step, the boron becomes more electron deficient. We reasoned that depriving the boron of electron density might slow the 1,2-shift. A subtle means to do so would be to replace the alkoxide of a boronate ester with a carboxylate group. We find that the ensuing mixed anhydrides between a boronic acid and a carboxylic acid are remarkable in their chemical attributes and biological utility.
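The practical payoff of slowing the rate-limiting 1,2-shift can be sketched with pseudo-first-order kinetics: if oxidation at a steady-state ROS concentration follows t1/2 = ln 2/(k[ROS]), a 10^4-fold smaller rate constant yields a 10^4-fold longer half-life. The rate constant and ROS concentration below are assumptions for illustration only; the abstract states the fold change, not absolute values:

```python
import math

def half_life_s(k2_M_s: float, ros_M: float) -> float:
    """Pseudo-first-order half-life, t1/2 = ln 2 / (k2 * [ROS])."""
    return math.log(2) / (k2_M_s * ros_M)

# Hypothetical values for illustration only.
k_boronic = 1.0                      # second-order rate constant, M^-1 s^-1 (assumed)
k_boralactone = k_boronic / 1.0e4    # 10^4-fold more oxidation-resistant
ros = 1.0e-7                         # assumed steady-state ROS concentration, M

t_acid = half_life_s(k_boronic, ros)
t_lactone = half_life_s(k_boralactone, ros)
print(f"half-life gain: {t_lactone / t_acid:.0f}x")
```

Whatever the absolute rates in vivo, the fold improvement in half-life tracks the fold reduction in the oxidation rate constant.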

11.
The puzzling sex ratio behavior of Melittobia wasps has long posed one of the greatest questions in the field of sex allocation. Laboratory experiments have found that, in contrast to the predictions of theory and the behavior of numerous other organisms, Melittobia females do not produce less female-biased offspring sex ratios when more females lay eggs on a patch. We solve this puzzle by showing that, in nature, females of Melittobia australica have a sophisticated sex ratio behavior, in which their strategy also depends on whether they have dispersed from the patch where they emerged. When females have not dispersed, they lay eggs with close relatives, which keeps local mate competition high even with multiple females, and therefore, they are selected to produce consistently female-biased sex ratios. Laboratory experiments mimic these conditions. In contrast, when females disperse, they interact with nonrelatives and thus adjust their sex ratio depending on the number of females laying eggs. Consequently, females appear to use dispersal status as an indirect cue of relatedness and of whether they should adjust their sex ratio in response to the number of females laying eggs on the patch.

Sex allocation has produced many of the greatest success stories in the study of social behaviors (1–4). Time and time again, relatively simple theory has explained variation in how individuals allocate resources to male and female reproduction. Hamilton’s local mate competition (LMC) theory predicts that when n diploid females lay eggs on a patch and the offspring mate before the females disperse, the evolutionarily stable proportion of male offspring (sex ratio) is (n − 1)/2n (Fig. 1) (5). A female-biased sex ratio is favored to reduce competition between sons (brothers) for mates and to provide more mates (daughters) for those sons (6–8). Consistent with this prediction, females of >40 species produce female-biased sex ratios and reduce this female bias when multiple females lay eggs on the same patch (higher n; Fig. 1) (9). The fit of data to theory is so good that the sex ratio under LMC has been exploited as a “model trait” to study the factors that can constrain “perfect adaptation” (4, 10–13).

Fig. 1. LMC. The sex ratio (proportion of sons) is plotted versus the number of females laying eggs on a patch. The bright green dashed line shows the LMC theory prediction for haplodiploid species (5, 39). A more female-biased sex ratio is favored in haplodiploids because inbreeding increases the relative relatedness of mothers to their daughters (7, 32). Females of many species adjust their offspring sex ratio as predicted by theory, such as the parasitoid Nasonia vitripennis (green diamonds) (82). In contrast, the females of several Melittobia species, such as M. australica, continue to produce extremely female-biased sex ratios, irrespective of the number of females laying eggs on a patch (blue squares) (15).

In stark contrast, the sex ratio behavior of Melittobia wasps has long been seen as one of the greatest problems for the field of sex allocation (3, 4, 14–21).
The life cycle of Melittobia wasps matches the assumptions of Hamilton’s LMC theory (5, 15, 19, 21). Females lay eggs in the larvae or pupae of solitary wasps and bees, and then, after emergence, female offspring mate with the short-winged males, who do not disperse. However, laboratory experiments on four Melittobia species have found that females produce extremely female-biased sex ratios (1 to 5% males) and that these sex ratios change little with an increasing number of females laying eggs on a patch (higher n; Fig. 1) (15, 17–20, 22). A number of hypotheses to explain this lack of sex ratio adjustment have been investigated and rejected, including sex ratio distorters, sex-differential mortality, asymmetrical male competition, and reciprocal cooperation (15–18, 20, 22–26).

We tested whether Melittobia’s unusual sex ratio behavior can be explained by females being related to the other females laying eggs on the same patch. After mating, some females disperse to find new patches, while others may stay at the natal patch to lay eggs on previously unexploited hosts (Fig. 2). If females do not disperse, they can be related to the other females laying eggs on the same host (27–31). If females laying eggs on a host are related, this increases the extent to which relatives are competing for mates and so can favor an even more female-biased sex ratio (28, 32–35). Although most parasitoid species appear unable to directly assess relatedness, dispersal behavior could provide an indirect cue of whether females are with close relatives (36–38). Consequently, we predict that when females do not disperse, and so are more likely to be with close relatives, they should maintain extremely female-biased sex ratios, even when multiple females lay eggs on a patch (28, 35).

Fig. 2. Host nest and dispersal modes of Melittobia. (A) Photograph of the prepupae of the leaf-cutter bee C.
sculpturalis nested in a bamboo cane and (B) a diagram showing two ways that Melittobia females find new hosts. The mothers of C. sculpturalis build nursing nests with pine resin consisting of individual cells in which their offspring develop. If Melittobia wasps parasitize a host in a cell, female offspring that mate with males inside the cell find a different host on the same patch (bamboo cane) or disperse by flying to other patches.

We tested whether the sex ratio of Melittobia australica can be explained by dispersal status in a natural population. We examined how the sex ratio produced by females varies with the number of females laying eggs on a patch and whether or not they have dispersed before laying eggs. To match our data to the predictions of theory, we developed a mathematical model tailored to the unique population structure of Melittobia, where dispersal can be a cue of relatedness. We then conducted a laboratory experiment to test whether Melittobia females are able to directly assess their relatedness to other females and adjust their sex ratio behavior accordingly. Our results suggest that females adjust their sex ratio in response to both the number of females laying eggs on a patch and their relatedness to the other females. However, relatedness is assessed indirectly by whether or not they have dispersed. Consequently, the solution to the puzzling behavior reflects a more refined sex ratio strategy.
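Hamilton's LMC prediction quoted in the introduction, a proportion of sons of (n − 1)/2n for n diploid foundresses, is simple to tabulate. The sketch below shows how the predicted female bias weakens as n grows, the adjustment that Nasonia follows but Melittobia, at its reported 1 to 5% males, does not:

```python
def lmc_sex_ratio(n: int) -> float:
    """Hamilton's LMC prediction for diploids:
    evolutionarily stable proportion of sons = (n - 1) / (2n)."""
    return (n - 1) / (2 * n)

for n in (1, 2, 4, 8, 16):
    print(f"n = {n:2d} foundresses -> predicted sex ratio {lmc_sex_ratio(n):.3f}")
```

The prediction rises from 0 (a lone foundress needs only enough sons to mate her daughters) toward the Fisherian 1/2 as local mate competition vanishes; Melittobia's flat 1 to 5% males across n is what makes its behavior puzzling.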

12.
The expansion of anatomically modern humans (AMHs) from Africa around 65,000 to 45,000 y ago (ca. 65 to 45 ka) led to the establishment of present-day non-African populations. Some paleoanthropologists have argued that fossil discoveries from Huanglong, Zhiren, Luna, and Fuyan caves in southern China indicate one or more prior dispersals, perhaps as early as ca. 120 ka. We investigated the age of the human remains from three of these localities and two additional early AMH sites (Yangjiapo and Sanyou caves, Hubei) by combining ancient DNA (aDNA) analysis with a multimethod geological dating strategy. Although U–Th dating of capping flowstones suggested they lie within the range ca. 168 to 70 ka, analyses of aDNA and direct AMS 14C dating on human teeth from Fuyan and Yangjiapo caves showed they derive from the Holocene. OSL dating of sediments and AMS 14C analysis of mammal teeth and charcoal also demonstrated major discrepancies from the flowstone ages; the difference between them being an order of magnitude or more at most of these localities. Our work highlights the surprisingly complex depositional history recorded at these subtropical caves which involved one or more episodes of erosion and redeposition or intrusion as recently as the late Holocene. In light of our findings, the first appearance datum for AMHs in southern China should probably lie within the timeframe set by molecular data of ca. 50 to 45 ka.

The fossil record suggests that Homo sapiens had evolved in Africa by 315,000 y ago (315 ka) (1) and spread into West Asia before 177 ka (2) but disappeared and were seemingly replaced by Homo neanderthalensis until ca. 75 to 55 ka (3, 4). A second and final excursion from Africa by so-called anatomically modern humans (AMHs) occurred soon after and broadly coincides with the extinction of the last archaic hominins, ca. 40 to 30 ka (5, 6). This dispersal involved the ancestors of all present-day non-Africans and, according to molecular data, occurred ca. 65 to 45 ka (7, 8). Additional support for this “late dispersal” theory is provided by the geographical structure of contemporary DNA lineages, with all non-Africans closely related to present-day and ancient eastern African populations (9, 10), as well as a clinal pattern of decreasing diversity from Africa to Eurasia, the signature of a serial founder effect (10–12). Corroboration has also been provided by the estimated split time between western and eastern Eurasians of ca. 47 to 42 ka, as determined by ancient DNA (aDNA) from the 46,880 to 43,210 cal y B.P. (calendar years before present, i.e., before AD 1950) Ust’-Ishim femur (western Siberia, Russian Federation) and the 42,000 to 39,000 cal B.P. Tianyuan skeleton (Northeast China) (13–15). Finally, the upper age boundary for this dispersal is set by interbreeding between early AMHs and Neanderthals, estimated to have occurred ca. 65 to 47 ka, and between the ancestors of New Guineans and Denisovans ca. 46 ka and again ca. 30 ka (13, 16–19).

In contrast, some paleoanthropologists have suggested that AMHs settled mainland East Asia much earlier, within the period of ca. 120 to 70 ka, in accordance with the “early dispersal” theory. This model is based largely upon the dating of isolated human teeth recovered at Huanglong, Luna, and Fuyan caves and a partial mandible from Zhirendong in southern China (20–24).
Yet several researchers have raised questions about these and other sites on the basis of uncertainties surrounding the identification of some of them as AMHs, relationships between human remains and dated materials, or limited information available about their depositional context and dating (25–27).

Here, we describe the results of an investigation of the arrival time of AMHs in southern China at five apparent early AMH cave localities involving aDNA analyses of human teeth and the dating of flowstones, sediments, fossil remains, and charcoal. The five localities we studied are the following:
  • 1) Huanglong Cave, located about 25 km from the town of Yunxi, northern Hubei Province (Fig. 1). Excavations by the Hubei Provincial Institute of Cultural Relics and Archaeology during three field seasons from 2004 to 2006 provided a rich mammal record, comprising 91 taxa and representing a Middle to Late Pleistocene Ailuropoda-Stegodon fauna, stone artifacts, and seven AMH teeth dated indirectly, via U–Th dating of thin flowstone formations, to ca. 101 to 81 ka (20).

Fig. 1. (A) Geographical location of Huanglong Cave (1), Luna Cave (2), Fuyan Cave (3), Yangjiapo Cave (4), and Sanyou Cave (5). (B) Human remains from three localities: Yangjiapo Cave (i), Sanyou Cave (ii), and Fuyan Cave (iii) (b = buccal, d = distal, l = lingual, m = mesial, and o = occlusal).
  • 2) Luna Cave, situated in the karst mountains of the southeastern part of the Bubing basin, Guangxi Zhuang Autonomous Region (Fig. 1). A small sample of mammal fossils (Ailuropoda-Stegodon assemblage), stone artifacts, and two AMH teeth were recovered during excavations by the Natural History Museum of Guangxi Autonomous Region in 2004 and 2008. They have since been dated indirectly through U–Th dating of flowstone in the range ca. 127 to 70 ka (21).
  • 3) Fuyan Cave, located in Daoxian County, Hunan Province (Fig. 1). Excavations from 2011 to 2013 resulted in a large sample of mammal fossils (Ailuropoda-Stegodon faunal group) and 47 AMH teeth but no associated artifacts (22). They have been dated indirectly using U–Th dating of flowstone within the range ca. 120 to 80 ka (22). Two additional (in situ) AMH teeth, stratigraphically associated with the original finds, were recovered by us during field investigations at the site during early 2019.
  • 4) Yangjiapo Cave is a large karstic chamber located in Jianshi County (Fig. 1). It was excavated during 2004 by the Hubei Provincial Institute of Cultural Relics and Archaeology and yielded 11 AMH teeth found in association with the fragmentary bones of 80 species belonging to an Ailuropoda-Stegodon fauna, implying it should be of similar age to Huanglong, Luna, and Fuyan caves. No stone artifacts or other cultural remains were found.
  • 5) Sanyou Cave is a small chamber within a limestone hill at the confluence of the Yangtze River and Xiling Gorge, close to Yichang city, Hubei Province (Fig. 1). A small excavation was undertaken in 1986 by the Yichang Museum and led to the recovery of a possible Late Pleistocene age partial AMH cranial vault (Fig. 1).
Proteins require high developability—quantified by expression, solubility, and stability—for robust utility as therapeutics, diagnostics, and in other biotechnological applications. Measuring traditional developability metrics is low throughput in nature, often slowing the developmental pipeline. We evaluated the ability of 10 variations of three high-throughput developability assays to predict the bacterial recombinant expression of paratope variants of the protein scaffold Gp2. Enabled by a phenotype/genotype linkage, assay performance for 10⁵ variants was calculated via deep sequencing of populations sorted by proxied developability. We identified the most informative assay combination via cross-validation accuracy and correlation feature selection and demonstrated the ability of machine learning models to exploit nonlinear mutual information to increase the assays’ predictive utility. We trained a random forest model that predicts expression from assay performance that is 35% closer to the experimental variance and trains 80% more efficiently than a model predicting from sequence information alone. Utilizing the predicted expression, we performed a site-wise analysis and predicted mutations consistent with enhanced developability. The validated assays offer the ability to identify developable proteins at unprecedented scales, reducing the bottleneck of protein commercialization.

A common constraint across diagnostic, therapeutic, and industrial proteins is the ability to manufacture, store, and use intact and active molecules. These protein properties, collectively termed developability, are often associated with quantitative metrics such as recombinant yield, stability (chemical, thermal, and proteolytic), and solubility (1–5). Despite this universal importance, developability studies are performed late in the commercialization pipeline (2, 4) and limited by traditional experimental capacity (6). This is problematic because 1) proteins with poor developability limit practical assay capacity for measuring primary function, 2) optimal developability is often not observed with proteins originally found in alternative formats [such as display or two-hybrid technologies (7)], and 3) engineering efforts are limited by the large gap between observation size (∼10²) and theoretical mutational diversity (∼10²⁰). Thus, efficient methods to measure developability would alleviate a significant bottleneck in the lead selection process and accelerate protein discovery and engineering.

Prior advances to determine developability have focused on calculating hypothesized proxy metrics from existing sequence and structural data or developing material- and time-efficient experiments. Computational sequence-developability models based on experimental antibody data have predicted posttranslational modifications (8, 9), solubility (10, 11), viscosity (12), and overall developability (13). Structural approaches have informed stability (14) and solubility (10, 15). However, many in silico models require an experimentally solved structure or suffer from computational structure prediction inaccuracies (16). Additionally, limited developability information allows for limited predictive model accuracy (17).
In vitro methods have identified several experimental protocols to mimic practical developability requirements [e.g., affinity-capture self-interaction nanoparticle spectroscopy (18) and chemical precipitation (19) as metrics for solubility]. However, traditional developability quantification requires significant amounts of purified protein. On both fronts, numerous in silico and/or in vitro metrics are needed to fully quantify developability (1, 5).

We sought a protein variant library that would benefit from isolation of proteins with increased developability and demonstrate the broad applicability of the process. Antibodies and other binding scaffolds, comprising a conserved framework and diversified paratope residues, are effective molecular targeting agents (20–24). While significant progress has been achieved with regard to identifying paratopes for optimal binding strength and specificity (25, 26), isolating highly developable variants has remained a challenge. One particular protein scaffold, Gp2, has been evolved into specific binding variants toward multiple targets (27–29). Continued study improved charge distribution (30), hydrophobicity (31), and stability (28). While these studies have suggested improvements for future framework and paratope residues (including a disulfide-stabilized loop), a poor developability distribution is still observed (32) (Fig. 1 A and B). Assuming the randomized paratope library will lack similar primary functionality, the Gp2 library will simulate the universal applicability of the proposed high-throughput (HT) developability assays.

Fig. 1. HT assays were evaluated for the ability to identify protein scaffold variants with increased developability. (A and B) Gp2 variant expression, commonly measured via low-throughput techniques such as the dot blot shown, highlights the rarity of ideal developability. (C and D) The HT on-yeast protease assay measures the stability of the POI by proteolytic extent.
(E and F) The HT split-GFP assay measures POI expression via recombination of a genetically fused GFP fragment. (G and H) The HT split β-lactamase assay measures the POI stability by observing the change in cell-growth rates when grown at various antibiotic concentrations. (I and J) Assay scores, assigned to each unique sequence via deep sequencing, were evaluated by predicting expression (Fig. 3). (K and L) HT assay capacity enables large-scale developability evaluation and can be used to identify beneficial mutations (Fig. 4).

We sought HT assays that allow protein developability differentiation via cellular properties to improve throughput. Variations of three primary assays were examined: 1) on-yeast stability (Fig. 1 C and D)—previously validated to improve the stability of de novo proteins (33), antimicrobial lysins (34), and immune proteins (35)—measures proteolytic cleavage of the protein of interest (POI) on the yeast cell surface via fluorescence-activated cell sorting (FACS). We extend the assay by performing the proteolysis at various denaturing combinations to determine if different stability attributes (thermal, chemical, and protease specificity) can be resolved; 2) split green fluorescent protein (GFP, Fig. 1 E and F)—previously used to determine soluble protein concentrations (36)—measures the assembled GFP fluorescence emerging from a 16–amino acid fragment (GFP11) fused to the POI after recombining with the separately expressed GFP1-10. We extend the assay by utilizing FACS to separate cells with differential POI expression to increase throughput over the plate-based assay; and 3) split β-lactamase (Fig. 1 G and H)—previously used to improve thermodynamic stability (37) and solubility (38)—measures cell growth inhibition via ampicillin to determine functional lactamase activity achieved from reconstitution of two enzyme fragments flanking the POI.
We expand assay capacity by deep sequencing populations grown at various antibiotic concentrations to relate change in cell frequency to functional enzyme concentration.

In this paper, we determined the HT assays’ abilities to predict Gp2 variant developability. We deep sequenced the stratified populations and calculated assay scores (correlating to hypothesized developability) for ∼10⁵ Gp2 variants (Fig. 1I). We then converted the assay scores into a traditional developability metric by building a model that predicts recombinant yield (Fig. 1J). The assays’ capacity enabled yield evaluations at >100-fold traditional assay capacity (Fig. 1K, compared to Fig. 1B) and provided an introductory analysis of factors driving protein developability by observing beneficial mutations via predicted developable proteins (Fig. 1L).
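The modeling step described above, predicting a traditional developability metric (recombinant expression) from HT assay scores with a random forest, can be sketched as follows. This is a minimal illustration on synthetic data: the variant count, score distributions, and response surface are invented stand-ins, not the study's measurements.

```python
# Hedged sketch: a random-forest regressor mapping high-throughput assay
# scores to expression, in the spirit of the Gp2 developability workflow.
# All data here are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_variants = 1000

# Three columns stand in for the three assay families: on-yeast protease
# stability, split-GFP, and split beta-lactamase scores.
assay_scores = rng.normal(size=(n_variants, 3))

# Synthetic "expression" with a nonlinear interaction between scores,
# mimicking the nonlinear mutual information the models are said to exploit.
expression = (assay_scores[:, 0] * assay_scores[:, 1]
              + np.tanh(assay_scores[:, 2])
              + 0.1 * rng.normal(size=n_variants))

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv_r2 = cross_val_score(model, assay_scores, expression, cv=5, scoring="r2")
model.fit(assay_scores, expression)
predicted = model.predict(assay_scores[:5])
print(f"mean CV R^2: {cv_r2.mean():.2f}")
```

A trained model of this shape is what enables the large-scale analysis: predicted expression can be computed for vastly more variants than low-throughput techniques such as dot blots allow.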

16.
A degraded, black-and-white image of an object, which appears meaningless on first presentation, is easily identified after a single exposure to the original, intact image. This striking example of perceptual learning reflects a rapid (one-trial) change in performance, but the kind of learning that is involved is not known. We asked whether this learning depends on conscious (hippocampus-dependent) memory for the images that have been presented or on an unconscious (hippocampus-independent) change in the perception of images, independently of the ability to remember them. We tested five memory-impaired patients with hippocampal lesions or larger medial temporal lobe (MTL) lesions. In comparison to volunteers, the patients were fully intact at perceptual learning, and their improvement persisted without decrement from 1 d to more than 5 mo. Yet, the patients were impaired at remembering the test format and, even after 1 d, were impaired at remembering the images themselves. To compare perceptual learning and remembering directly, at 7 d after seeing degraded images and their solutions, patients and volunteers took either a naming test or a recognition memory test with these images. The patients improved as much as the volunteers at identifying the degraded images but were severely impaired at remembering them. Notably, the patient with the most severe memory impairment and the largest MTL lesions performed worse than the other patients on the memory tests but was the best at perceptual learning. The findings show that one-trial, long-lasting perceptual learning relies on hippocampus-independent (nondeclarative) memory, independent of any requirement to consciously remember.

A striking visual effect can be demonstrated by using a grayscale image of an object that has been degraded to a low-resolution, black-and-white image (1, 2). Such an image is difficult to identify (Fig. 1) but can be readily recognized after a single exposure to the original, intact image (Fig. 2) (3–6). Neuroimaging studies have found regions of the neocortex, including high-level visual areas and the medial parietal cortex, that exhibited a different pattern of activity when a degraded image was successfully identified (after seeing the intact image) than when the same degraded image was first presented and not identified (4, 5, 7). This phenomenon reflects a rapid change in performance based on experience, in this case one-trial learning, but the kind of learning that is involved is unclear.

Fig. 1. A sample degraded image. Most people cannot identify what is depicted. See Fig. 2.

Fig. 2. An intact version of the image in Fig. 1. When the intact version is presented just once directly after presentation of the degraded version, the ability to later identify the degraded image is greatly improved, even after many months. Reprinted from ref. 42, which is licensed under CC BY 4.0.

One possibility is that successful identification of degraded images reflects conscious memory of having recently seen degraded images followed by their intact counterparts. When individuals see degraded images after seeing their “solutions,” they may remember what is represented in the images, at least for a time. In one study, performance declined sharply from 15 min to 1 d after the solutions were presented and then declined more gradually to a lower level after 21 d (3). Alternatively, the phenomenon might reflect a more automatic change in perception not under conscious control (8).
Once the intact image is presented, the object in the degraded image may be perceived directly, independently of whether it is remembered as having been presented. By this account, successful identification of degraded images is reminiscent of the phenomenon of priming, whereby perceptual identification of words and objects is facilitated by single encounters with the same or related stimuli (9–11). Some forms of priming persist for quite a long time (weeks or months) (12–14).

These two possibilities describe the distinction between declarative and nondeclarative memory (15, 16). Declarative memory affords the capacity for recollection of facts and events and depends on the integrity of the hippocampus and related medial temporal lobe structures (17, 18). Nondeclarative memory refers to a collection of unconscious memory abilities including skills, habits, and priming, which are expressed through performance rather than recollection and are supported by other brain systems (19–21). Does one-trial learning of degraded images reflect declarative or nondeclarative memory? How long does it last? In an early report that implies the operation of nondeclarative memory, two patients with traumatic amnesia improved the time needed to identify hidden images from 1 d to the next, but could not recognize which images they had seen (22). Yet, another amnesic patient reportedly failed such a task (23). The matter has not been studied in patients with medial temporal lobe (MTL) damage.

To determine whether declarative (hippocampus-dependent) or nondeclarative (hippocampus-independent) memory supports the one-trial learning of degraded images, we tested five patients with bilateral hippocampal lesions or larger MTL lesions who have severely impaired declarative memory. The patients were fully intact at perceptual learning, and performance persisted undiminished from 1 d to more than 5 mo.
At the same time, the patients were severely impaired at remembering both the structure of the test and the images themselves.

17.
Metabolic engineering uses enzymes as parts to build biosystems for specified tasks. Although a part’s working life and failure modes are key engineering performance indicators, this is not yet so in metabolic engineering because it is not known how long enzymes remain functional in vivo or whether cumulative deterioration (wear-out), sudden random failure, or other causes drive replacement. Consequently, enzymes cannot be engineered to extend life and cut the high energy costs of replacement. Guided by catalyst engineering, we adopted catalytic cycles until replacement (CCR) as a metric for enzyme functional life span in vivo. CCR is the number of catalytic cycles that an enzyme mediates in vivo before failure or replacement, i.e., metabolic flux rate/protein turnover rate. We used estimated fluxes and measured protein turnover rates to calculate CCRs for ∼100–200 enzymes each from Lactococcus lactis, yeast, and Arabidopsis. CCRs in these organisms had similar ranges (<10³ to >10⁷) but different median values (3–4 × 10⁴ in L. lactis and yeast versus 4 × 10⁵ in Arabidopsis). In all organisms, enzymes whose substrates, products, or mechanisms can attack reactive amino acid residues had significantly lower median CCR values than other enzymes. Taken with literature on mechanism-based inactivation, the latter finding supports the proposal that 1) random active-site damage by reaction chemistry is an important cause of enzyme failure, and 2) reactive noncatalytic residues in the active-site region are likely contributors to damage susceptibility. Enzyme engineering to raise CCRs and lower replacement costs may thus be both beneficial and feasible.

As the synthetic biology revolution brings engineering principles and practices into the life sciences, biomolecules are being rethought as component parts that are used to build new biosystems and improve existing ones (1–3). Enzymes—the working parts of metabolic systems—are targets for this rethinking and are increasingly being repurposed by rational design and directed evolution (4).

Substrate specificity, catalytic efficiency, and expression level are common performance specifications for enzyme parts in metabolic engineering, but life span is not, despite its centrality in other engineering fields. Knowing an engineering component’s life span (how long it lasts in service) is critical to preventing system failures and optimizing maintenance schedules (5). Failure metrics such as “mean time to failure” (6) are consequently used widely in engineering, which distinguishes three types of failures: early, wear-out, and random or stochastic. All three have counterparts in enzymes operating in vivo (Fig. 1A) (7–18), but wear-out and random failures (Fig. 1A, red font) are most relevant to length of working life.

Fig. 1. The engineering concept of component failure and its application to enzymes in vivo. (A) The types of failure in manufactured components and their counterparts in enzymes operating in vivo. (B) Schematic representation of the time dependence of the hazard rate and the cumulative probability (increasing color density) that an individual component will have failed.

In manufactured systems, wear-out failures are caused by cumulative deterioration processes or by use-dependent wear (Fig. 1A). Like all proteins, enzymes are subject to cumulative deterioration from oxidation, racemization, or other chemical events (“protein fatigue”) that can affect any part of the molecule and degrade its function (9–11).
However, use-dependent wear-out has no equivalent in enzymes, i.e., enzyme performance is not progressively degraded by operation of the catalytic cycle in the way a bearing is worn down a little each time it turns (Fig. 1A). Rather, a random catalytic misfire or a chemical attack by a substrate or product on a vulnerable residue in the active-site region can instantly inactivate an enzyme, whatever its age (14–18). Such failures thus have a constant hazard rate and are random or stochastic, like the abrupt failure of a transistor due to a current surge (Fig. 1A).

Although the hazard of random failure does not depend on a part’s age, the cumulative probability that any individual part will experience a random failure increases with time (Fig. 1B). Given long enough, certain types of enzyme molecule may thus be doomed to have a terminal, catalysis-related accident. Such self-inflicted inactivation processes are important considerations for industrial enzymes (i.e., enzymes used ex vivo as reagents) and the number of catalytic cycles that each enzyme molecule carries out in its lifetime—often called “total turnover number”—is a key industrial performance criterion (19–21).

The number of catalytic cycles mediated before self-inactivation could also be key to in vivo enzyme performance. Recent proteomic evidence points to damage from the reaction catalyzed as a major mode of enzyme failure and to the possibility that some reactions do more damage than others. Thus, in the bacterium Lactococcus lactis, a fivefold increase in growth rate was accompanied by a sevenfold increase in protein turnover rate (22). This near proportionality implies that L. lactis enzymes catalyze a similar number of reactions in their lifetimes, whatever the growth rate. This fits with reaction-related damage as a cause of failure: The faster the growth, the more flux through reactions, the more damage to enzymes, and the sooner enzymes fail.
Similarly, protein turnover in yeast was faster when enzymes were in active use (23). Furthermore, in L. lactis, yeast, and Arabidopsis, the fastest turning-over metabolic enzymes include many with reactive substrates, products, or intermediates (SI Appendix, Table S1) (22–24), i.e., with a high risk of spontaneous chemical damage to the active site.

The rates at which enzyme proteins are degraded and resynthesized are critical to the cellular energy economy because such turnover can consume about half the maintenance energy budget in microbes and plants (22, 25–27). High enzyme protein turnover rates therefore potentially reduce the productivity of biosystems ranging from microbial fermentations to crops (26, 28, 29). Consistent with such reduction, fast protein turnover is associated with low biomass yield in yeast (27) and with low growth rate in Arabidopsis (30). Also, slowing the turnover of abundant, fast-turnover enzymes is predicted to substantially increase growth rate and biomass yield in plants (26, 31) and other organisms (32).

Rational design or directed evolution can now be used to tune protein turnover rates (33–35). However, before setting out to reduce enzyme turnover it is essential to define target enzymes and to understand why they turn over fast in the first place. Accordingly, here we calculate and compare the life spans of enzymes from three kingdoms using the criterion of “catalytic cycles until replacement” (CCR) (33), defined as the moles of substrate converted per mole of enzyme before the enzyme is replaced, i.e., the following:

CCR = metabolic flux rate / enzyme replacement rate. [1]

CCR is the in vivo equivalent of the ex vivo “total turnover number” mentioned above but is a preferable term as it avoids confusion with the term “turnover number,” a synonym in enzymology for kcat (20).
CCR is envisioned as a potential constant, with reaction wear-and-tear being matched with degradation rates to maintain CCR as a factor hardwired to the structural and (bio)chemical stability of a given enzyme (33). We then compare each enzyme’s CCR to its reaction chemistry and across kingdoms to find shared attributes underlying CCR values. Our findings imply that CCRs are commonly influenced by random collateral damage from the reaction catalyzed and that enzymes could be engineered to reduce this damage and its attendant enzyme replacement costs. More generally, the findings point to catalysis-related accidents as a sizeable but underrecognized cause of enzyme failure and replacement.
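Eq. 1 amounts to a simple ratio, so an enzyme's CCR follows directly from an estimated flux and a measured replacement (turnover) rate. A minimal sketch, using illustrative numbers rather than measured values (only the formula itself comes from the text):

```python
def catalytic_cycles_until_replacement(flux_rate, replacement_rate):
    """CCR (Eq. 1): moles of substrate converted per mole of enzyme
    before the enzyme is replaced, i.e., metabolic flux rate divided
    by the enzyme replacement rate (same time units for both)."""
    return flux_rate / replacement_rate

# Illustrative values only: a flux of 2e-3 mol substrate per mol enzyme
# per second against a replacement rate of 1e-7 per second gives a CCR
# of about 2 x 10^4, in the neighborhood of the median reported for
# L. lactis and yeast (3-4 x 10^4).
ccr = catalytic_cycles_until_replacement(2e-3, 1e-7)
print(f"CCR = {ccr:.0f}")
```

Note that both rates must be expressed per mole of enzyme over the same time base for the cycles to cancel correctly.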

18.
Photosynthesis of hydrogen peroxide (H2O2) under ambient conditions remains neither cost effective nor environmentally friendly enough because of the rapid charge recombination. Here, a photocatalytic rate of as high as 114 μmol⋅g⁻¹⋅h⁻¹ for the production of H2O2 in pure water and open air is achieved by using a Z-scheme heterojunction, which outperforms almost all reported photocatalysts under the same conditions. An extensive study at the atomic level demonstrates that Z-scheme electron transfer is realized by improving the photoresponse of the oxidation semiconductor under visible light, when the difference between the Fermi levels of the two constituent semiconductors is not sufficiently large. Moreover, it is verified that a type II electron transfer pathway can be converted to the desired Z-scheme pathway by tuning the excitation wavelengths. This study demonstrates a feasible strategy for developing efficient Z-scheme photocatalysts by regulating photoresponses.

Advanced oxidation processes (AOPs) have been widely applied to the treatment of refractory organic pollutants in the environment (1, 2). However, strong oxidants such as hydrogen peroxide (H2O2) must be added to generate reactive oxygen species (ROS) for organic pollutant degradation, which results in soaring costs (3). Thus, the application of AOPs is severely restricted, and is scarcely used for in situ restoration of river courses. Moreover, the storage and transportation of strong oxidants pose safety risks (4).

Recently, to reduce costs and avoid danger associated with storage and transportation, in situ production of H2O2 from O2 reduction and/or H2O oxidation was proposed as a potential solution (4), which has been realized through electrocatalysis and photocatalysis (5–8). Although the yield of H2O2 through electrocatalysis is generally higher than that through photocatalysis, the high energy consumption of electrocatalysis limits its usability for controlling river pollution on site. Conversion of O2 and H2O into H2O2 through photocatalysis is a promising approach for addressing both the energy crisis and the related environmental problems (Fig. 1A) (5, 9–15). However, due to the rapid recombination of electrons and holes in photocatalysts at picosecond to nanosecond timescales and the weak redox potentials, the dosage of organic electron donors such as methanol and/or the continuous bubbling of pure O2 are often needed to promote the photocatalytic efficiencies. Nevertheless, the associated high costs and secondary pollution issues are apparently undesirable for on-site restoration of river courses.

Fig. 1. (A) Pathways for the photosynthesis of H2O2 from H2O and O2. (B) Electron transfer in ZnPPc-g-C3N4 under visible light illumination.
(C) Electron transfer in ZnPPc-NBCN under visible light or monochromatic light illumination.

The construction of heterojunctions by combining two semiconductors and controlling the electron transfer in a Z-scheme pathway is an invaluable strategy to promote the charge separation efficiency and improve the redox potentials simultaneously. Nevertheless, the construction of Z-scheme heterojunctions remains challenging, as a competing charge transfer pathway, for example the type II electron transfer pathway, was frequently dominant, which sacrifices the high redox potentials (Fig. 1B) (16). Recently, the underlying mechanism has become clear: choosing two constituent semiconductors with a considerable difference between their Fermi levels is an effective strategy for driving the electron transfer in a Z-scheme pathway, which is attributed to the formation of a restrictive internal electric field (16, 17).

By contrast, although it has been demonstrated that Z-scheme heterojunctions can also be constructed from two semiconductors with close Fermi levels, their construction has seemed more like trial and error, as it is difficult to predict the electron transfer directions (18, 19). Fortunately, previous studies showed that the prioritized excitation of the reduction semiconductor in a Z-scheme heterojunction resulted in the type II electron transfer pathway, which indicated that sufficient excitation of the oxidation semiconductor is extremely important in a Z-scheme heterojunction, especially in the visible-light region, since all desirable artificial photosynthesis systems should be used under solar light (18).

Here, to verify our assumption, carbon nitride is selected as the investigation target, as unmodified carbon nitride cannot be efficiently excited in the visible-light region owing to its weak absorption in this region.
After enhancing its absorption capacity for visible light by doping with boron, a Z-scheme heterojunction is constructed by decorating zinc polyphthalocyanine (ZnPPc) on the modified carbon nitride nanosheets via in situ polymerization, despite the close Fermi levels of ZnPPc and the modified carbon nitride. In comparison, the in situ growth of ZnPPc on unmodified carbon nitride can only produce a type II heterojunction (Fig. 1B). To the best of our knowledge, a Z-scheme heterojunction has not been previously developed by using any metal polyphthalocyanine and modified carbon nitride. This Z-scheme heterojunction photocatalyst realizes highly efficient photocatalytic production of H2O2 (114 μmol⋅g⁻¹⋅h⁻¹) in pure water and without bubbling pure oxygen, and even outperforms most photocatalysts that require the use of pure O2 (Fig. 1A). This study presents a general strategy for constructing Z-scheme heterojunctions when the Fermi levels of the two constituent semiconductors are close (Fig. 1C) and represents an important step toward the rational design of Z-scheme heterojunctions.

19.
Here we report complex supramolecular tessellations achieved by the directed self-assembly of amphiphilic platinum(II) complexes. Despite the twofold symmetry, these geometrically simple molecules exhibit complicated structural hierarchy in a columnar manner. A possible key to such an order increase is the topological transition into circular trimers, which are noncovalently interlocked by metal···metal and π–π interactions, thereby allowing for cofacial stacking in a prismatic assembly. Another key to success is to use the immiscibility of the tailored hydrophobic and hydrophilic sidechains. Their phase separation leads to the formation of columnar crystalline nanostructures homogeneously oriented on the substrate, featuring an unusual geometry analogous to a rhombitrihexagonal Archimedean tiling. Furthermore, symmetry lowering of regular motifs by design results in an orthorhombic lattice obtained by the coassembly of two different platinum(II) amphiphiles. These findings illustrate the potential of supramolecular engineering in creating complex self-assembled architectures of soft materials.

Tessellation in two dimensions (2D) is a very old topic in geometry on how one or more shapes can be periodically arranged to fill a Euclidean plane without any gaps. Tessellation principles have been extensively applied in decorative art since early times. In the natural sciences, there has been growing attention to creating ordered structures with increasingly complex architectures inspired by semi-regular Archimedean tilings (ATs) and quasicrystalline textures on account of their intriguing physical properties (1–5) and biological functions (6). Recent advances in this regard have been achieved in various fields of supramolecular science, including the programmable self-assembly of DNA molecules (7), coordination-driven assembly (8–10), supramolecular interfacial engineering (11–13), crystallization of organic polygons (14, 15), colloidal particle superlattices (16), and other soft-matter systems (17–20). Moreover, tessellation in 2D can overcome the topological frustration to generate complex semi- or non-regular patterns by using geometrically simple motifs. As exemplified by the self-templating assembly of spherical soft microparticles (21), a vast array of 2D micropatterns encoding non-regular tilings, such as rectangular, rhomboidal, hexagonal, and herringbone superlattices, were obtained by a layer-by-layer strategy at a liquid–liquid interface. Tessellation principles have also been extended to the self-assembly of giant molecules in three dimensions (3D). Superlattices with high space-group symmetry (Im3̄m, Pm3̄n, and P4₂/mnm) were reported in dendrimers and dendritic polymers by Percec and coworkers (22–24). Recently, Cheng and coworkers identified the highly ordered Frank–Kasper phases obtained from giant amphiphiles containing molecular nanoparticles (25–28).
Despite such advances in the field of soft matter, an understanding of how structural ordering in supramolecular materials is influenced by the geometric factors of the constituent molecules has so far remained elusive.

In light of these developments, square-planar platinum(II) (PtII) polypyridine complexes serve as ideal candidates for model studies, not only because of their intriguing spectroscopic and luminescence properties (29, 30), but also because of their propensity to form supramolecular polymers or oligomers via noncovalent Pt···Pt and π–π interactions (31–39). Although rod-shaped and lamellar structures are the most commonly observed in the self-assembly of planar PtII complexes (34–39), 2D-ordered nanostructures, such as hexagonally packed columns (31, 40) and honeycomb-like networks (41–43), were only recently demonstrated by us.

Herein, we report the serendipitous discovery of a C2h-symmetric PtII amphiphile (Fig. 1A) that hierarchically self-assembles into a 3D-ordered nanostructure with hexagonal geometry. Interestingly, this structurally anisotropic molecule apparently undergoes a topological transition, interlocking into a circular trimer through noncovalent Pt···Pt and π–π interactions (Fig. 1B). The resultant triangular motif is architecturally stabilized and preorganized for one-dimensional (1D) prismatic assembly (Fig. 1C). Together with the phase separation of the tailored hydrophobic and hydrophilic sidechains, an unusual and unique 3D hexagonal lattice is formed (Fig. 1D), in which the Pt centers adopt a rare rhombitrihexagonal AT-like order. Finally, the nanoarchitecture develops in a hierarchical manner on the substrate due to homogeneous nucleation (Fig. 1E).

Fig. 1. Hierarchical self-assembly of a PtII amphiphile into hexagonal ordering. (A) Space-filling (CPK) model of a C2h-symmetric PtII amphiphile (1).
All hydrogen atoms and counterions are omitted for clarity. (B) CPK representations of possible regular triangular, tetragonal, pentagonal, and hexagonal motifs formed through Pt···Pt and π–π stacking. These motifs possess a hydrophilic core (red) of varying diameter wrapped in a hydrophobic shell of long alkyl chains (gray). (C) CPK representation of a 1D prismatic structure consisting of circular trimers with long-range Pt···Pt and π–π stacking. (D) CPK representation of a 3D columnar lattice constructed from the prismatic assemblies, adopting a rare rhombitrihexagonal AT-like order. Assisted by the phase separation, the hydrophobic domain serves as a discrete column associated with six prismatic neighbors. (E) Schematic representation of the nanoarchitecture with homogeneous orientation.
