Similar Articles
A total of 20 similar articles were found (search time: 656 ms).
1.
In matter, any spontaneous symmetry breaking induces a phase transition characterized by an order parameter, such as the magnetization vector in ferromagnets, or a macroscopic many-electron wave function in superconductors. Phase transitions with unknown order parameter are rare but extremely appealing, as they may lead to novel physics. An emblematic and still unsolved example is the transition of the heavy fermion compound URu2Si2 (URS) into the so-called hidden-order (HO) phase when the temperature drops below T0=17.5 K. Here, we show that the interaction between the heavy fermion and the conduction band states near the Fermi level has a key role in the emergence of the HO phase. Using angle-resolved photoemission spectroscopy, we find that while the Fermi surfaces of the HO and of a neighboring antiferromagnetic (AFM) phase of well-defined order parameter have the same topography, they differ in the size of some, but not all, of their electron pockets. Such a nonrigid change of the electronic structure indicates that a change in the interaction strength between states near the Fermi level is a crucial ingredient for the HO to AFM phase transition.

The transition of URu2Si2 from a high-temperature paramagnetic (PM) phase to the hidden-order (HO) phase below T0 is accompanied by anomalies in specific heat (1–3), electrical resistivity (1, 3), thermal expansion (4), and magnetic susceptibility (2, 3) that are all typical of magnetic ordering. However, the small associated antiferromagnetic (AFM) moment (5) is insufficient to explain the large entropy loss and was shown to be of extrinsic origin (6). Inelastic neutron scattering (INS) experiments revealed gapped magnetic excitations below T0 at commensurate and incommensurate wave vectors (7–9), while an instability and partial gapping of the Fermi surface was observed by angle-resolved photoemission spectroscopy (ARPES) (10–16) and scanning tunneling microscopy/spectroscopy (17, 18). More recently, high-resolution, low-temperature ARPES experiments imaged the Fermi surface reconstruction across the HO transition, unveiling the nesting vectors between Fermi sheets associated with the gapped magnetic excitations seen in INS experiments (14, 19) and quantitatively explaining, from the changes in Fermi surface size and quasiparticle mass, the large entropy loss in the HO phase (19). Nonetheless, the nature of the HO parameter is still hotly debated (20–23).

The HO phase is furthermore unstable above a temperature-dependent critical pressure of about 0.7 GPa at T = 0, at which it undergoes a first-order transition into a large-moment AFM phase where the value of the magnetic moment per U atom exhibits a sharp increase, by a factor of 10 to 50 (6, 24–30). When the system crosses the HO–AFM phase boundary, the characteristic magnetic excitations of the HO phase are either suppressed or modified (8, 31), while resistivity and specific heat measurements suggest that the partial gapping of the Fermi surface is enhanced (24, 27).

As the AFM phase has a well-defined order parameter, studying the evolution of the electronic structure across the HO/AFM transition would help develop an understanding of the HO state. So far, the experimental determination of the Fermi surface by Shubnikov–de Haas (SdH) oscillations only showed minor changes across the HO–AFM phase boundary (32). Here, we take advantage of the HO/AFM transition induced by chemical pressure in URu2Si2, through the partial substitution of Ru with Fe (33–37), to directly probe its electronic structure in the AFM phase using ARPES. As we shall see, our results reveal that changes in the Ru 4d–U 5f hybridization across the HO/AFM phase boundary seem essential for a better understanding of the HO state.

2.
Socioeconomic viability of fluvial-deltaic systems is limited by natural processes of these dynamic landforms. An especially impactful occurrence is avulsion, whereby channels unpredictably shift course. We construct a numerical model to simulate artificial diversions, which are engineered to prevent channel avulsion, and direct sediment-laden water to the coastline, thus mitigating land loss. We provide a framework that identifies the optimal balance between river diversion cost and civil disruption by flooding. Diversions near the river outlet are not sustainable, because they neither reduce avulsion frequency nor effectively deliver sediment to the coast; alternatively, diversions located halfway to the delta apex maximize landscape stability while minimizing costs. We determine that delta urbanization generates a positive feedback: infrastructure development justifies sustainability and enhanced landform preservation vis-à-vis diversions.

Deltaic environments are critical for societal wellbeing because these landscapes provide an abundance of natural resources that promote human welfare (1, 2). However, the sustainability of deltas is uncertain due to sea-level rise (3, 4), sediment supply reduction (4–6), and land subsidence (7, 8). Additionally, river avulsion, the process of sudden channel relocation (9, 10), presents a dichotomy to delta sustainability: the unanticipated civil disruption associated with flooding brought by channel displacement is at odds with society’s desire for landscape stability, yet channel relocation is needed to deliver nutrients and sediment to various locations along the deltaic coastline (11, 12). Indeed, for many of the world’s megadeltas, channel engineering practices have sought to restrict channel mobility and limit floodplain connectivity (13, 14), which in turn prevents sediment dispersal that is necessary to sustain deltas; as a consequence, land loss has ensued (15). Despite providing near-term stability (13–15), engineering of deltaic channels is a long-term detrimental practice (11, 15–17).

To maximize societal benefit, measures that promote delta sustainability must balance engineering infrastructure cost and impact on delta morphology with benefits afforded by maintaining and developing deltaic landscapes (1, 2, 11, 12, 16–19). For example, channel diversions, costing millions to billions of dollars (20–22), are now planned worldwide to both prevent unintended avulsions and ensure coastal sustainability through enhanced sediment delivery (e.g., Fig. 1A) (20, 21, 23–26).

Fig. 1. (A) Satellite image of the Yellow River delta (Landsat, 1978) showing the coastline response to a diversion in 1976 at the open circle, which changed the channel course from the north (Diaokou lobe) to the east (Qingshuigou lobe) and produced flooding over the stripe-hatched area (30). (B and C) Planform view (B) and along-channel cross-section view (C) of the conceptual model for numerical simulations and societal benefit formulation. In the diagrams, a diversion at LD = 0.8Lb floods an area (af) defined by Lf and θ, diverting sediment away from the deltaic lobe (with length Ll). Aggradation of the former channel bed (dashed line) is variable; hence, diversion length influences the propensity for subsequent avulsion setup.

In this article, we consider the benefits and costs of such engineered river diversions and determine how these practices most effectively sustain deltaic landscapes, by assessing optimal placement and timing for river diversions. Addressing these points requires combining two modeling frameworks: a morphodynamic approach—evolving the landscape over time and space by evaluating the interactions of river fluid flow and sediment transport—and a decision-making framework (21, 22, 27, 28). The former simulates deltaic channel diversions by assessing the nonlinear relationships between channel diversion length (LD) and the frequency (timing) of avulsions (TA), while the latter incorporates a societal benefit model that approximates urbanization by considering the cost of flooding a landscape that would otherwise generate revenue. The aim is to optimize the timing and placement of channel diversions by giving consideration to morphodynamic operations and societal wellbeing. Interestingly, optimal societal benefit indicates that urbanization justifies enhanced sustainability measures, which contradicts existing paradigms that label development and sustainability as mutually exclusive (3, 7, 12).
Ultimately, the societal benefit model should be an integrated component in decision-making frameworks. This will help locate diversions and promote sustainable and equitable decisions that consider the historical, ethical, and environmental contexts of river management (29).
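As a minimal illustration of the trade-off described above (a toy sketch with assumed functional forms, not the morphodynamic or societal-benefit model used in the study), the snippet below scores candidate diversion sites, expressed as a fraction of the backwater length Lb upstream of the outlet, with a schematic net benefit in which sediment delivery grows with distance upstream while flooding cost grows faster:

import numpy as np

Lb = 1.0                                 # backwater length (normalized)
x = np.linspace(0.05, 0.95, 19)          # candidate diversion sites, as a fraction of Lb from the outlet

sediment_benefit = x                     # assumed: farther upstream, more of the lobe is renourished and avulsion setup is reset
flooding_cost = x**2                     # assumed: flooded area and civil disruption grow faster than linearly with distance upstream
net_benefit = sediment_benefit - flooding_cost

best = x[np.argmax(net_benefit)]
print(f"toy optimum near LD = {best:.2f} Lb")   # peaks at an intermediate site, echoing the 'halfway' result above

With these assumed curves the optimum lands near the middle of the reach; in the study, the optimum emerges from the coupled morphodynamic and decision-making calculations rather than from prescribed functions like these.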

3.
Our study of cholesteric lyotropic chromonic liquid crystals in cylindrical confinement reveals the topological aspects of cholesteric liquid crystals. The double-twist configurations we observe exhibit discontinuous layering transitions, domain formation, metastability, and chiral point defects as the concentration of chiral dopant is varied. We demonstrate that these distinct layer states can be distinguished by chiral topological invariants. We show that changes in the layer structure give rise to a chiral soliton similar to a toron, comprising a metastable pair of chiral point defects. Because the invariants we describe apply to general systems, our work has broad relevance to the study of chiral materials.

Chiral liquid crystals (LCs) are ubiquitous, useful, and rich systems (1–4). From the first discovery of the liquid crystalline phase to the variety of chiral structures formed by biomolecules (5–9), the twisted structure, breaking both mirror and continuous spatial symmetries, is omnipresent. The unique structure also makes the chiral nematic (cholesteric) LC an essential material for applications utilizing the tunable, responsive, and periodic modulation of anisotropic properties.

The cholesteric is also a popular model system to study the geometry and topology of partially ordered matter. The twisted ground state of the cholesteric is often incompatible with confinement and external fields, exhibiting a large variety of frustrated and metastable director configurations accompanying topological defects. Besides the classic example of cholesterics in a Grandjean–Cano wedge (10, 11), examples include cholesteric droplets (12–16), colloids (17–19), shells (20–22), tori (23, 24), cylinders (25–29), microfabricated structures (30, 31), and films between parallel plates with external fields (32–40). These structures are typically understood using a combination of nematic (achiral) topology (41, 42) and energetic arguments, for example, the highly successful Landau–de Gennes approach (43). However, traditional extensions of the nematic topological approach to cholesterics are known to be conceptually incomplete and difficult to apply in regimes where the system size is comparable to the cholesteric pitch (41, 44).

An alternative perspective, chiral topology, can give a deeper understanding of these structures (45–47). In this approach, the key role is played by the twist density, given in terms of the director field n by n · (∇ × n). This choice is not arbitrary; the Frank free energy prefers n · (∇ × n) = −q0, with q0 = 2π/p0 set by the helical pitch p0, and, from a geometric perspective, n · (∇ × n) ≠ 0 defines a contact structure (48). This allows a number of new integer-valued invariants of chiral textures to be defined (45). A configuration with a single sign of twist is chiral, and two configurations which cannot be connected by a path of chiral configurations are chirally distinct, and hence separated by a chiral energy barrier. Within each chiral class of configuration, additional topological invariants may be defined using methods of contact topology (45–48), such as layer numbers. Changing these chiral topological invariants requires passing through a nonchiral configuration. Cholesterics serve as model systems for the exploration of chirality in ordered media, and the phenomena we describe here, metastability in chiral systems controlled by chiral topological invariants, have applicability to chiral order generally. This, in particular, includes chiral ferromagnets, where, for example, our results on chiral topological invariants apply to highly twisted nontopological Skyrmions (49, 50) (“Skyrmionium”).

Our experimental model to explore the chiral topological invariants is the cholesteric phase of lyotropic chromonic LCs (LCLCs). The majority of experimental systems hitherto studied are based on thermotropic LCs with typical elastic and surface-anchoring properties. The aqueous LCLCs exhibit unusual elastic properties, that is, a very small twist modulus K2 and a large saddle-splay modulus K24 (51–56), often leading to chiral symmetry breaking of confined achiral LCLCs (53, 54, 56–61), and may enable us to access uncharted configurations and defects of topological interest.
For instance, in the layered configurations formed by cholesteric LCLCs doped with chiral molecules, the small K2 provides energetic flexibility in the thickness of the cholesteric layer, that is, the repeating structure in which the director n twists by π. The large K24 affords curvature-induced surface interactions in combination with the weak anchoring strength of the lyotropic LCs (62–64).

We present a systematic investigation of the director configuration of cholesteric LCLCs confined in cylinders with degenerate planar anchoring, as a function of the chiral dopant concentration. We show that the structure of cholesteric configurations is controlled by higher-order chiral topological invariants. We focus on two intriguing phenomena observed in cylindrically confined cholesterics. First, the cylindrical symmetry gives the energy landscape multiple local minima and induces a discontinuous increase of the twist angle, that is, a layering transition, upon increasing the dopant concentration. Additionally, the director configurations of the local minima coexist as metastable domains with point-like defects between them. We demonstrate that a chiral layer number invariant distinguishes these configurations, protects the distinct layer configurations (45), and explains the existence of the topological defect where the invariant changes.
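As a minimal numerical illustration of the twist-density criterion just described (a self-contained sketch with an assumed idealized director field and grid, not the analysis performed in the study), the snippet below evaluates n · (∇ × n) by finite differences for a double-twist texture and checks that it keeps a single sign:

import numpy as np

# Idealized double-twist director field n = cos(q r) z_hat + sin(q r) phi_hat,
# sampled on an x-y grid (the field has no z dependence). q and the grid are illustrative choices.
q = np.pi / 2.0                        # twist rate (radians per unit length)
x = np.linspace(-1.0, 1.0, 200)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.hypot(X, Y)
R = np.where(R < 1e-9, 1e-9, R)        # guard the cylinder axis r = 0

nx = -np.sin(q * R) * Y / R
ny = np.sin(q * R) * X / R
nz = np.cos(q * R)

# curl(n) for a z-independent field: only x- and y-derivatives survive.
dnz_dx, dnz_dy = np.gradient(nz, dx, dx)
dny_dx, _ = np.gradient(ny, dx, dx)
_, dnx_dy = np.gradient(nx, dx, dx)

twist = nx * dnz_dy + ny * (-dnz_dx) + nz * (dny_dx - dnx_dy)   # n . (curl n)
print("twist density range:", float(twist.min()), float(twist.max()))
# A single sign of n . (curl n) over the whole sample is the chirality criterion used above;
# reaching a texture of opposite sign requires passing through a nonchiral (zero-twist) configuration.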

4.
Experiments have shown that the families of cuprate superconductors that have the largest transition temperature at optimal doping also have the largest oxygen hole content at that doping [D. Rybicki et al., Nat. Commun. 7, 1–6 (2016)]. They have also shown that a large charge-transfer gap [W. Ruan et al., Sci. Bull. (Beijing) 61, 1826–1832 (2016)], a quantity accessible in the normal state, is detrimental to superconductivity. We solve the three-band Hubbard model with cellular dynamical mean-field theory and show that both of these observations follow from the model. Cuprates play a special role among doped charge-transfer insulators of transition metal oxides because copper has the largest covalent bonding with oxygen. Experiments [L. Wang et al., arXiv [Preprint] (2020). https://arxiv.org/abs/2011.05029 (Accessed 10 November 2020)] also suggest that superexchange is at the origin of superconductivity in cuprates. Our results reveal the consistency of these experiments with the above two experimental findings. Indeed, we show that covalency and a charge-transfer gap lead to an effective short-range superexchange interaction between copper spins that ultimately explains pairing and superconductivity in the three-band Hubbard model of cuprates.

Although several classes of high-temperature superconductors have been discovered, including pnictides, sulfur hydrides, and rare earth hydrides, cuprate high-temperature superconductors are still particularly interesting from a fundamental point of view because of the strong quantum effects expected from their doped charge-transfer insulator nature and single-band spin-one-half Fermi surface (1, 2).

Among the most enduring mysteries of cuprate superconductivity is the experimental discovery, early on, that the hole content on oxygen plays a crucial role (2–5). Oxygen hole content (2np) is particularly relevant since NMR (5, 6) suggests a correlation between optimal Tc and 2np on the CuO2 planes: A higher oxygen hole content at the optimal doping of a given family of cuprates leads to a higher critical temperature. This is summarized in figure 2 of ref. 6. The charge-transfer gap also seems to play a central role for the value of Tc, as suggested by scanning tunneling spectroscopy (7) and by theory (8). Many studies have shown that doped holes primarily occupy oxygen orbitals (3, 9–11). This long unexplained role of oxygen hole content and charge-transfer gap on the strength of superconductivity in cuprates is addressed in this paper.

The vast theoretical literature on the one-band Hubbard model in the strong-correlation limit shows that many of the qualitative experimental features of cuprate superconductors (12, 13) can be understood (14), but obviously not the above experimental facts regarding oxygen hole content. Furthermore, variational calculations (15) and various Monte Carlo approaches (16, 17) suggest that d-wave superconductivity in the one-band Hubbard model may not be the ground state, at least in certain parameter ranges (18, 19).

It is thus important to investigate more realistic models, such as the three-band Emery–VSA (Varma–Schmitt-Rink–Abrahams) model that accounts for copper–oxygen hybridization of the single band that crosses the Fermi surface (20, 21). A variety of theoretical methods (8, 22–27) revealed many similarities with the one-band Hubbard model, but also differences related to the role of oxygen (28, 29).

Investigating the causes of the variation of the transition temperature Tc across the various cuprates is a key scientific goal of the quantum materials roadmap (30). We find and explain the above correlations found in NMR and in scanning tunneling spectroscopy; highlight the importance of the difference between the electron affinity of oxygen and the ionization energy of copper (21, 31); and, finally, document how oxygen hole content, charge-transfer gap, and covalency conspire to create an effective superexchange interaction between copper spins that is ultimately responsible for superconductivity. We do not address questions related to intra-unit-cell order (32, 33).
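For orientation, the textbook fourth-order perturbative estimate of copper–copper superexchange in a charge-transfer insulator, J ≈ (4 t_pd^4 / Δ_CT^2) [1/U_d + 2/(2Δ_CT + U_p)], already makes the qualitative point explicit: stronger Cu–O covalency (larger t_pd) and a smaller charge-transfer gap Δ_CT strengthen superexchange. The snippet below evaluates this estimate for illustrative parameter values; it is a back-of-the-envelope sketch, not the cellular dynamical mean-field calculation reported above, and bare perturbation theory is known to overestimate J.

t_pd = 1.3    # Cu-O hopping (eV), illustrative value
U_d = 9.0     # Cu on-site repulsion (eV), illustrative value
U_p = 4.0     # O on-site repulsion (eV), illustrative value

for delta_ct in (2.5, 3.0, 3.5):   # charge-transfer gap (eV)
    J = 4 * t_pd**4 / delta_ct**2 * (1 / U_d + 2 / (2 * delta_ct + U_p))
    print(f"Delta_CT = {delta_ct:.1f} eV  ->  J ~ {1000 * J:.0f} meV")
# The trend (smaller charge-transfer gap -> larger J) mirrors the experimental observation
# quoted above that a large charge-transfer gap is detrimental to superconductivity.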

5.
A continuum of water populations can exist in nanoscale layered materials, which impacts transport phenomena relevant for separation, adsorption, and charge storage processes. Quantification and direct interrogation of water structure and organization are important in order to design materials with molecular-level control for emerging energy and water applications. Through combining molecular simulations with ambient-pressure X-ray photoelectron spectroscopy, X-ray diffraction, and diffuse reflectance infrared Fourier transform spectroscopy, we directly probe hydration mechanisms at confined and nonconfined regions in nanolayered transition-metal carbide materials. Hydrophobic (K+) cations decrease water mobility within the confined interlayer and accelerate water removal at nonconfined surfaces. Hydrophilic cations (Li+) increase water mobility within the confined interlayer and decrease water-removal rates at nonconfined surfaces. Solutes, rather than the surface terminating groups, are shown to be more impactful on the kinetics of water adsorption and desorption. Calculations from grand canonical molecular dynamics demonstrate that hydrophilic cations (Li+) actively aid in water adsorption at MXene interfaces. In contrast, hydrophobic cations (K+) weakly interact with water, leading to higher degrees of water ordering (orientation) and faster removal at elevated temperatures.

Geologic clays are minerals with variable amounts of water trapped within the bulk structure (1) and are routinely used as hydraulic barriers where water and contaminant transport must be controlled (2, 3). These layered materials can exhibit large degrees of swelling when intercalated with a hydrated cation (4). Fundamentally, water adsorption at exposed interfaces and transport in confined channels are dictated by geometry, morphology, and chemistry (e.g., surface chemistry, local solutes, etc.) (5). Understanding water adsorption and swelling in natural clay materials has significant implications for understanding water interactions in nanoscale layered materials. At the nanoscale, the ability to control interlayer swelling and water adsorption can lead to more precise control over mass and reactant transport, resulting in enhancement of properties necessary for next-generation energy storage (power and capacity) (6–8), membranes (selectivity, salt rejection, and water permeability), catalysis (9–13), and adsorption (14).

Two-dimensional (2D) and multilayered transition-metal carbides and nitrides (MXenes) are a recent addition to the few-atom-thick materials and have been widely studied for applications in energy storage (6, 9, 15, 16), membranes (13), and adsorption (17). MXenes (Mn+1XnTx) are produced via selective etching of A elements from ceramic MAX (Mn+1AXn) phase materials (11, 18). The removal of the A element results in thin Mn+1Xn nanosheets with negative termination groups (Tx). MXene’s hydrophilic and negatively charged surface properties promote spontaneous intercalation of a wide array of ions and compounds. Cation intercalation properties in MXenes have been vigorously explored due to their demonstrated high volumetric capacitance, which may enable high-rate energy storage (6, 19). In addition, their unique and rich surface chemistry may enable selective ion adsorption, making them promising candidates for water purification and catalytic applications (20–22).

Water and ion transport within multilayered MXenes is governed by the presence of a continuum of water populations. The configuration of water in the confined (interlayer) and nonconfined (surface) states influences the physical properties of the material system (13, 23–27). However, our current understanding of water–surface interactions and water structure at the molecular scale is incomplete due to limited characterization approaches (28). Most modern observations are limited to macroscopic measurements (e.g., transport measurements, contact angle, etc.), which do not capture the impact of local heterogeneity due to surface roughness, surface chemistry, solutes, etc. (29). Herein, we address this gap by combining theory with an ensemble of direct and indirect interrogation techniques. Water structure and sorption properties at MXene interfaces are directly probed by using ambient-pressure X-ray photoelectron spectroscopy (APXPS), X-ray diffraction (XRD), and diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS). APXPS enables detection of local chemically specific signatures and quantitative analysis at near-ambient pressures (30). This technique provides the ability to spatially resolve the impact of surface chemistry and solutes on water sorption/desorption at water–solid interfaces. Model hydrophobic (e.g., K+) and hydrophilic (e.g., Li+) cations were intercalated into the layers via ion exchange to systematically probe the impacts of charged solutes on water orientation and sorption.
Prior reports suggest that water within the confined interlayer transforms from bulk-like to crystalline when intercalated with bulky cations (31, 32). Furthermore, it has been demonstrated that water ordering is correlated with ion size (33, 34). Here, we expand upon this early work and examine how solute hydrophobicity and hydrophilicity impact water adsorption at solid interfaces. Water mobility within the interlayer is influenced by the hydration energy of the intercalated cation. The results shed light on the intertwined roles that surface counterions and terminating groups play in the dynamics of hydration and dehydration.

6.
Quantum coherence, an essential feature of quantum mechanics allowing quantum superposition of states, is a resource for quantum information processing. Coherence emerges in a fundamentally different way for nonidentical and identical particles. For the latter, a unique contribution exists linked to indistinguishability that cannot occur for nonidentical particles. Here we experimentally demonstrate this additional contribution to quantum coherence with an optical setup, showing that its amount directly depends on the degree of indistinguishability and exploiting it in a quantum phase discrimination protocol. Furthermore, the designed setup allows for simulating fermionic particles with photons, thus assessing the role of exchange statistics in coherence generation and utilization. Our experiment proves that independent indistinguishable particles can offer a controllable resource of coherence and entanglement for quantum-enhanced metrology.

A quantum system can reside in coherent superpositions of states, which have a role in the interpretation of quantum mechanics (1–4), lead to nonclassicality (5, 6), and imply the intrinsically probabilistic nature of predictions in the quantum realm (7, 8). Besides this fundamental role, quantum coherence is also at the basis of quantum algorithms (9–14) and, from a modern information-theoretic perspective, constitutes a paradigmatic basis-dependent quantum resource (15–17), providing a quantifiable advantage in certain quantum information protocols.

For a single quantum particle, coherence manifests itself when the particle is found in a superposition of a reference basis, for instance, the computational basis of the Hilbert space. Formally, any quantum state whose density matrix contains nonzero off-diagonal elements when expressed in the reference basis is said to display quantum coherence (16). This is the definition of quantum coherence employed in our work. For multiparticle compound systems, the physics underlying the emergence of quantum coherence is richer and strictly connected to the nature of the particles, with fundamental differences between nonidentical and identical particles. A particularly intriguing observation is that the states of identical particle systems can manifest coherence even when no particle resides in a superposition state, provided that the wave functions of the particles overlap (18–20). In general, a special contribution to quantum coherence arises thanks to the spatial indistinguishability of identical particles, which cannot exist for nonidentical (or distinguishable) particles (18). Recently, it has been found that the spatial indistinguishability of identical particles can be exploited for entanglement generation (21), applicable even for spacelike-separated quanta (22) and in the presence of preparation and dynamical noise (23–26). The presence of entanglement is a signature that the bipartite system as a whole carries coherence even when the individual particles do not, the amount of this coherence being dependent on the degree of indistinguishability. We name this specific contribution to the quantumness of compound systems “indistinguishability-based coherence,” in contrast to the more familiar “single-particle superposition-based coherence.” Indistinguishability-based coherence qualifies in principle as an exploitable resource for quantum metrology (18). However, it requires sophisticated control techniques to be harnessed, especially in view of its nonlocal nature. Moreover, exchange statistics is a crucial property of identical particles, yet its experimental study is generally challenging because it requires operating both bosons and fermions in the same setup.

In the present work, we investigate the operational contribution of quantum coherence stemming from the spatial indistinguishability of identical particles. The main aim of our experiment is to prove that elementary states of two independent, spatially indistinguishable particles can give rise to exploitable quantum coherence, with a measurable effect due to particle statistics. By utilizing our recently developed photonic architecture capable of tuning the indistinguishability of two uncorrelated photons (27), we observe the direct connection between the degree of indistinguishability and the amount of generated coherence and show that indistinguishability-based coherence can be concurrent with single-particle superposition-based coherence.
In particular, we demonstrate its operational implications, namely, providing a quantifiable advantage in a phase discrimination task (28, 29), as depicted in Fig. 1. Furthermore, we design a setup capable of testing the impact of particle statistics on coherence production and phase discrimination for both bosons and fermions; this is accomplished by compensating for the exchange phase during state preparation, simulating fermionic states with photons, which leads to a statistics-dependent efficiency of the quantum task.

Fig. 1. Illustration of the indistinguishability-activated phase discrimination task. A resource state ρin that contains coherence in a computational basis is generated from spatial indistinguishability. The state then enters a black box which implements a phase unitary Ûk = exp(iĜϕk), with k ∈ {1, …, n}, on ρin. The goal is to determine which phase ϕk was actually applied from the output state ρout: indistinguishability-based coherence provides an operational advantage in this task.
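As a minimal numerical illustration of why coherence in the generator’s eigenbasis enables such a task (a self-contained sketch with an assumed two-level generator and phases, not the experimental protocol above), the snippet below compares the optimal probability of discriminating two equiprobable phase unitaries acting on a coherent versus an incoherent input, using the Helstrom bound p_opt = 1/2 + (1/4)‖ρ1 − ρ2‖1:

import numpy as np

def apply_phase(rho, phi):
    # Phase unitary U = exp(i * G * phi) with assumed generator G = diag(0, 1).
    U = np.diag([1.0, np.exp(1j * phi)])
    return U @ rho @ U.conj().T

def helstrom(rho1, rho2):
    # Optimal success probability for two equiprobable states: 1/2 + (1/4) * trace norm of the difference.
    eigenvalues = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 + 0.25 * np.sum(np.abs(eigenvalues))

phases = (0.0, np.pi)                                 # the two candidate phases hidden in the black box
coherent = np.array([[0.5, 0.5], [0.5, 0.5]])         # |+><+|: maximal coherence in the reference basis
incoherent = np.array([[0.5, 0.0], [0.0, 0.5]])       # maximally mixed: no off-diagonal elements

for label, rho in (("coherent input  ", coherent), ("incoherent input", incoherent)):
    outputs = [apply_phase(rho, phi) for phi in phases]
    print(label, "p_opt =", round(helstrom(*outputs), 3))
# coherent input   p_opt = 1.0  (the two phases are perfectly distinguishable)
# incoherent input p_opt = 0.5  (no better than a random guess)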

7.
Fault friction is central to understanding earthquakes, yet laboratory rock mechanics experiments are restricted to, at most, meter scale. Questions thus remain as to the applicability of measured frictional properties to faulting in situ. In particular, the slip-weakening distance dc strongly influences precursory slip during earthquake nucleation, but scales with fault roughness and is challenging to extrapolate to nature. The 2018 eruption of Kīlauea volcano, Hawaii, caused 62 repeatable collapse events in which the summit caldera dropped several meters, accompanied by MW 4.7 to 5.4 very long period (VLP) earthquakes. Collapses were exceptionally well recorded by global positioning system (GPS) and tilt instruments and represent unique natural kilometer-scale friction experiments. We model a piston collapsing into a magma reservoir. Pressure at the piston base and shear stress on its margin, governed by rate and state friction, balance its weight. Downward motion of the piston compresses the underlying magma, driving flow to the eruption. Monte Carlo estimation of unknowns validates laboratory friction parameters at the kilometer scale, including the magnitude of steady-state velocity weakening. The absence of accelerating precollapse deformation constrains dc to be at most 10 mm, and potentially much less. These results support the use of laboratory friction laws and parameters for modeling earthquakes. We identify initial conditions and material and magma-system parameters that lead to episodic caldera collapse, revealing that small differences in eruptive vent elevation can lead to major differences in eruption volume and duration. Most historical basaltic caldera collapses were, at least partly, episodic, implying that the conditions for stick–slip derived here are commonly met in nature.

Our knowledge of rock friction comes from laboratory experiments on samples from centimeters to, at most, meter scale (1, 2). These experiments have led to rate- and state-dependent friction laws (3, 4), which together with continuum fault models explain many features of natural earthquakes (5, 6). Extrapolation of laboratory-derived constitutive parameters to faults in situ, however, has been challenging, particularly for the characteristic slip-weakening distance, dc, the displacement scale over which friction degrades from nominally static to dynamic values. In the laboratory, dc ranges from several to tens of micrometers, but scales with fault roughness (7). Some seismological estimates are up to five orders of magnitude larger (8), but are sensitive to the decrease in shear strength at earthquake rupture fronts, leading to weakening lengths that scale with dc but can be much larger (9, 10). Understanding the magnitude of dc in situ is crucial because the amount of potentially observable precursory slip scales with dc (11). Significant insights have been gained from in situ fluid injection experiments into faults that induce aseismic slip and seismicity (12–14), yet constraints on the parts of faults that actually generate earthquakes are rare.

Collapse at basaltic shield volcanoes typically occurs in repeated discrete events, generating characteristic deformation transients and very long period (VLP) earthquakes (15–17). Rapid outflow of magma causes the pressure in subcaldera magma reservoirs to decrease, leading to an increase in stress in the overlying crust. Collapse initiates if this stress reaches the crustal strength, forming ring faults bounding down-dropped block(s) (18). Once initiated, collapse transfers the weight of the overlying crust onto the magma reservoir, maintaining the pressure necessary for the eruption to continue (19). Thus, caldera collapse is not simply a response to the rapid withdrawal of magma, but is also an essential process in sustaining these eruptions.

The 2018 Kīlauea collapses were quasi-periodic and exceptionally well monitored by nearby global positioning system (GPS) and tilt stations, including GPS stations on the down-dropped block(s). These data can be used to infer stress changes on the caldera-bounding ring faults, making them effectively kilometer-scale stick–slip experiments. The highly repeatable nature of the collapses, as well as constraints on the changes in magma pressure prior to the onset of collapse (20), minimizes uncertainty due to otherwise difficult-to-constrain initial conditions.
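To make the quasi-static force balance described above concrete (a schematic sketch with assumed, order-of-magnitude geometry and properties, not the authors’ full rate-and-state model), the collapsing block is in equilibrium when magma pressure on its base plus frictional shear stress on its margin supports its weight:

import numpy as np

# Assumed, order-of-magnitude values for a cylindrical caldera block.
radius = 1000.0        # piston radius (m)
height = 2000.0        # piston height (m)
rho_rock = 2700.0      # rock density (kg/m^3)
g = 9.81               # gravitational acceleration (m/s^2)
p_magma = 40e6         # magma pressure acting on the piston base (Pa)

area = np.pi * radius**2
perimeter = 2 * np.pi * radius
weight = rho_rock * g * height * area

# Quasi-static balance: weight = p_magma * area + tau * (perimeter * height)
tau_required = (weight - p_magma * area) / (perimeter * height)
print(f"shear stress on the ring fault needed for equilibrium: {tau_required / 1e6:.1f} MPa")
# As magma pressure falls during the eruption, the shear stress required for equilibrium rises;
# collapse initiates once it reaches the ring fault's frictional strength, and the ensuing
# downward slip re-pressurizes the reservoir, consistent with the description above.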

8.
Optical cavities confine light on a small region in space, which can result in a strong coupling of light with materials inside the cavity. This gives rise to new states where quantum fluctuations of light and matter can alter the properties of the material altogether. Here we demonstrate, based on first-principles calculations, that such light–matter coupling induces a change of the collective phase from quantum paraelectric to ferroelectric in the SrTiO3 ground state, which has thus far only been achieved in out-of-equilibrium strongly excited conditions [X. Li et al., Science 364, 1079–1082 (2019) and T. F. Nova, A. S. Disa, M. Fechner, A. Cavalleri, Science 364, 1075–1079 (2019)]. This is a light–matter hybrid ground state which can only exist because of the coupling to the vacuum fluctuations of light, a photo ground state. The phase transition is accompanied by changes in the crystal structure, showing that fundamental ground state properties of materials can be controlled via strong light–matter coupling. Such a control of quantum states enables the tailoring of materials properties or even the design of novel materials purely by exposing them to confined light.

Engineering an out-of-equilibrium state of a material by means of strong light fields can drastically change its properties and even induce new phases altogether. This is considered a new paradigm of material design, especially when the collective behavior of particles in quantum materials can be controlled to provide novel functionalities (1, 2). As an alternative to the intense lasers necessary to reach such out-of-equilibrium states, one can achieve strong light–matter coupling by placing the material inside an optical cavity (3–11). A main advantage of this approach is that strong interaction can be achieved at equilibrium, opening up new possibilities for materials manipulation. Among the proposed effects are novel exciton insulator states (12), control of excitonic energy ordering (13), enhanced electron–phonon coupling (14), photon-mediated electron pairing (15–18), enhanced ferroelectricity (19), and multi-quasiparticle hybridization (20). One enticing possibility, however, is to change the ground state of a material and to create a new phase not through excited quasiparticles but truly as the equilibrium state.

Here we show that this can be achieved in the paraelectric SrTiO3 as a photo-correlated ferroelectric ground state. This ground state, which we refer to as the photo ground state, is the result of the strong coupling between matter and the quantum vacuum fluctuations of light. While similar materials of the perovskite family undergo a para- to ferroelectric phase transition at low temperatures, SrTiO3 remains paraelectric (21), because nuclear quantum fluctuations prevent the emergence of the collective polarization that is characteristic of the ferroelectric phase (22, 23). Alterations to the material that overcome a relatively small activation energy, however, can induce ferroelectricity: for instance, through isotope substitution (24), strain (25, 26), and, most notably, nonlinear excitation of the lattice by strong and resonant terahertz laser pumping (27, 28). In the latter type of experiments, a transient broken symmetry of the structure as well as a macroscopic polarization indicative of a transient ferroelectric phase have been observed.

By using atomistic calculations, we show that the off-resonant dressing of the lattice of SrTiO3 with the vacuum fluctuations of the photons in a cavity can suppress the nuclear quantum fluctuations in a process analogous to dynamical localization (29): As explained in Results and Discussion, the interaction with cavity photons effectively results in an enhancement of the effective mass of the ions, thus slowing them down and reducing the importance of their quantum fluctuations. We further demonstrate that the effect of cavity-induced localization extends to finite temperatures, even when thermal lattice fluctuations overcome the quantum ones. We thus introduce a revisited paraelectric-to-ferroelectric phase diagram, with the cavity coupling strength as a new dimension.

9.
Lyotropic chromonic liquid crystals are water-based materials composed of self-assembled cylindrical aggregates. Their behavior under flow is poorly understood, and quantitatively resolving the optical retardance of the flowing liquid crystal has so far been limited by the imaging speed of current polarization-resolved imaging techniques. Here, we employ a single-shot quantitative polarization imaging method, termed polarized shearing interference microscopy, to quantify the spatial distribution and the dynamics of the structures emerging in nematic disodium cromoglycate solutions in a microfluidic channel. We show that pure-twist disclination loops nucleate in the bulk flow over a range of shear rates. These loops are elongated in the flow direction and exhibit a constant aspect ratio that is governed by the nonnegligible splay-bend anisotropy at the loop boundary. The size of the loops is set by the balance between nucleation forces and annihilation forces acting on the disclination. The fluctuations of the pure-twist disclination loops reflect the tumbling character of nematic disodium cromoglycate. Our study, including experiment, simulation, and scaling analysis, provides a comprehensive understanding of the structure and dynamics of pressure-driven lyotropic chromonic liquid crystals and might open new routes for using these materials to control assembly and flow of biological systems or particles in microfluidic devices.

Lyotropic chromonic liquid crystals (LCLCs) are aqueous dispersions of organic disk-like molecules that self-assemble into cylindrical aggregates, which form nematic or columnar liquid crystal phases under appropriate conditions of concentration and temperature (1–6). These materials have gained increasing attention in both fundamental and applied research over the past decade, due to their distinct structural properties and biocompatibility (4, 7–14). Used as a replacement for isotropic fluids in microfluidic devices, nematic LCLCs have been employed to control the behavior of bacteria and colloids (13, 15–20).

Nematic liquid crystals form topological defects under flow, which gives rise to complex dynamical structures that have been extensively studied in thermotropic liquid crystals (TLCs) and liquid crystal polymers (LCPs) (21–29). In contrast to lyotropic liquid crystals that are dispersed in a solvent and whose phase can be tuned by either concentration or temperature, TLCs do not need a solvent to possess a liquid-crystalline state and their phase depends only on temperature (30). Most TLCs are shear-aligned nematics, in which the director evolves toward an equilibrium out-of-plane polar angle. Defects nucleate beyond a critical Ericksen number due to the irreconcilable alignment of the directors from surface anchoring and shear alignment in the bulk flow (24, 31–33). With an increase in shear rate, the defect type can transition from π-walls (domain walls that separate regions whose director orientation differs by an angle of π) to ordered disclinations and to a disordered chaotic regime (34). Recent efforts have aimed to tune and control the defect structures by understanding the relation between the selection of topological defect types and the flow field in flowing TLCs. Strategies to do so include tuning the geometry of microfluidic channels, inducing defect nucleation through the introduction of isotropic phases or designing inhomogeneities in the surface anchoring (35–39). LCPs are typically tumbling nematics for which α2α3 < 0, where α2 and α3 are the Leslie viscosities. This leads to a nonzero viscous torque for any orientation of the director, which allows the director to rotate in the shear plane (22, 29, 30, 40). The tumbling character of LCPs facilitates the nucleation of singular topological defects (22, 40). Moreover, the molecular rotational relaxation times of LCPs are longer than those of TLCs, and they can exceed the timescales imposed by the shear rate. As a result, the rheological behavior of LCPs is governed not only by spatial gradients of the director field from the Frank elasticity, but also by changes in the molecular order parameter (25, 41–43). With increasing shear rate, topological defects in LCPs have been shown to transition from disclinations to rolling cells and to worm-like patterns (25, 26, 43).

Topological defects occurring in the flow of nematic LCLCs have so far received much more limited attention (44, 45). At rest, LCLCs exhibit unique properties distinct from those of TLCs and LCPs (1, 2, 4–6, 44). In particular, LCLCs have significant elastic anisotropy compared to TLCs; the twist Frank elastic constant, K2, is much smaller than the splay and bend Frank elastic constants, K1 and K3. The resulting relative ease with which twist deformations can occur can lead to a spontaneous symmetry breaking and the emergence of chiral structures in static LCLCs under spatial confinement, despite the achiral nature of the molecules (4, 46–51).
When driven out of equilibrium by an imposed flow, the average director field of LCLCs has been reported to align predominantly along the shear direction under strong shear, but to reorient to an alignment perpendicular to the shear direction below a critical shear rate (52–54). A recent study has revealed a variety of complex textures that emerge in simple shear flow in the nematic LCLC disodium cromoglycate (DSCG) (44). The tumbling nature of this liquid crystal leads to enhanced sensitivity to shear rate. At shear rates γ̇ < 1 s⁻¹, the director realigns perpendicular to the flow direction, adopting a so-called log-rolling state characteristic of tumbling nematics. For 1 s⁻¹ < γ̇ < 10 s⁻¹, polydomain textures form due to the nucleation of pure-twist disclination loops, for which the rotation vector is parallel to the loop normal, and mixed wedge-twist disclination loops, for which the rotation vector is perpendicular to the loop normal (44, 55). For γ̇ > 10 s⁻¹, the disclination loops gradually transform into periodic stripes in which the director aligns predominantly along the flow direction (44).

Here, we report on the structure and dynamics of topological defects occurring in the pressure-driven flow of nematic DSCG. A quantitative evaluation of such dynamics has so far remained challenging, in particular for fast flow velocities, due to the slow image acquisition rate of current quantitative polarization-resolved imaging techniques. Quantitative polarization imaging traditionally relies on three commonly used techniques: fluorescence confocal polarization microscopy, polarizing optical microscopy, and LC-Polscope imaging. Fluorescence confocal polarization microscopy can provide accurate maps of birefringence and orientation angle, but the fluorescent labeling may perturb the flow properties (56). Polarizing optical microscopy requires a mechanical rotation of the polarizers and multiple measurements, which severely limits the imaging speed. The LC-Polscope, an extension of conventional polarization optical microscopy, utilizes liquid crystal universal compensators to replace the compensator used in conventional polarization microscopes (57). This leads to an enhanced imaging speed and better compensation for polarization artifacts of the optical system. The need for multiple measurements to quantify retardance, however, still limits the acquisition rate of LC-Polscopes.

We overcome these challenges by using a single-shot quantitative polarization microscopy technique, termed polarized shearing interference microscopy (PSIM). PSIM combines circular polarization light excitation with off-axis shearing interferometry detection. Using a custom polarization retrieval algorithm, we achieve single-shot mapping of the retardance, which allows us to reach imaging speeds that are limited only by the camera frame rate while preserving a large field of view and micrometer spatial resolution. We provide a brief discussion of the optical design of PSIM in Materials and Methods; further details of the measurement accuracy and imaging performance of PSIM are reported in ref. 58.

Using a combination of experiments, numerical simulations, and scaling analysis, we show that in the pressure-driven flow of nematic DSCG solutions in a microfluidic channel, pure-twist disclination loops emerge for a certain range of shear rates. These loops are elongated in the flow with a fixed aspect ratio.
We demonstrate that the disclination loops nucleate at the boundary between regions where the director aligns predominantly along the flow direction, close to the channel walls, and regions where the director aligns predominantly perpendicular to the flow direction, in the center of the channel. The large elastic stresses associated with the director gradient at this boundary are then released by the formation of disclination loops. We show that both the characteristic size and the fluctuations of the pure-twist disclination loops can be tuned by controlling the flow rate.

10.
During the last decade, translational and rotational symmetry-breaking phases—density wave order and electronic nematicity—have been established as generic and distinct features of many correlated electron systems, including pnictide and cuprate superconductors. However, in cuprates, the relationship between these electronic symmetry-breaking phases and the enigmatic pseudogap phase remains unclear. Here, we employ resonant X-ray scattering in a cuprate high-temperature superconductor La1.6−xNd0.4SrxCuO4 (Nd-LSCO) to navigate the cuprate phase diagram, probing the relationship between electronic nematicity of the Cu 3d orbitals, charge order, and the pseudogap phase as a function of doping. We find evidence for a considerable decrease in electronic nematicity beyond the pseudogap phase, either by raising the temperature through the pseudogap onset temperature T* or by increasing doping through the pseudogap critical point, p*. These results establish a clear link between electronic nematicity, the pseudogap, and its associated quantum criticality in overdoped cuprates. Our findings anticipate that electronic nematicity may play a larger role in understanding the cuprate phase diagram than previously recognized, possibly having a crucial role in the phenomenology of the pseudogap phase.

There is a growing realization that the essential physics of the cuprate high-temperature superconductors, and perhaps other strongly correlated materials, involves a rich interplay between different electronic symmetry-breaking phases (1–3) like superconductivity, spin or charge density wave (SDW or CDW) order (4–7), antiferromagnetism, electronic nematicity (8–14), and possibly other orders such as pair density wave order (15) or orbital current order (16).

One or more of these orders may also be linked with the existence of a zero-temperature quantum critical point (QCP) in the superconducting state of the cuprates, similar to heavy-fermion, organic, pnictide, and iron-based superconductors (17–19). The significance of the QCP in describing the properties of the cuprates, as a generic organizing principle where quantum fluctuations in the vicinity of the QCP impact a wide swath of the cuprate phase diagram, remains an open question. Evidence for such a QCP and its influence includes a linear-in-temperature resistivity extending to low temperature, strong mass enhancement via quantum oscillation studies (20), and an enhancement in the specific heat (21) in the field-induced normal state, with some of the more direct evidence for a QCP in the cuprates coming from measurements in the material La1.6−xNd0.4SrxCuO4 (Nd-LSCO). Moreover, the QCP also appears to be the endpoint of the pseudogap phase (21), which is marked, among other features, by a transition of the electronic structure from a small Fermi surface, folded or truncated by the antiferromagnetic zone boundary in the pseudogap phase, to a large Fermi surface at higher doping (22, 23) that is consistent with band structure calculations (24). However, in the cuprates, neither the QCP nor the change in the electronic structure has been definitively associated with a particular symmetry-breaking phase.

In this article, we interrogate the possibility that the cuprates exhibit a connection between electronic nematic order, the pseudogap, and its associated QCP. In the pnictide superconductors, which are similar in many respects to the cuprates, electronic nematic order is more clearly established experimentally, and there have been reports of nematic fluctuations (25), non-Fermi-liquid transport (26), and a change in the topology of the Fermi surface associated with a nematic QCP (27). Electronic nematicity refers to a breaking of the rotational symmetry of the electronic structure in a manner that is not a straightforward result of the crystalline symmetry, such that an additional electronic nematic order parameter beyond the structure would be required to describe the resulting phase. The manifestation of nematic order may therefore depend on the details of the crystal structure of the materials, such as whether the structure is tetragonal or orthorhombic. However, such a state can be difficult to identify in materials that have orthorhombic structures, which would naturally couple to any electronic nematic order and vice versa. Despite these challenges, experimental evidence for electronic nematic order that is distinct from the crystal structure includes reports of electronic nematicity from bulk transport (8–10) and magnetometry measurements (11) in YBa2Cu3Oy (YBCO), scanning tunneling microscopy (STM) (13, 14, 28) in Bi2Sr2CaCu2O8+δ (Bi2212), inelastic neutron scattering (12) in YBCO, and resonant X-ray scattering (29) in (La,Nd,Ba,Sr,Eu)2CuO4.
Moreover, STM studies in Bi2212 have reported intra-unit-cell nematicity disappearing around the pseudogap endpoint (30), which also seems to be a region of enhanced electronic nematic fluctuations (31, 32). In YBCO, there have also been reports of an association between nematicity and the pseudogap onset temperature (9, 11).

Here, we use resonant X-ray scattering to measure electronic nematic order in the cuprate Nd-LSCO as a function of doping and temperature, to explore the relationship of electronic nematicity with the pseudogap phase. While it remains unclear whether a quantum critical point governs a wide swath of the phase diagram in hole-doped cuprates and is generic to many material systems, investigation of Nd-LSCO provides the opportunity to probe the evolution of electronic nematicity over a wide range of doping in the same material system where some of the most compelling signatures of quantum criticality and electronic structure evolution have been observed. These include a divergence in the heat capacity (21), a change in the electronic structure from angle-dependent magnetoresistance (ADMR) measurements (24) in the vicinity of the QCP at x = 0.23, and the onset of the pseudogap (23). Our main result is that we observe a vanishing of the electronic nematic order in Nd-LSCO as hole doping is increased above x = 0.23, which has been identified as the QCP doping for this system (21), or as temperature is increased above the pseudogap onset temperature T* (23). These observations indicate that electronic nematicity in Nd-LSCO is intimately linked to the pseudogap phase.

11.
The intracellular milieu differs from the dilute conditions in which most biophysical and biochemical studies are performed. This difference has led both experimentalists and theoreticians to tackle the challenging task of understanding how the intracellular environment affects the properties of biopolymers. Despite a growing number of in-cell studies, there is a lack of quantitative, residue-level information about equilibrium thermodynamic protein stability under nonperturbing conditions. We report the use of NMR-detected hydrogen–deuterium exchange of quenched cell lysates to measure individual opening free energies of the 56-aa B1 domain of protein G (GB1) in living Escherichia coli cells without adding destabilizing cosolutes or heat. Comparisons to dilute solution data (pH 7.6 and 37 °C) show that opening free energies increase by as much as 1.14 ± 0.05 kcal/mol in cells. Importantly, we also show that homogeneous protein crowders destabilize GB1, highlighting the challenge of recreating the cellular interior. We discuss our findings in terms of hard-core excluded volume effects, charge–charge GB1–crowder interactions, and other factors. The quenched lysate method identifies the residues most important for folding GB1 in cells, and should prove useful for quantifying the stability of other globular proteins in cells to gain a more complete understanding of the effects of the intracellular environment on protein chemistry.

Proteins function in a heterogeneous and crowded intracellular environment. Macromolecules comprise 20–30% of the volume of an Escherichia coli cell and reach concentrations of 300–400 g/L (1, 2). Theory predicts that the properties of proteins and nucleic acids can be significantly altered in cells compared with buffer alone (3, 4). Nevertheless, most biochemical and biophysical studies are conducted under dilute (<10 g/L macromolecules) conditions. Here, we augment the small but growing list of reports probing the equilibrium thermodynamic stability of proteins in living cells (5–9), and provide, to our knowledge, the first measurement of residue-level stability under nonperturbing conditions.

Until recently, the effects of macromolecular crowding on protein stability were thought to be caused solely by hard-core, steric repulsions arising from the impenetrability of matter (4, 10, 11). The expectation was that crowding enhances stability by favoring the compact native state over the ensemble of denatured states. Increased attention to transient, nonspecific protein–protein interactions (12–15) has led both experimentalists (16–19) and theoreticians (20–22) to recognize the effects of chemical interactions between crowder and test protein when assessing the net effect of macromolecular crowding. These weak, nonspecific interactions can reinforce or oppose the effect of hard-core repulsions, resulting in increased or decreased stability depending on the chemical nature of the test protein and crowder (23–26).

We chose the B1 domain of streptococcal protein G (GB1) (27) as our test protein because its structure, stability, and folding kinetics have been extensively studied in dilute solution (28–38). Its small size (56 aa; 6.2 kDa) and high thermal stability make GB1 well suited for studies by NMR spectroscopy.

Quantifying the equilibrium thermodynamic stability of proteins relies on determining the relative populations of native and denatured states.
Because the denatured state ensemble of a stable protein is sparsely populated under native conditions, stability is usually probed by adding heat or a cosolute to promote unfolding so that the concentration ratio of the two states can be determined (39). However, stability can be measured without these perturbations by exploiting the phenomenon of backbone amide H/D exchange (40) detected by NMR spectroscopy (41). The observed rate of amide proton (N–H) exchange, kobs, is related to equilibrium stability by considering a protein in which each N–H exists in an open (exposed, exchange-competent) state or a closed (protected, exchange-incompetent) state (40, 42):

closed(N–H) ⇌ open(N–H) → open(N–D) ⇌ closed(N–D).   [1]

Each position opens and closes with rate constants kop and kcl (where Kop = kop/kcl), and exchange from the open state occurs with intrinsic rate constant kint. Values for kint are based on exchange data from unstructured peptides (43, 44). If the test protein is stable (i.e., kcl >> kop), the observed rate becomes:

kobs = kop kint / (kcl + kint).   [2]

Exchange occurs within two limits (42). At the EX1 limit, closing is rate determining, and kobs = kop. This limit is usually observed for less stable proteins and at basic pH (45). Most globular proteins undergo EX2 kinetics, where exchange from the open state is rate limiting (i.e., kcl >> kint), and kobs values can be converted to equilibrium opening free energies, ΔGop° (46):

kobs = (kop/kcl) kint = Kop kint,   [3]

ΔGop° = −RT ln(kobs/kint),   [4]

where RT is the molar gas constant multiplied by the absolute temperature.

The backbone amides most strongly involved in H-bonded regions of secondary structure exchange only from the fully unfolded state, yielding a maximum value of ΔGop° (47–49). For these residues, ΔGop° approximates the free energy of denaturation, ΔGden°, providing information on global stability. Lower amplitude fluctuations of the native state can give rise to partially unfolded forms (50), resulting in residues with ΔGop° values less than those of the global unfolders.

In summary, NMR-detected H/D exchange can measure equilibrium thermodynamic stability of a protein at the level of individual amino acid residues under nonperturbing conditions. Inomata et al. (51) used this technique to measure kobs values in human cells for four residues in ubiquitin, but experiments confirming the exchange mechanism were not reported and opening free energies were not quantified. Our results fill this void and provide quantitative residue-level protein stability measurements in living cells under nonperturbing conditions.
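As a small worked example of Eqs. 3 and 4 (a sketch with assumed rate values, not data from the study), the opening free energy follows directly from the ratio of the observed and intrinsic exchange rates in the EX2 limit:

import math

R = 1.987e-3     # molar gas constant (kcal mol^-1 K^-1)
T = 310.15       # 37 °C in kelvin
k_int = 10.0     # intrinsic exchange rate from unstructured-peptide data (s^-1), assumed value
k_obs = 1.0e-4   # observed amide exchange rate (s^-1), assumed value

K_op = k_obs / k_int                 # Eq. 3 (EX2 limit): k_obs = K_op * k_int
dG_op = -R * T * math.log(K_op)      # Eq. 4
print(f"K_op = {K_op:.1e}, dG_op = {dG_op:.2f} kcal/mol")   # about 7.1 kcal/mol for these assumed rates
# At 37 °C, a 1 kcal/mol increase in dG_op (the scale of the in-cell stabilization reported
# above for GB1) corresponds to roughly a fivefold decrease in the opening equilibrium constant.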

12.
Fluids are known to trigger a broad range of slip events, from slow, creeping transients to dynamic earthquake ruptures. Yet, the detailed mechanics underlying these processes and the conditions leading to different rupture behaviors are not well understood. Here, we use a laboratory earthquake setup, capable of injecting pressurized fluids, to compare the rupture behavior for different rates of fluid injection, slow (megapascals per hour) versus fast (megapascals per second). We find that for the fast injection rates, dynamic ruptures are triggered at lower pressure levels and over spatial scales much smaller than the quasistatic theoretical estimates of nucleation sizes, suggesting that such fast injection rates constitute dynamic loading. In contrast, the relatively slow injection rates result in gradual nucleation processes, with the fluid spreading along the interface and causing stress changes consistent with gradually accelerating slow slip. The resulting dynamic ruptures propagating over wetted interfaces exhibit dynamic stress drops almost twice as large as those over the dry interfaces. These results suggest the need to take into account the rate of the pore-pressure increase when considering nucleation processes and motivate further investigation on how friction properties depend on the presence of fluids.

The close connection between fluids and faulting has been revealed by a large number of observations, both in tectonic settings and during human activities, such as wastewater disposal associated with oil and gas extraction, geothermal energy production, and CO2 sequestration (1–11). On and around tectonic faults, fluids also naturally exist and are added at depths due to rock-dehydration reactions (12–15). Fluid-induced slip behavior can range from earthquakes to slow, creeping motion. It has long been thought that creeping and seismogenic fault zones have little to no spatial overlap. Nonetheless, growing evidence suggests that the same fault areas can exhibit both slow and dynamic slip (16–19). The existence of large-scale slow slip in potentially seismogenic areas has been revealed by the presence of transient slow-slip events in subduction zones (16, 18) and proposed by studies investigating the physics of foreshocks (20–22).

Numerical and laboratory modeling has shown that such complex fault behavior can result from the interaction of fluid-related effects with the rate-and-state frictional properties (9, 14, 19, 23, 24); other proposed rheological explanations for complexities in fault stability include combinations of brittle and viscous rheology (25) and friction-to-flow transitions (26). The interaction of frictional sliding and fluids results in a number of coupled and competing mechanisms. The fault shear resistance τres is typically described by a friction model that linearly relates it to the effective normal stress σ̂n via a friction coefficient f:

$$\tau_{res} = f\,\hat{\sigma}_n = f(\sigma_n - p), \quad [1]$$

where σn is the normal stress acting across the fault and p is the pore pressure. Clearly, increasing pore pressure p would reduce the fault frictional resistance, promoting the insurgence of slip. However, such slip need not be fast enough to radiate seismic waves, as would be characteristic of an earthquake, but can be slow and aseismic. In fact, the critical spatial scale h* for the slipping zone to reach in order to initiate an unstable, dynamic event is inversely proportional to the effective normal stress (27, 28) and hence increases with increasing pore pressure, promoting stable slip. This stabilizing effect of increasing fluid pressure holds for both linear slip-weakening and rate-and-state friction; it occurs because lower effective normal stress results in lower fault weakening during slip for the same friction properties. For example, the general form for two-dimensional (2D) theoretical estimates of this so-called nucleation size, h*, on rate-and-state faults with steady-state, velocity-weakening friction is given by:

$$h^{*} = \frac{\mu^{*} D_{RS}}{F(a,b)\,(\sigma_n - p)}, \quad [2]$$

where μ* = μ/(1 − ν) for modes I and II, and μ* = μ for mode III (29); DRS is the characteristic slip distance; and F(a, b) is a function of the rate-and-state friction parameters a and b. The function F(a, b) depends on the specific assumptions made to obtain the estimate: FRR(a, b) = 4(b − a)/π (ref. 27, equation 40) for a linearized stability analysis of steady sliding, or FRA(a, b) = π(b − a)²/(2b), with a/b > 1/2, for quasistatic crack-like expansion of the nucleation zone (ref. 30, equation 42).

Hence, an increase in pore pressure induces a reduction in the effective normal stress, which both promotes slip due to lower frictional resistance and increases the critical length scale h*, potentially resulting in slow, stable fault slip instead of fast, dynamic rupture. Indeed, recent field and laboratory observations suggest that fluid injection triggers slow slip first (4, 9, 11, 31).
Numerical modeling based on these effects, either by themselves or with an additional stabilizing effect of shear-layer dilatancy and the associated drop in fluid pressure, have been successful in capturing a number of properties of slow-slip events observed on natural faults and in field fluid-injection experiments (14, 24, 3234). However, understanding the dependence of the fault response on the specifics of pore-pressure increase remains elusive. Several studies suggest that the nucleation size can depend on the loading rate (3538), which would imply that the nucleation size should also depend on the rate of friction strength change and hence on the rate of change of the pore fluid pressure. The dependence of the nucleation size on evolving pore fluid pressure has also been theoretically investigated (39). However, the commonly used estimates of the nucleation size (Eq. 2) have been developed for faults under spatially and temporally uniform effective stress, which is clearly not the case for fluid-injection scenarios. In addition, the friction properties themselves may change in the presence of fluids (4042). The interaction between shear and fluid effects can be further affected by fault-gauge dilation/compaction (40, 4345) and thermal pressurization of pore fluids (42, 4648).Recent laboratory investigations have been quite instrumental in uncovering the fundamentals of the fluid-faulting interactions (31, 45, 4957). Several studies have indicated that fluid-pressurization rate, rather than injection volume, controls slip, slip rate, and stress drop (31, 49, 57). Rapid fluid injection may produce pressure heterogeneities, influencing the onset of slip. The degree of heterogeneity depends on the balance between the hydraulic diffusion rate and the fluid-injection rate, with higher injection rates promoting the transition from drained to locally undrained conditions (31). Fluid pressurization can also interact with friction properties and produce dynamic slip along rate-strengthening faults (50, 51).In this study, we investigate the relation between the rate of pressure increase on the fault and spontaneous rupture nucleation due to fluid injection by laboratory experiments in a setup that builds on and significantly develops the previous generations of laboratory earthquake setup of Rosakis and coworkers (58, 59). The previous versions of the setup have been used to study key features of dynamic ruptures, including sub-Rayleigh to supershear transition (60); rupture directionality and limiting speeds due to bimaterial effects (61); pulse-like versus crack-like behavior (62); opening of thrust faults (63); and friction evolution (64). A recent innovation in the diagnostics, featuring ultrahigh-speed photography in conjunction with digital image correlation (DIC) (65), has enabled the quantification of the full-field behavior of dynamic ruptures (6668), as well as the characterization of the local evolution of dynamic friction (64, 69). In these prior studies, earthquake ruptures were triggered by the local pressure release due to an electrical discharge. This nucleation procedure produced only dynamic ruptures, due to the nearly instantaneous normal stress reduction.To study fault slip triggered by fluid injection, we have developed a laboratory setup featuring a hydraulic circuit capable of injecting pressurized fluid onto the fault plane of a specimen and a set of experimental diagnostics that enables us to detect both slow and fast fault slip and stress changes. 
The range of fluid-pressure time histories produced by this setup results in both quasistatic and dynamic rupture nucleation; the diagnostics allow us to capture the nucleation processes, as well as the resulting dynamic rupture propagation. In particular, here we explore two injection techniques: procedure 1, a gradual fluid-pressure ramp-up, and procedure 2, a sharp one. An array of strain gauges, placed on the specimen’s surface along the fault, can capture the strain (translated into stress) time histories over a wide range of temporal scales, spanning from microseconds to tens of minutes. Once dynamic ruptures nucleate, an ultrahigh-speed camera records images of the propagating ruptures, which are turned into maps of full-field displacements, velocities, and stresses by a tailored DIC analysis. One advantage of using a specimen made of an analog material, such as the poly(methyl methacrylate) (PMMA) used in this study, is its transparency, which allows us to look at the interface through the bulk and observe fluid diffusion over the interface. Another important advantage of using PMMA is that its much lower shear modulus results in much smaller nucleation sizes h* than those for rocks, allowing the experiments to produce both slow and fast slip in samples of manageable sizes.

We start by describing the laboratory setup and the diagnostics monitoring the pressure evolution and the slip behavior. We then present and discuss the different slip responses measured as a result of slow versus fast fluid injection and interpret our measurements by using the rate-and-state friction framework and a pressure-diffusion model.  相似文献
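To make the nucleation-size estimates of Eq. 2 concrete, here is a minimal sketch that evaluates h* for both prefactor choices, F_RR and F_RA; the elastic, frictional, and stress parameters are assumed, loosely PMMA-like placeholders rather than the values measured in this work.

```python
import math

def nucleation_size(mu, nu, d_rs, a, b, sigma_n, p, mode="II"):
    """Quasistatic nucleation size h* (Eq. 2) for rate-and-state friction."""
    mu_star = mu / (1.0 - nu) if mode in ("I", "II") else mu  # mode III uses mu
    f_rr = 4.0 * (b - a) / math.pi              # linearized stability of steady sliding
    f_ra = math.pi * (b - a) ** 2 / (2.0 * b)   # quasistatic crack-like expansion (a/b > 1/2)
    sigma_eff = sigma_n - p                     # effective normal stress
    return (mu_star * d_rs / (f_rr * sigma_eff),
            mu_star * d_rs / (f_ra * sigma_eff))

# Placeholder parameters: shear modulus ~1.7 GPa (PMMA-like), Poisson ratio 0.35,
# D_RS = 1 micron, a = 0.011, b = 0.016, sigma_n = 4 MPa, p = 1 MPa.
h_rr, h_ra = nucleation_size(1.7e9, 0.35, 1e-6, 0.011, 0.016, 4e6, 1e6)
print(f"h*_RR = {h_rr:.3f} m, h*_RA = {h_ra:.3f} m")
# Raising p toward sigma_n increases both estimates, favoring stable (slow) slip.
```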

13.
The transacting activator of transduction (TAT) protein plays a key role in the progression of AIDS. Studies have shown that a +8 charged sequence of amino acids in the protein, called the TAT peptide, enables the TAT protein to penetrate cell membranes. To probe mechanisms of binding and translocation of the TAT peptide into the cell, investigators have used phospholipid liposomes as cell membrane mimics. We have used the method of surface potential sensitive second harmonic generation (SHG), which is a label-free and interface-selective method, to study the binding of TAT to anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG) and neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) liposomes. It is the SHG sensitivity to the electrostatic field generated by a charged interface that enabled us to obtain the interfacial electrostatic potential. SHG together with the Poisson–Boltzmann equation yielded the dependence of the surface potential on the density of adsorbed TAT. We obtained the dissociation constants Kd for TAT binding to POPC and POPG liposomes and the maximum number of TATs that can bind to a given liposome surface. For POPC Kd was found to be 7.5 ± 2 μM, and for POPG Kd was 29.0 ± 4.0 μM. As TAT was added to the liposome solution the POPC surface potential changed from 0 mV to +37 mV, and for POPG it changed from −57 mV to −37 mV. A numerical calculation of Kd, which included all terms obtained from application of the Poisson–Boltzmann equation to the TAT liposome SHG data, was shown to be in good agreement with an approximated solution.The HIV type 1 (HIV-1) transacting activator of transduction (TAT) is an important regulatory protein for viral gene expression (13). It has been established that the TAT protein has a key role in the progression of AIDS and is a potential target for anti-HIV vaccines (4). For the TAT protein to carry out its biological functions, it needs to be readily imported into the cell. Studies on the cellular internalization of TAT have led to the discovery of the TAT peptide, a highly cationic 11-aa region (protein transduction domain) of the 86-aa full-length protein that is responsible for the TAT protein translocating across phospholipid membranes (58). The TAT peptide is a member of a class of peptides called cell-penetrating peptides (CPPs) that have generated great interest for drug delivery applications (ref. 9 and references therein). The exact mechanism by which the TAT peptide enters cells is not fully understood, but it is likely to involve a combination of energy-independent penetration and endocytosis pathways (8, 10). The first step in the process is high-affinity binding of the peptide to phospholipids and other components on the cell surface such as proteins and glycosaminoglycans (1, 9).The binding of the TAT peptide to liposomes has been investigated using a variety of techniques, each of which has its own advantages and limitations. Among the techniques are isothermal titration calorimetry (9, 11), fluorescence spectroscopy (12, 13), FRET (12, 14), single-molecule fluorescence microscopy (15, 16), and solid-state NMR (17). Second harmonic generation (SHG), as an interface-selective technique (1824), does not require a label, and because SHG is sensitive to the interface potential, it is an attractive method to selectively probe the binding of the highly charged (+8) TAT peptide to liposome surfaces. 
Although coherent SHG is forbidden in centrosymmetric and isotropic bulk media for reasons of symmetry, it can be generated by a centrosymmetric structure, e.g., a sphere, provided that the object is centrosymmetric over roughly the length scale of the optical coherence, which is a function of the particle size, the wavelength of the incident light, and the refractive indexes at ω and 2ω (25–30). As a second-order nonlinear optical technique SHG has symmetry restrictions such that coherent SHG is not generated by the randomly oriented molecules in the bulk liquid, but can be generated coherently by the much smaller population of oriented interfacial species bound to a particle or planar surfaces. As a consequence the SHG signal from the interface is not overwhelmed by SHG from the much larger populations in the bulk media (25–28).

The total second harmonic electric field, E2ω, originating from a charged interface in contact with water can be expressed as (31–33)

$$E_{2\omega} \propto \sum_{i}\chi^{(2)}_{c,i}\,E_{\omega}E_{\omega} + \sum_{j}\chi^{(2)}_{inc,j}\,E_{\omega}E_{\omega} + \chi^{(3)}_{\mathrm{H_2O}}\,E_{\omega}E_{\omega}\,\Phi, \quad [1]$$

where χc,i(2) represents the second-order susceptibility of the species i present at the interface; χinc,j(2) represents the incoherent contribution of the second-order susceptibility, arising from density and orientational fluctuations of the species j present in solution, often referred to as hyper-Rayleigh scattering; χH2O(3) is the third-order susceptibility originating chiefly from the polarization of the bulk water molecules polarized by the charged interface; Φ is the potential at the interface that is created by the surface charge; and Eω is the electric field of the incident light at the fundamental frequency ω. The second-order susceptibility, χc,i(2), can be written as the product of the number of molecules, N, at the surface and the orientational ensemble average of the hyperpolarizability αi(2) of surface species i, yielding χc,i(2) = N⟨αi(2)⟩ (18). The brackets ⟨ ⟩ indicate an orientational average over the interfacial molecules. The third term in Eq. 1 depicts a third-order process by which a second harmonic field is generated by a charged interface. This term is the focus of our work. The SHG signal is dependent on the surface potential created by the electrostatic field of the surface charges, often called the χ(3) contribution to the SHG signal. The χ(3) method has been used to extract the surface charge density of charged planar surfaces and microparticle surfaces, e.g., liposomes, polymer beads, and oil droplets in water (21, 25, 34–39).

In this work, the χ(3) SHG method is used to explore a biomedically relevant process. The binding of the highly cationic HIV-1 TAT peptide to liposome membranes changes the surface potential, thereby enabling the use of the χ(3) method to study the binding process in a label-free manner. Two kinds of liposomes, neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG), were investigated. The chemical structures of TAT, POPC, and POPG lipids are shown in Scheme 1. [Scheme 1: Chemical structures of the HIV-1 TAT (47–57) peptide and the POPC and POPG lipids.]  相似文献
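The following sketch illustrates, under simplifying assumptions, how Langmuir-type binding of the +8 TAT peptide could shift the surface potential Φ that enters the χ(3) term of Eq. 1. It uses a Gouy–Chapman model for a planar charged surface in a 1:1 electrolyte; the bare lipid charge density, binding-site density, and ionic strength are illustrative placeholders (only the POPG Kd is taken from the values reported above).

```python
import math

# Physical constants (SI)
KB, Q_E, EPS0 = 1.380649e-23, 1.602176634e-19, 8.8541878128e-12

def gouy_chapman_potential(sigma, c_molar, temperature_k=298.15, eps_r=78.4):
    """Surface potential (V) of a charged plane in a 1:1 electrolyte (Gouy-Chapman)."""
    n0 = c_molar * 1000.0 * 6.02214076e23          # ions per m^3
    prefactor = math.sqrt(8.0 * eps_r * EPS0 * KB * temperature_k * n0)
    return (2.0 * KB * temperature_k / Q_E) * math.asinh(sigma / prefactor)

def tat_coverage(c_tat, k_d):
    """Langmuir isotherm: fraction of available binding sites occupied by TAT."""
    return c_tat / (c_tat + k_d)

# Assumed illustrative numbers for an anionic (POPG-like) surface.
sigma_lipid = -0.02        # C/m^2, assumed bare surface charge density
site_density = 2.0e16      # TAT binding sites per m^2, assumed
k_d = 29.0e-6              # M, dissociation constant reported above for POPG
for c in (1e-6, 1e-5, 1e-4):   # added TAT concentrations (M)
    sigma = sigma_lipid + 8 * Q_E * site_density * tat_coverage(c, k_d)
    phi = gouy_chapman_potential(sigma, c_molar=0.01)   # 10 mM background electrolyte, assumed
    print(f"[TAT] = {c:.0e} M -> surface potential ~ {1e3 * phi:+.0f} mV")
```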

14.
Anaerobic microbial respiration in suboxic and anoxic environments often involves particulate ferric iron (oxyhydr-)oxides as terminal electron acceptors. To ensure efficient respiration, a widespread strategy among iron-reducing microorganisms is the use of extracellular electron shuttles (EES) that transfer two electrons from the microbial cell to the iron oxide surface. Yet, a fundamental understanding of how EES–oxide redox thermodynamics affect rates of iron oxide reduction remains elusive. Attempts to rationalize these rates for different EES, solution pH, and iron oxides on the basis of the underlying reaction free energy of the two-electron transfer were unsuccessful. Here, we demonstrate that broadly varying reduction rates determined in this work for different iron oxides and EES at varying solution chemistry as well as previously published data can be reconciled when these rates are instead related to the free energy of the less exergonic (or even endergonic) first of the two electron transfers from the fully, two-electron reduced EES to ferric iron oxide. We show how free energy relationships aid in identifying controls on microbial iron oxide reduction by EES, thereby advancing a more fundamental understanding of anaerobic respiration using iron oxides.

The use of iron oxides as terminal electron acceptors in anaerobic microbial respiration is central to biogeochemical element cycling and pollutant transformations in many suboxic and anoxic environments (1–6). To ensure efficient electron transfer to solid-phase ferric iron, Fe(III), at circumneutral pH, metal-reducing microorganisms from diverse phyla use dissolved extracellular electron shuttles (EES), including quinones (7–9), flavins (10–16), and phenazines (17–19), to transfer two electrons per EES molecule from the respiratory chain proteins in the outer membrane of the microbial cell to the iron oxide (17, 20, 21). The oxidized EES can diffuse back to the cell surface for rereduction, thereby completing the catalytic redox cycle involving the EES.

The electron transfer from the reduced EES to Fe(III) is considered a key step in overall microbial Fe(III) respiration. Several lines of evidence suggest that the free energy of the electron transfer reaction, ΔrG, controls Fe(III) reduction rates (15, 17, 22, 23). For instance, microbial Fe(III) oxide reduction by dissolved model quinones as EES was accelerated only for quinones with standard two-electron reduction potentials, E⁰H,1,2, that fell into a relatively narrow range of 180 ± 80 mV at pH 7 (24). Furthermore, in abiotic experiments, Fe(III) reduction rates by EES decreased with increasing ΔrG that resulted from increasing either E⁰H,1,2 of the EES (25, 26), the concentration of Fe(II) in the system (27), or solution pH (25, 26, 28). However, substantial efforts to relate Fe(III) reduction rates for different EES species, iron oxides, and pH to the E⁰H,1,2 averaged over both electrons transferred from the EES to the iron oxides were only partially successful (25, 28). Reaction free energies of complex redox processes involving the transfer of multiple electrons can readily be calculated using differences in the reduction potentials averaged over all electrons transferred, and this approach is well established in biogeochemistry and microbial ecology. For kinetic considerations, however, the use of averaged reduction potentials is inappropriate.

Herein, we posit that rates of Fe(III) reduction by EES instead relate to the ΔrG of the less exergonic first one-electron transfer from the two-electron reduced EES species to the iron oxide, following the general notion that reaction rates scale with reaction free energies (29). Our hypothesis is based on the fact that, at circumneutral to acidic pH and for many EES, the reduction potential of the first electron transferred to the fully oxidized EES to form the one-electron reduced intermediate semiquinone species, EH,1, is lower than the reduction potential of the second electron transferred to the semiquinone to form the fully two-electron reduced EES species, EH,2 [i.e., EH,1 < EH,2 (30–33)]. This difference in one-electron reduction potentials implies that the two-electron reduced EES (i.e., the hydroquinone) is the weaker one-electron reductant for Fe(III) as compared to the semiquinone species. We therefore expect that rates of iron oxide reduction relate to the ΔrG of the first electron transferred from the hydroquinone to Fe(III).
The ΔrG of this first electron transfer may be endergonic even when the overall two-electron transfer is exergonic.

We verified our hypothesis in abiotic model systems by demonstrating that reduction rates of two geochemically important crystalline iron oxides, goethite and hematite, by two-electron reduced quinone- and flavin-based EES over a wide pH range (and therefore over a wide thermodynamic driving force for Fe(III) reduction) correlate with the ΔrG of the first electron transferred from the fully reduced EES to Fe(III). We further show that rates of goethite and hematite reduction by EES reported in the literature are in excellent agreement with our rate data when comparing rates on the basis of the thermodynamics of the less exergonic first of the two electron transfers.  相似文献
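A minimal sketch of the free-energy bookkeeping described above, assuming the first electron donated by the fully reduced EES is governed by the semiquinone/hydroquinone couple. The one-electron potentials used here are invented placeholders, chosen only to show that this first step can be endergonic even when the overall two-electron transfer is favorable.

```python
F_CONST = 96.485  # Faraday constant, kJ mol^-1 V^-1 (per mole of electrons)

def delta_r_g_first_electron(e_h_acceptor, e_h2_donor):
    """Free energy (kJ/mol) of the first one-electron transfer from the fully
    reduced EES (semiquinone/hydroquinone couple, potential E_H,2) to Fe(III):
    dG = -F (E_acceptor - E_donor) for a single electron."""
    return -F_CONST * (e_h_acceptor - e_h2_donor)

# Assumed illustrative one-electron potentials (V vs. SHE) at a fixed pH;
# real values depend on EES speciation, oxide phase, Fe(II), and solution chemistry.
e_fe_goethite = -0.10   # Fe(III)/Fe(II) couple of the oxide, assumed placeholder
e_h2_quinone = 0.05     # semiquinone/hydroquinone couple of a model quinone, assumed
dG1 = delta_r_g_first_electron(e_fe_goethite, e_h2_quinone)
print(f"First-electron transfer: {dG1:+.1f} kJ/mol "
      f"({'endergonic' if dG1 > 0 else 'exergonic'})")
```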

15.
Molecular, polymeric, colloidal, and other classes of liquids can exhibit very large, spatially heterogeneous alterations of their dynamics and glass transition temperature when confined to nanoscale domains. Considerable progress has been made in understanding the related problem of near-interface relaxation and diffusion in thick films. However, the origin of “nanoconfinement effects” on the glassy dynamics of thin films, where gradients from different interfaces interact and genuine collective finite size effects may emerge, remains a longstanding open question. Here, we combine molecular dynamics simulations, probing 5 decades of relaxation, and the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory, addressing 14 decades in timescale, to establish a microscopic and mechanistic understanding of the key features of altered dynamics in freestanding films spanning the full range from ultrathin to thick films. Simulations and theory are in qualitative and near-quantitative agreement without use of any adjustable parameters. For films of intermediate thickness, the dynamical behavior is well predicted to leading order using a simple linear superposition of thick-film exponential barrier gradients, including a remarkable suppression and flattening of various dynamical gradients in thin films. However, in sufficiently thin films the superposition approximation breaks down due to the emergence of genuine finite size confinement effects. ECNLE theory extended to treat thin films captures the phenomenology found in simulation, without invocation of any critical-like phenomena, on the basis of interface-nucleated gradients of local caging constraints, combined with interfacial and finite size-induced alterations of the collective elastic component of the structural relaxation process.

Spatially heterogeneous dynamics in glass-forming liquids confined to nanoscale domains (1–7) play a major role in determining the properties of molecular, polymeric, colloidal, and other glass-forming materials (8), including thin films of polymers (9, 10) and small molecules (11–15), small-molecule liquids in porous media (2, 4, 16, 17), semicrystalline polymers (18, 19), polymer nanocomposites (20–22), ionomers (23–25), self-assembled block and layered (26–33) copolymers, and vapor-deposited ultrastable molecular glasses (34–36). Intense interest in this problem over the last 30 y has also been motivated by the expectation that its understanding could reveal key insights concerning the mechanism of the bulk glass transition.

Considerable progress has been made for near-interface altered dynamics in thick films, as recently critically reviewed (1). Large amplitude gradients of the structural relaxation time, τ(z,T), converge to the bulk value, τbulk(T), in an intriguing double-exponential manner with distance, z, from a solid or vapor interface (1–3, 37–42). This implies that the corresponding effective activation barrier, Ftotal(z,T,H) (where H is film thickness), varies exponentially with z, as does the glass transition temperature, Tg (37). Thus the fractional reduction in activation barrier, ε(z,H), obeys the equation

$$\varepsilon(z,H) \equiv 1 - F_{total}(z,T,H)/F_{total,bulk}(T) = \varepsilon_{0}\exp(-z/\xi_{F}),$$

where Ftotal,bulk(T) is the bulk temperature-dependent barrier and ξF a length scale of modest magnitude. Although the gradient of reduction in absolute activation barriers becomes stronger with cooling, the amplitude of the fractional reduction of the barrier gradient, quantified by ε0, and the range ξF of this gradient, exhibit a weak or absent temperature dependence at the lowest temperatures accessed by simulations (typically with the strength of temperature dependence of ξF decreasing rather than increasing on cooling), which extend to relaxation timescales of order 10^5 ps. This finding raises questions regarding the relevance of critical-phenomena–like ideas for nanoconfinement effects (1). Partially due to this temperature invariance, coarse-grained and all-atom simulations (1, 37, 42, 43) have found a striking empirical fractional power law decoupling relation between τ(z,T) and τbulk(T):

$$\frac{\tau(T,z)}{\tau_{bulk}(T)} \approx \left(\tau_{bulk}(T)\right)^{-\varepsilon(z)}. \quad [1]$$

Recent theoretical analysis suggests (44) that this behavior is consistent with a number of experimental data sets as well (45, 46). Eq. 1 also corresponds to a remarkable factorization of the temperature and spatial location dependences of the barrier:

$$F_{total}(z,T) = [1 - \varepsilon(z)]\,F_{total,bulk}(T). \quad [2]$$

This finding indicates that the activation barrier for near-interface relaxation can be factored into two contributions: a z-dependent, but T-independent, “decoupling exponent,” ε(z), and a temperature-dependent, but position-insensitive, bulk activation barrier, Ftotal,bulk(T). Eq. 2 further emphasizes that ε(z) is equivalent to an effective fractional barrier reduction factor (for a vapor interface), 1 − Ftotal(z,T,H)/Ftotal,bulk(T), that can be extracted from relaxation data.

In contrast, the origin of “nanoconfinement effects” in thin films, and how much of the rich thick-film physics survives when dynamic gradients from two interfaces overlap, is not well understood. The distinct theoretical efforts for aspects of the thick-film phenomenology (44, 47–50) mostly assume an additive summation of one-interface effects in thin films, thereby ignoring possibly crucial cooperative and whole film finite size confinement effects.
If the latter involve phase-transition–like physics as per recent speculations (14, 51), one can ask the following: do new length scales emerge that might be truncated by finite film size? Alternatively, does ultrathin film phenomenology arise from a combination of two-interface superposition of the thick-film gradient physics and noncritical cooperative effects, perhaps in a property-, temperature-, and/or thickness-dependent manner?Here, we answer these questions and establish a mechanistic understanding of thin-film dynamics for the simplest and most universal case: a symmetric freestanding film with two vapor interfaces. We focus on small molecules (modeled theoretically as spheres) and low to medium molecular weight unentangled polymers, which empirically exhibit quite similar alterations in dynamics under “nanoconfinement.” We do not address anomalous phenomena [e.g., much longer gradient ranges (29), sporadic observation of two distinct glass transition temperatures (52, 53)] that are sometimes reported in experiments with very high molecular weight polymers and which may be associated with poorly understood chain connectivity effects that are distinct from general glass formation physics (5456).We employ a combination of molecular dynamics simulations with a zero-parameter extension to thin films of the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory (57, 58). This theory has previously been shown to predict well both bulk activated relaxation over up to 14 decades (4446) and the full single-gradient phenomenology in thick films (1). Here, we extend this theory to treat films of finite thickness, accounting for coupled interface and geometric confinement effects. We compare predictions of ECNLE theory to our previously reported (37, 43) and new simulations, which focus on translational dynamics of films comprised of a standard Kremer–Grest-like bead-spring polymer model (see SI Appendix). These simulations cover a wide range of film thicknesses (H, from 4 to over 90 segment diameters σ) and extend to low temperatures where the bulk alpha time is ∼0.1 μs (105 Lennard Jones time units τLJ).The generalized ECNLE theory is found to be in agreement with simulation for all levels of nanoconfinement. We emphasize that this theory does not a priori assume any of the empirically established behaviors discovered using simulation (e.g., fractional power law decoupling, double-exponential barrier gradient, gradient flattening) but rather predicts these phenomena based upon interfacial modifications of the two coupled contributions to the underlying activation barrier– local caging constraints and a long-ranged collective elastic field. It is notable that this strong agreement is found despite the fact the dynamical ideas are approximate, and a simple hard sphere fluid model is employed in contrast to the bead-spring polymers employed in simulation. 
The basic units of length in simulation (bead size σ) and theory (hard-sphere diameter d) are expected to be proportional to within a prefactor of order unity, which we neglect in making comparisons.

As an empirical matter, we find from simulation that many features of thin-film behavior can be described to leading order by a linear superposition of the thick-film gradients in activation barrier, that is:

$$\varepsilon(z,H) = 1 - F_{total}(z,T,H)/F_{total,bulk}(T) \approx \varepsilon_{0}\left[\exp(-z/\xi_{F}) + \exp(-(H-z)/\xi_{F})\right], \quad [3]$$

where the intrinsic decay length ξF is unaltered from its thick-film value and where ε0 is a constant that, in the hypothesis of literal gradient additivity, is invariant to temperature and film thickness. We employ this functional form [originally suggested by Binder and coworkers (59)], which is based on a simple superposition of the two single-interface gradients, as a null hypothesis throughout this study: this form is what one expects if no new finite-size physics enters the thin-film problem relative to the thick film.

However, we find that the superposition approximation progressively breaks down, and eventually entirely fails, in ultrathin films as a consequence of the emergence of a finite size confinement effect. The ECNLE theory predicts that this failure is not tied to a phase-transition–like mechanism but rather is a consequence of two key coupled physical effects: 1) transfer of surface-induced reduction of local caging constraints into the film, and 2) interfacial truncation and nonadditive modifications of the collective elastic contribution to the activation barrier.  相似文献
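The following sketch evaluates the null-hypothesis superposition form of Eq. 3 together with the fractional power-law decoupling relation (Eqs. 1 and 2) to produce a mid-film relaxation-time estimate; ε0, ξF, and τbulk are placeholder values in reduced units, not fitted parameters from the simulations.

```python
import numpy as np

def epsilon_superposition(z, film_thickness, eps0, xi_f):
    """Two-interface superposition of the thick-film barrier-reduction gradient (Eq. 3)."""
    return eps0 * (np.exp(-z / xi_f) + np.exp(-(film_thickness - z) / xi_f))

def tau_profile(z, film_thickness, tau_bulk, eps0, xi_f):
    """Fractional power-law decoupling: tau(z) = tau_bulk**(1 - eps(z)) (Eqs. 1 and 2)."""
    eps = epsilon_superposition(z, film_thickness, eps0, xi_f)
    return tau_bulk ** (1.0 - eps)

# Placeholder parameters in reduced (Lennard-Jones-like) units.
eps0, xi_f, tau_bulk = 0.20, 2.0, 1.0e6
for H in (4.0, 10.0, 40.0):          # film thicknesses in segment diameters
    z_mid = H / 2.0
    print(f"H = {H:>4.0f}: eps(mid) = {epsilon_superposition(z_mid, H, eps0, xi_f):.3f}, "
          f"tau(mid)/tau_bulk = {tau_profile(z_mid, H, tau_bulk, eps0, xi_f) / tau_bulk:.3g}")
# In thin films the two exponentials overlap at mid-film, so even the film center is
# predicted to be faster than bulk; this is where the simple superposition gets tested.
```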

16.
Despite its importance for forest regeneration, food webs, and human economies, changes in tree fecundity with tree size and age remain largely unknown. The allometric increase with tree diameter assumed in ecological models would substantially overestimate seed contributions from large trees if fecundity eventually declines with size. Current estimates are dominated by overrepresentation of small trees in regression models. We combined global fecundity data, including a substantial representation of large trees. We compared size–fecundity relationships against traditional allometric scaling with diameter and two models based on crown architecture. All allometric models fail to describe the declining rate of increase in fecundity with diameter found for 80% of 597 species in our analysis. The strong evidence of declining fecundity, beyond what can be explained by crown architectural change, is consistent with physiological decline. A downward revision of projected fecundity of large trees can improve the next generation of forest dynamic models.

“Belgium, Luxembourg, and The Netherlands are characterized by “young” apple orchards, where over 60% of the trees are under 10 y old. In comparison, Estonia and the Czech Republic have relatively “old” orchard[s] with almost 60% and 43% over 25 y old” (1).
“The useful lives for fruit and nut trees range from 16 years (peach trees) to 37 years (almond trees)…. The Depreciation Analysis Division believes that 61 years is the best estimate of the class life of fruit and nut trees based on the information available” (2).
When mandated by the 1986 Tax Reform Act to depreciate aging orchards, the Office of the US Treasury found so little information that they ultimately resorted to interviews with individual growers (2). One thing is clear from the age distributions of fruit and nut orchards throughout the world (1, 3, 4): Standard practice often replaces trees long before most ecologists would view them to be in physiological decline, despite the interruption of profits borne by growers as transplants establish and mature. Although seed establishment represents the dominant mode for forest regeneration globally, and the seeds, nuts, and fruits of woody plants make up to 3% of the human diet (5, 6), change in fecundity with tree size and age is still poorly understood. We examine here the relationship between tree fecundity and diameter, which is related to tree age in the sense that trees do not shrink in diameter (cambial layers typically add a new increment annually), but growth rates can range widely. Still, it is important not to ignore the evidence that declines with size may also be caused by aging. Although most analyses do not separate effects of size from age (because age is often unknown and confounded with size), both may contribute to size–fecundity relationships (7). Grafting experiments designed to isolate extrinsic influences (size and/or environment) from age-related gene expression suggest that size alone can sometimes explain declines in growth rate and physiological performance (810), consistent with pruning/coppicing practice to extend the reproductive life of commercial fruit trees. Hydraulic limitation can affect physiological function, including reduced photosynthetic gain that might contribute to loss of apical dominance, or “flattening” of the crown with increasing height (1116). The slowing of height growth relative to diameter growth in large trees is observed in many species (12, 17). At least one study suggests that age by itself may not lead to decline in fecundity of open-grown, generally small-statured bristlecone pine (Pinus longaeva) (18). By contrast, some studies provide evidence of tree senescence, including age-related genetic changes in meristems of grafted scions that cause declines in physiological function (1922). Koenig et al. (23) found that fecundity declined in the 5 y preceding death in eight Quercus species, although cause of death here, as in most cases, is hard to identify. Fielding (24) found that cone size of Pinus radiata declines with tree age and smaller cones produce fewer seeds (25). Some studies support age-related fecundity declines in herbaceous species (2628). Thus, there is evidence to suggest the fecundity schedules might show declines with size, age, or both.The reproductive potential of trees as they grow and age is of special concern to ecologists because, despite being relatively rare, large trees can contribute disproportionately to forest biomass due to the allometric scaling that amplifies linear growth in diameter to a volume increase that is more closely related to biomass (29, 30). Understanding the role of large trees can also benefit management in recovering forests (31). If allometric scaling applies to fecundity, then these large individuals might determine the species and genetic composition of seeds that compete for dominance in future forests.Unfortunately, underrepresentation of big trees in forests frustrates efforts to infer how fecundity changes with size. 
Simple allometric relationships between seed production and tree diameter can offer useful predictions for the small- to intermediate-size trees that dominate observational data, so it is not surprising that modeling began with the assumption of allometric scaling (32–36). Extrapolation from these models would predict that seed production by the small trees from which most observations come may be overwhelmed by big trees. Despite the increase with tree size assumed by ecologists (37), evidence for declining reproduction in large trees has continued to accumulate from horticultural practice (3, 4, 38, 39) and at least some ecological (40–45) and forestry literature (46, 47). However, we are unaware of studies that evaluate changes in fecundity that include substantial numbers of large trees.

Understanding the role of size and age is further complicated by the fact that tree fecundity ranges over orders of magnitude from tree to tree of the same species and within the same tree from year to year, a phenomenon known as “masting.” The variation in seed-production data requires large sample sizes not only to infer the effects of size, but also to account for local habitat and interannual climate variation. For example, a one-time destructive harvest to count seeds in felled trees (48, 49) misses the fact that the same trees would offer a different picture had they been harvested in a different year. An oak that produces 100 acorns this year may produce 10,000 next year. A pine that produces 500 cones this year can produce zero next year. Few datasets offer the sample sizes of trees and tree years needed to estimate effects of size and habitat conditions in the face of this high intertree and interyear variability (43).

We begin this analysis by extending allometric scaling to better reflect the geometry of fecundity with tree size. We then reexamine the size–fecundity relationship using data from the Masting Inference and Forecasting (MASTIF) project (50), which includes substantial representation of large trees, and a modeling framework that allows for the possibility that fecundity plateaus or even declines in large trees. Unlike previous studies, we account for the nonallometric influences that come through competition and climate. We demonstrate that fecundity–diameter relationships depart substantially from allometric scaling in ways that are consistent with physiological senescence.

Continuous increase with size has been assumed in most models of tree fecundity, supported in part by allometric regressions against diameter, typically of the form

$$\log M_f = \beta_0 + \beta_D \log D \quad [1]$$

for fecundity mass Mf = m × f (48, 51), where D is tree diameter, m is mass per seed, and fecundity f is seeds per tree per year. Of course, this model cannot be used to determine whether or how fecundity changes with tree diameter unless expanded to include additional quadratic or higher-order terms (52).

The assumption of continual increase in fecundity was interpreted from early seed-trap studies, which initially assumed that βD = 2, i.e., fecundity proportional to stem basal area (33, 34, 51). Models subsequently became more flexible, first with βD values fitted, rather than fixed, yielding estimates in the range (0.3, 0.9) in one study (ref. 52, 18 species) and (0, 4.1) in another (ref. 56, 4 species).
However, underrepresentation of large trees in typical datasets means that model fitting is dominated by the abundant small size classes.

To understand why data and models could fail to accurately represent change in fecundity with size, consider that allometric scaling in Eq. 1 can be maintained dynamically only if change in both adheres to a strict proportionality

$$\frac{1}{f}\frac{df}{dt} \propto \frac{1}{D}\frac{dD}{dt} \quad [2]$$

(57). For allometric scaling, any variable that affects diameter growth has to simultaneously affect change in fecundity and in the same, proportionate way. In other words, allometric scaling cannot hold if there are selective forces on fecundity that do not operate through diameter growth and vice versa.

On top of this awkward constraint that demands proportionate responses of growth and fecundity, consider further that standard arguments for allometric scaling are not directly relevant for tree fecundity. Allometry is invoked for traits that maintain relationships between body parts as an organism changes size (29). For example, a diameter increment translates to an increase in volume throughout the tree (58, 59). Because the cambial layer essentially blankets the tree, a volume increment cannot depart much from a simple allometric relationship with diameter. However, the same cannot be said for all plant parts, many of which clearly do not allometrically scale; for example, seed size does not scale with leaf size (60), presumably because structural constraints are not the dominant forces that relate them (61).

To highlight why selective forces might not generate strict allometric scaling for reproduction, consider that a tree allocates a small fraction of potential buds to reproduction in a given year (62, 63). Still, if the number of buds on a tree bears some direct relationship to crown dimensions and, thus, diameter, there might be allometric scaling. However, the fraction of buds allocated to reproduction and their subsequent development to seed is affected by interannual weather and other selective forces (e.g., bud abortion, pollen limitation) in ways that diameter growth is not (64–66). In fact, weather might have opposing effects on growth and reproduction (67). Furthermore, resources can change the relationship between diameter and fecundity, including light levels (52, 68–70) and atmospheric CO2 (71).

Some arguments based on carbon balance anticipate a decline in fecundity with tree size (72). Increased stomatal limitation (11) and reduced leaf turgor pressure (14, 73) from increasing hydraulic path length could reduce carbon gains in large trees. Assimilation rates on a leaf area basis can decline with tree size (74), while respiration rate per leaf area can increase [Sequoia sempervirens (75), Liquidambar styraciflua (76), and Pinus sylvestris (77)], consistent with the notion that whole-plant respiration rate may roughly scale with biomass (78). Maintenance respiration costs scale with diameter in some tropical species (79) but perhaps not in Pinus contorta and Picea engelmannii (80). Self-pruning of lower branches can reduce maintenance costs (81), but the ratio of carbon gain to respiration cost can still decline with size, especially where leaf area plateaus and per-area assimilation rates of leaves decline in large trees.

The question of size–fecundity relationships is related indirectly to the large literature on interannual variation in growth–fecundity allocation (3, 4, 43, 67, 82–87).
The frequency and timing of mast years and species differences in the volatility of seed production can be related to short-term changes in physiological state and pollen limitation that might not predict the long-term relationships between size and reproductive effort. The interannual covariance in diameter growth and reproductive effort can range from strong in some species to weak in others (70, 87, 88). Understanding the relationships between short-term allocation and size–fecundity differences will be an important focus of future research.Estimating effects of size on fecundity depends on the distribution of diameter data, [D], where the bracket notation indicates a distribution or density. For some early-successional species, the size distribution changes from dominance by small trees in young stands to absence of small trees in old stands. If our goal was to describe the population represented by a forest inventory plot, we would typically think about the joint distribution of fecundity and diameter values, [f,D]=[f|D][D], that is represented by the sample. The size–fecundity relationship estimated for a stand at different successional stages would diverge simply due to the distribution of diameters, i.e., differences in [D]. For example, application of Eq. 1 to harvested trees selected to balance size classes (uniform [D]) (48) overpredicts fecundity for large trees (49), but the relevance of such regressions for natural stands, where large trees are often rare, is unclear. Studies that expand Eq. 1 to allow for changing relationships with tree size now provide increasing evidence for a departure from allometric scaling in large trees (43, 70), despite dominance by small- to intermediate-size trees in these datasets. Here our goal is to understand the size–fecundity relationship [f|D] as an attribute of a species, i.e., not tied to a specific distribution of size classes observed in a particular stand.The well-known weak relationship between tree size and age that comes from variable growth histories makes it important to clarify the implications of any finding of fecundity that declines with tree size: Can it happen if there are not also fecundity declines with tree age? The only argument for continuing increase in fecundity with age in the face of observed decreases with size would have to assume that the biggest trees are also the youngest trees. Of course, a large individual can be younger than a small individual. However, at the species level, integrating over populations sampled widely, mean diameter increases with age; at the species level, declines with size also imply declines with age. Estimating accurate species-level size effects requires distributed data and large sample sizes. The analysis here fits species-level parameters, with 585,670 trees and 10,542,239 tree years across 597 species.Phylogenetic analysis might provide insight into the pervasiveness of fecundity declines with size. Inferring change in fecundity with size necessarily requires more information than is needed to fit a single slope parameter βD in the simple allometric model. The noisier the data, the more difficult it becomes to estimate the additional parameters that are needed to describe changes in the fecundity relationship with size. We thus expect that noise alone will preclude finding size-related change in some species, depending on sample size and non–size-related variation. 
If the vagaries of noisy data and the distribution of diameters preclude estimation of declines in some species, then we do not expect that phylogeny will explain which species do and do not show these declines. Rather than phylogeny, this explanation would instead be tied to sample size and the distribution of diameter data. Conversely, phylogenetic conservatism, i.e., a tendency for declines to be clustered in related species, could suggest that fecundity declines are real.To understand how seed production changes with tree size, our approach combines theory and data to evaluate allometric scaling and the alternative that fecundity may decline in large trees, consistent with physiological decline and senescence. We exploit two advances that are needed to determine how fecundity scales with tree size. First, datasets are needed with large trees, because studies in the literature often include few or none (85, 89, 90). Second, methods are introduced that are flexible to the possibility that fecundity continues to increase with size or not. We begin with a reformulation of allometric scaling, recognizing that change in fecundity could be regulated by size, without taking the form of Eq. 1 (Materials and Methods and SI Appendix, section S2). In other words, there could be allometric scaling with diameter, but it is not the relationship that has been used for structural quantities like biomass. We then analyze the relationships in data using a model that not only allows for potential changes in fecundity with size, but at the same time accounts for self-shading and shading by neighbors and for environmental variables that can affect fecundity and growth (Materials and Methods and SI Appendix, section S3). The fitted model is compared with our expanded allometric model to identify potential agreement. Finally, we examined phylogenetic trends in the species that do and do not show declines.  相似文献   
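As a rough illustration of the modeling point above, namely the strict allometric form of Eq. 1 versus a version flexible enough to detect a plateau or decline, the sketch below fits both to synthetic data; the coefficients and noise level are invented, and the fit is ordinary least squares, far simpler than the MASTIF framework used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": fecundity rises with diameter but saturates in large
# trees, with heavy noise standing in for masting variability (illustrative only).
diameters = rng.uniform(5, 120, size=400)                        # cm
log_f_true = 0.5 + 2.0 * np.log(diameters) - 0.015 * diameters   # built-in decline
log_f_obs = log_f_true + rng.normal(0.0, 1.0, size=diameters.size)

x = np.log(diameters)
allometric = np.polynomial.polynomial.polyfit(x, log_f_obs, deg=1)  # Eq. 1: b0 + bD log D
flexible = np.polynomial.polynomial.polyfit(x, log_f_obs, deg=2)    # adds a (log D)^2 term

print("allometric fit (b0, bD):", np.round(allometric, 2))
print("flexible fit (b0, b1, b2):", np.round(flexible, 2))
# A negative quadratic coefficient signals a declining rate of increase in fecundity
# with diameter, which the strict allometric form of Eq. 1 cannot represent.
```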

17.
Thermoresponsive microgels are one of the most investigated types of soft colloids, thanks to their ability to undergo a Volume Phase Transition (VPT) close to ambient temperature. However, this fundamental phenomenon still lacks a detailed microscopic understanding, particularly regarding the presence and the role of charges in the deswelling process. This is particularly important for the widely used poly(N-isopropylacrylamide)–based microgels, where the constituent monomers are neutral but charged groups arise due to the initiator molecules used in the synthesis. Here, we address this point combining experiments with state-of-the-art simulations to show that the microgel collapse does not happen in a homogeneous fashion, but through a two-step mechanism, entirely attributable to electrostatic effects. The signature of this phenomenon is the emergence of a minimum in the ratio between gyration and hydrodynamic radii at the VPT. Thanks to simulations of microgels with different cross-linker concentrations, charge contents, and charge distributions, we provide evidence that peripheral charges arising from the synthesis are responsible for this behavior and we further build a universal master curve able to predict the two-step deswelling. Our results have direct relevance on fundamental soft condensed matter science and on applications where microgels are involved, ranging from materials to biomedical technologies.

Responsive particles have recently captured the interest of scientists working under many diverse fields (13). Indeed, their ability to adapt to the environmental conditions has enormous advantages for potential applications from biochemistry to nanomedicine (47), but also as smart sensors for various analytes (8, 9). The versatility of these soft objects lies in the manifold routes in which the chemical components can be synthesized and in the transfer of the single-particle properties to the mesoscopic and macroscopic level.In particular, most of these responsive particles are macromolecular colloids, whose inner structure relies on a polymeric system that controls the behavior at the colloidal scale. The prototypical example, that is most actively studied in the literature nowadays, is that of microgel particles, i.e., colloidal-scale realizations of a cross-linked polymer network (10, 11). In their most elementary version, these microgels are composed of a single monomeric component. Among all possible compounds, poly(N-isopropylacrylamide) (pNIPAM) is thermoresponsive and undergoes a solubility transition from good to bad solvent conditions at a temperature Tc32°C. For responsive microgels, this phenomenon is called volume phase transition (VPT), by which particles are able to reversibly swell and deswell across Tc. Microgels can be routinely synthesized in a wide range of sizes roughly going from 50 nm to 100 μm in diameter, a reason for which they are applicable to a variety of purposes and can be investigated with different experimental techniques, from neutron (12) and X-ray scattering (13) up to optical methods and microfluidics (14). In addition, their complex internal structure and collective behavior, involving particle deformation and interpenetration, can nowadays be resolved with single-particle detail thanks to recent advancements in superresolution microscopy (1518). The possibility to be studied with these fascinating tools makes them also one of the favorite model systems for fundamental science both in bulk suspensions (11, 19) and adsorbed at interfaces (2022).For all the above reasons, it is legitimate to say that the VPT occurring in pNIPAM microgels is one of the most studied phenomena in soft condensed matter. Despite the huge amount of experimental and theoretical work on this topic, which is witnessed by the large number of recent reviews (2328), there are still fundamental aspects of the VPT that remain poorly understood. In particular, pNIPAM microgels are often treated as neutral systems, since electrostatic interactions are usually thought not to play an important role in their behavior, apart from the stabilization against aggregation given to the suspension, especially at high temperatures. However, the typical batch synthesis procedure of pNIPAM microgels usually includes charged compounds, in particular those from the initiators of the polymerization process. While their presence may be effectively neglected or screened out by the addition of salt (13), recent works pointed out a relevant effect of peripheral charges in concentrated suspensions (29). At present, the influence of these charges on the VPT has not been clarified yet.To address these gaps, we recently developed a computational method (30) to assemble disordered networks with desired cross-linker concentration and a core–corona structure that closely reproduces experimental behavior (13, 31). 
After imposing the correct internal structure, we extended our method to properly include the presence of charged monomers with explicit counterions (32), again validating our results in the presence of explicit solvent and comparing with available experiments (33). For these reasons, we are now in the condition to carefully assess the effect of initiator charges on the deswelling mechanism of pNIPAM microgels across the VPT.By combining simulations with static and dynamic light scattering experiments, here we show that the presence of these charges strongly affects, from a qualitative point of view, the deswelling transition, inducing an inhomogeneous two-step collapse of the microgels with increasing temperature. This is due to the different solvophobicity of pNIPAM and charged groups, respectively, which manifests in the emergence of a minimum in the ratio between the gyration Rg and hydrodynamic RH radii at the VPT. First of all, we show that such a minimum is absent for neutral microgels. Second, we analyze in detail the role of the charge distribution throughout the microgel network to assess whether the initiator groups are preferentially located on the surface of the microgels, as previously hypothesized (29), but never effectively proven so far. In order to be able to predict the onset of the two-step deswelling, we further study different combinations of cross-linking ratio, charge content, and charge distribution, establishing clear trends in the occurrence of the minimum in Rg/RH. Notably, we obtain a master curve for the observed minimum for all simulated microgels when we plot it as a function of the average (effective) charge content per chain on the microgel surface, which turns out to be the simplest indicator of the presence of the two-step collapse.Our work sheds light on the fundamental electrostatic interactions influencing microgel deswelling, which are crucial to correctly describe their assembly and collective behavior at high temperatures. In addition, it opens up the possibility to a priori design microgels with desired characteristics and tunable onset of two-step deswelling, which could be exploited to enhance or adjust the potential applications of microgels as smart micro-objects.  相似文献   
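A minimal sketch of the observable discussed above: the ratio of gyration to hydrodynamic radius computed from bead coordinates, with RH approximated by the Kirkwood double sum. The random spherical configuration is only a stand-in for actual microgel snapshots; tracking Rg/RH across temperature is what would reveal the two-step collapse.

```python
import numpy as np

def gyration_radius(coords):
    """Radius of gyration from bead coordinates."""
    centered = coords - coords.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

def kirkwood_hydrodynamic_radius(coords):
    """Kirkwood estimate: 1/R_H = <1/r_ij> averaged over distinct bead pairs."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    inv = 1.0 / dist[np.triu_indices(len(coords), k=1)]
    return 1.0 / inv.mean()

# Stand-in configuration: beads scattered uniformly in a sphere (a real analysis
# would use simulation snapshots of the microgel network across the VPT).
rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 3))
pts *= (rng.uniform(0.0, 1.0, 1000) ** (1.0 / 3.0) / np.linalg.norm(pts, axis=1))[:, None]
rg, rh = gyration_radius(pts), kirkwood_hydrodynamic_radius(pts)
print(f"Rg/RH (Kirkwood) = {rg / rh:.3f}")
# Note: the Kirkwood sum only approximates the Stokes (geometric) radius; for a
# compact homogeneous sphere, Rg is about 0.775 of the geometric radius.
```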

18.
Advances in polymer chemistry over the last decade have enabled the synthesis of molecularly precise polymer networks that exhibit homogeneous structure. These precise polymer gels create the opportunity to establish true multiscale, molecular to macroscopic, relationships that define their elastic and failure properties. In this work, a theory of network fracture that accounts for loop defects is developed by drawing on recent advances in network elasticity. This loop-modified Lake–Thomas theory is tested against both molecular dynamics (MD) simulations and experimental fracture measurements on model gels, and good agreement between theory, which does not use an enhancement factor, and measurement is observed. Insight into the local and global contributions to energy dissipated during network failure and their relation to the bond dissociation energy is also provided. These findings enable a priori estimates of fracture energy in swollen gels where chain scission becomes an important failure mechanism.

Models that link material structure to macroscopic behavior can account for multiple levels of molecular structure. For example, the statistical, affine deformation model connects the elastic modulus E to the molecular structure of a polymer chain,

E_aff = 3 ν k_b T [ϕ_o^(1/3) R_o / (ϕ^(1/3) R)]^2,   [1]

where ν is the density of chains, ϕ is the polymer volume fraction, R is the chain end-to-end distance, ϕ_o and R_o are the same parameters taken in the reference state (assumed in this work to be the reaction concentration), and k_b T is the available thermal energy, where k_b is Boltzmann's constant and T is temperature (1–6). Refinements to this model that account for network-level structure, such as the presence of trapped entanglements or the number of connections per junction, have been developed (7–11). Further refinements to the theory of network elasticity account for dynamic processes such as chain relaxation and solvent transport (12–17). Together, these refinements link network elasticity to chain-level molecular structure, network-level structure, and the dynamic processes that occur at both size scales.

While elasticity has been connected to multiple levels of molecular structure, models for network fracture have not been developed to a similar extent. The fracture energy Gc typically depends on the large-strain deformation behavior of polymer networks, making it experimentally difficult to separate the elastic energy released upon fracture from the energy dissipated through dynamic processes (18–26). In fact, most fracture theories have been developed at the continuum scale and have focused on modeling dynamic dissipation processes (27). An exception is the theory of Lake and Thomas, which connects the elastic energy released during chain scission to chain-level structure,

G_c,LT = (chains/area) × (energy dissipated/chain) = ν R_o N U,   [2]

where N U is the total energy released when a chain ruptures, N being the number of monomer segments in the chain and U the energy released per monomer (26).

Although this model was first introduced in 1967, experimental attempts to verify Lake–Thomas theory as an explicit model, as summarized in SI Appendix, have been unsuccessful. Ahagon and Gent (28) and Gent and Tobias (29) attempted to do so on highly swollen networks at elevated temperature but found that, while the scalings of Eq. 2 work well, an enhancement factor was necessary to obtain agreement between theory and experiment. This led many researchers to conclude that Lake–Thomas theory works only as a scaling argument. In 2008, Sakai et al. (30) introduced a series of end-linked, tetrafunctional, star-like poly(ethylene glycol) (PEG) gels. Scattering measurements indicated a lack of the nanoscale heterogeneities that are characteristic of most polymer networks (30–32). Fracture measurements on these well-defined networks again showed that an enhancement factor was necessary to achieve explicit agreement between experiment and theory (33). Arora et al. (34) recently attempted to address this discrepancy by accounting for loop defects; however, the assumptions used for U in the Lake–Thomas calculation again required an enhancement factor to achieve quantitative agreement.
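As a quick plausibility check on the magnitudes implied by Eqs. 1 and 2, the snippet below evaluates both expressions with placeholder parameters of roughly the right order for a swollen, PEG-like gel. None of these numbers are taken from the study, and Eq. 1 is used as reconstructed above; the values are assumptions chosen only to show how chain-level quantities are assembled into the two estimates, with U set near a carbon–carbon bond dissociation energy in line with the full-bond-energy convention discussed in this work.

```python
# Worked numerical example of Eqs. 1 and 2 with placeholder values;
# every parameter below is illustrative and is NOT taken from the study.
kb = 1.380649e-23        # Boltzmann constant, J/K
T = 298.0                # temperature, K
nu = 1.0e25              # number density of elastically effective chains, 1/m^3
phi0, phi = 0.10, 0.05   # polymer volume fraction at preparation / after swelling
R0, R = 6.0e-9, 7.5e-9   # chain end-to-end distance at preparation / after swelling, m

# Eq. 1: affine estimate of the elastic modulus
E_aff = 3 * nu * kb * T * (phi0 ** (1 / 3) * R0 / (phi ** (1 / 3) * R)) ** 2
print(f"E_aff  ~ {E_aff / 1e3:.0f} kPa")

# Eq. 2: Lake-Thomas fracture energy, Gc = (chains/area) x (energy/chain) = nu * R0 * N * U
N = 200                  # monomer (Kuhn) segments per chain
U = 5.8e-19              # energy released per monomer, J (roughly a C-C bond dissociation energy)
G_c = nu * R0 * N * U
print(f"G_c,LT ~ {G_c:.1f} J/m^2")
```

With these placeholders the modulus comes out in the hundred-kilopascal range and the fracture energy at a few joules per square meter, both plausible orders of magnitude for soft, swollen gels; the point of the exercise is only to show how the chain- and network-level inputs propagate into the two macroscopic estimates.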
In this work we demonstrate that refining the Lake–Thomas theory to account for loop defects, while using the full bond dissociation energy to represent U, yields excellent agreement between the theory and both simulation and experimental data, without the use of any adjustable parameters.

PEG gels synthesized via telechelic end-linking reactions create the opportunity to build upon previous theory and establish true multiscale, molecular-to-macroscopic relationships that define the fracture response of polymer networks. This paper combines pure-shear notch tests, molecular dynamics (MD) simulations, and theory to quantitatively extend the concept of network fracture without the use of an enhancement factor. First, the control of molecular-level structure in end-linked gel systems is discussed. Then, the choice of molecular parameters used to estimate chain- and network-level properties is discussed. Experimental and MD simulation methods used to fracture model end-linked networks are then presented. A theory of network fracture that accounts for loop defects is developed, in the context of other such models that have emerged recently, and tested against data from experiments and MD simulations. Finally, a discussion of the local and global energy dissipated during failure of the network is presented.  相似文献

19.
Temperate bacteriophages lyse or lysogenize host cells depending on various parameters of infection, a key one being the ratio of the number of free viruses to the number of host cells. However, the effect of different propensities of phages for lysis and lysogeny on phage fitness remains an open problem. We explore a nonlinear dynamic evolution model of competition between two phages, one of which is disadvantaged in both the lytic and lysogenic phases. We show that the disadvantaged phage can win the competition by alternating between the lytic and lysogenic phases, each of which individually is a “loser.” This counterintuitive result is analogous to Parrondo’s paradox in game theory, whereby individually losing strategies combine to produce a winning outcome. The results suggest that evolution of phages optimizes the ratio between the lysis and lysogeny propensities rather than the phage burst size in any individual phase. These findings are likely to broadly apply to the evolution of host–parasite interactions.

Bacteriophages outnumber all other reproducing biological entities in the biosphere combined, reaching an estimated instantaneous total of about 10^31 virions across all biomes (1, 2). Bacteriophages attain this hyperastronomical abundance using two basic strategies of infection that are traditionally classified as lytic and temperate. Lytic phages enter host cells and immediately take over the cellular machinery to produce progeny virions, followed by a programmed burst of the cell (lysis), which releases progeny virions into the environment where they can initiate subsequent rounds of infection (3). In contrast, temperate phages "decide" to follow the lytic or lysogenic strategy at the onset of infection. Under the lysogenic strategy, the phage genome stably integrates into the host genome, becoming a prophage that is inherited by the daughter cells during cell division and thus propagates vertically with the host, without lysis of host cells. Phage lambda, one of the best-studied models of genetics and molecular biology, is a classic example of lysogeny. Upon sensing an appropriate signal, such as DNA damage, a prophage decides to end lysogeny and reproduce through the lytic pathway (4, 5). Given that an estimated 10^23 infections of bacteria by bacteriophages occur on Earth every second (6), with profound effects on global ecology as well as human health (1, 7, 8), the evolutionary processes that shape phage replication strategies are of fundamental biological interest and importance.

The ability of temperate phages to decide between lysis and lysogeny has drawn considerable attention from theorists, resulting in the development of models aiming to quantify the conditions under which one strategy prevails over the other, or, in other words, to decipher the rules of phage lysis vs. lysogeny decisions. Temperate viruses that choose lysogeny are constrained by cellular binary fission, whereas lytic replication can produce large bursts of progeny virions from a single cell. A foundational theoretical study asked the question simply: why be temperate (9)? The potential benefits of a nonlytic strategy are realized when the host cell density is too low to support lytic growth that would otherwise cause collapse of one or both populations. Furthermore, under these conditions, the frequency of encounters between the phage particles released upon host lysis and uninfected host cells is low, such that vertical propagation with the host becomes advantageous for the virus. In essence, and put as simply as possible, lysogeny is advantageous in hard times (9). Several recent formal model analyses agree that lysogeny is favored at low host cell density (10–12). However, somewhat paradoxically, lysogeny also appears to be the dominant behavior at very high host cell density (13, 14). The mechanisms driving viruses toward lysogeny at both low and high host cell densities are not well understood, but differential cellular growth rates, viral adsorption rates, and the structure of the host–phage interaction network all appear to contribute (14, 15). Collectively, these studies underscore the importance of density-dependent dynamics for infection outcomes.

The paradigm for the decisions temperate phages make between the lytic and lysogenic pathways upon infection is phage lambda. Seminal work on lambda has demonstrated that lysogeny is favored at high virus/host ratios, when multiple lambda virions coinfect the same cell (16).
The standard interpretation of these findings is that the coinfection rate is a proxy for host cell density, with high coinfection rates signaling low host density and thereby driving lambda toward lysogeny (17). The genetic circuitry underlying lambda's lysogenic response has been meticulously dissected over decades of research (5, 18, 19), and additional mechanistic determinants have been identified in later studies (20–24). Directed evolution of lambda yields mutants with different thresholds for switching from lysogeny to lysis (induction), and such heterogeneity has been observed in numerous lambda-like phages (25–27). Moreover, the vast genomic diversity of phages implies a commensurately diverse repertoire of lysis–lysogeny circuits, and indeed, experiments with phages unrelated to lambda have revealed a variety of ways in which evolution has constructed these genetic switches (28–30). In general, how different propensities for lysis or lysogeny impact phage fitness at different host cell densities (virus/host ratios), and in particular in competition with other phages, remains an open problem.

Inspired by previous theoretical and experimental studies, we developed a population evolution model to investigate the competition between two phages that differ in their rates of establishing lysogeny as a function of the ratio of the number of free virions to the number of host cells. In this model, the first phage (P1) has a higher mortality rate, a lower burst size, and a lower infection rate during both lysis and lysogeny compared with the second, competing phage (P2). From a game-theoretic perspective, P1 is burdened by two losing strategies. Unexpectedly, analysis of our model shows that, by alternating between these two losing strategies, P1 outcompetes P2 within a large domain of the parameter space. This counterintuitive result is analogous to a phenomenon known as Parrondo's paradox in game theory (31). Parrondo's paradox was first conceptualized as an abstraction of flashing Brownian ratchets (32, 33), wherein diffusing particles exhibit unexpected drift when exposed to alternating periodic potentials. The sustained interest in the paradox has since fostered a synergistic interdisciplinary effort. Indeed, manifestations of Parrondo's paradox have been studied in various biological systems, such as nomadic and colonial lifestyles (34), activity and dormancy in predator–prey systems (35), and unicellular and multicellular phases in organismal life history (36). The fact that the paradox can occur when the game sequence is completely or partially random appears compatible with the inherent stochasticity of biological systems, manifested, for example, in environmental or demographic noise (37).

Here, we examine the evolution of different strategies of bacteriophage–host interaction within the framework of Parrondo's paradox. The analysis of the model developed in this work suggests that alternating between lysis and lysogeny is intrinsically beneficial for a phage within a broad range of model parameters. This conclusion has implications for understanding the evolution of parasite–host interactions in diverse biological contexts.  相似文献
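For readers unfamiliar with Parrondo's paradox, the sketch below simulates the canonical capital-dependent coin games rather than the phage competition model of this work: game A and game B are each losing on their own, yet mixing them at random produces a net gain. The rules and the bias ε = 0.005 are the standard textbook choices, used here only to make the "two losing strategies combine into a winning one" intuition concrete.

```python
import random

EPS = 0.005  # small bias that makes each game individually losing

def play_A(capital, rng):
    """Game A: a slightly unfair coin (win probability 1/2 - EPS)."""
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_B(capital, rng):
    """Game B: a bad coin (win probability 1/10 - EPS) whenever the capital is a
    multiple of 3, otherwise a good coin (win probability 3/4 - EPS)."""
    p = (0.10 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + (1 if rng.random() < p else -1)

def mean_gain(strategy, n_runs=2000, n_steps=500, seed=1):
    """Average capital after n_steps plays, over n_runs independent games."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        capital = 0
        for _ in range(n_steps):
            if strategy == "A":
                capital = play_A(capital, rng)
            elif strategy == "B":
                capital = play_B(capital, rng)
            else:  # "mix": choose game A or game B at random on every play
                capital = (play_A if rng.random() < 0.5 else play_B)(capital, rng)
        total += capital
    return total / n_runs

for s in ("A", "B", "mix"):
    print(f"strategy {s:3s}: mean gain after 500 plays = {mean_gain(s):+6.1f}")
```

With these parameters, the two pure strategies drift slowly downward while the random mixture drifts upward. The analogy drawn in the paper is that alternating between lysis and lysogeny, each a losing strategy for the disadvantaged phage on its own, can likewise outperform either pure strategy, although the phage model itself is a continuous population-dynamics system rather than a coin game.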

20.
When a glass is aged below the glass transition temperature, Tg, its density cannot exceed that of the metastable supercooled liquid (SCL) state unless crystals are nucleated. The only exception is when another, polyamorphic SCL state exists with a density higher than that of the ordinary SCL. Experimentally, such polyamorphic states and their corresponding liquid–liquid phase transitions have only been observed in network-forming systems or in systems with polymorphic crystalline states. In otherwise simple liquids, such phase transitions have not been observed, either in aged or in vapor-deposited stable glasses, even near the Kauzmann temperature. Here, we report that the density of thin vapor-deposited films of N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD) can exceed that of the corresponding SCL by as much as 3.5% and can even exceed the crystal density under certain deposition conditions. We identify a previously unobserved high-density supercooled liquid (HD-SCL) phase with a liquid–liquid phase transition temperature (TLL) 35 K below the nominal glass transition temperature of the ordinary SCL. The HD-SCL state is observed in glasses deposited in the thickness range of 25 to 55 nm, where thin films of the ordinary SCL have exceptionally enhanced surface mobility with large mobility gradients. The enhanced mobility enables vapor-deposited thin films to overcome kinetic barriers for relaxation and access the HD-SCL state. The HD-SCL state is thermodynamically favored only in thin films and transforms rapidly to the ordinary SCL when vapor deposition is continued to form films with thicknesses greater than 60 nm.

Glasses are formed when the structural relaxations in supercooled liquids (SCLs) become too slow, causing the system to fall out of equilibrium at the glass transition temperature (Tg). The resulting out-of-equilibrium glass state has a thermodynamic driving force to evolve toward the SCL state through physical aging (1). At temperatures just below Tg, the extent of equilibration is limited by the corresponding SCL state, while at much lower temperatures, equilibration is limited by the kinetic barriers for relaxation. As such, the degree of thermodynamic stability achievable through physical aging is limited (2).

Physical vapor deposition (PVD) is an effective technique to overcome kinetic barriers for relaxation and produce thermodynamically stable glasses (3–10). The accelerated equilibration in these systems is due to their enhanced surface mobility (11–14). During PVD, when the substrate temperature is held below Tg, molecules or atoms can undergo rearrangements and adopt more stable configurations at the free surface and in the proximate layers underneath (13). After the molecules are buried deeper into the film, their relaxation dynamics slow down significantly, which prevents further equilibration. Through this surface-mediated equilibration process, stable glasses can reach low-energy states on the potential energy landscape that would otherwise require thousands or millions of years of physical aging (2, 3, 15, 16).

As such, the degree of enhanced surface mobility and the mobility gradients are critical factors in the formation of stable glasses (3, 11, 17, 18). While the effect of film thickness on the surface mobility and mobility gradients of liquid-quenched (LQ) glasses has been studied in the past (19, 20), there are limited data on the role of film thickness in the stability of vapor-deposited glasses. In vapor-deposited toluene, it has been shown that decreasing the film thickness from 70 to 5 nm can increase the thermodynamic stability but decrease the apparent kinetic stability (5, 6). In contrast, thin films covered with a top layer of another material do not show significant evidence of reduced kinetic stability (21), indicating the nontrivial role of mobility gradients in thermodynamic and kinetic stability.

Stable glasses of most organic molecules, which have only short-range intermolecular interactions, have properties indicative of the same corresponding metastable SCL state as LQ and aged glasses, without any evidence of generic liquid–liquid phase transitions that could potentially resolve the Kauzmann entropy crisis (22). The Kauzmann crisis occurs at the Kauzmann temperature (TK), where the extrapolated SCL has the same structural entropy as the crystal, producing thermodynamically impossible states just below this temperature. Recently, Beasley et al. (16) showed that near-equilibrium states of ethylbenzene can be produced by PVD down to 2 K above TK and hypothesized that any phase transition to an "ideal glass" state that avoids the Kauzmann crisis must occur at TK.

In some glasses of elemental substances (23, 24) and hydrogen-bonding compounds (25, 26), liquid–liquid phase transitions can occur between polyamorphic states with distinct local packing structures that correspond to polymorphic crystalline phases. For example, at high pressures, high- and low-density supercooled water phases are interconvertible through a first-order phase transition (27, 28).
Recent studies have demonstrated that such polyamorphic states can also be accessed through PVD in hydrogen-bonding systems with polymorphic crystal states, for depositions above the nominal Tg (29, 30). However, these structure-specific transitions do not provide a general resolution of the Kauzmann crisis.

Here, we report the observation of a liquid–liquid phase transition in vapor-deposited thin films of N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD). TPD is a molecular glass former with only short-range intermolecular interactions. When thin films of TPD are vapor deposited onto substrates held at deposition temperatures (Tdep) below the nominal glass transition temperature of bulk TPD, Tg(bulk), films in the thickness range of 25 nm < h < 55 nm reach a high-density supercooled liquid (HD-SCL) state that has not been observed previously. The liquid–liquid phase transition temperature (TLL) between the ordinary SCL and HD-SCL states is measured to be TLL ≈ Tg(bulk) − 35 K. The density of thin films deposited below TLL tangentially follows the HD-SCL line, which has a stronger temperature dependence than that of the ordinary SCL. When vapor deposition is continued to produce thicker films (h > 60 nm), the HD-SCL state transforms into the ordinary SCL state, indicating that the HD-SCL state is thermodynamically favored only in the thin-film geometry. This transition is qualitatively different from previously reported liquid–liquid phase transitions: it is not related to a specific structural motif of TPD crystals, and it is observed only in thin films, indicating that the energy landscape of thin films favors this high-density state.

We observe an apparent correlation between the enhanced mobility gradients in LQ thin films of TPD and the thickness range in which HD-SCL states are produced during PVD. We hypothesize that enhanced mobility gradients are essential in providing access to regions of the energy landscape corresponding to the HD-SCL state that are otherwise kinetically inaccessible. This hypothesis should be further investigated to better understand the origin of this phenomenon.  相似文献
