Similar articles
20 similar articles retrieved; search time: 234 ms
1.
Our study of cholesteric lyotropic chromonic liquid crystals in cylindrical confinement reveals the topological aspects of cholesteric liquid crystals. The double-twist configurations we observe exhibit discontinuous layering transitions, domain formation, metastability, and chiral point defects as the concentration of chiral dopant is varied. We demonstrate that these distinct layer states can be distinguished by chiral topological invariants. We show that changes in the layer structure give rise to a chiral soliton similar to a toron, comprising a metastable pair of chiral point defects. Through the applicability of the invariants we describe to general systems, our work has broad relevance to the study of chiral materials.

Chiral liquid crystals (LCs) are ubiquitous, useful, and rich systems (1–4). From the first discovery of the liquid crystalline phase to the variety of chiral structures formed by biomolecules (5–9), the twisted structure, breaking both mirror and continuous spatial symmetries, is omnipresent. This unique structure also makes the chiral nematic (cholesteric) LC an essential material for applications utilizing the tunable, responsive, and periodic modulation of anisotropic properties. The cholesteric is also a popular model system to study the geometry and topology of partially ordered matter. The twisted ground state of the cholesteric is often incompatible with confinement and external fields, exhibiting a large variety of frustrated and metastable director configurations accompanying topological defects. Besides the classic example of cholesterics in a Grandjean–Cano wedge (10, 11), examples include cholesteric droplets (12–16), colloids (17–19), shells (20–22), tori (23, 24), cylinders (25–29), microfabricated structures (30, 31), and films between parallel plates with external fields (32–40). These structures are typically understood using a combination of nematic (achiral) topology (41, 42) and energetic arguments, for example, the highly successful Landau–de Gennes approach (43). However, traditional extensions of the nematic topological approach to cholesterics are known to be conceptually incomplete and difficult to apply in regimes where the system size is comparable to the cholesteric pitch (41, 44). An alternative perspective, chiral topology, can give a deeper understanding of these structures (45–47). In this approach, the key role is played by the twist density, given in terms of the director field n by n · (∇ × n). This choice is not arbitrary; the Frank free energy prefers n · (∇ × n) = -q0, with q0 = 2π/p0 for a helical pitch p0, and, from a geometric perspective, n · (∇ × n) ≠ 0 defines a contact structure (48). This allows a number of new integer-valued invariants of chiral textures to be defined (45). A configuration with a single sign of twist is chiral, and two configurations which cannot be connected by a path of chiral configurations are chirally distinct, and hence separated by a chiral energy barrier. Within each chiral class of configuration, additional topological invariants may be defined using methods of contact topology (45–48), such as layer numbers. Changing these chiral topological invariants requires passing through a nonchiral configuration. Cholesterics serve as model systems for the exploration of chirality in ordered media, and the phenomenon we describe here, metastability in chiral systems controlled by chiral topological invariants, has applicability to chiral order generally. This, in particular, includes chiral ferromagnets, where, for example, our results on chiral topological invariants apply to highly twisted nontopological Skyrmions (49, 50) ("Skyrmionium"). Our experimental model to explore the chiral topological invariants is the cholesteric phase of lyotropic chromonic LCs (LCLCs). The majority of experimental systems hitherto studied are based on thermotropic LCs with typical elastic and surface-anchoring properties. The aqueous LCLCs exhibit unusual elastic properties, namely a very small twist modulus K2 and a large saddle-splay modulus K24 (51–56), which often lead to chiral symmetry breaking of confined achiral LCLCs (53, 54, 56–61) and may enable us to access uncharted configurations and defects of topological interest.
For instance, in the layered configurations formed by cholesteric LCLCs doped with chiral molecules, the small K2 provides energetic flexibility to the thickness of the cholesteric layer, that is, the repeating structure in which the director n twists by π. The large K24 affords curvature-induced surface interactions in combination with the weak anchoring strength of lyotropic LCs (62–64). We present a systematic investigation of the director configuration of cholesteric LCLCs confined in cylinders with degenerate planar anchoring, as a function of the chiral dopant concentration. We show that the structure of the cholesteric configurations is controlled by higher-order chiral topological invariants. We focus on two intriguing phenomena observed in cylindrically confined cholesterics. First, the cylindrical symmetry produces multiple local minima in the energy landscape and induces a discontinuous increase of the twist angle, that is, a layering transition, as the dopant concentration is increased. Additionally, the director configurations of the local minima coexist as metastable domains with point-like defects between them. We demonstrate that a chiral layer-number invariant distinguishes these configurations, protects the distinct layer configurations (45), and explains the existence of the topological defect where the invariant changes.
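The central quantity in the chiral-topology argument above is the twist density n · (∇ × n). The short Python sketch below is not from the paper: it uses an illustrative double-twist ansatz and an assumed twist rate q0 to evaluate the twist density by finite differences and to check that it keeps a single sign, i.e., that the configuration is chiral in the sense used above.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's solution): a model double-twist
# director field n = cos(psi) z_hat + sin(psi) phi_hat with psi(r) = q0*r, sampled
# on a 2D grid (the field is z-independent), and its twist density n . (curl n)
# evaluated by finite differences. A single sign of twist everywhere is the
# "chiral" condition discussed above.

L, N = 1.0, 201
q0 = np.pi / L                       # one pi-twist of the director from axis to wall (assumed)
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.sqrt(X**2 + Y**2) + 1e-12
psi = q0 * R
nx = -np.sin(psi) * Y / R            # phi_hat = (-y/r, x/r, 0)
ny = np.sin(psi) * X / R
nz = np.cos(psi)

# curl components for a z-independent field (d/dz = 0); axis 0 = x, axis 1 = y
d = lambda f, axis: np.gradient(f, dx, axis=axis)
curl_x = d(nz, 1)
curl_y = -d(nz, 0)
curl_z = d(ny, 0) - d(nx, 1)
twist = nx * curl_x + ny * curl_y + nz * curl_z      # n . (curl n)

inside = R < 0.9 * L                 # avoid finite-difference edge artifacts
print("twist density range:", twist[inside].min(), twist[inside].max())
print("single-signed (chiral)?", (twist[inside] < 0).all() or (twist[inside] > 0).all())
```

Counting how many multiples of π the tilt angle ψ(r) passes through between the axis and the wall gives a crude stand-in for the layer number discussed above; the actual invariants in the paper are defined through contact topology rather than this simple count.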

2.
Molecular, polymeric, colloidal, and other classes of liquids can exhibit very large, spatially heterogeneous alterations of their dynamics and glass transition temperature when confined to nanoscale domains. Considerable progress has been made in understanding the related problem of near-interface relaxation and diffusion in thick films. However, the origin of “nanoconfinement effects” on the glassy dynamics of thin films, where gradients from different interfaces interact and genuine collective finite size effects may emerge, remains a longstanding open question. Here, we combine molecular dynamics simulations, probing 5 decades of relaxation, and the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory, addressing 14 decades in timescale, to establish a microscopic and mechanistic understanding of the key features of altered dynamics in freestanding films spanning the full range from ultrathin to thick films. Simulations and theory are in qualitative and near-quantitative agreement without use of any adjustable parameters. For films of intermediate thickness, the dynamical behavior is well predicted to leading order using a simple linear superposition of thick-film exponential barrier gradients, including a remarkable suppression and flattening of various dynamical gradients in thin films. However, in sufficiently thin films the superposition approximation breaks down due to the emergence of genuine finite size confinement effects. ECNLE theory extended to treat thin films captures the phenomenology found in simulation, without invocation of any critical-like phenomena, on the basis of interface-nucleated gradients of local caging constraints, combined with interfacial and finite size-induced alterations of the collective elastic component of the structural relaxation process.

Spatially heterogeneous dynamics in glass-forming liquids confined to nanoscale domains (1–7) play a major role in determining the properties of molecular, polymeric, colloidal, and other glass-forming materials (8), including thin films of polymers (9, 10) and small molecules (11–15), small-molecule liquids in porous media (2, 4, 16, 17), semicrystalline polymers (18, 19), polymer nanocomposites (20–22), ionomers (23–25), self-assembled block and layered (26–33) copolymers, and vapor-deposited ultrastable molecular glasses (34–36). Intense interest in this problem over the last 30 y has also been motivated by the expectation that its understanding could reveal key insights concerning the mechanism of the bulk glass transition. Considerable progress has been made for near-interface altered dynamics in thick films, as recently critically reviewed (1). Large-amplitude gradients of the structural relaxation time, τ(z,T), converge to the bulk value, τbulk(T), in an intriguing double-exponential manner with distance, z, from a solid or vapor interface (1–3, 37–42). This implies that the corresponding effective activation barrier, Ftotal(z,T,H) (where H is film thickness), varies exponentially with z, as does the glass transition temperature, Tg (37). Thus the fractional reduction in activation barrier, ε(z,H), obeys the equation ε(z,H) ≡ 1 - Ftotal(z,T,H)/Ftotal,bulk(T) = ε0 exp(-z/ξF), where Ftotal,bulk(T) is the bulk temperature-dependent barrier and ξF is a length scale of modest magnitude. Although the gradient of the reduction in absolute activation barriers becomes stronger with cooling, the amplitude of the fractional reduction of the barrier gradient, quantified by ε0, and the range ξF of this gradient exhibit a weak or absent temperature dependence at the lowest temperatures accessed by simulations (typically with the strength of the temperature dependence of ξF decreasing rather than increasing on cooling), which extend to relaxation timescales of order 10⁵ ps. This finding raises questions regarding the relevance of critical-phenomena-like ideas for nanoconfinement effects (1). Partially due to this temperature invariance, coarse-grained and all-atom simulations (1, 37, 42, 43) have found a striking empirical fractional power-law decoupling relation between τ(z,T) and τbulk(T):

τ(z,T)/τbulk(T) = [τbulk(T)]^(-ε(z)). [1]

Recent theoretical analysis suggests (44) that this behavior is consistent with a number of experimental data sets as well (45, 46). Eq. 1 also corresponds to a remarkable factorization of the temperature and spatial-location dependences of the barrier:

Ftotal(z,T) = [1 - ε(z)] Ftotal,bulk(T). [2]

This finding indicates that the activation barrier for near-interface relaxation can be factored into two contributions: a z-dependent, but T-independent, "decoupling exponent," ε(z), and a temperature-dependent, but position-insensitive, bulk activation barrier, Ftotal,bulk(T). Eq. 2 further emphasizes that ε(z) is equivalent to an effective fractional barrier reduction factor (for a vapor interface), 1 - Ftotal(z,T,H)/Ftotal,bulk(T), that can be extracted from relaxation data. In contrast, the origin of "nanoconfinement effects" in thin films, and how much of the rich thick-film physics survives when dynamic gradients from two interfaces overlap, is not well understood. The distinct theoretical efforts for aspects of the thick-film phenomenology (44, 47–50) mostly assume an additive summation of one-interface effects in thin films, thereby ignoring possibly crucial cooperative and whole-film finite-size confinement effects.
If the latter involve phase-transition-like physics as per recent speculations (14, 51), one can ask the following: do new length scales emerge that might be truncated by finite film size? Alternatively, does ultrathin-film phenomenology arise from a combination of two-interface superposition of the thick-film gradient physics and noncritical cooperative effects, perhaps in a property-, temperature-, and/or thickness-dependent manner? Here, we answer these questions and establish a mechanistic understanding of thin-film dynamics for the simplest and most universal case: a symmetric freestanding film with two vapor interfaces. We focus on small molecules (modeled theoretically as spheres) and low to medium molecular weight unentangled polymers, which empirically exhibit quite similar alterations in dynamics under "nanoconfinement." We do not address anomalous phenomena [e.g., much longer gradient ranges (29), sporadic observation of two distinct glass transition temperatures (52, 53)] that are sometimes reported in experiments with very high molecular weight polymers and which may be associated with poorly understood chain-connectivity effects that are distinct from general glass formation physics (54–56). We employ a combination of molecular dynamics simulations with a zero-parameter extension to thin films of the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory (57, 58). This theory has previously been shown to predict well both bulk activated relaxation over up to 14 decades (44–46) and the full single-gradient phenomenology in thick films (1). Here, we extend this theory to treat films of finite thickness, accounting for coupled interface and geometric confinement effects. We compare predictions of ECNLE theory to our previously reported (37, 43) and new simulations, which focus on translational dynamics of films composed of a standard Kremer–Grest-like bead-spring polymer model (see SI Appendix). These simulations cover a wide range of film thicknesses (H, from 4 to over 90 segment diameters σ) and extend to low temperatures where the bulk alpha time is ∼0.1 μs (10⁵ Lennard-Jones time units τLJ). The generalized ECNLE theory is found to be in agreement with simulation for all levels of nanoconfinement. We emphasize that this theory does not a priori assume any of the empirically established behaviors discovered using simulation (e.g., fractional power-law decoupling, double-exponential barrier gradient, gradient flattening) but rather predicts these phenomena based upon interfacial modifications of the two coupled contributions to the underlying activation barrier: local caging constraints and a long-ranged collective elastic field. It is notable that this strong agreement is found despite the fact that the dynamical ideas are approximate and a simple hard-sphere fluid model is employed, in contrast to the bead-spring polymers employed in simulation.
The basic units of length in simulation (bead size σ) and theory (hard-sphere diameter d) are expected to be proportional to within a prefactor of order unity, which we neglect in making comparisons. As an empirical matter, we find from simulation that many features of thin-film behavior can be described to leading order by a linear superposition of the thick-film gradients in activation barrier, that is:

ε(z,H) = 1 - Ftotal(z,T,H)/Ftotal,bulk(T) ≈ ε0[exp(-z/ξF) + exp(-(H - z)/ξF)], [3]

where the intrinsic decay length ξF is unaltered from its thick-film value and where ε0 is a constant that, in the hypothesis of literal gradient additivity, is invariant to temperature and film thickness. We employ this functional form [originally suggested by Binder and coworkers (59)], which is based on a simple superposition of the two single-interface gradients, as a null hypothesis throughout this study: this form is what one expects if no new finite-size physics enters the thin-film problem relative to the thick film. However, we find that the superposition approximation progressively breaks down, and eventually entirely fails, in ultrathin films as a consequence of the emergence of a finite-size confinement effect. The ECNLE theory predicts that this failure is not tied to a phase-transition-like mechanism but rather is a consequence of two key coupled physical effects: 1) transfer of surface-induced reduction of local caging constraints into the film, and 2) interfacial truncation and nonadditive modifications of the collective elastic contribution to the activation barrier.
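A minimal numerical sketch of the relations above (Eqs. 1–3) follows; the values of ε0, ξF, and τbulk are illustrative placeholders, not the fitted parameters of the study.

```python
import numpy as np

# Sketch of Eqs. 1-3 above with assumed parameters: the single-interface barrier
# reduction eps(z) = eps0*exp(-z/xi), its two-interface superposition for a film
# of thickness H, and the fractional power-law form tau/tau_bulk = tau_bulk^(-eps)
# (tau expressed in reduced simulation units, as in the empirical relation).

eps0 = 0.25          # fractional barrier reduction at a vapor interface (assumed)
xi_F = 2.0           # gradient range in segment diameters (assumed)
tau_bulk = 1.0e5     # bulk alpha relaxation time in reduced units (assumed)

def eps_film(z, H):
    """Eq. 3: null-hypothesis superposition of the two interface gradients."""
    return eps0 * (np.exp(-z / xi_F) + np.exp(-(H - z) / xi_F))

def tau(z, H):
    """Eq. 1: fractional power-law decoupling, tau/tau_bulk = tau_bulk^(-eps)."""
    return tau_bulk * tau_bulk ** (-eps_film(z, H))

for H in (30.0, 8.0, 4.0):
    z_mid = H / 2.0
    print(f"H = {H:5.1f}  eps(midplane) = {eps_film(z_mid, H):.3f}  "
          f"tau(midplane)/tau_bulk = {tau(z_mid, H) / tau_bulk:.3g}")
```

In a thick film the midplane recovers bulk dynamics, whereas for small H the two exponentials overlap everywhere; this additive form is exactly the null hypothesis whose breakdown in ultrathin films is reported above.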

3.
The intracellular milieu differs from the dilute conditions in which most biophysical and biochemical studies are performed. This difference has led both experimentalists and theoreticians to tackle the challenging task of understanding how the intracellular environment affects the properties of biopolymers. Despite a growing number of in-cell studies, there is a lack of quantitative, residue-level information about equilibrium thermodynamic protein stability under nonperturbing conditions. We report the use of NMR-detected hydrogen–deuterium exchange of quenched cell lysates to measure individual opening free energies of the 56-aa B1 domain of protein G (GB1) in living Escherichia coli cells without adding destabilizing cosolutes or heat. Comparisons to dilute solution data (pH 7.6 and 37 °C) show that opening free energies increase by as much as 1.14 ± 0.05 kcal/mol in cells. Importantly, we also show that homogeneous protein crowders destabilize GB1, highlighting the challenge of recreating the cellular interior. We discuss our findings in terms of hard-core excluded volume effects, charge–charge GB1-crowder interactions, and other factors. The quenched lysate method identifies the residues most important for folding GB1 in cells, and should prove useful for quantifying the stability of other globular proteins in cells to gain a more complete understanding of the effects of the intracellular environment on protein chemistry.Proteins function in a heterogeneous and crowded intracellular environment. Macromolecules comprise 20–30% of the volume of an Escherichia coli cell and reach concentrations of 300–400 g/L (1, 2). Theory predicts that the properties of proteins and nucleic acids can be significantly altered in cells compared with buffer alone (3, 4). Nevertheless, most biochemical and biophysical studies are conducted under dilute (<10 g/L macromolecules) conditions. Here, we augment the small but growing list of reports probing the equilibrium thermodynamic stability of proteins in living cells (59), and provide, to our knowledge, the first measurement of residue-level stability under nonperturbing conditions.Until recently, the effects of macromolecular crowding on protein stability were thought to be caused solely by hard-core, steric repulsions arising from the impenetrability of matter (4, 10, 11). The expectation was that crowding enhances stability by favoring the compact native state over the ensemble of denatured states. Increased attention to transient, nonspecific protein-protein interactions (1215) has led both experimentalists (1619) and theoreticians (2022) to recognize the effects of chemical interactions between crowder and test protein when assessing the net effect of macromolecular crowding. These weak, nonspecific interactions can reinforce or oppose the effect of hard-core repulsions, resulting in increased or decreased stability depending on the chemical nature of the test protein and crowder (2326).We chose the B1 domain of streptococcal protein G (GB1) (27) as our test protein because its structure, stability and folding kinetics have been extensively studied in dilute solution (2838). Its small size (56 aa; 6.2 kDa) and high thermal stability make GB1 well suited for studies by NMR spectroscopy.Quantifying the equilibrium thermodynamic stability of proteins relies on determining the relative populations of native and denatured states. 
Because the denatured-state ensemble of a stable protein is sparsely populated under native conditions, stability is usually probed by adding heat or a cosolute to promote unfolding so that the concentration ratio of the two states can be determined (39). However, stability can be measured without these perturbations by exploiting the phenomenon of backbone amide H/D exchange (40) detected by NMR spectroscopy (41). The observed rate of amide proton (N–H) exchange, kobs, is related to equilibrium stability by considering a protein in which each N–H exists in an open (exposed, exchange-competent) state or a closed (protected, exchange-incompetent) state (40, 42):

closed(N–H) ⇌ open(N–H) → open(N–D) ⇌ closed(N–D). [1]

Each position opens and closes with rate constants kop and kcl (where Kop = kop/kcl), and exchange from the open state occurs with intrinsic rate constant kint. Values for kint are based on exchange data from unstructured peptides (43, 44). If the test protein is stable (i.e., kcl >> kop), the observed rate becomes:

kobs = kop kint/(kcl + kint). [2]

Exchange occurs within two limits (42). At the EX1 limit, closing is rate determining, and kobs = kop. This limit is usually observed for less stable proteins and at basic pH (45). Most globular proteins undergo EX2 kinetics, where exchange from the open state is rate limiting (i.e., kcl >> kint), and kobs values can be converted to equilibrium opening free energies, ΔGop° (46):

kobs = (kop/kcl) kint = Kop kint, [3]
ΔGop° = -RT ln(kobs/kint), [4]

where RT is the molar gas constant multiplied by the absolute temperature. The backbone amides most strongly involved in H-bonded regions of secondary structure exchange only from the fully unfolded state, yielding a maximum value of ΔGop° (47–49). For these residues ΔGop° approximates the free energy of denaturation, ΔGden°, providing information on global stability. Lower-amplitude fluctuations of the native state can give rise to partially unfolded forms (50), resulting in residues with ΔGop° values less than those of the global unfolders. In summary, NMR-detected H/D exchange can measure the equilibrium thermodynamic stability of a protein at the level of individual amino acid residues under nonperturbing conditions. Inomata et al. (51) used this technique to measure kobs values in human cells for four residues in ubiquitin, but experiments confirming the exchange mechanism were not reported and opening free energies were not quantified. Our results fill this void and provide quantitative residue-level protein stability measurements in living cells under nonperturbing conditions.
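As a quick numerical illustration of the EX2-limit bookkeeping in Eqs. 3 and 4 (the rate values below are hypothetical, not data from this study):

```python
import numpy as np

# EX2 limit: kobs = Kop * kint  =>  dG_op = -R*T*ln(kobs/kint).
R = 1.987e-3          # molar gas constant, kcal / (mol K)
T = 310.15            # 37 degrees C in K, matching the solution conditions quoted above

def dG_op(k_obs, k_int, temperature=T):
    """Opening free energy (kcal/mol) from observed and intrinsic exchange rates."""
    K_op = k_obs / k_int
    return -R * temperature * np.log(K_op)

# hypothetical residue: intrinsic rate 10 s^-1, observed rate 1e-4 s^-1
print(f"dG_op = {dG_op(1.0e-4, 10.0):.2f} kcal/mol")

# a 1.14 kcal/mol stabilization (the largest in-cell shift quoted above)
# corresponds to roughly a 6-fold drop in the opening equilibrium constant:
print(f"Kop ratio for 1.14 kcal/mol: {np.exp(-1.14 / (R * T)):.2f}")
```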

4.
Fluids are known to trigger a broad range of slip events, from slow, creeping transients to dynamic earthquake ruptures. Yet, the detailed mechanics underlying these processes and the conditions leading to different rupture behaviors are not well understood. Here, we use a laboratory earthquake setup, capable of injecting pressurized fluids, to compare the rupture behavior for different rates of fluid injection, slow (megapascals per hour) versus fast (megapascals per second). We find that for the fast injection rates, dynamic ruptures are triggered at lower pressure levels and over spatial scales much smaller than the quasistatic theoretical estimates of nucleation sizes, suggesting that such fast injection rates constitute dynamic loading. In contrast, the relatively slow injection rates result in gradual nucleation processes, with the fluid spreading along the interface and causing stress changes consistent with gradually accelerating slow slip. The resulting dynamic ruptures propagating over wetted interfaces exhibit dynamic stress drops almost twice as large as those over the dry interfaces. These results suggest the need to take into account the rate of the pore-pressure increase when considering nucleation processes and motivate further investigation on how friction properties depend on the presence of fluids.

The close connection between fluids and faulting has been revealed by a large number of observations, both in tectonic settings and during human activities, such as wastewater disposal associated with oil and gas extraction, geothermal energy production, and CO2 sequestration (1–11). On and around tectonic faults, fluids also naturally exist and are added at depths due to rock-dehydration reactions (12–15). Fluid-induced slip behavior can range from earthquakes to slow, creeping motion. It has long been thought that creeping and seismogenic fault zones have little to no spatial overlap. Nonetheless, growing evidence suggests that the same fault areas can exhibit both slow and dynamic slip (16–19). The existence of large-scale slow slip in potentially seismogenic areas has been revealed by the presence of transient slow-slip events in subduction zones (16, 18) and proposed by studies investigating the physics of foreshocks (20–22). Numerical and laboratory modeling has shown that such complex fault behavior can result from the interaction of fluid-related effects with the rate-and-state frictional properties (9, 14, 19, 23, 24); other proposed rheological explanations for complexities in fault stability include combinations of brittle and viscous rheology (25) and friction-to-flow transitions (26). The interaction of frictional sliding and fluids results in a number of coupled and competing mechanisms. The fault shear resistance τres is typically described by a friction model that linearly relates it to the effective normal stress σ̄n via a friction coefficient f:

τres = f σ̄n = f(σn - p), [1]

where σn is the normal stress acting across the fault and p is the pore pressure. Clearly, increasing the pore pressure p reduces the fault frictional resistance, promoting the onset of slip. However, such slip need not be fast enough to radiate seismic waves, as would be characteristic of an earthquake, but can be slow and aseismic. In fact, the critical spatial scale h* that the slipping zone must reach in order to initiate an unstable, dynamic event is inversely proportional to the effective normal stress (27, 28) and hence increases with increasing pore pressure, promoting stable slip. This stabilizing effect of increasing fluid pressure holds for both linear slip-weakening and rate-and-state friction; it occurs because lower effective normal stress results in lower fault weakening during slip for the same friction properties. For example, the general form for two-dimensional (2D) theoretical estimates of this so-called nucleation size, h*, on rate-and-state faults with steady-state, velocity-weakening friction is given by:

h* = μ* DRS/[F(a,b)(σn - p)], [2]

where μ* = μ/(1 - ν) for modes I and II, and μ* = μ for mode III (29); DRS is the characteristic slip distance; and F(a, b) is a function of the rate-and-state friction parameters a and b. The function F(a, b) depends on the specific assumptions made to obtain the estimate: FRR(a,b) = 4(b - a)/π (ref. 27, equation 40) for a linearized stability analysis of steady sliding, or FRA(a,b) = π(b - a)²/(2b), with a/b > 1/2, for quasistatic crack-like expansion of the nucleation zone (ref. 30, equation 42). Hence, an increase in pore pressure induces a reduction in the effective normal stress, which both promotes slip due to lower frictional resistance and increases the critical length scale h*, potentially resulting in slow, stable fault slip instead of fast, dynamic rupture. Indeed, recent field and laboratory observations suggest that fluid injection triggers slow slip first (4, 9, 11, 31).
Numerical modeling based on these effects, either by themselves or with an additional stabilizing effect of shear-layer dilatancy and the associated drop in fluid pressure, have been successful in capturing a number of properties of slow-slip events observed on natural faults and in field fluid-injection experiments (14, 24, 3234). However, understanding the dependence of the fault response on the specifics of pore-pressure increase remains elusive. Several studies suggest that the nucleation size can depend on the loading rate (3538), which would imply that the nucleation size should also depend on the rate of friction strength change and hence on the rate of change of the pore fluid pressure. The dependence of the nucleation size on evolving pore fluid pressure has also been theoretically investigated (39). However, the commonly used estimates of the nucleation size (Eq. 2) have been developed for faults under spatially and temporally uniform effective stress, which is clearly not the case for fluid-injection scenarios. In addition, the friction properties themselves may change in the presence of fluids (4042). The interaction between shear and fluid effects can be further affected by fault-gauge dilation/compaction (40, 4345) and thermal pressurization of pore fluids (42, 4648).Recent laboratory investigations have been quite instrumental in uncovering the fundamentals of the fluid-faulting interactions (31, 45, 4957). Several studies have indicated that fluid-pressurization rate, rather than injection volume, controls slip, slip rate, and stress drop (31, 49, 57). Rapid fluid injection may produce pressure heterogeneities, influencing the onset of slip. The degree of heterogeneity depends on the balance between the hydraulic diffusion rate and the fluid-injection rate, with higher injection rates promoting the transition from drained to locally undrained conditions (31). Fluid pressurization can also interact with friction properties and produce dynamic slip along rate-strengthening faults (50, 51).In this study, we investigate the relation between the rate of pressure increase on the fault and spontaneous rupture nucleation due to fluid injection by laboratory experiments in a setup that builds on and significantly develops the previous generations of laboratory earthquake setup of Rosakis and coworkers (58, 59). The previous versions of the setup have been used to study key features of dynamic ruptures, including sub-Rayleigh to supershear transition (60); rupture directionality and limiting speeds due to bimaterial effects (61); pulse-like versus crack-like behavior (62); opening of thrust faults (63); and friction evolution (64). A recent innovation in the diagnostics, featuring ultrahigh-speed photography in conjunction with digital image correlation (DIC) (65), has enabled the quantification of the full-field behavior of dynamic ruptures (6668), as well as the characterization of the local evolution of dynamic friction (64, 69). In these prior studies, earthquake ruptures were triggered by the local pressure release due to an electrical discharge. This nucleation procedure produced only dynamic ruptures, due to the nearly instantaneous normal stress reduction.To study fault slip triggered by fluid injection, we have developed a laboratory setup featuring a hydraulic circuit capable of injecting pressurized fluid onto the fault plane of a specimen and a set of experimental diagnostics that enables us to detect both slow and fast fault slip and stress changes. 
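The competition described above between hydraulic diffusion and the injection rate can be sketched with a minimal one-dimensional pore-pressure diffusion model; the diffusivity, ramp times, and pressures below are assumed for illustration and are not the experimental values.

```python
import numpy as np

# Minimal 1D sketch (assumed parameters) of pore-pressure diffusion along a fault
# fed by injection at x = 0, contrasting a slow and a fast pressure ramp.
# dp/dt = alpha_hy * d2p/dx2, explicit finite differences, no-flux far boundary.

alpha_hy = 1.0e-5                 # hydraulic diffusivity, m^2/s (assumed)
Lx, nx = 0.2, 201                 # modeled fault length (m) and grid points
dx = Lx / (nx - 1)
dt = 0.2 * dx**2 / alpha_hy       # stable explicit time step
p_target = 4.0e6                  # target injection pressure, Pa (assumed)

def evolve(ramp_time, t_end):
    """Pressure profile at t_end for a linear boundary ramp of duration ramp_time."""
    p = np.zeros(nx)
    t = 0.0
    while t < t_end:
        p[0] = p_target * min(t / ramp_time, 1.0)   # injection boundary condition
        p[1:-1] += alpha_hy * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
        p[-1] = p[-2]                               # far-field no-flux boundary
        t += dt
    return p

x = np.linspace(0.0, Lx, nx)
for label, ramp in (("slow ramp (100 s)", 100.0), ("fast ramp (1 s)", 1.0)):
    p = evolve(ramp, t_end=100.0)
    print(label, "pressurized length (p > 0.5 MPa): %.3f m" % x[p > 0.5e6].max())
```

With a fast ramp the boundary reaches the full pressure long before diffusion can relax it, so the pressure field is strongly localized near the injection point; with a slow ramp diffusion keeps pace and the profile is smoother, qualitatively mirroring the drained-to-undrained trend described above.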
The range of fluid-pressure time histories produced by this setup results in both quasistatic and dynamic rupture nucleation; the diagnostics allow us to capture the nucleation processes, as well as the resulting dynamic rupture propagation. Here, in particular, we explore two injection techniques: procedure 1, a gradual fluid-pressure ramp-up, and procedure 2, a sharp one. An array of strain gauges, placed on the specimen’s surface along the fault, can capture the strain (translated into stress) time histories over a wide range of temporal scales, spanning from microseconds to tens of minutes. Once dynamic ruptures nucleate, an ultrahigh-speed camera records images of the propagating ruptures, which are turned into maps of full-field displacements, velocities, and stresses by a tailored DIC analysis. One advantage of using a specimen made of an analog material, such as the poly(methyl methacrylate) (PMMA) used in this study, is its transparency, which allows us to look at the interface through the bulk and observe fluid diffusion over the interface. Another important advantage of using PMMA is that its much lower shear modulus results in much smaller nucleation sizes h* than those for rocks, allowing the experiments to produce both slow and fast slip in samples of manageable sizes. We start by describing the laboratory setup and the diagnostics monitoring the pressure evolution and the slip behavior. We then present and discuss the different slip responses measured as a result of slow versus fast fluid injection and interpret our measurements by using the rate-and-state friction framework and a pressure-diffusion model.
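The contrast in nucleation size between a compliant analog material and rock noted above follows directly from Eq. 2. The sketch below evaluates that estimate with the two F(a,b) forms quoted earlier; all material and friction parameters are assumed, order-of-magnitude values, not the ones used in the experiments.

```python
import numpy as np

# Sketch of the nucleation-size estimate in Eq. 2 (assumed parameters only).

def F_RR(a, b):
    """Linearized stability analysis of steady sliding (ref. 27 form)."""
    return 4.0 * (b - a) / np.pi

def F_RA(a, b):
    """Quasistatic crack-like nucleation-zone expansion (ref. 30 form), a/b > 1/2."""
    return np.pi * (b - a) ** 2 / (2.0 * b)

def h_star(mu, nu, D_RS, a, b, sigma_n, p, form, mode="II"):
    """Eq. 2: h* = mu* D_RS / [F(a,b) (sigma_n - p)]; mu* = mu/(1-nu) for modes I, II."""
    mu_star = mu / (1.0 - nu) if mode in ("I", "II") else mu
    return mu_star * D_RS / (form(a, b) * (sigma_n - p))

a, b = 0.011, 0.016          # rate-and-state parameters (assumed, a/b > 1/2)
D_RS = 1.0e-6                # characteristic slip distance, m (assumed)
sigma_n, p = 10e6, 4e6       # normal stress and pore pressure, Pa (assumed)
for label, mu, nu in (("PMMA-like", 1.7e9, 0.35), ("rock-like", 30e9, 0.25)):
    print(label,
          "h*_RR = %.3f m" % h_star(mu, nu, D_RS, a, b, sigma_n, p, F_RR),
          "h*_RA = %.3f m" % h_star(mu, nu, D_RS, a, b, sigma_n, p, F_RA))
```

The order-of-magnitude gap between the two rows illustrates why a low-shear-modulus analog material can host the full nucleation process in a bench-top sample.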

5.
Quantum coherence, an essential feature of quantum mechanics allowing quantum superposition of states, is a resource for quantum information processing. Coherence emerges in a fundamentally different way for nonidentical and identical particles. For the latter, a unique contribution exists linked to indistinguishability that cannot occur for nonidentical particles. Here we experimentally demonstrate this additional contribution to quantum coherence with an optical setup, showing that its amount directly depends on the degree of indistinguishability and exploiting it in a quantum phase discrimination protocol. Furthermore, the designed setup allows for simulating fermionic particles with photons, thus assessing the role of exchange statistics in coherence generation and utilization. Our experiment proves that independent indistinguishable particles can offer a controllable resource of coherence and entanglement for quantum-enhanced metrology.

A quantum system can reside in coherent superpositions of states, which have a role in the interpretation of quantum mechanics (14), lead to nonclassicality (5, 6), and imply the intrinsically probabilistic nature of predictions in the quantum realm (7, 8). Besides this fundamental role, quantum coherence is also at the basis of quantum algorithms (914) and, from a modern information-theoretic perspective, constitutes a paradigmatic basis-dependent quantum resource (1517), providing a quantifiable advantage in certain quantum information protocols.For a single quantum particle, coherence manifests itself when the particle is found in a superposition of a reference basis, for instance, the computational basis of the Hilbert space. Formally, any quantum state whose density matrix contains nonzero diagonal elements when expressed in the reference basis is said to display quantum coherence (16). This is the definition of quantum coherence employed in our work. For multiparticle compound systems, the physics underlying the emergence of quantum coherence is richer and strictly connected to the nature of the particles, with fundamental differences for nonidentical and identical particles. A particularly intriguing observation is that the states of identical particle systems can manifest coherence even when no particle resides in superposition states, provided that the wave functions of the particles overlap (1820). In general, a special contribution to quantum coherence arises thanks to the spatial indistinguishability of identical particles, which cannot exist for nonidentical (or distinguishable) particles (18). Recently, it has been found that the spatial indistinguishability of identical particles can be exploited for entanglement generation (21), applicable even for spacelike-separated quanta (22) and against preparation and dynamical noises (2326). The presence of entanglement is a signature that the bipartite system as a whole carries coherence even when the individual particles do not, the amount of this coherence being dependent on the degree of indistinguishability. We name this specific contribution to quantumness of compound systems “indistinguishability-based coherence,” in contrast to the more familiar “single-particle superposition-based coherence.” Indistinguishability-based coherence qualifies in principle as an exploitable resource for quantum metrology (18). However, it requires sophisticated control techniques to be harnessed, especially in view of its nonlocal nature. Moreover, a crucial property of identical particles is the exchange statistics, while its experimental study requiring operating both bosons and fermions in the same setup is generally challenging.In the present work, we investigate the operational contribution of quantum coherence stemming from the spatial indistinguishability of identical particles. The main aim of our experiment is to prove that elementary states of two independent spatially indistinguishable particles can give rise to exploitable quantum coherence, with a measurable effect due to particle statistics. By utilizing our recently developed photonic architecture capable of tuning the indistinguishability of two uncorrelated photons (27), we observe the direct connection between the degree of indistinguishability and the amount of generated coherence and show that indistinguishability-based coherence can be concurrent with single-particle superposition-based coherence. 
In particular, we demonstrate its operational implications, namely, providing a quantifiable advantage in a phase discrimination task (28, 29), as depicted in Fig. 1. Furthermore, we design a setup capable of testing the impact of particle statistics on coherence production and phase discrimination for both bosons and fermions; this is accomplished by compensating for the exchange phase during state preparation, simulating fermionic states with photons, which leads to a statistics-dependent efficiency of the quantum task.

Fig. 1. Illustration of the indistinguishability-activated phase discrimination task. A resource state ρin that contains coherence in a computational basis is generated from spatial indistinguishability. The state then enters a black box which implements a phase unitary Ûk = exp(iĜφk), k ∈ {1, …, n}, on ρin. The goal is to determine which φk was actually applied from the output state ρout: indistinguishability-based coherence provides an operational advantage in this task.
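A toy numerical sketch of the link claimed above between basis coherence and phase discrimination. The 2 × 2 state below, whose off-diagonal element is scaled by a parameter r standing in for the degree of indistinguishability, is an illustrative construction, not the experimental two-photon state; the guessing probability uses the standard Helstrom bound for two equiprobable phase settings.

```python
import numpy as np

# l1-norm coherence of a toy resource state and the optimal probability of
# discriminating the phase unitaries U0 = diag(1, 1) and U1 = diag(1, -1)
# applied with equal priors (Helstrom bound).

def coherence_l1(rho):
    """Sum of absolute values of the off-diagonal elements."""
    return np.abs(rho - np.diag(np.diag(rho))).sum()

def p_guess(rho):
    """1/2 + (1/4) * || U0 rho U0+ - U1 rho U1+ ||_1 for equal priors."""
    U1 = np.diag([1.0, -1.0])
    diff = rho - U1 @ rho @ U1.conj().T
    trace_norm = np.abs(np.linalg.eigvalsh(diff)).sum()
    return 0.5 + 0.25 * trace_norm

for r in (0.0, 0.5, 1.0):
    rho = np.array([[0.5, 0.5 * r], [0.5 * r, 0.5]])
    print(f"r = {r:.1f}  C_l1 = {coherence_l1(rho):.2f}  p_guess = {p_guess(rho):.2f}")
```

With r = 0 the state is incoherent and the task reduces to a coin flip; with r = 1 the two phase settings are distinguished perfectly, mirroring the coherence-to-advantage connection described above.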

6.
The transacting activator of transduction (TAT) protein plays a key role in the progression of AIDS. Studies have shown that a +8 charged sequence of amino acids in the protein, called the TAT peptide, enables the TAT protein to penetrate cell membranes. To probe mechanisms of binding and translocation of the TAT peptide into the cell, investigators have used phospholipid liposomes as cell membrane mimics. We have used the method of surface potential sensitive second harmonic generation (SHG), which is a label-free and interface-selective method, to study the binding of TAT to anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG) and neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) liposomes. It is the SHG sensitivity to the electrostatic field generated by a charged interface that enabled us to obtain the interfacial electrostatic potential. SHG together with the Poisson–Boltzmann equation yielded the dependence of the surface potential on the density of adsorbed TAT. We obtained the dissociation constants Kd for TAT binding to POPC and POPG liposomes and the maximum number of TATs that can bind to a given liposome surface. For POPC Kd was found to be 7.5 ± 2 μM, and for POPG Kd was 29.0 ± 4.0 μM. As TAT was added to the liposome solution the POPC surface potential changed from 0 mV to +37 mV, and for POPG it changed from −57 mV to −37 mV. A numerical calculation of Kd, which included all terms obtained from application of the Poisson–Boltzmann equation to the TAT liposome SHG data, was shown to be in good agreement with an approximated solution.The HIV type 1 (HIV-1) transacting activator of transduction (TAT) is an important regulatory protein for viral gene expression (13). It has been established that the TAT protein has a key role in the progression of AIDS and is a potential target for anti-HIV vaccines (4). For the TAT protein to carry out its biological functions, it needs to be readily imported into the cell. Studies on the cellular internalization of TAT have led to the discovery of the TAT peptide, a highly cationic 11-aa region (protein transduction domain) of the 86-aa full-length protein that is responsible for the TAT protein translocating across phospholipid membranes (58). The TAT peptide is a member of a class of peptides called cell-penetrating peptides (CPPs) that have generated great interest for drug delivery applications (ref. 9 and references therein). The exact mechanism by which the TAT peptide enters cells is not fully understood, but it is likely to involve a combination of energy-independent penetration and endocytosis pathways (8, 10). The first step in the process is high-affinity binding of the peptide to phospholipids and other components on the cell surface such as proteins and glycosaminoglycans (1, 9).The binding of the TAT peptide to liposomes has been investigated using a variety of techniques, each of which has its own advantages and limitations. Among the techniques are isothermal titration calorimetry (9, 11), fluorescence spectroscopy (12, 13), FRET (12, 14), single-molecule fluorescence microscopy (15, 16), and solid-state NMR (17). Second harmonic generation (SHG), as an interface-selective technique (1824), does not require a label, and because SHG is sensitive to the interface potential, it is an attractive method to selectively probe the binding of the highly charged (+8) TAT peptide to liposome surfaces. 
Although coherent SHG is forbidden in centrosymmetric and isotropic bulk media for reasons of symmetry, it can be generated by a centrosymmetric structure, e.g., a sphere, provided that the object is centrosymmetric over roughly the length scale of the optical coherence, which is a function of the particle size, the wavelength of the incident light, and the refractive indexes at ω and 2ω (25–30). As a second-order nonlinear optical technique, SHG has symmetry restrictions such that coherent SHG is not generated by the randomly oriented molecules in the bulk liquid, but can be generated coherently by the much smaller population of oriented interfacial species bound to a particle or a planar surface. As a consequence, the SHG signal from the interface is not overwhelmed by SHG from the much larger populations in the bulk media (25–28). The total second harmonic electric field, E2ω, originating from a charged interface in contact with water can be expressed as (31–33):

E2ω ∝ Σi χc,i(2) Eω Eω + Σj χinc,j(2) Eω Eω + χH2O(3) Eω Eω Φ, [1]

where χc,i(2) represents the second-order susceptibility of the species i present at the interface; χinc,j(2) represents the incoherent contribution of the second-order susceptibility, arising from density and orientational fluctuations of the species j present in solution, often referred to as hyper-Rayleigh scattering; χH2O(3) is the third-order susceptibility originating chiefly from the polarization of the bulk water molecules polarized by the charged interface; Φ is the potential at the interface that is created by the surface charge; and Eω is the electric field of the incident light at the fundamental frequency ω. The second-order susceptibility, χc,i(2), can be written as the product of the number of molecules, N, at the surface and the orientational ensemble average of the hyperpolarizability αi(2) of surface species i, yielding χc,i(2) = N⟨αi(2)⟩ (18). The brackets ⟨ ⟩ indicate an orientational average over the interfacial molecules. The third term in Eq. 1 depicts a third-order process by which a second harmonic field is generated by a charged interface. This term is the focus of our work. The SHG signal is dependent on the surface potential created by the electrostatic field of the surface charges, often called the χ(3) contribution to the SHG signal. The χ(3) method has been used to extract the surface charge density of charged planar surfaces and microparticle surfaces, e.g., liposomes, polymer beads, and oil droplets in water (21, 25, 34–39). In this work, the χ(3) SHG method is used to explore a biomedically relevant process. The binding of the highly cationic HIV-1 TAT peptide to liposome membranes changes the surface potential, thereby enabling the use of the χ(3) method to study the binding process in a label-free manner. Two kinds of liposomes, neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG), were investigated. The chemical structures of TAT, POPC, and POPG lipids are shown in Scheme 1.

Scheme 1. Chemical structures of the HIV-1 TAT (47–57) peptide and the POPC and POPG lipids.
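The kind of analysis described above, adsorption of a charged peptide changing the surface potential, can be sketched by combining a Langmuir isotherm for TAT coverage with the Gouy–Chapman solution of the Poisson–Boltzmann equation for a planar charged surface in a 1:1 electrolyte. Only the Kd value is taken from the abstract (the POPG value, 29 μM); the bare charge density, site density, and ionic strength below are assumed placeholders, not fitted results.

```python
import numpy as np

# Langmuir coverage of TAT (+8 per peptide) plus Gouy-Chapman surface potential.
kB, T, e = 1.381e-23, 298.0, 1.602e-19
eps = 78.5 * 8.854e-12            # permittivity of water, F/m
c_salt = 0.01                     # 1:1 electrolyte concentration, mol/L (assumed)
n0 = c_salt * 1e3 * 6.022e23      # ion number density, 1/m^3

sigma0 = -0.02                    # bare anionic lipid surface charge, C/m^2 (assumed)
N_max = 2.0e16                    # maximum TAT binding sites per m^2 (assumed)
Kd = 2.9e-5                       # dissociation constant, mol/L (POPG value from the abstract)
z_tat = 8                         # net charge of the TAT peptide

def surface_potential(sigma):
    """Gouy-Chapman potential (V) of a planar charged surface in a 1:1 electrolyte."""
    return (2 * kB * T / e) * np.arcsinh(sigma / np.sqrt(8 * eps * kB * T * n0))

for c_tat in (0.0, 1e-6, 1e-5, 1e-4):           # bulk TAT concentration, mol/L
    theta = c_tat / (Kd + c_tat)                # Langmuir fractional coverage
    sigma = sigma0 + theta * N_max * z_tat * e  # adsorbed TAT adds positive charge
    print(f"[TAT] = {c_tat:8.1e} M  coverage = {theta:.2f}  "
          f"potential = {1e3 * surface_potential(sigma):+.1f} mV")
```

The monotonic shift of the computed potential toward less negative values with added TAT is the qualitative behavior the χ(3) SHG measurement tracks; the full analysis in the paper couples the Poisson–Boltzmann relation to the measured SHG field rather than to an assumed isotherm.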

7.
In matter, any spontaneous symmetry breaking induces a phase transition characterized by an order parameter, such as the magnetization vector in ferromagnets, or a macroscopic many-electron wave function in superconductors. Phase transitions with unknown order parameter are rare but extremely appealing, as they may lead to novel physics. An emblematic and still unsolved example is the transition of the heavy fermion compound URu2Si2 (URS) into the so-called hidden-order (HO) phase when the temperature drops below T0=17.5 K. Here, we show that the interaction between the heavy fermion and the conduction band states near the Fermi level has a key role in the emergence of the HO phase. Using angle-resolved photoemission spectroscopy, we find that while the Fermi surfaces of the HO and of a neighboring antiferromagnetic (AFM) phase of well-defined order parameter have the same topography, they differ in the size of some, but not all, of their electron pockets. Such a nonrigid change of the electronic structure indicates that a change in the interaction strength between states near the Fermi level is a crucial ingredient for the HO to AFM phase transition.

The transition of URu2Si2 from a high-temperature paramagnetic (PM) phase to the hidden-order (HO) phase below T0 is accompanied by anomalies in specific heat (13), electrical resistivity (1, 3), thermal expansion (4), and magnetic susceptibility (2, 3) that are all typical of magnetic ordering. However, the small associated antiferromagnetic (AFM) moment (5) is insufficient to explain the large entropy loss and was shown to be of extrinsic origin (6). Inelastic neutron scattering (INS) experiments revealed gapped magnetic excitations below T0 at commensurate and incommensurate wave vectors (79), while an instability and partial gapping of the Fermi surface was observed by angle-resolved photoemission spectroscopy (ARPES) (1016) and scanning tunneling microscopy/spectroscopy (17, 18). More recently, high-resolution, low-temperature ARPES experiments imaged the Fermi surface reconstruction across the HO transition, unveiling the nesting vectors between Fermi sheets associated with the gapped magnetic excitations seen in INS experiments (14, 19) and quantitatively explaining, from the changes in Fermi surface size and quasiparticle mass, the large entropy loss in the HO phase (19). Nonetheless, the nature of the HO parameter is still hotly debated (2023).The HO phase is furthermore unstable above a temperature-dependent critical pressure of about 0.7 GPa at T=0, at which it undergoes a first-order transition into a large moment AFM phase where the value of the magnetic moment per U atom exhibits a sharp increase, by a factor of 10 to 50 (6, 2430). When the system crosses the HO AFM phase boundary, the characteristic magnetic excitations of the HO phase are either suppressed or modified (8, 31), while resistivity and specific heat measurements suggest that the partial gapping of the Fermi surface is enhanced (24, 27).As the AFM phase has a well-defined order parameter, studying the evolution of the electronic structure across the HO/AFM transition would help develop an understanding of the HO state. So far, the experimental determination of the Fermi surface by Shubnikov de Haas (SdH) oscillations only showed minor changes across the HO AFM phase boundary (32). Here, we take advantage of the HO/AFM transition induced by chemical pressure in URu2Si2, through the partial substitution of Ru with Fe (3337), to directly probe its electronic structure in the AFM phase using ARPES. As we shall see, our results reveal that changes in the Ru 4d–U 5f hybridization across the HO/AFM phase boundary seem essential for a better understanding of the HO state.  相似文献   

8.
Lyotropic chromonic liquid crystals are water-based materials composed of self-assembled cylindrical aggregates. Their behavior under flow is poorly understood, and quantitatively resolving the optical retardance of the flowing liquid crystal has so far been limited by the imaging speed of current polarization-resolved imaging techniques. Here, we employ a single-shot quantitative polarization imaging method, termed polarized shearing interference microscopy, to quantify the spatial distribution and the dynamics of the structures emerging in nematic disodium cromoglycate solutions in a microfluidic channel. We show that pure-twist disclination loops nucleate in the bulk flow over a range of shear rates. These loops are elongated in the flow direction and exhibit a constant aspect ratio that is governed by the nonnegligible splay-bend anisotropy at the loop boundary. The size of the loops is set by the balance between nucleation forces and annihilation forces acting on the disclination. The fluctuations of the pure-twist disclination loops reflect the tumbling character of nematic disodium cromoglycate. Our study, including experiment, simulation, and scaling analysis, provides a comprehensive understanding of the structure and dynamics of pressure-driven lyotropic chromonic liquid crystals and might open new routes for using these materials to control assembly and flow of biological systems or particles in microfluidic devices.

Lyotropic chromonic liquid crystals (LCLCs) are aqueous dispersions of organic disk-like molecules that self-assemble into cylindrical aggregates, which form nematic or columnar liquid crystal phases under appropriate conditions of concentration and temperature (16). These materials have gained increasing attention in both fundamental and applied research over the past decade, due to their distinct structural properties and biocompatibility (4, 714). Used as a replacement for isotropic fluids in microfluidic devices, nematic LCLCs have been employed to control the behavior of bacteria and colloids (13, 1520).Nematic liquid crystals form topological defects under flow, which gives rise to complex dynamical structures that have been extensively studied in thermotropic liquid crystals (TLCs) and liquid crystal polymers (LCPs) (2129). In contrast to lyotropic liquid crystals that are dispersed in a solvent and whose phase can be tuned by either concentration or temperature, TLCs do not need a solvent to possess a liquid-crystalline state and their phase depends only on temperature (30). Most TLCs are shear-aligned nematics, in which the director evolves toward an equilibrium out-of-plane polar angle. Defects nucleate beyond a critical Ericksen number due to the irreconcilable alignment of the directors from surface anchoring and shear alignment in the bulk flow (24, 3133). With an increase in shear rate, the defect type can transition from π-walls (domain walls that separate regions whose director orientation differs by an angle of π) to ordered disclinations and to a disordered chaotic regime (34). Recent efforts have aimed to tune and control the defect structures by understanding the relation between the selection of topological defect types and the flow field in flowing TLCs. Strategies to do so include tuning the geometry of microfluidic channels, inducing defect nucleation through the introduction of isotropic phases or designing inhomogeneities in the surface anchoring (3539). LCPs are typically tumbling nematics for which α2α3 < 0, where α2 and α3 are the Leslie viscosities. This leads to a nonzero viscous torque for any orientation of the director, which allows the director to rotate in the shear plane (22, 29, 30, 40). The tumbling character of LCPs facilitates the nucleation of singular topological defects (22, 40). Moreover, the molecular rotational relaxation times of LCPs are longer than those of TLCs, and they can exceed the timescales imposed by the shear rate. As a result, the rheological behavior of LCPs is governed not only by spatial gradients of the director field from the Frank elasticity, but also by changes in the molecular order parameter (25, 4143). With increasing shear rate, topological defects in LCPs have been shown to transition from disclinations to rolling cells and to worm-like patterns (25, 26, 43).Topological defects occurring in the flow of nematic LCLCs have so far received much more limited attention (44, 45). At rest, LCLCs exhibit unique properties distinct from those of TLCs and LCPs (1, 2, 46, 44). In particular, LCLCs have significant elastic anisotropy compared to TLCs; the twist Frank elastic constant, K2, is much smaller than the splay and bend Frank elastic constants, K1 and K3. The resulting relative ease with which twist deformations can occur can lead to a spontaneous symmetry breaking and the emergence of chiral structures in static LCLCs under spatial confinement, despite the achiral nature of the molecules (4, 4651). 
When driven out of equilibrium by an imposed flow, the average director field of LCLCs has been reported to align predominantly along the shear direction under strong shear but to reorient to an alignment perpendicular to the shear direction below a critical shear rate (52–54). A recent study has revealed a variety of complex textures that emerge in simple shear flow in the nematic LCLC disodium cromoglycate (DSCG) (44). The tumbling nature of this liquid crystal leads to an enhanced sensitivity to shear rate. At shear rates γ̇ < 1 s⁻¹, the director realigns perpendicular to the flow direction, adopting a so-called log-rolling state characteristic of tumbling nematics. For 1 s⁻¹ < γ̇ < 10 s⁻¹, polydomain textures form due to the nucleation of pure-twist disclination loops, for which the rotation vector is parallel to the loop normal, and mixed wedge-twist disclination loops, for which the rotation vector is perpendicular to the loop normal (44, 55). For γ̇ > 10 s⁻¹, the disclination loops gradually transform into periodic stripes in which the director aligns predominantly along the flow direction (44). Here, we report on the structure and dynamics of topological defects occurring in the pressure-driven flow of nematic DSCG. A quantitative evaluation of such dynamics has so far remained challenging, in particular for fast flow velocities, due to the slow image acquisition rate of current quantitative polarization-resolved imaging techniques. Quantitative polarization imaging traditionally relies on three commonly used techniques: fluorescence confocal polarization microscopy, polarizing optical microscopy, and LC-Polscope imaging. Fluorescence confocal polarization microscopy can provide accurate maps of birefringence and orientation angle, but the fluorescent labeling may perturb the flow properties (56). Polarizing optical microscopy requires a mechanical rotation of the polarizers and multiple measurements, which severely limits the imaging speed. The LC-Polscope, an extension of conventional polarization optical microscopy, utilizes liquid crystal universal compensators to replace the compensator used in conventional polarization microscopes (57). This leads to an enhanced imaging speed and better compensation for polarization artifacts of the optical system. The need for multiple measurements to quantify retardance, however, still limits the acquisition rate of LC-Polscopes. We overcome these challenges by using a single-shot quantitative polarization microscopy technique, termed polarized shearing interference microscopy (PSIM). PSIM combines circularly polarized light excitation with off-axis shearing interferometry detection. Using a custom polarization retrieval algorithm, we achieve single-shot mapping of the retardance, which allows us to reach imaging speeds that are limited only by the camera frame rate while preserving a large field of view and micrometer spatial resolution. We provide a brief discussion of the optical design of PSIM in Materials and Methods; further details of the measurement accuracy and imaging performance of PSIM are reported in ref. 58. Using a combination of experiments, numerical simulations, and scaling analysis, we show that in the pressure-driven flow of nematic DSCG solutions in a microfluidic channel, pure-twist disclination loops emerge for a certain range of shear rates. These loops are elongated in the flow with a fixed aspect ratio.
We demonstrate that the disclination loops nucleate at the boundary between regions where the director aligns predominantly along the flow direction close to the channel walls and regions where the director aligns predominantly perpendicular to the flow direction in the center of the channel. The large elastic stresses of the director gradient at the boundary are then released by the formation of disclination loops. We show that both the characteristic size and the fluctuations of the pure-twist disclination loops can be tuned by controlling the flow rate.  相似文献   
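A small sketch of two standard quantities invoked earlier in this item: the tumbling criterion α2α3 < 0 for the Leslie viscosities (with the flow-alignment angle in the shear-aligning case) and one common convention for the Ericksen number comparing viscous and elastic torques. The parameter values are order-of-magnitude placeholders, not measured DSCG coefficients.

```python
import numpy as np

def leslie_angle(alpha2, alpha3):
    """Flow-alignment angle (rad) if shear-aligning (alpha2*alpha3 > 0), else None (tumbling)."""
    if alpha2 * alpha3 <= 0:
        return None
    return np.arctan(np.sqrt(alpha3 / alpha2))

def ericksen(gamma_dot, eta, H, K):
    """One common convention: Er = gamma_dot * eta * H^2 / K."""
    return gamma_dot * eta * H**2 / K

# illustrative Leslie viscosities (Pa*s): same magnitudes, opposite sign of alpha3
materials = {
    "flow-aligning (TLC-like)": (-0.08, -0.004),
    "tumbling (DSCG-like)":     (-0.08, +0.004),
}
for name, (a2, a3) in materials.items():
    theta = leslie_angle(a2, a3)
    print(name, "->", "tumbling" if theta is None else f"aligns at {np.degrees(theta):.1f} deg")

# channel depth 100 um, viscosity 1 Pa*s, twist constant ~1 pN, shear rate 5 1/s (all assumed)
print("Er =", f"{ericksen(5.0, 1.0, 100e-6, 1e-12):.1e}")
```

The sign flip of α3 is all it takes to switch from a steady flow-aligned director to the tumbling behavior that, as described above, promotes the nucleation of singular defects.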

9.
We present transport measurements of bilayer graphene with a 1.38° interlayer twist. As with other devices with twist angles substantially larger than the magic angle of 1.1°, we do not observe correlated insulating states or band reorganization. However, we do observe several highly unusual behaviors in magnetotransport. For a large range of densities around half filling of the moiré bands, magnetoresistance is large and quadratic. Over these same densities, the magnetoresistance minima corresponding to gaps between Landau levels split and bend as a function of density and field. We reproduce the same splitting and bending behavior in a simple tight-binding model of Hofstadter’s butterfly on a triangular lattice with anisotropic hopping terms. These features appear to be a generic class of experimental manifestations of Hofstadter’s butterfly and may provide insight into the emergent states of twisted bilayer graphene.

The mesmerizing Hofstadter butterfly spectrum arises when electrons in a two-dimensional periodic potential are immersed in an out-of-plane magnetic field. When the magnetic flux Φ through a unit cell is a rational multiple p/q of the magnetic flux quantum Φ0 = h/e, each Bloch band splits into q subbands (1). The carrier densities corresponding to gaps between these subbands follow straight lines when plotted as a function of normalized density n/ns and magnetic field (2). Here, ns is the density of carriers required to fill the (possibly degenerate) Bloch band. These lines can be described by the Diophantine equation (n/ns) = t(Φ/Φ0) + s for integers s and t. In experiments, they appear as minima or zeros in longitudinal resistivity coinciding with Hall conductivity quantized at σxy = te²/h (3, 4). Hofstadter originally studied magnetosubbands emerging from a single Bloch band on a square lattice. In the following decades, other authors considered different lattices (5–7), the effect of anisotropy (6, 8–10), next-nearest-neighbor hopping (11–15), interactions (16, 17), density wave states (9), and graphene moirés (18, 19). It took considerable ingenuity to realize clean systems with unit cells large enough to allow conventional superconducting magnets to reach Φ/Φ0 ≈ 1. The first successful observation of the butterfly in electrical transport measurements was in GaAs/AlGaAs heterostructures with lithographically defined periodic potentials (20–22). These experiments demonstrated the expected quantized Hall conductance in a few of the largest magnetosubband gaps. In 2013, three groups mapped out the full butterfly spectrum in both density and field in heterostructures based on monolayer (23, 24) and bilayer (25) graphene. In all three cases, the authors made use of the 2% lattice mismatch between their graphene and its encapsulating hexagonal boron nitride (hBN) dielectric. With these layers rotationally aligned, the resulting moiré pattern was large enough in area that gated structures studied in available high-field magnets could simultaneously approach normalized carrier densities and magnetic flux ratios of 1. Later work on hBN-aligned bilayer graphene showed that, likely because of electron–electron interactions, the gaps could also follow lines described by fractional s and t (26). In twisted bilayer graphene (TBG), a slight interlayer rotation creates a similar-scale moiré pattern. Unlike with graphene–hBN moirés, in TBG there is a gap between lowest and neighboring moiré subbands (27). As the twist angle approaches the magic angle of 1.1°, the isolated moiré bands become flat (28, 29), and strong correlations lead to fascinating insulating (30–37), superconducting (31–33, 35–37), and magnetic (34, 35, 38) states. The strong correlations tend to cause moiré subbands within a fourfold degenerate manifold to move relative to each other as one tunes the density, leading to Landau levels that project only toward higher magnitude of density from charge neutrality and integer filling factors (37, 39). This correlated behavior obscures the single-particle Hofstadter physics that would otherwise be present. In this work, we present measurements from a TBG device twisted to 1.38°. When we apply a perpendicular magnetic field, a complicated and beautiful fan diagram emerges. In a broad range of densities on either side of charge neutrality, the device displays large, quadratic magnetoresistance. 
Within the magnetoresistance regions, each Landau level associated with ν = ±8, ±12, ±16, … appears to split into a pair, and these pairs follow complicated paths in field and density, very different from those predicted by the usual Diophantine equation. Phenomenology similar in all qualitative respects appears in measurements on several regions of this same device with similar twist angles and in two separate devices, one at 1.59° and the other at 1.70° (see SI Appendix for details). We reproduce the unusual features of the Landau levels (LLs) in a simple tight-binding model on a triangular lattice with anisotropy and a small energetic splitting between two species of fermions. At first glance, this is surprising, because that model does not represent the symmetries of the experimental moiré structure. We speculate that the unusual LL features we experimentally observe can generically emerge from spectra of Hofstadter models that include the same ingredients we added to the triangular lattice model. With further theoretical work it may be possible to use our measurements to gain insight into the underlying Hamiltonian of TBG near the magic angle.  相似文献
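To make the single-particle picture above concrete, the sketch below diagonalizes the q × q magnetic Bloch (Harper) Hamiltonian at rational flux Φ/Φ0 = p/q and evaluates the Diophantine gap condition n/ns = t(Φ/Φ0) + s. For simplicity it uses an isotropic square lattice — the textbook case, not the anisotropic triangular-lattice model invoked in the abstract — and the hopping strength and grid sizes are arbitrary illustrative choices.

import numpy as np

def harper_hamiltonian(p, q, kx, ky, t=1.0):
    """q x q magnetic Bloch Hamiltonian for flux p/q per square-lattice plaquette."""
    h = np.zeros((q, q), dtype=complex)
    for m in range(q):
        h[m, m] = 2.0 * t * np.cos(ky + 2.0 * np.pi * p * m / q)  # on-site term (Landau gauge)
        n = (m + 1) % q
        phase = np.exp(1j * kx) if n == 0 else 1.0                # Bloch phase across the magnetic cell
        h[m, n] += t * phase
        h[n, m] += t * np.conj(phase)
    return h

def butterfly_points(q_max=16, nk=6):
    """(flux, energy) pairs tracing the butterfly over all reduced fractions p/q."""
    pts = []
    for q in range(1, q_max + 1):
        for p in range(q + 1):
            if np.gcd(p, q) != 1:
                continue
            for kx in np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False):
                for ky in np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False):
                    for e in np.linalg.eigvalsh(harper_hamiltonian(p, q, kx, ky)):
                        pts.append((p / q, e))
    return pts

def gap_line(s, t, flux):
    """Diophantine condition for subband gaps: n/ns = t*(phi/phi0) + s."""
    return t * flux + s

Plotting butterfly_points() as a scatter of energy versus flux reproduces the familiar self-similar gap structure; in a device, the gap_line trajectories are what appear as resistivity minima in a density–field fan diagram.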

10.
Knowledge of the dynamical behavior of proteins, and in particular their conformational fluctuations, is essential to understanding the mechanisms underlying their reactions. Here, transient enhancement of the isothermal partial molar compressibility, which is directly related to the conformational fluctuation, during a chemical reaction of a blue light sensor protein from the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (TePixD, Tll0078) was investigated in a time-resolved manner. The UV-Vis absorption spectrum of TePixD did not change with the application of high pressure. Conversely, the transient grating signal intensities representing the volume change depended significantly on the pressure. This result implies that the compressibility changes during the reaction. From the pressure dependence of the amplitude, the compressibility changes of two short-lived intermediate states (I1 and I2) were determined to be +(5.6 ± 0.6) × 10⁻² cm³⋅mol⁻¹⋅MPa⁻¹ for I1 and +(6.6 ± 0.7) × 10⁻² cm³⋅mol⁻¹⋅MPa⁻¹ for I2. This result showed that the structural fluctuation of intermediates was enhanced during the reaction. To clarify the relationship between the fluctuation and the reaction, the compressibility of multiply excited TePixD was investigated. The isothermal compressibility of the I1 and I2 intermediates of TePixD showed a monotonic decrease with increasing excitation laser power, and this tendency correlated with the reactivity of the protein. This result indicates that the TePixD decamer cannot react when its structural fluctuation is small. We concluded that the enhanced compressibility is an important factor for triggering the reaction of TePixD. To our knowledge, this is the first report showing enhanced fluctuations of intermediate species during a protein reaction, supporting the importance of fluctuations. Proteins often transfer information through changes in domain–domain (or intermolecular) interactions. Photosensor proteins are an important example. They have light-sensing domains and function by using the light-driven changes in domain–domain interactions (1). The sensor of blue light using FAD (BLUF) domain is a light-sensing module found widely among the bacterial kingdom (2). The BLUF domain initiates its photoreaction by the light excitation of the flavin moiety inside the protein, which changes the domain–domain interaction, causing a quaternary structural change and finally transmitting biological signals (3, 4). It has been an important research topic to elucidate how the initial photochemistry occurring in the vicinity of the chromophore leads to the subsequent large conformation change in other domains, which are generally located away from the chromophore. It may be reasonable to consider that the conformation change in the BLUF domain is the driving force in its subsequent reaction; that is, the change in domain–domain interaction. However, sometimes, clear conformational changes have not been observed for the BLUF domain; its conformation is very similar before and after photo-excitation (5–13). The circular dichroism (CD) spectra of the BLUF proteins AppA and PixD from the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (TePixD) did not change on illumination (5, 13). Similarly, solution NMR studies of AppA and BlrB showed only small chemical shifts on excitation (9, 10). The solution NMR structure of BlrP1 showed a clear change, but this was limited to its C-terminal extension region and not the BLUF core (11). 
Furthermore, the diffusion coefficient (D) of the BLUF domain of YcgF was not changed by photo-excitation (12), although D is sensitive to global conformational changes. These results imply that a minor structural change occurs in the BLUF domain. In such cases, how does the BLUF domain control its interdomain interaction? Recently, a molecular dynamics (MD) simulation on another light-sensing domain, the light-oxygen-voltage (LOV) sensing domain, suggested that fluctuation of the LOV core structure could be a key to understanding the mechanism of information transfer (14–16). Because proteins work at room temperature, they are exposed to thermal fluctuations. The importance of such structural fluctuations for biomolecular reactions has also been pointed out, for example, for enzymatic activity (17–20). Experimental detection of such conformational fluctuations has succeeded using single-molecule detection (21) or NMR techniques such as hydrogen-deuterium (H-D) exchange, the relaxation dispersion method, and high-pressure NMR (22–24). However, these techniques could not detect the fluctuation of short-lived transient species. Indeed, single-molecule spectroscopy can trace the fluctuation in real time, but it is still rather difficult to detect rapid fluctuations for a short-lived intermediate during a reaction. Information about the fluctuation of intermediates has therefore remained limited. A thermodynamic measurement is another way to characterize the fluctuation of proteins. In particular, the partial molar isothermal compressibility [K̄T = −(∂V̄/∂P)T] is essential, because this property is directly linked to the mean-square fluctuations of the protein partial molar volume by ⟨(V̄ − ⟨V̄⟩)²⟩ ≡ δV̄² = kBT·K̄T (25). (Here, ⟨X⟩ denotes the average value of a quantity X.) Therefore, isothermal compressibility is thought to reflect the structural fluctuation of molecules (26). However, experimental measurement of this parameter of proteins in a dilute solution is quite difficult. Indeed, this quantity has been determined indirectly from the theoretical equation using the adiabatic compressibility of a protein solution, which was determined by the sound velocity in the solution (26–31). Although the relation between volume fluctuations and isothermal compressibility is rigorously correct only with respect to the intrinsic part of the volume compressibility, and not the partial molar volume compressibility (32), we considered that this partial molar volume compressibility is still useful for characterizing the fluctuation of the protein structure including its interacting water molecules. In fact, the relationship between β̄T and the volume fluctuation has often been used to discuss the fluctuation of proteins (17, 26–28), and a strong correlation between the β̄T of reactants and function has been reported for some enzymes (17, 33, 34). These studies show the functional importance of the structural fluctuation represented by β̄T. However, thermodynamic techniques lack time resolution, and it has been impossible to measure the fluctuations of short-lived intermediate species. Recently, we developed a time-resolving method for assessing thermodynamic properties using the pulsed laser induced transient grating (TG) method. Using this method, we have thus far succeeded in measuring the enthalpy change (ΔH) (35–38), partial molar volume change (ΔV̄) (12, 35, 37), thermal expansion change (Δᾱth) (12, 37), and heat capacity change (ΔCp) (36–38) for short-lived species. 
Therefore, in principle, the partial molar isothermal compressibility change (ΔK̄T) of a short-lived intermediate becomes observable if we conduct the TG experiment under high-pressure conditions and detect ΔV̄ while varying the external pressure. There are several difficulties in applying the traditional high-pressure cell to the TG method to measure thermodynamic parameters quantitatively. The most serious problem is ensuring that the intensity of TG signals measured under high pressure remains quantitative. On this point, our group has developed a new high-pressure cell specially designed for TG spectroscopy (39) and overcome this problem. In this paper, by applying this high-pressure TG system to the BLUF protein TePixD, we report the first measurement, to our knowledge, of ΔK̄T of short-lived intermediates to investigate the mechanism underlying signal transmission by BLUF proteins, from the viewpoint of the transient fluctuation. TePixD is a homolog of the BLUF protein PixD, which regulates the phototaxis of cyanobacteria (40), and is found in the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (Tll0078). TePixD is a relatively small (17 kDa) protein that consists only of the BLUF domain with two extended helices in the C-terminal region. In crystals and solutions, it forms a decamer that consists of two pentameric rings (41). The photochemistry of TePixD is typical among BLUF proteins (42–45); on blue light illumination, the absorption spectrum shifts toward red by about 10 nm within a nanosecond. The absorption spectrum does not change further, and the dark state is recovered with a time constant of ∼5 s at room temperature (40, 43). The spectral red shift was explained by the rearrangement of the hydrogen bond network around the chromophore (6, 46–48). The TG method has revealed the dynamic photoreaction mechanism, which cannot be detected by conventional spectroscopic methods. The TG signal of TePixD (Fig. S1) showed that there are two spectrally silent reaction phases: a partial molar volume expansion with a time constant of ∼40 μs and the diffusion coefficient (D) change with a time constant of ∼4 ms. Furthermore, it was reported that the pentamer and decamer states of TePixD are in equilibrium and that the final photoproduct of the decamer is pentamers generated by its dissociation (13, 49). On the basis of these studies, the reaction scheme has been identified as shown in Fig. 1. Here, I1 is the intermediate of the spectrally red-shifted species (generated within a nanosecond) and I2 is the one created on the subsequent volume expansion process of +4 cm³⋅mol⁻¹ (∼40 μs). Furthermore, an experiment on the excitation laser power dependence of the TG signal revealed that the TePixD decamer undergoes the dissociation reaction when only one monomer in the decamer is excited (50). In this study, we investigated the transient compressibility of the intermediates I1 and I2 of the photoreaction of TePixD and found a direct link between their fluctuation and reactivity. Fig. 1. Schematic illustration of the photoreaction of TePixD. Yellow circles represent the TePixD monomer in the ground state, which constructs the decamer and pentamer states. In the dark state, these two forms are in equilibrium. The excited, spectrally red-shifted state of the TePixD monomer is indicated by a red circle. The square represents the I2 state of the monomer, which is created by the volume expansion process.  相似文献
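To put the reported ΔK̄T values in more intuitive terms, the short script below converts them into the additional root-mean-square volume fluctuation implied by ⟨δV̄²⟩ = kBT·K̄T. Treating the molar quantity as a per-molecule one (dividing by Avogadro's number) and taking T = 298 K are assumptions made here for illustration only.

import math

KB = 1.380649e-23   # J/K, Boltzmann constant
NA = 6.02214076e23  # 1/mol, Avogadro's number
T = 298.0           # K, assumed room temperature

def added_rms_volume_fluctuation(delta_KT_cm3_mol_MPa):
    """RMS volume-fluctuation increase (cubic angstroms) implied by a
    compressibility change given in cm^3 mol^-1 MPa^-1."""
    kT_per_molecule = delta_KT_cm3_mol_MPa * 1e-6 / 1e6 / NA  # -> m^3 Pa^-1 per molecule
    dv2 = KB * T * kT_per_molecule                            # m^6, added mean-square fluctuation
    return math.sqrt(dv2) / 1e-30                             # m^3 -> A^3

for label, dkt in [("I1", 5.6e-2), ("I2", 6.6e-2)]:
    print(f"{label}: ~{added_rms_volume_fluctuation(dkt):.0f} A^3 added RMS volume fluctuation")

With the abstract's values this gives on the order of 20 Å³ of additional RMS volume fluctuation per molecule for each intermediate, a small but nonnegligible fraction of the volume of a 17 kDa protein.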

11.
Advances in polymer chemistry over the last decade have enabled the synthesis of molecularly precise polymer networks that exhibit homogeneous structure. These precise polymer gels create the opportunity to establish true multiscale, molecular to macroscopic, relationships that define their elastic and failure properties. In this work, a theory of network fracture that accounts for loop defects is developed by drawing on recent advances in network elasticity. This loop-modified Lake–Thomas theory is tested against both molecular dynamics (MD) simulations and experimental fracture measurements on model gels, and good agreement between theory, which does not use an enhancement factor, and measurement is observed. Insight into the local and global contributions to energy dissipated during network failure and their relation to the bond dissociation energy is also provided. These findings enable a priori estimates of fracture energy in swollen gels where chain scission becomes an important failure mechanism.

Models that link materials structure to macroscopic behavior can account for multiple levels of molecular structure. For example, the statistical, affine deformation model connects the elastic modulus E to the molecular structure of a polymer chain, Eaff = 3νkbT[(ϕo^(1/3)Ro)/(ϕ^(1/3)R)]² [1], where ν is density of chains, ϕ is polymer volume fraction, R is end-to-end distance, ϕo and Ro represent the parameters taken in the reference state that is assumed to be the reaction concentration in this work, and kbT is the available thermal energy where kb is Boltzmann’s constant and T is temperature (1–6). Refinements to this model that account for network-level structure, such as the presence of trapped entanglements or number of connections per junction, have been developed (7–11). Further refinements to the theory of network elasticity have been developed to account for dynamic processes such as chain relaxation and solvent transport (12–17). Together these refinements link network elasticity to chain-level molecular structure, network-level structure, and the dynamic processes that occur at both size scales. While elasticity has been connected to multiple levels of molecular structure, models for network fracture have not developed to a similar extent. The fracture energy Gc typically relies upon the large strain deformation behavior of polymer networks, making it experimentally difficult to separate the elastic energy released upon fracture from that dissipated through dynamic processes (18–26). In fact, most fracture theories have been developed at the continuum scale and have focused on modeling dynamic dissipation processes (27). An exception to this is the theory of Lake and Thomas that connects the elastic energy released during chain scission to chain-level structure, Gc,LT = (Chains/Area) × (Energy Dissipated/Chain) = νRoNU [2], where NU is the total energy released when a chain ruptures, in which N represents the number of monomer segments in the chain and U the energy released per monomer (26). While this model was first introduced in 1967, experimental attempts to verify Lake–Thomas theory as an explicit model, as summarized in SI Appendix, have been unsuccessful. Ahagon and Gent (28) and Gent and Tobias (29) attempted to do this on highly swollen networks at elevated temperature but found that, while the scalings from Eq. 2 work well, an enhancement factor was necessary to observe agreement between theory and experiment. This led many researchers to conclude that Lake–Thomas theory worked only as a scaling argument. In 2008, Sakai et al. (30) introduced a series of end-linked tetrafunctional, star-like poly(ethylene glycol) (PEG) gels. Scattering measurements indicated a lack of nanoscale heterogeneities that are characteristic of most polymer networks (30–32). Fracture measurements on these well-defined networks were performed and it was again observed that an enhancement factor was necessary to realize explicit agreement between experiment and theory (33). Arora et al. (34) recently attempted to address this discrepancy by accounting for loop defects; however, different assumptions were used when inputting U to calculate Lake–Thomas theory values that again required the use of an enhancement factor to achieve quantitative agreement. 
In this work we demonstrate that refining the Lake–Thomas theory to account for loop defects while using the full bond dissociation energy to represent U yields excellent agreement between the theory and both simulation and experimental data without the use of any adjustable parameters. PEG gels synthesized via telechelic end-linking reactions create the opportunity to build upon previous theory to establish true multiscale, molecular to macroscopic relationships that define the fracture response of polymer networks. This paper combines pure shear notch tests, molecular dynamics (MD) simulations, and theory to quantitatively extend the concept of network fracture without the use of an enhancement factor. First, the control of molecular-level structure in end-linked gel systems is discussed. Then, the choice of molecular parameters used to estimate chain- and network-level properties is discussed. Experimental and MD simulation methods used when fracturing model end-linked networks are then presented. A theory of network fracture that accounts for loop defects is developed, in the context of other such models that have emerged recently, and tested against data from experiments and MD simulations. Finally, a discussion of the local and global energy dissipated during failure of the network is presented.  相似文献
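As a rough illustration of how Eq. 2 is used, the sketch below evaluates Gc,LT = νRoNU for a generic end-linked PEG gel. Every numerical value (strand density, strand length, end-to-end distance, and the use of a ~350 kJ/mol bond dissociation energy for U) is an assumed, order-of-magnitude input chosen for illustration; none of these numbers are taken from the study summarized above.

AVOGADRO = 6.022e23

# Assumed, order-of-magnitude inputs for a generic end-linked PEG gel.
nu = 6e24              # chains per m^3 (strand number density)
Ro = 5e-9              # m, strand end-to-end distance in the reference state
N = 230                # monomer segments per strand (~10 kg/mol of PEG)
U = 350e3 / AVOGADRO   # J per monomer, using a C-O bond dissociation energy of ~350 kJ/mol

# Eq. 2: (chains crossing the fracture plane per area) x (energy dissipated per chain)
Gc = nu * Ro * N * U
print(f"Lake-Thomas estimate: Gc ~ {Gc:.1f} J/m^2")

With these assumed inputs the estimate comes out at a few J/m², the same order of magnitude as fracture energies typically measured for swollen model gels, which is the kind of consistency check the loop-modified theory is meant to sharpen.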

12.
Reliable forecasts for the dispersion of oceanic contamination are important for coastal ecosystems, society, and the economy as evidenced by the Deepwater Horizon oil spill in the Gulf of Mexico in 2010 and the Fukushima nuclear plant incident in the Pacific Ocean in 2011. Accurate prediction of pollutant pathways and concentrations at the ocean surface requires understanding ocean dynamics over a broad range of spatial scales. Fundamental questions concerning the structure of the velocity field at the submesoscales (100 m to tens of kilometers, hours to days) remain unresolved due to a lack of synoptic measurements at these scales. Using high-frequency position data provided by the near-simultaneous release of hundreds of accurately tracked surface drifters, we study the structure of submesoscale surface velocity fluctuations in the Northern Gulf of Mexico. Observed two-point statistics confirm the accuracy of classic turbulence scaling laws at 200-m to 50-km scales and clearly indicate that dispersion at the submesoscales is local, driven predominantly by energetic submesoscale fluctuations. The results demonstrate the feasibility and utility of deploying large clusters of drifting instruments to provide synoptic observations of spatial variability of the ocean surface velocity field. Our findings allow quantification of the submesoscale-driven dispersion missing in current operational circulation models and satellite altimeter-derived velocity fields. The Deepwater Horizon (DwH) incident was the largest accidental oil spill into marine waters in history, with some 4.4 million barrels released into the DeSoto Canyon of the northern Gulf of Mexico (GoM) from a subsurface pipe over ∼84 d in the spring and summer of 2010 (1). Primary scientific questions, with immediate practical implications, arising from such catastrophic pollutant injection events are the path, speed, and spreading rate of the pollutant patch. Accurate prediction requires knowledge of the ocean flow field at all relevant temporal and spatial scales. Whereas ocean general circulation models were widely used during and after the DwH incident (2–6), such models only capture the main mesoscale processes (spatial scale larger than 10 km) in the GoM. The main factors controlling surface dispersion in the DeSoto Canyon region remain unclear. The region lies between the mesoscale eddy-driven deep water GoM (7) and the wind-driven shelf (8) while also being subject to the buoyancy input of the Mississippi River plume during the spring and summer months (9). Images provided by the large amounts of surface oil produced in the DwH incident revealed a rich array of flow patterns (10) showing organization of surface oil not only by mesoscale straining into the loop current “Eddy Franklin,” but also by submesoscale processes. Such processes operate at spatial scales and involve physics not currently captured in operational circulation models. Submesoscale motions, where they exist, can directly influence the local transport of biogeochemical tracers (11, 12) and provide pathways for energy transfer from the wind-forced mesoscales to the dissipative microscales (13–15). Dynamics at the submesoscales have been the subject of recent research (16–20). However, the investigation of their effect on ocean transport has been predominantly modeling based (13, 21–23) and synoptic observations, at adequate spatial and temporal resolutions, are rare (24, 25). 
The mechanisms responsible for the establishment, maintenance, and energetics of such features in the Gulf of Mexico remain unclear. Instantaneous measurement of all representative spatiotemporal scales of the ocean state is notoriously difficult (26). As previously reviewed (27), traditional observing systems are not ideal for synoptic sampling of near-surface flows at the submesoscale. Owing to the large spacing between ground tracks (28) and along-track signal contamination from high-frequency motions (29), gridded altimeter-derived sea level anomalies only resolve the largest submesoscale motions. Long time-series ship-track current measurements attain similar, larger than 2 km, spatial resolutions and require averaging the observations over evolving ocean states (30). Simultaneous, two-point acoustic Doppler current profiler measurements from pairs of ships (25) provide sufficient resolution to show the existence of energetic submesoscale fluctuations in the mixed layer, but do not explicitly quantify the scale-dependent transport induced by such motions at the surface. Lagrangian experiments, centered on tracking large numbers of water-following instruments, provide the most feasible means of obtaining spatially distributed, simultaneous measurements of the structure of the ocean’s surface velocity field on 100-m to 10-km length scales. Denoting a trajectory by x(a, t), where x(a, t0) = a, the relative separation of a particle pair is given by D(t, D0) = x(a1, t) − x(a2, t) = D0 + ∫_{t0}^{t} Δv(t′, D0) dt′, where the Lagrangian velocity difference is defined by Δv(t, D0) = v(a1, t) − v(a2, t). The statistical quantities of interest, both practically and theoretically, are the scale-dependent relative dispersion D²(t) = 〈D ⋅ D〉 (averaged over particle pairs) and the average longitudinal or separation velocity, Δv(r), at a given separation, r. The velocity scale is defined by the second-order structure function Δv(r) = 〈δv²〉^{1/2}, where δv(r) = (v(x + r) − v(x)) ⋅ r/∥r∥ (31, 32) and the averaging is now conditioned on the pair separation r. The applicability of classical dispersion theories (32–34), developed in the context of homogeneous, isotropic turbulence with localized spectral forcing, to ocean flows subject to the effects of rotation, stratification, and complex forcing at disparate length and time scales remains unresolved. Turbulence theories broadly predict two distinct dispersion regimes depending upon the shape of the spatial kinetic energy spectrum, E(k) ∼ k^{−β}, of the velocity field (35). For sufficiently steep spectra (β ≥ 3) the dispersion is expected to grow exponentially, D ∼ e^{λt}, with a scale-independent rate. At the submesoscales (∼100 m–10 km), this nonlocal growth rate will then be determined by the mesoscale motions currently resolved by predictive models. For shallower spectra (1 < β < 3), however, the dispersion is local, D ∼ t^{2/(3−β)}, and the growth rate of a pollutant patch is dominated by advective processes at the scale of the patch. Accurate prediction of dispersion in this regime requires resolution of the advecting field at smaller scales than the mesoscale. Whereas compilations of data from dye measurements broadly support local dispersion in natural flows (36), the range of scales in any particular dye experiment is limited. A number of Lagrangian observational studies have attempted to fill this gap. LaCasce and Ohlmann (37) considered 140 pairs of surface drifters on the GoM shelf over a 5-y period and found evidence of a nonlocal regime for temporally smoothed data at 1-km scales. 
Koszalka et al. (38), using O(100) drifter pairs with D0 < 2 km launched over 18 mo in the Norwegian Sea, found an exponential fit for D²(t) for a limited time (t = 0.5–2 d), although the observed longitudinal velocity structure function is less clearly fit by a corresponding quadratic. They concluded that a nonlocal dispersion regime could not be identified. In contrast, Lumpkin and Elipot (39) found evidence of local dispersion at 1-km scales using 15-m drogued drifters launched in the winter-time North Atlantic. It is not clear how the accuracy of the Argos positioning system (150–1,000 m) used in these studies affects the submesoscale dispersion estimates. Schroeder et al. (40), specifically targeting a coastal front using a multiscale sampling pattern, obtained results consistent with local dispersion, but the statistical significance (maximum 64 pairs) remained too low to be definitive.  相似文献
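The two pair statistics defined above translate directly into code. The sketch below computes the relative dispersion D²(t) over a list of drifter pairs and a binned estimate of the second-order longitudinal structure function ⟨δv²⟩(r). It assumes positions have already been projected to a local Cartesian frame in metres, sampled at a uniform interval dt, with velocities estimated by finite differences; array shapes and function names are illustrative, not from the study.

import numpy as np

def relative_dispersion(x, pairs):
    """D^2(t) = < |x_i(t) - x_j(t)|^2 > averaged over the listed drifter pairs.
    x has shape (n_drifters, n_times, 2), in metres."""
    sep = np.array([x[i] - x[j] for i, j in pairs])        # (n_pairs, n_times, 2)
    return np.mean(np.sum(sep**2, axis=-1), axis=0)        # (n_times,)

def longitudinal_structure_function(x, dt, r_bins):
    """Binned second-order longitudinal structure function <dv_l^2>(r)."""
    v = np.gradient(x, dt, axis=1)                          # finite-difference velocities
    n = x.shape[0]
    r_all, dvl2_all = [], []
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = x[i] - x[j]
            r = np.linalg.norm(r_vec, axis=-1)
            rhat = r_vec / np.maximum(r[:, None], 1e-12)    # unit separation vector
            dvl = np.sum((v[i] - v[j]) * rhat, axis=-1)     # longitudinal velocity difference
            r_all.append(r)
            dvl2_all.append(dvl**2)
    r_all = np.concatenate(r_all)
    dvl2_all = np.concatenate(dvl2_all)
    idx = np.digitize(r_all, r_bins)
    return np.array([dvl2_all[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(r_bins))])

Comparing the binned ⟨δv²⟩(r) against power-law slopes is what distinguishes local from nonlocal dispersion in the framework described above.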

13.
14.
Socioeconomic viability of fluvial-deltaic systems is limited by natural processes of these dynamic landforms. An especially impactful occurrence is avulsion, whereby channels unpredictably shift course. We construct a numerical model to simulate artificial diversions, which are engineered to prevent channel avulsion, and direct sediment-laden water to the coastline, thus mitigating land loss. We provide a framework that identifies the optimal balance between river diversion cost and civil disruption by flooding. Diversions near the river outlet are not sustainable, because they neither reduce avulsion frequency nor effectively deliver sediment to the coast; alternatively, diversions located halfway to the delta apex maximize landscape stability while minimizing costs. We determine that delta urbanization generates a positive feedback: infrastructure development justifies sustainability and enhanced landform preservation vis-à-vis diversions.

Deltaic environments are critical for societal wellbeing because these landscapes provide an abundance of natural resources that promote human welfare (1, 2). However, the sustainability of deltas is uncertain due to sea-level rise (3, 4), sediment supply reduction (4–6), and land subsidence (7, 8). Additionally, river avulsion, the process of sudden channel relocation (9, 10), presents a dichotomy to delta sustainability: the unanticipated civil disruption associated with flooding brought by channel displacement is at odds with society’s desire for landscape stability, yet channel relocation is needed to deliver nutrients and sediment to various locations along the deltaic coastline (11, 12). Indeed, for many of the world’s megadeltas, channel engineering practices have sought to restrict channel mobility and limit floodplain connectivity (13, 14), which in turn prevents sediment dispersal that is necessary to sustain deltas; as a consequence, land loss has ensued (15). Despite providing near-term stability (13–15), engineering of deltaic channels is a long-term detrimental practice (11, 15–17). To maximize societal benefit, measures that promote delta sustainability must balance engineering infrastructure cost and impact on delta morphology with benefits afforded by maintaining and developing deltaic landscapes (1, 2, 11, 12, 16–19). For example, channel diversions, costing millions to billions of dollars (20–22), are now planned worldwide to both prevent unintended avulsions and ensure coastal sustainability through enhanced sediment delivery (e.g., Fig. 1A) (20, 21, 23–26). Fig. 1. (A) Satellite image of Yellow River delta (Landsat, 1978) showing coastline response to a diversion in 1976 at the open circle, which changed the channel course from the north (Diaokou lobe) to the east (Qingshuigou lobe) and produced flooding over the stripe-hatched area (30). (B and C) Planform view (B) and along-channel cross-section view (C) of conceptual model for numerical simulations and societal benefit formulation. In the diagrams, a diversion at LD = 0.8Lb floods an area (af) defined by Lf and θ, diverting sediment away from the deltaic lobe (with length Ll). Aggradation of the former channel bed (dashed line) is variable; hence, diversion length influences the propensity for subsequent avulsion setup. In this article, we consider the benefits and costs of such engineered river diversions and determine how these practices most effectively sustain deltaic landscapes, by assessing optimal placement and timing for river diversions. Addressing these points requires combining two modeling frameworks: a morphodynamic approach—evolving the landscape over time and space by evaluating the interactions of river fluid flow and sediment transport—and a decision-making framework (21, 22, 27, 28). The former simulates deltaic channel diversions by assessing the nonlinear relationships between channel diversion length (LD) and the frequency (timing) of avulsions (TA), while the latter incorporates a societal benefit model that approximates urbanization by considering the cost of flooding a landscape that would otherwise generate revenue. The aim is to optimize timing and placement of channel diversions, by giving consideration to morphodynamic operations and societal wellbeing. Interestingly, optimal societal benefit indicates that urbanization justifies enhanced sustainability measures, which contradicts existing paradigms that label development and sustainability as mutually exclusive (3, 7, 12). 
Ultimately, the societal benefit model should be an integrated component in decision-making frameworks. This will help locate diversions and promote sustainable and equitable decisions that consider the historical, ethical, and environmental contexts of river management (29).  相似文献

15.
Despite its importance for forest regeneration, food webs, and human economies, changes in tree fecundity with tree size and age remain largely unknown. The allometric increase with tree diameter assumed in ecological models would substantially overestimate seed contributions from large trees if fecundity eventually declines with size. Current estimates are dominated by overrepresentation of small trees in regression models. We combined global fecundity data, including a substantial representation of large trees. We compared size–fecundity relationships against traditional allometric scaling with diameter and two models based on crown architecture. All allometric models fail to describe the declining rate of increase in fecundity with diameter found for 80% of 597 species in our analysis. The strong evidence of declining fecundity, beyond what can be explained by crown architectural change, is consistent with physiological decline. A downward revision of projected fecundity of large trees can improve the next generation of forest dynamic models.

“Belgium, Luxembourg, and The Netherlands are characterized by “young” apple orchards, where over 60% of the trees are under 10 y old. In comparison, Estonia and the Czech Republic have relatively “old” orchard[s] with almost 60% and 43% over 25 y old” (1).
“The useful lives for fruit and nut trees range from 16 years (peach trees) to 37 years (almond trees)…. The Depreciation Analysis Division believes that 61 years is the best estimate of the class life of fruit and nut trees based on the information available” (2).
When mandated by the 1986 Tax Reform Act to depreciate aging orchards, the Office of the US Treasury found so little information that they ultimately resorted to interviews with individual growers (2). One thing is clear from the age distributions of fruit and nut orchards throughout the world (1, 3, 4): Standard practice often replaces trees long before most ecologists would view them to be in physiological decline, despite the interruption of profits borne by growers as transplants establish and mature. Although seed establishment represents the dominant mode for forest regeneration globally, and the seeds, nuts, and fruits of woody plants make up to 3% of the human diet (5, 6), change in fecundity with tree size and age is still poorly understood. We examine here the relationship between tree fecundity and diameter, which is related to tree age in the sense that trees do not shrink in diameter (cambial layers typically add a new increment annually), but growth rates can range widely. Still, it is important not to ignore the evidence that declines with size may also be caused by aging. Although most analyses do not separate effects of size from age (because age is often unknown and confounded with size), both may contribute to size–fecundity relationships (7). Grafting experiments designed to isolate extrinsic influences (size and/or environment) from age-related gene expression suggest that size alone can sometimes explain declines in growth rate and physiological performance (8–10), consistent with pruning/coppicing practice to extend the reproductive life of commercial fruit trees. Hydraulic limitation can affect physiological function, including reduced photosynthetic gain that might contribute to loss of apical dominance, or “flattening” of the crown with increasing height (11–16). The slowing of height growth relative to diameter growth in large trees is observed in many species (12, 17). At least one study suggests that age by itself may not lead to decline in fecundity of open-grown, generally small-statured bristlecone pine (Pinus longaeva) (18). By contrast, some studies provide evidence of tree senescence, including age-related genetic changes in meristems of grafted scions that cause declines in physiological function (19–22). Koenig et al. (23) found that fecundity declined in the 5 y preceding death in eight Quercus species, although cause of death here, as in most cases, is hard to identify. Fielding (24) found that cone size of Pinus radiata declines with tree age and smaller cones produce fewer seeds (25). Some studies support age-related fecundity declines in herbaceous species (26–28). Thus, there is evidence to suggest the fecundity schedules might show declines with size, age, or both. The reproductive potential of trees as they grow and age is of special concern to ecologists because, despite being relatively rare, large trees can contribute disproportionately to forest biomass due to the allometric scaling that amplifies linear growth in diameter to a volume increase that is more closely related to biomass (29, 30). Understanding the role of large trees can also benefit management in recovering forests (31). If allometric scaling applies to fecundity, then these large individuals might determine the species and genetic composition of seeds that compete for dominance in future forests. Unfortunately, underrepresentation of big trees in forests frustrates efforts to infer how fecundity changes with size. 
Simple allometric relationships between seed production and tree diameter can offer useful predictions for the small- to intermediate-size trees that dominate observational data, so it is not surprising that modeling began with the assumption of allometric scaling (32–36). Extrapolation from these models would predict that seed production by the small trees from which most observations come may be overwhelmed by big trees. Despite the increase with tree size assumed by ecologists (37), evidence for declining reproduction in large trees has continued to accumulate from horticultural practice (3, 4, 38, 39) and at least some ecological (40–45) and forestry literature (46, 47). However, we are unaware of studies that evaluate changes in fecundity that include substantial numbers of large trees. Understanding the role of size and age is further complicated by the fact that tree fecundity ranges over orders of magnitude from tree to tree of the same species and within the same tree from year to year—a phenomenon known as “masting.” The variation in seed-production data requires large sample sizes not only to infer the effects of size, but also to account for local habitat and interannual climate variation. For example, a one-time destructive harvest to count seeds in felled trees (48, 49) misses the fact that the same trees would offer a different picture had they been harvested in a different year. An oak that produces 100 acorns this year may produce 10,000 next year. A pine that produces 500 cones this year can produce zero next year. Few datasets offer the sample sizes of trees and tree years needed to estimate effects of size and habitat conditions in the face of this high intertree and interyear variability (43). We begin this analysis by extending allometric scaling to better reflect the geometry of fecundity with tree size. We then reexamine the size–fecundity relationship using data from the Masting Inference and Forecasting (MASTIF) project (50), which includes substantial representation of large trees, and a modeling framework that allows for the possibility that fecundity plateaus or even declines in large trees. Unlike previous studies, we account for the nonallometric influences that come through competition and climate. We demonstrate that fecundity–diameter relationships depart substantially from allometric scaling in ways that are consistent with physiological senescence. Continuous increase with size has been assumed in most models of tree fecundity, supported in part by allometric regressions against diameter, typically of the form log Mf = β0 + βD log D [1] for fecundity mass Mf = m × f (48, 51), where D is tree diameter, m is mass per seed, and fecundity f is seeds per tree per year. Of course, this model cannot be used to determine whether or how fecundity changes with tree diameter unless expanded to include additional quadratic or higher-order terms (52). The assumption of continual increase in fecundity was interpreted from early seed-trap studies, which initially assumed that βD = 2, i.e., fecundity proportional to stem basal area (33, 34, 51). Models subsequently became more flexible, first with βD values fitted, rather than fixed, yielding estimates in the range (0.3, 0.9) in one study (ref. 52, 18 species) and (0, 4.1) in another (ref. 56, 4 species). 
However, underrepresentation of large trees in typical datasets means that model fitting is dominated by the abundant small size classes. To understand why data and models could fail to accurately represent change in fecundity with size, consider that allometric scaling in Eq. 1 can be maintained dynamically only if change in both adheres to a strict proportionality (1/f)(df/dt) ∝ (1/D)(dD/dt) [2] (57). For allometric scaling, any variable that affects diameter growth has to simultaneously affect change in fecundity and in the same, proportionate way. In other words, allometric scaling cannot hold if there are selective forces on fecundity that do not operate through diameter growth and vice versa. On top of this awkward constraint that demands proportionate responses of growth and fecundity, consider further that standard arguments for allometric scaling are not directly relevant for tree fecundity. Allometry is invoked for traits that maintain relationships between body parts as an organism changes size (29). For example, a diameter increment translates to an increase in volume throughout the tree (58, 59). Because the cambial layer essentially blankets the tree, a volume increment cannot depart much from a simple allometric relationship with diameter. However, the same cannot be said for all plant parts, many of which clearly do not allometrically scale; for example, seed size does not scale with leaf size (60), presumably because structural constraints are not the dominant forces that relate them (61). To highlight why selective forces might not generate strict allometric scaling for reproduction, consider that a tree allocates a small fraction of potential buds to reproduction in a given year (62, 63). Still, if the number of buds on a tree bears some direct relationship to crown dimensions and, thus, diameter, there might be allometric scaling. However, the fraction of buds allocated to reproduction and their subsequent development to seed is affected by interannual weather and other selective forces (e.g., bud abortion, pollen limitation) in ways that diameter growth is not (64–66). In fact, weather might have opposing effects on growth and reproduction (67). Furthermore, resources can change the relationship between diameter and fecundity, including light levels (52, 68–70) and atmospheric CO2 (71). Some arguments based on carbon balance anticipate a decline in fecundity with tree size (72). Increased stomatal limitation (11) and reduced leaf turgor pressure (14, 73) from increasing hydraulic path length could reduce carbon gains in large trees. Assimilation rates on a leaf area basis can decline with tree size (74), while respiration rate per leaf area can increase [Sequoia sempervirens (75), Liquidambar styraciflua (76), and Pinus sylvestris (77)], consistent with the notion that whole-plant respiration rate may roughly scale with biomass (78). Maintenance respiration costs scale with diameter in some tropical species (79) but perhaps not in Pinus contorta and Picea engelmannii (80). Self-pruning of lower branches can reduce maintenance costs (81), but the ratio of carbon gain to respiration cost can still decline with size, especially where leaf area plateaus and per-area assimilation rates of leaves decline in large trees. The question of size–fecundity relationships is related indirectly to the large literature on interannual variation in growth–fecundity allocation (3, 4, 43, 67, 82–87). 
The frequency and timing of mast years and species differences in the volatility of seed production can be related to short-term changes in physiological state and pollen limitation that might not predict the long-term relationships between size and reproductive effort. The interannual covariance in diameter growth and reproductive effort can range from strong in some species to weak in others (70, 87, 88). Understanding the relationships between short-term allocation and size–fecundity differences will be an important focus of future research. Estimating effects of size on fecundity depends on the distribution of diameter data, [D], where the bracket notation indicates a distribution or density. For some early-successional species, the size distribution changes from dominance by small trees in young stands to absence of small trees in old stands. If our goal was to describe the population represented by a forest inventory plot, we would typically think about the joint distribution of fecundity and diameter values, [f, D] = [f|D][D], that is represented by the sample. The size–fecundity relationship estimated for a stand at different successional stages would diverge simply due to the distribution of diameters, i.e., differences in [D]. For example, application of Eq. 1 to harvested trees selected to balance size classes (uniform [D]) (48) overpredicts fecundity for large trees (49), but the relevance of such regressions for natural stands, where large trees are often rare, is unclear. Studies that expand Eq. 1 to allow for changing relationships with tree size now provide increasing evidence for a departure from allometric scaling in large trees (43, 70), despite dominance by small- to intermediate-size trees in these datasets. Here our goal is to understand the size–fecundity relationship [f|D] as an attribute of a species, i.e., not tied to a specific distribution of size classes observed in a particular stand. The well-known weak relationship between tree size and age that comes from variable growth histories makes it important to clarify the implications of any finding of fecundity that declines with tree size: Can it happen if there are not also fecundity declines with tree age? The only argument for continuing increase in fecundity with age in the face of observed decreases with size would have to assume that the biggest trees are also the youngest trees. Of course, a large individual can be younger than a small individual. However, at the species level, integrating over populations sampled widely, mean diameter increases with age; at the species level, declines with size also imply declines with age. Estimating accurate species-level size effects requires distributed data and large sample sizes. The analysis here fits species-level parameters, with 585,670 trees and 10,542,239 tree years across 597 species. Phylogenetic analysis might provide insight into the pervasiveness of fecundity declines with size. Inferring change in fecundity with size necessarily requires more information than is needed to fit a single slope parameter βD in the simple allometric model. The noisier the data, the more difficult it becomes to estimate the additional parameters that are needed to describe changes in the fecundity relationship with size. We thus expect that noise alone will preclude finding size-related change in some species, depending on sample size and non–size-related variation. 
If the vagaries of noisy data and the distribution of diameters preclude estimation of declines in some species, then we do not expect that phylogeny will explain which species do and do not show these declines. Rather than phylogeny, this explanation would instead be tied to sample size and the distribution of diameter data. Conversely, phylogenetic conservatism, i.e., a tendency for declines to be clustered in related species, could suggest that fecundity declines are real. To understand how seed production changes with tree size, our approach combines theory and data to evaluate allometric scaling and the alternative that fecundity may decline in large trees, consistent with physiological decline and senescence. We exploit two advances that are needed to determine how fecundity scales with tree size. First, datasets are needed with large trees, because studies in the literature often include few or none (85, 89, 90). Second, methods are introduced that are flexible to the possibility that fecundity either continues to increase with size or does not. We begin with a reformulation of allometric scaling, recognizing that change in fecundity could be regulated by size without taking the form of Eq. 1 (Materials and Methods and SI Appendix, section S2). In other words, there could be allometric scaling with diameter, but it is not the relationship that has been used for structural quantities like biomass. We then analyze the relationships in data using a model that not only allows for potential changes in fecundity with size, but at the same time accounts for self-shading and shading by neighbors and for environmental variables that can affect fecundity and growth (Materials and Methods and SI Appendix, section S3). The fitted model is compared with our expanded allometric model to identify potential agreement. Finally, we examine phylogenetic trends in the species that do and do not show declines.  相似文献
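As a minimal illustration of the modeling contrast described above, the sketch below fits the simple allometric regression of Eq. 1 and an expanded version with a quadratic term in log D; a negative fitted curvature is the kind of departure from straight allometric scaling that signals slowing or declining fecundity in large trees. This is ordinary least squares on user-supplied arrays, not the hierarchical MASTIF model with shading and climate covariates, and the function names are illustrative.

import numpy as np

def fit_allometric(log_d, log_mf):
    """Eq. 1: log Mf = b0 + bD * log D, fit by ordinary least squares."""
    X = np.column_stack([np.ones_like(log_d), log_d])
    beta, *_ = np.linalg.lstsq(X, log_mf, rcond=None)
    return beta                      # [b0, bD]

def fit_expanded(log_d, log_mf):
    """Expanded form with a quadratic term in log D; beta[2] < 0 indicates a
    slowing (or declining) increase of fecundity with diameter."""
    X = np.column_stack([np.ones_like(log_d), log_d, log_d**2])
    beta, *_ = np.linalg.lstsq(X, log_mf, rcond=None)
    return beta                      # [b0, b1, b2]

Because large trees are rare in most inventories, the expanded fit is only informative when the diameter distribution actually contains them, which is the motivation for assembling the large-tree data described above.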

16.
Anaerobic microbial respiration in suboxic and anoxic environments often involves particulate ferric iron (oxyhydr-)oxides as terminal electron acceptors. To ensure efficient respiration, a widespread strategy among iron-reducing microorganisms is the use of extracellular electron shuttles (EES) that transfer two electrons from the microbial cell to the iron oxide surface. Yet, a fundamental understanding of how EES–oxide redox thermodynamics affect rates of iron oxide reduction remains elusive. Attempts to rationalize these rates for different EES, solution pH, and iron oxides on the basis of the underlying reaction free energy of the two-electron transfer were unsuccessful. Here, we demonstrate that broadly varying reduction rates determined in this work for different iron oxides and EES at varying solution chemistry as well as previously published data can be reconciled when these rates are instead related to the free energy of the less exergonic (or even endergonic) first of the two electron transfers from the fully, two-electron reduced EES to ferric iron oxide. We show how free energy relationships aid in identifying controls on microbial iron oxide reduction by EES, thereby advancing a more fundamental understanding of anaerobic respiration using iron oxides.

The use of iron oxides as terminal electron acceptors in anaerobic microbial respiration is central to biogeochemical element cycling and pollutant transformations in many suboxic and anoxic environments (1–6). To ensure efficient electron transfer to solid-phase ferric iron, Fe(III), at circumneutral pH, metal-reducing microorganisms from diverse phyla use dissolved extracellular electron shuttles (EES), including quinones (7–9), flavins (10–16), and phenazines (17–19), to transfer two electrons per EES molecule from the respiratory chain proteins in the outer membrane of the microbial cell to the iron oxide (17, 20, 21). The oxidized EES can diffuse back to the cell surface for rereduction, thereby completing the catalytic redox cycle involving the EES. The electron transfer from the reduced EES to Fe(III) is considered a key step in overall microbial Fe(III) respiration. Several lines of evidence suggest that the free energy of the electron transfer reaction, ΔrG, controls Fe(III) reduction rates (15, 17, 22, 23). For instance, microbial Fe(III) oxide reduction by dissolved model quinones as EES was accelerated only for quinones with standard two-electron reduction potentials, E⁰H,1,2, that fell into a relatively narrow range of −180 ± 80 mV at pH 7 (24). Furthermore, in abiotic experiments, Fe(III) reduction rates by EES decreased with increasing ΔrG that resulted from increasing either E⁰H,1,2 of the EES (25, 26), the concentration of Fe(II) in the system (27), or solution pH (25, 26, 28). However, substantial efforts to relate Fe(III) reduction rates for different EES species, iron oxides, and pH to the E⁰H,1,2 averaged over both electrons transferred from the EES to the iron oxides were only partially successful (25, 28). Reaction free energies of complex redox processes involving the transfer of multiple electrons can readily be calculated using differences in the reduction potentials averaged over all electrons transferred, and this approach is well established in biogeochemistry and microbial ecology. For kinetic considerations, however, the use of averaged reduction potentials is inappropriate. Herein, we posit that rates of Fe(III) reduction by EES instead relate to the ΔrG of the less exergonic first one-electron transfer from the two-electron reduced EES species to the iron oxide, following the general notion that reaction rates scale with reaction free energies (29). Our hypothesis is based on the fact that, at circumneutral to acidic pH and for many EES, the reduction potential of the first electron transferred to the fully oxidized EES to form the one-electron reduced intermediate semiquinone species, EH,1, is lower than the reduction potential of the second electron transferred to the semiquinone to form the fully two-electron reduced EES species, EH,2 [i.e., EH,1 < EH,2 (30–33)]. This difference in one-electron reduction potentials implies that the two-electron reduced EES (i.e., the hydroquinone) is the weaker one-electron reductant for Fe(III) as compared to the semiquinone species. We therefore expect that rates of iron oxide reduction relate to the ΔrG of the first electron transferred from the hydroquinone to Fe(III). 
The ΔrG of this first electron transfer may even be endergonic, provided that the overall two-electron transfer is exergonic. We verified our hypothesis in abiotic model systems by demonstrating that reduction rates of two geochemically important crystalline iron oxides, goethite and hematite, by two-electron reduced quinone- and flavin-based EES over a wide range of pH, and therefore of thermodynamic driving force for Fe(III) reduction, correlate with the ΔrG of the first electron transferred from the fully reduced EES to Fe(III). We further show that rates of goethite and hematite reduction by EES reported in the literature are in excellent agreement with our rate data when comparing rates on the basis of the thermodynamics of the less exergonic first of the two electron transfers.  相似文献
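The free-energy bookkeeping behind this argument is simple enough to spell out. The sketch below computes ΔrG = −nF(E_acceptor − E_donor) for each of the two sequential one-electron transfers from a fully reduced EES to an Fe(III) oxide. The potentials used are hypothetical illustration values, not data from this study, chosen only so that EH,1 < EH,2 and so that the first transfer from the hydroquinone comes out slightly endergonic while the overall two-electron transfer remains exergonic.

F = 96485.0  # C/mol, Faraday constant

def delta_rG_kJ(e_acceptor_V, e_donor_V, n=1):
    """Reaction free energy (kJ/mol) for transferring n electrons from the donor
    couple to the acceptor couple: dG = -n*F*(E_acceptor - E_donor)."""
    return -n * F * (e_acceptor_V - e_donor_V) / 1000.0

# Hypothetical one-electron potentials (V vs. SHE) for illustration only.
E_FE = -0.10   # effective Fe(III) oxide / Fe(II) couple at the given pH and Fe(II) level
E_H1 = -0.35   # oxidized EES / semiquinone couple (EH,1)
E_H2 = -0.05   # semiquinone / hydroquinone couple (EH,2), higher than EH,1

# The first electron donated by the fully reduced EES is the hydroquinone -> semiquinone
# step (couple EH,2), which is the less exergonic (here even endergonic) of the two.
print("first e-  (hydroquinone -> semiquinone):", delta_rG_kJ(E_FE, E_H2), "kJ/mol")
print("second e- (semiquinone -> oxidized EES):", delta_rG_kJ(E_FE, E_H1), "kJ/mol")

With these assumed numbers the first transfer is about +5 kJ/mol and the second about −24 kJ/mol, so the two-electron reaction is exergonic overall even though its rate-relevant first step is not — the distinction the free-energy relationship above is built on.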

17.
When aged below the glass transition temperature, Tg, the density of a glass cannot exceed that of the metastable supercooled liquid (SCL) state, unless crystals are nucleated. The only exception is when another polyamorphic SCL state exists, with a density higher than that of the ordinary SCL. Experimentally, such polyamorphic states and their corresponding liquid–liquid phase transitions have only been observed in network-forming systems or those with polymorphic crystalline states. In otherwise simple liquids, such phase transitions have not been observed, either in aged or vapor-deposited stable glasses, even near the Kauzmann temperature. Here, we report that the density of thin vapor-deposited films of N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD) can exceed their corresponding SCL density by as much as 3.5% and can even exceed the crystal density under certain deposition conditions. We identify a previously unidentified high-density supercooled liquid (HD-SCL) phase with a liquid–liquid phase transition temperature (TLL) 35 K below the nominal glass transition temperature of the ordinary SCL. The HD-SCL state is observed in glasses deposited in the thickness range of 25 to 55 nm, where thin films of the ordinary SCL have exceptionally enhanced surface mobility with large mobility gradients. The enhanced mobility enables vapor-deposited thin films to overcome kinetic barriers for relaxation and access the HD-SCL state. The HD-SCL state is only thermodynamically favored in thin films and transforms rapidly to the ordinary SCL when the vapor deposition is continued to form films with thicknesses more than 60 nm.

Glasses are formed when the structural relaxations in supercooled liquids (SCLs) become too slow, causing the system to fall out of equilibrium at the glass transition temperature (Tg). The resulting out-of-equilibrium glass state has a thermodynamic driving force to evolve toward the SCL state through physical aging (1). At temperatures just below Tg, the extent of equilibration is limited by the corresponding SCL state, while at much lower temperatures, equilibration is limited by the kinetic barriers for relaxation. As such, the degree of thermodynamic stability achieved through physical aging is limited (2).

Physical vapor deposition (PVD) is an effective technique to overcome kinetic barriers for relaxation and produce thermodynamically stable glasses (3–10). The accelerated equilibration in these systems is due to their enhanced surface mobility (11–14). During PVD, when the substrate temperature is held below Tg, molecules or atoms can undergo rearrangements and adopt more stable configurations at the free surface and proximate layers underneath (13). After the molecules are buried deeper into the film, their relaxation dynamics slow down significantly, which prevents further equilibration. Through this surface-mediated equilibration process, stable glasses can achieve low-energy states on the potential energy landscape that would otherwise require thousands or millions of years of physical aging (2, 3, 15, 16).

As such, the degree of enhanced surface mobility and mobility gradients are critical factors in the formation of stable glasses (3, 11, 17, 18). While the effect of film thickness on the surface mobility and gradients of liquid-quenched (LQ) glasses has been studied in the past (19, 20), there are limited data on the role of film thickness in the stability of vapor-deposited glasses. In vapor-deposited toluene, it has been shown that decreasing the film thickness from 70 to 5 nm can increase the thermodynamic stability but decrease the apparent kinetic stability (5, 6). In contrast, thin films covered with a top layer of another material do not show significant evidence of reduced kinetic stability (21), indicating the nontrivial role of mobility gradients in thermal and kinetic stability.

Stable glasses of most organic molecules, with short-range intermolecular interactions, have properties that are indicative of the same corresponding metastable SCL state as LQ and aged glasses, without any evidence of the existence of generic liquid–liquid phase transitions that can potentially provide a resolution for the Kauzmann entropy crisis (22). The Kauzmann crisis occurs at the Kauzmann temperature (TK), where the extrapolated SCL has the same structural entropy as the crystal, producing thermodynamically impossible states just below this temperature. Recently, Beasley et al. (16) showed that near-equilibrium states of ethylbenzene can be produced using PVD down to 2 K above TK and hypothesized that any phase transition to an “ideal glass” state to avoid the Kauzmann crisis must occur at TK.

In some glasses of elemental substances (23, 24) and hydrogen-bonding compounds (25, 26), liquid–liquid phase transitions can occur between polyamorphic states with distinct local packing structures that correspond to polymorphic crystalline phases. For example, at high pressures, high- and low-density supercooled water phases are interconvertible through a first-order phase transition (27, 28).
Recent studies have demonstrated that such polyamorphic states can also be accessed through PVD in hydrogen-bonding systems with polymorphic crystal states, for depositions above the nominal Tg (29, 30). However, these structure-specific transitions do not provide a general resolution for the Kauzmann crisis.

Here, we report the observation of a liquid–liquid phase transition in vapor-deposited thin films of N,N′-bis(3-methylphenyl)-N,N′-diphenylbenzidine (TPD). TPD is a molecular glass former with only short-range intermolecular interactions. When thin films of TPD are vapor deposited onto substrates held at deposition temperatures (Tdep) below the nominal glass transition temperature of bulk TPD, Tg (bulk), films in the thickness range of 25 nm < h < 55 nm achieve a high-density supercooled liquid (HD-SCL) state, which has not been previously observed. The liquid–liquid phase transition temperature (TLL) between the ordinary SCL and HD-SCL states is measured to be TLL ≈ Tg(bulk) − 35 K. The density of thin films deposited below TLL follows the HD-SCL line, which has a stronger temperature dependence than the ordinary SCL. When vapor deposition is continued to produce thicker films (h > 60 nm), the HD-SCL state transforms into the ordinary SCL state, indicating that the HD-SCL is only thermodynamically favored in the thin-film geometry. This transition is qualitatively different from the previously reported liquid–liquid phase transitions, as it is not related to a specific structural motif in TPD crystals, and it can only be observed in thin films, indicating that the energy landscape of thin films favors this high-density state.

We observe an apparent correlation between enhanced mobility gradients in LQ thin films of TPD and the thickness range where HD-SCL states are produced during PVD. We hypothesize that enhanced mobility gradients are essential in providing access to regions of the energy landscape corresponding to the HD-SCL state, which are otherwise kinetically inaccessible. This hypothesis should be further investigated to better understand the origin of this phenomenon.  相似文献
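A crude way to picture how TLL can be extracted is to treat the ordinary-SCL and HD-SCL densities as straight lines in deposition temperature and read the transition off their crossing. The sketch below does exactly that with invented coefficients (including an approximate bulk Tg for TPD); it only illustrates the geometry of the two density lines described above and does not reproduce the reported analysis.

```python
# Illustrative sketch with assumed numbers, not measured values: if the
# ordinary-SCL and HD-SCL densities both extrapolate linearly in deposition
# temperature, the apparent liquid-liquid transition temperature T_LL is the
# intersection of the two lines. All coefficients below are placeholders.

Tg_bulk = 330.0  # K, approximate nominal bulk Tg of TPD (assumption)

# density(T) = a + b * (T - Tg_bulk), in g/cm^3; hypothetical coefficients
a_scl, b_scl = 1.080, -2.0e-4   # ordinary SCL line (weaker T dependence)
a_hd,  b_hd  = 1.052, -1.0e-3   # HD-SCL line (stronger T dependence)

# Intersection of the two straight lines gives the apparent T_LL.
T_LL = Tg_bulk + (a_hd - a_scl) / (b_scl - b_hd)
print(f"T_LL ≈ {T_LL:.0f} K, i.e. Tg(bulk) − {Tg_bulk - T_LL:.0f} K")
```

With these placeholder slopes the crossing falls 35 K below Tg(bulk), mirroring the reported offset; the actual value depends entirely on the measured density lines.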

18.
19.
Carbon dioxide (CO2) supersaturation in lakes and rivers worldwide is commonly attributed to terrestrial–aquatic transfers of organic and inorganic carbon (C) and subsequent in situ aerobic respiration. Methane (CH4) production and oxidation also contribute CO2 to freshwaters, yet this contribution remains largely unquantified. Flood pulse lakes and rivers in the tropics are hypothesized to receive large inputs of dissolved CO2 and CH4 from floodplains characterized by hypoxia and reducing conditions. We measured stable C isotopes of CO2 and CH4, aerobic respiration, and CH4 production and oxidation during two flood stages in Tonle Sap Lake (Cambodia) to determine whether dissolved CO2 in this tropical flood pulse ecosystem has a methanogenic origin. Mean CO2 supersaturation of 11,000 ± 9,000 μatm could not be explained by aerobic respiration alone. 13C depletion of dissolved CO2 relative to other sources of organic and inorganic C, together with corresponding 13C enrichment of CH4, suggested extensive CH4 oxidation. A stable isotope-mixing model shows that the oxidation of 13C-depleted CH4 to CO2 contributes between 47 and 67% of the dissolved CO2 in Tonle Sap Lake. 13C depletion of dissolved CO2 was correlated with independently measured rates of CH4 production and oxidation within the water column and underlying lake sediments. However, mass balance indicates that most of this CH4 production and oxidation occurs elsewhere, within inundated soils and other floodplain habitats. Seasonal inundation of floodplains is a common feature of tropical freshwaters, where high reported CO2 supersaturation and atmospheric emissions may be explained in part by coupled CH4 production and oxidation.

Globally, most lakes and rivers are supersaturated with dissolved carbon dioxide (CO2) relative to the atmosphere, highlighting their outsized role in transferring and transforming terrestrial carbon (C) (1–3). Terrestrial–aquatic transfers of C can include CO2 dissolved in terrestrial ground and surface waters (3–6), dissolved inorganic carbon (DIC) from carbonate weathering (7, 8), or organic C from various sources that is subsequently respired in lakes and rivers (9, 10). Initially, oceanic export was thought to be the only fate for terrestrial–aquatic transfers of C, but a growing body of research on sediment burial of organic C and CO2 emissions from freshwaters prompted the “active pipe” revision to this initial set of assumptions (11). Although freshwaters are now recognized as focal points for transferring and transforming C on the landscape, most of this research has been conducted within temperate freshwaters (2, 11, 12). Few studies focus on the mechanisms of CO2 supersaturation in tropical lakes and rivers, with most conducted in just one watershed, the Amazon (4, 13–15).

CO2 supersaturation within tropical freshwaters is likely influenced by their unique flood pulse hydrology. The canonical flood pulse concept hypothesizes that annual flooding of riparian land will lead to organic C mobilization and respiration (16). Partial pressures of CO2 (pCO2) have been measured in excess of 44,000 μatm in the Amazon River (13), 16,000 μatm in the Congo River (17), and 12,000 μatm in the Lukulu River (17). Richey et al. (13), Borges et al. (18), and Zuidgeest et al. (17) have each shown that riverine pCO2 scales with the amount of land flooded in these watersheds. Yet it was only recently that Abril and Borges (19) proposed the importance of flooded land to the “active pipe.” These authors differentiate uplands that unidirectionally drain water downhill (via ground and surface water) from floodplains that bidirectionally exchange water with lakes and rivers (19). They conceptualize how floodplains combine high hydrologic connectivity, high rates of primary production, and high rates of respiration to transfer relatively large amounts of C to tropical freshwaters (19).

Methanogenesis inevitably occurs on floodplains after dissolved oxygen (O2) and other electron acceptors for anaerobic respiration, such as iron and sulfate, are consumed (16, 19). Horizontal gradients in dissolved O2 and reducing conditions have been observed extending from the center of lakes and rivers through their floodplains in the Mekong (20, 21), Congo (22), Pantanal (23), and Amazon watersheds (4). CH4 production and oxidation occur along such redox gradients (4, 16, 19, 23). CH4 is produced by acetate fermentation (Eq. 1) and carbonate reduction (Eq. 2) within freshwaters (24, 25). CH4 production coupled with aerobic oxidation results in CO2 (Eq. 3 and ref. 25), yet no studies have quantified the relative contribution of coupled CH4 production and oxidation to CO2 supersaturation within tropical freshwaters.

CH3COOH → CO2 + CH4, [1]
CO2 + 8H+ + 8e− → CH4 + 2H2O, [2]
CH4 + 2O2 → CO2 + 2H2O. [3]

The relative contribution of coupled CH4 production and oxidation to CO2 supersaturation within tropical freshwaters can be traced with stable C isotopes of CO2 and CH4. Methanogenesis results in CH4 that is depleted in 13C (δ13C = −65 to −50‰ from acetate fermentation and −110 to −60‰ from carbonate reduction) compared to other potential sources of organic and inorganic C (δ13C = −37 to −7.7‰; see Materials and Methods) (24–26).
The oxidation of this 13C-depleted CH4 results in 13C-depleted CO2 (24–26). At the same time, CH4 oxidation enriches the 13C/12C of residual CH4 as bacteria and archaea preferentially oxidize 12C-CH4 (25). This means that the 13C/12C of CO2 and CH4 can serve as powerful tools to determine the source of CO2 supersaturation within freshwaters.

Tonle Sap Lake (TSL) is Southeast Asia’s largest lake and an understudied flood pulse ecosystem that supports a regionally important fishery (21, 22, 27). Each May through October, monsoonal rains and Himalayan snowmelt increase discharge in the Mekong River and cause one of its tributaries, the Tonle Sap River, to reverse course from southeast to northwest (21). During this course reversal, the Tonle Sap River floods TSL. The TSL flood pulse increases lake volume from 1.6 to 60 km3 and inundates 12,000 km2 of floodplain for 3 to 6 mo per year (21, 27). Holtgrieve et al. (22) have shown that aerobic respiration is consistently greater than primary production in TSL (i.e., net heterotrophy), with the expectation of consistent CO2 supersaturation. But the partial pressures, C isotopic compositions, and ultimately the source of dissolved CO2 in TSL remain unquantified.

To quantify CO2 supersaturation and its origins in TSL, we measured the partial pressures of CO2 and CH4 and compared their C isotopic composition to other potential sources of organic and inorganic C. We carried out these measurements in distinct lake environments during the high-water and falling-water stages of the flood pulse, hypothesizing that CH4 production and oxidation on the TSL floodplain would support CO2 supersaturation during the high-water stage. We found that coupled CH4 production and oxidation account for a nontrivial proportion of the total dissolved CO2 in all TSL environments and during both flood stages, showing that anaerobic degradation of organic C at aquatic–terrestrial transitions can support CO2 supersaturation within tropical freshwaters.  相似文献
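The kind of two-end-member mixing calculation described above can be sketched in a few lines. The δ13C end members below are hypothetical (chosen within the ranges quoted in the text), and fractionation during CH4 oxidation is ignored, so this is a simplified stand-in for the study's actual isotope-mixing model.

```python
# Sketch of a two-end-member stable-isotope mixing calculation of the kind
# described above (not the study's actual model). All delta13C values are
# illustrative assumptions, chosen within the ranges quoted in the text.

d13C_CO2_measured = -32.0   # permil, hypothetical measured dissolved CO2
d13C_CH4_derived  = -50.0   # permil, CO2 from oxidation of 13C-depleted CH4 (assumption)
d13C_other        = -15.0   # permil, CO2 from aerobic respiration / other DIC sources (assumption)

# Fraction of dissolved CO2 attributable to CH4 oxidation (linear mixing):
f_CH4 = (d13C_CO2_measured - d13C_other) / (d13C_CH4_derived - d13C_other)
print(f"CH4-derived fraction of dissolved CO2: {f_CH4:.0%}")
```

The more 13C-depleted the measured CO2 is relative to the non-methanogenic end member, the larger the inferred CH4-derived fraction; the study's reported 47 to 67% range comes from applying this logic with measured end members and uncertainties.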

20.
We study the instantaneous normal mode (INM) spectrum of a simulated soft-sphere liquid at different equilibrium temperatures T. We find that the spectrum of eigenvalues ρ(λ) has a sharp maximum near (but not at) λ=0 and decreases monotonically with |λ| on both the stable and unstable sides of the spectrum. The spectral shape strongly depends on temperature. It is rather asymmetric at low temperatures (close to the dynamical critical temperature) and becomes symmetric at high temperatures. To explain these findings we present a mean-field theory for ρ(λ), which is based on a heterogeneous elasticity model, in which the local shear moduli exhibit spatial fluctuations, including negative values. We find good agreement between the simulation data and the model calculations, done with the help of the self-consistent Born approximation (SCBA), when we take the variance of the fluctuations to be proportional to the temperature T. More importantly, we find an empirical correlation of the positions of the maxima of ρ(λ) with the low-frequency exponent of the density of the vibrational modes of the glasses obtained by quenching to T=0 from the temperature T. We discuss the present findings in connection to the liquid to glass transformation and its precursor phenomena.

The investigation of the potential energy surface (PES) V(r1(t)…rN(t)) of a liquid (made up of N particles with positions r1(t)…rN(t) at a time instant t) and the corresponding instantaneous normal modes (INMs) of the (Hessian) matrix of curvatures has been a focus of liquid and glass science since the appearance of Goldstein’s seminal article (1) on the relation between the PES and the liquid dynamics in the viscous regime above the glass transition (2–27).

The PES has been shown to form a rather ragged landscape in configuration space (8, 28, 29) characterized by its stationary points. In a glass these points are minima and are called “inherent structures.” The PES is believed to contain important information on the liquid–glass transformation mechanism, for which a complete understanding is still missing (28, 30, 31). The existing molecular theory of the liquid–glass transformation is mode-coupling theory (MCT) (32, 33) and its mean-field Potts spin version (28, 34). MCT predicts a sharp transition at a temperature TMCT > Tg, where Tg is the temperature of structural arrest (glass transition temperature). MCT completely misses the heterogeneous activated relaxation processes (dynamical heterogeneities), which are evidently present around and below TMCT and which are related to the unstable (negative-λ) part of the INM spectrum (28, 30).

Near and above TMCT, apparently, there occurs a fundamental change in the PES. Numerical studies of model liquids have shown that minima present below TMCT change into saddles, which then explains the absence of activated processes above TMCT (2–24). Very recently, it was shown that TMCT is related to a localization–delocalization transition of the unstable INM modes (25, 26).

The INM spectrum is obtained in molecular dynamics simulations by diagonalizing the Hessian matrix of the interaction potential, taken at a certain time instant t:

Hij(αβ)(t) = ∂²V{r1(t)…rN(t)} / ∂xi(α)∂xj(β), [1]

with ri = (xi(1), xi(2), xi(3)). For large positive values of the eigenvalues λj (j = 1…N, N being the number of particles in the system) they are related to the square of vibrational frequencies, λj = ωj², and one can consider the Hessian as the counterpart of the dynamical matrix of a solid. In this high-frequency regime one can identify the spectrum with the density of vibrational states (DOS) of the liquid via

g(ω) = 2ω ρ(λ(ω)) = (1/3N) Σj δ(ω − ωj). [2]

For small and negative values of λ this identification is not possible. For the unstable part of the spectrum (λ < 0) it has become common practice to introduce the imaginary frequency ω̃ via √λ = iω̃ and define the corresponding DOS as

g(ω̃) ≡ 2ω̃ ρ(λ(ω̃)). [3]

This function is plotted on the negative ω axis and the stable g(ω), according to [2], on the positive axis. However, the (as we shall see, very interesting) details of the spectrum ρ(λ) near λ = 0 become almost completely hidden by multiplying the spectrum with |ω|. In fact, it was demonstrated by Sastry et al. (6) and Taraskin and Elliott (7) already two decades ago that the INM spectrum of liquids, if plotted as ρ(λ) and not as g(ω) according to [2] and [3], exhibits a characteristic cusp-like maximum at λ = 0. The shape of the spectrum changes strongly with temperature.
This is what we find as well in our simulation and what we want to explore further in the present contribution.

In the present contribution we demonstrate that the strong change of the spectrum with temperature can be rather well explained in terms of a model in which the instantaneous harmonic spectrum of the liquid is interpreted as that of an elastic medium whose local shear moduli exhibit strong spatial fluctuations, including a large number of negative values. Because these fluctuations are just a snapshot of thermal fluctuations, we assume that they obey Gaussian statistics with a variance proportional to the temperature.

Evidence for a characteristic change in the liquid configurations in the temperature range above Tg has been obtained in recent simulation studies of the low-frequency vibrational spectrum of glasses that have been rapidly quenched from a certain parental temperature T*. If T* is decreased from high temperatures toward TMCT, the low-frequency exponent of the vibrational DOS of the daughter glass (quenched from T* to T = 0) changes from Debye-like g(ω) ∝ ω² to g(ω) ∝ ω^s with s > 2. In our numerical investigation of the INM spectra we show a correlation of some details of the low-eigenvalue features of these spectra with the low-frequency properties of the daughter glasses obtained by quenching from the parental temperatures.

The stochastic Helmholtz equations (Eq. 7) of an elastic model with spatially fluctuating shear moduli can be readily solved for the averaged Green’s functions by field-theoretical techniques (35–37). Via a saddle-point approximation with respect to the resulting effective field theory one arrives at a mean-field theory (self-consistent Born approximation [SCBA]) for the self-energy of the averaged Green’s functions. The SCBA predicts a stable spectrum below a threshold value of the variance. Restricted to this stable regime, this theory, called heterogeneous elasticity theory (HET), was rather successful in explaining several low-frequency anomalies in the vibrational spectrum of glasses, including the so-called boson peak, which is an enhancement at finite frequencies over the Debye behavior of the DOS g(ω) ∝ ω² (37–41). We now explore the unstable regime of this theory and compare it to the INM spectrum of our simulated soft-sphere liquid.

We start Results by presenting a comparison of the simulated spectra of the soft-sphere liquid with those obtained by the unstable version of HET-SCBA theory. We then concentrate on some specific features of the INM spectra, namely, the low-eigenvalue slopes and the shift of the spectral maximum from λ = 0. Both features are accounted for by HET-SCBA. In particular, we find an interesting law for the difference between the slopes of the unstable and the stable parts of the spectrum, which behaves as T^(2/3) and, again, is accounted for by HET-SCBA.

In the end we compare the shift of the spectral maximum with the low-frequency exponent of the DOS of the corresponding daughter glasses and find an empirical correlation. We discuss these results in connection with the saddle-to-minimum transformation near TMCT.  相似文献
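For readers who want to see how Eqs. [1]–[3] translate into practice, the sketch below builds ρ(λ) and the two-sided DOS from the eigenvalues of a Hessian-like matrix. A random symmetric matrix is used as a stand-in for the true Hessian of a liquid configuration (an assumption made purely for illustration), so the resulting spectrum is a toy, not the soft-sphere result.

```python
# Minimal sketch (not the authors' code): construct an INM spectrum rho(lambda)
# and the two-sided DOS of Eqs. [2]-[3] from Hessian eigenvalues. A random
# symmetric matrix stands in for the true Hessian of a liquid configuration.

import numpy as np

rng = np.random.default_rng(0)
N = 400                      # number of particles (toy value)
dim = 3 * N                  # one row/column per Cartesian coordinate

# Stand-in Hessian: symmetric matrix with both positive and negative eigenvalues.
A = rng.normal(size=(dim, dim))
H = (A + A.T) / np.sqrt(2.0 * dim)

lam = np.linalg.eigvalsh(H)              # eigenvalues lambda_j

# rho(lambda): normalized histogram of the eigenvalues.
rho, lam_edges = np.histogram(lam, bins=100, density=True)

# Stable branch: lambda = omega^2, so g(omega) = 2*omega*rho(lambda(omega)).
omega = np.sqrt(lam[lam > 0])
# Unstable branch: sqrt(lambda) = i*omega_tilde, plotted on the negative axis
# by convention, with g(omega_tilde) = 2*omega_tilde*rho(lambda(omega_tilde)).
omega_tilde = np.sqrt(-lam[lam < 0])

g_stable, _ = np.histogram(omega, bins=50, density=True)
g_unstable, _ = np.histogram(omega_tilde, bins=50, density=True)
# Each frequency histogram approximates 2*omega*rho(lambda(omega)) up to the
# normalization over its own branch, which is the content of Eqs. [2] and [3].
```

Plotting rho against lam_edges rather than g against frequency is what exposes the cusp-like maximum near λ = 0 discussed above, since the factor 2ω suppresses the spectrum at small |λ|.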
