Similar Documents (20 results)
1.
Molecular, polymeric, colloidal, and other classes of liquids can exhibit very large, spatially heterogeneous alterations of their dynamics and glass transition temperature when confined to nanoscale domains. Considerable progress has been made in understanding the related problem of near-interface relaxation and diffusion in thick films. However, the origin of “nanoconfinement effects” on the glassy dynamics of thin films, where gradients from different interfaces interact and genuine collective finite size effects may emerge, remains a longstanding open question. Here, we combine molecular dynamics simulations, probing 5 decades of relaxation, and the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory, addressing 14 decades in timescale, to establish a microscopic and mechanistic understanding of the key features of altered dynamics in freestanding films spanning the full range from ultrathin to thick films. Simulations and theory are in qualitative and near-quantitative agreement without use of any adjustable parameters. For films of intermediate thickness, the dynamical behavior is well predicted to leading order using a simple linear superposition of thick-film exponential barrier gradients, including a remarkable suppression and flattening of various dynamical gradients in thin films. However, in sufficiently thin films the superposition approximation breaks down due to the emergence of genuine finite size confinement effects. ECNLE theory extended to treat thin films captures the phenomenology found in simulation, without invocation of any critical-like phenomena, on the basis of interface-nucleated gradients of local caging constraints, combined with interfacial and finite size-induced alterations of the collective elastic component of the structural relaxation process.

Spatially heterogeneous dynamics in glass-forming liquids confined to nanoscale domains (1–7) play a major role in determining the properties of molecular, polymeric, colloidal, and other glass-forming materials (8), including thin films of polymers (9, 10) and small molecules (11–15), small-molecule liquids in porous media (2, 4, 16, 17), semicrystalline polymers (18, 19), polymer nanocomposites (20–22), ionomers (23–25), self-assembled block and layered (26–33) copolymers, and vapor-deposited ultrastable molecular glasses (34–36). Intense interest in this problem over the last 30 y has also been motivated by the expectation that its understanding could reveal key insights concerning the mechanism of the bulk glass transition.

Considerable progress has been made for near-interface altered dynamics in thick films, as recently critically reviewed (1). Large amplitude gradients of the structural relaxation time, τ(z,T), converge to the bulk value, τ_bulk(T), in an intriguing double-exponential manner with distance, z, from a solid or vapor interface (1–3, 37–42). This implies that the corresponding effective activation barrier, F_total(z,T,H) (where H is film thickness), varies exponentially with z, as does the glass transition temperature, T_g (37). Thus the fractional reduction in activation barrier, ε(z,H), obeys the equation

ε(z,H) ≡ 1 − F_total(z,T,H)/F_total,bulk(T) = ε_0 exp(−z/ξ_F),

where F_total,bulk(T) is the bulk temperature-dependent barrier and ξ_F a length scale of modest magnitude. Although the gradient of reduction in absolute activation barriers becomes stronger with cooling, the amplitude of the fractional reduction of the barrier gradient, quantified by ε_0, and the range ξ_F of this gradient, exhibit a weak or absent temperature dependence at the lowest temperatures accessed by simulations (typically with the strength of temperature dependence of ξ_F decreasing rather than increasing on cooling), which extend to relaxation timescales of order 10⁵ ps. This finding raises questions regarding the relevance of critical-phenomena-like ideas for nanoconfinement effects (1). Partially due to this temperature invariance, coarse-grained and all-atom simulations (1, 37, 42, 43) have found a striking empirical fractional power law decoupling relation between τ(z,T) and τ_bulk(T):

τ(T,z) ≈ τ_bulk(T) (τ_bulk(T))^(−ε(z)). [1]

Recent theoretical analysis suggests (44) that this behavior is consistent with a number of experimental data sets as well (45, 46). Eq. 1 also corresponds to a remarkable factorization of the temperature and spatial location dependences of the barrier:

F_total(z,T) = [1 − ε(z)] F_total,bulk(T). [2]

This finding indicates that the activation barrier for near-interface relaxation can be factored into two contributions: a z-dependent, but T-independent, "decoupling exponent," ε(z), and a temperature-dependent, but position-insensitive, bulk activation barrier, F_total,bulk(T). Eq. 2 further emphasizes that ε(z) is equivalent to an effective fractional barrier reduction factor (for a vapor interface), 1 − F_total(z,T,H)/F_total,bulk(T), that can be extracted from relaxation data.

In contrast, the origin of "nanoconfinement effects" in thin films, and how much of the rich thick-film physics survives when dynamic gradients from two interfaces overlap, is not well understood. The distinct theoretical efforts for aspects of the thick-film phenomenology (44, 47–50) mostly assume an additive summation of one-interface effects in thin films, thereby ignoring possibly crucial cooperative and whole-film finite size confinement effects.
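As a concrete reading of Eqs. 1 and 2, the short sketch below evaluates the near-interface relaxation time implied by the fractional power law decoupling relation; the values of ε_0 and ξ_F are illustrative placeholders, not the paper's fitted parameters.

```python
import numpy as np

def eps(z, eps0=0.25, xi_F=2.0):
    """Fractional barrier reduction eps(z) = eps0 * exp(-z / xi_F) below a vapor interface."""
    return eps0 * np.exp(-z / xi_F)

def tau(z, tau_bulk, eps0=0.25, xi_F=2.0):
    """Eq. 1: tau(z) = tau_bulk * tau_bulk**(-eps(z)) = tau_bulk**(1 - eps(z))."""
    return tau_bulk ** (1.0 - eps(z, eps0, xi_F))

# Relaxation speeds up strongly at the surface and recovers the bulk value by z ~ 5 xi_F.
for z in (0.0, 2.0, 10.0):
    print(f"z = {z:5.1f}  tau/tau_bulk = {tau(z, 1e6) / 1e6:.3e}")
```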
If such finite size confinement effects involve phase-transition-like physics as per recent speculations (14, 51), one can ask the following: do new length scales emerge that might be truncated by finite film size? Alternatively, does ultrathin film phenomenology arise from a combination of two-interface superposition of the thick-film gradient physics and noncritical cooperative effects, perhaps in a property-, temperature-, and/or thickness-dependent manner?

Here, we answer these questions and establish a mechanistic understanding of thin-film dynamics for the simplest and most universal case: a symmetric freestanding film with two vapor interfaces. We focus on small molecules (modeled theoretically as spheres) and low to medium molecular weight unentangled polymers, which empirically exhibit quite similar alterations in dynamics under "nanoconfinement." We do not address anomalous phenomena [e.g., much longer gradient ranges (29), sporadic observation of two distinct glass transition temperatures (52, 53)] that are sometimes reported in experiments with very high molecular weight polymers and which may be associated with poorly understood chain connectivity effects that are distinct from general glass formation physics (54–56).

We employ a combination of molecular dynamics simulations with a zero-parameter extension to thin films of the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory (57, 58). This theory has previously been shown to predict well both bulk activated relaxation over up to 14 decades (44–46) and the full single-gradient phenomenology in thick films (1). Here, we extend this theory to treat films of finite thickness, accounting for coupled interface and geometric confinement effects. We compare predictions of ECNLE theory to our previously reported (37, 43) and new simulations, which focus on translational dynamics of films comprised of a standard Kremer–Grest-like bead-spring polymer model (see SI Appendix). These simulations cover a wide range of film thicknesses (H, from 4 to over 90 segment diameters σ) and extend to low temperatures where the bulk alpha time is ∼0.1 μs (10⁵ Lennard-Jones time units τ_LJ).

The generalized ECNLE theory is found to be in agreement with simulation for all levels of nanoconfinement. We emphasize that this theory does not a priori assume any of the empirically established behaviors discovered using simulation (e.g., fractional power law decoupling, double-exponential barrier gradient, gradient flattening) but rather predicts these phenomena based upon interfacial modifications of the two coupled contributions to the underlying activation barrier: local caging constraints and a long-ranged collective elastic field. It is notable that this strong agreement is found despite the fact that the dynamical ideas are approximate, and a simple hard sphere fluid model is employed in contrast to the bead-spring polymers employed in simulation.
The basic units of length in simulation (bead size σ) and theory (hard sphere diameter d) are expected to be proportional to within a prefactor of order unity, which we neglect in making comparisons.

As an empirical matter, we find from simulation that many features of thin-film behavior can be described to leading order by a linear superposition of the thick-film gradients in activation barrier, that is:

ε(z,H) = 1 − F_total(z,T,H)/F_total,bulk(T) ≈ ε_0[exp(−z/ξ_F) + exp(−(H−z)/ξ_F)], [3]

where the intrinsic decay length ξ_F is unaltered from its thick-film value and where ε_0 is a constant that, in the hypothesis of literal gradient additivity, is invariant to temperature and film thickness. We employ this functional form [originally suggested by Binder and coworkers (59)], which is based on a simple superposition of the two single-interface gradients, as a null hypothesis throughout this study: this form is what one expects if no new finite-size physics enters the thin-film problem relative to the thick film.

However, we find that the superposition approximation progressively breaks down, and eventually entirely fails, in ultrathin films as a consequence of the emergence of a finite size confinement effect. The ECNLE theory predicts that this failure is not tied to a phase-transition-like mechanism but rather is a consequence of two key coupled physical effects: 1) transfer of surface-induced reduction of local caging constraints into the film, and 2) interfacial truncation and nonadditive modifications of the collective elastic contribution to the activation barrier.
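To make the null hypothesis of Eq. 3 concrete, here is a minimal numerical sketch of the two-interface superposition profile; as above, ε_0 and ξ_F are assumed illustrative values.

```python
import numpy as np

def eps_film(z, H, eps0=0.25, xi_F=2.0):
    """Eq. 3: superposed barrier-reduction gradients from the two vapor interfaces."""
    return eps0 * (np.exp(-z / xi_F) + np.exp(-(H - z) / xi_F))

# Mid-film reduction grows as H shrinks: the gradient "flattens" in thin films.
for H in (40.0, 10.0, 4.0):
    print(f"H = {H:4.0f}  eps(midplane) = {eps_film(H / 2.0, H):.4f}")
```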

2.
The intracellular milieu differs from the dilute conditions in which most biophysical and biochemical studies are performed. This difference has led both experimentalists and theoreticians to tackle the challenging task of understanding how the intracellular environment affects the properties of biopolymers. Despite a growing number of in-cell studies, there is a lack of quantitative, residue-level information about equilibrium thermodynamic protein stability under nonperturbing conditions. We report the use of NMR-detected hydrogen–deuterium exchange of quenched cell lysates to measure individual opening free energies of the 56-aa B1 domain of protein G (GB1) in living Escherichia coli cells without adding destabilizing cosolutes or heat. Comparisons to dilute solution data (pH 7.6 and 37 °C) show that opening free energies increase by as much as 1.14 ± 0.05 kcal/mol in cells. Importantly, we also show that homogeneous protein crowders destabilize GB1, highlighting the challenge of recreating the cellular interior. We discuss our findings in terms of hard-core excluded volume effects, charge–charge GB1-crowder interactions, and other factors. The quenched lysate method identifies the residues most important for folding GB1 in cells, and should prove useful for quantifying the stability of other globular proteins in cells to gain a more complete understanding of the effects of the intracellular environment on protein chemistry.

Proteins function in a heterogeneous and crowded intracellular environment. Macromolecules comprise 20–30% of the volume of an Escherichia coli cell and reach concentrations of 300–400 g/L (1, 2). Theory predicts that the properties of proteins and nucleic acids can be significantly altered in cells compared with buffer alone (3, 4). Nevertheless, most biochemical and biophysical studies are conducted under dilute (<10 g/L macromolecules) conditions. Here, we augment the small but growing list of reports probing the equilibrium thermodynamic stability of proteins in living cells (5–9), and provide, to our knowledge, the first measurement of residue-level stability under nonperturbing conditions.

Until recently, the effects of macromolecular crowding on protein stability were thought to be caused solely by hard-core, steric repulsions arising from the impenetrability of matter (4, 10, 11). The expectation was that crowding enhances stability by favoring the compact native state over the ensemble of denatured states. Increased attention to transient, nonspecific protein–protein interactions (12–15) has led both experimentalists (16–19) and theoreticians (20–22) to recognize the effects of chemical interactions between crowder and test protein when assessing the net effect of macromolecular crowding. These weak, nonspecific interactions can reinforce or oppose the effect of hard-core repulsions, resulting in increased or decreased stability depending on the chemical nature of the test protein and crowder (23–26).

We chose the B1 domain of streptococcal protein G (GB1) (27) as our test protein because its structure, stability and folding kinetics have been extensively studied in dilute solution (28–38). Its small size (56 aa; 6.2 kDa) and high thermal stability make GB1 well suited for studies by NMR spectroscopy.

Quantifying the equilibrium thermodynamic stability of proteins relies on determining the relative populations of native and denatured states.
Because the denatured state ensemble of a stable protein is sparsely populated under native conditions, stability is usually probed by adding heat or a cosolute to promote unfolding so that the concentration ratio of the two states can be determined (39). However, stability can be measured without these perturbations by exploiting the phenomenon of backbone amide H/D exchange (40) detected by NMR spectroscopy (41). The observed rate of amide proton (N–H) exchange, k_obs, is related to equilibrium stability by considering a protein in which each N–H exists in an open (exposed, exchange-competent) state, or a closed (protected, exchange-incompetent) state (40, 42):

closed(N–H) ⇌ open(N–H) → open(N–D) ⇌ closed(N–D). [1]

Each position opens and closes with rate constants k_op and k_cl (where K_op = k_op/k_cl), and exchange from the open state occurs with intrinsic rate constant k_int. Values for k_int are based on exchange data from unstructured peptides (43, 44). If the test protein is stable (i.e., k_cl >> k_op), the observed rate becomes:

k_obs = k_op k_int / (k_cl + k_int). [2]

Exchange occurs within two limits (42). At the EX1 limit, closing is rate determining, and k_obs = k_op. This limit is usually observed for less stable proteins and at basic pH (45). Most globular proteins undergo EX2 kinetics, where exchange from the open state is rate limiting (i.e., k_cl >> k_int), and k_obs values can be converted to equilibrium opening free energies, ΔG_op° (46):

k_obs = (k_op/k_cl) k_int = K_op k_int, [3]

ΔG_op° = −RT ln(k_obs/k_int), [4]

where RT is the molar gas constant multiplied by the absolute temperature.

The backbone amides most strongly involved in H-bonded regions of secondary structure exchange only from the fully unfolded state, yielding a maximum value of ΔG_op° (47–49). For these residues ΔG_op° approximates the free energy of denaturation, ΔG_den°, providing information on global stability. Lower amplitude fluctuations of the native state can give rise to partially unfolded forms (50), resulting in residues with ΔG_op° values less than those of the global unfolders.

In summary, NMR-detected H/D exchange can measure equilibrium thermodynamic stability of a protein at the level of individual amino acid residues under nonperturbing conditions. Inomata et al. (51) used this technique to measure k_obs values in human cells for four residues in ubiquitin, but experiments confirming the exchange mechanism were not reported and opening free energies were not quantified. Our results fill this void and provide quantitative residue-level protein stability measurements in living cells under nonperturbing conditions.
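The EX2-limit arithmetic of Eqs. 2–4 is compact enough to state in a few lines of code; the rate constants below are hypothetical, chosen only to illustrate a stable protein at 37 °C.

```python
import math

R = 1.987e-3  # molar gas constant, kcal/(mol*K)

def k_obs(k_op, k_cl, k_int):
    """Eq. 2: observed exchange rate for a stable protein (k_cl >> k_op)."""
    return k_op * k_int / (k_cl + k_int)

def dG_op(k_obs_val, k_int, T=310.15):
    """Eq. 4: opening free energy dG_op = -RT ln(k_obs / k_int), valid in the EX2 limit."""
    return -R * T * math.log(k_obs_val / k_int)

# Hypothetical EX2 case (k_cl >> k_int): exchange reports on the opening equilibrium.
kx = k_obs(k_op=1e-2, k_cl=1e4, k_int=1.0)  # rates in s^-1
print(f"k_obs = {kx:.2e} s^-1, dG_op ~ {dG_op(kx, 1.0):.1f} kcal/mol")  # ~8.5 kcal/mol
```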

3.
Amide hydrogen exchange (HX) is widely used in protein biophysics even though our ignorance about the HX mechanism makes data interpretation imprecise. Notably, the open exchange-competent conformational state has not been identified. Based on analysis of an ultralong molecular dynamics trajectory of the protein BPTI, we propose that the open (O) states for amides that exchange by subglobal fluctuations are locally distorted conformations with two water molecules directly coordinated to the N–H group. The HX protection factors computed from the relative O-state populations agree well with experiment. The O states of different amides show little or no temporal correlation, even if adjacent residues unfold cooperatively. The mean residence time of the O state is ∼100 ps for all examined amides, so the large variation in measured HX rate must be attributed to the opening frequency. A few amides gain solvent access via tunnels or pores penetrated by water chains including native internal water molecules, but most amides access solvent by more local structural distortions. In either case, we argue that an overcoordinated N–H group is necessary for efficient proton transfer by Grotthuss-type structural diffusion.

Before the tightly packed and densely H-bonded structure of globular proteins had been established, Hvidt and Linderstrøm-Lang (1) showed that all backbone amide hydrogens of insulin exchange with water hydrogens, implying that all parts of the polypeptide backbone are, at least transiently, exposed to solvent. In the following 60 y, hydrogen exchange (HX), usually monitored by NMR spectroscopy (2) or mass spectrometry (3), has been widely used to study protein folding and stability (4–10), structure (11, 12), flexibility and dynamics (13–15), and solvent accessibility and binding (16, 17), often with single-residue resolution. However, because the exchange mechanism is unclear, HX data from proteins can, at best, be interpreted qualitatively (18–25).

Under most conditions, amide HX is catalyzed by hydroxide ions (26, 27) at a rate that is influenced by inductive and steric effects from adjacent side chains (28). For unstructured peptides, HX is a slow process simply because the hydroxide concentration is low. For example, at 25 °C and pH 4, HX occurs on a time scale of minutes. Under similar conditions, amides buried in globular proteins exchange on a wide range of time scales, extending up to centuries. HX can only occur if the amide is exposed to solvent, so conformational fluctuations must be an integral part of the HX mechanism (18).

Under sufficiently destabilizing conditions HX occurs from the denatured-state ensemble, but under native conditions few amides exchange by such global unfolding (9, 29–31). For example, in bovine pancreatic trypsin inhibitor (BPTI), 8 amides in the core β-sheet exchange by global unfolding under native conditions (7, 32), whereas the remaining 45 amides require less extensive conformational fluctuations. Much of the debate in the protein HX field over the past half-century has concerned the nature of these subglobal fluctuations and their frequency, duration, amplitude, and cooperativity (18–25).

According to the standard HX model (18), each amide can exist in a closed (C) state, where exchange cannot occur, or in an open (O) state, where exchange proceeds at a rate k_int. The kinetic scheme for H exchange into D2O then reads as

(N–H)_C ⇌ (N–H)_O → (N–D)_O,

where opening and closing occur with rate constants k_op and k_cl and exchange from the O state with rate constant k_int, and the measured steady-state HX rate is k_HX = k_op k_int/(k_op + k_cl + k_int).
To make this phenomenological model practically useful, two auxiliary assumptions are needed to disentangle the conformational and intrinsic parts of the process: (i) the conformational fluctuations (k_op and k_cl) are independent of pH, and (ii) HX from the O state proceeds at the same rate as in model peptides with the same neighboring side chains, so that k_int = k_HX⁰.

Two HX regimes are distinguished with reference to the pH dependence of k_HX (18). If k_HX is constant in some pH range, it follows that k_int >> k_op + k_cl, so that k_HX ≈ k_op. In this so-called EX1 limit, the HX experiment measures the opening rate, or the mean residence time (MRT) of the C state, τ_C = 1/k_op. For BPTI, such pH invariance has only been observed for the eight core amides, and then only in a narrow pH interval (32).

More commonly, HX experiments are performed in the EX2 limit, where k_int << k_op + k_cl. Then k_HX ≈ k_int/(κ + 1), where κ ≡ k_cl/k_op = τ_C/τ_O is the protection factor (PF). At equilibrium, the fractional populations, f_C and f_O, and the rates are linked by detailed balance, k_op f_C = k_cl f_O, so the PF may also be expressed as κ = f_C/f_O. Clearly, 1/(κ + 1) is the probability of finding the amide in the O state, 1/κ is the C ⇌ O equilibrium constant, and βΔG = ln κ is the free energy difference between the O and C states in units of k_B T ≡ 1/β. The PF can thus be deduced from the HX rates measured (under EX2 conditions) for the amide in the protein and in a model peptide as κ = k_HX⁰/k_HX − 1.

The vast majority of the available protein HX data pertains to the EX2 regime and thus provides no information about the time scales, τ_C and τ_O, of the conformational fluctuations, except for the EX2 bound: 1/τ_C + 1/τ_O >> k_int = k_HX⁰. In the typical case where k_HX << k_HX⁰, so that τ_C >> τ_O, we therefore only know that τ_O << 1/k_HX⁰, which is in the millisecond range at pH 9 (EX2 HX data are usually measured at lower pH, where 1/k_HX⁰ is even longer). Our analysis indicates that τ_O is seven orders of magnitude shorter than this upper bound estimate.

The HX experiment is unique in probing sparsely populated conformational states with single-residue resolution. However, the physical significance of the PF is obscured by our ignorance about the structure and dynamics of the O state. Several attempts have been made to correlate experimental PFs with physical attributes of the amides, such as solvent contact (33–37), burial depth (38), intramolecular H-bonds (35, 38–40), packing density (38, 41), or electric field (42). Where significant correlations have been found, they suggest that the chosen attribute can serve as a proxy for the propensity for C → O fluctuations. However, whether based on crystal structures or molecular dynamics (MD) trajectories, these studies examined the time-averaged protein structure, which is dominated by the C state and therefore provides little or no information about the nature of the C → O fluctuations.

In principle, the O state can be identified from molecular simulations, but this requires extensive conformational sampling because most C → O transitions are exceedingly rare. To date, this approach has been tried only with coarse-grained and/or empirical protein models without explicit solvent (43–45), or for HX from the denatured-state ensemble (46). The recent availability of ultralong MD simulations with realistic force fields opens up new opportunities in the search for the elusive O state. We have thus analyzed the millisecond MD trajectory of fully solvated native BPTI performed by Shaw et al. (47).
Fortunately, BPTI is also among the proteins that have been most thoroughly studied by HX experiments.
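A short script makes the EX2 bookkeeping above explicit; the rates are hypothetical and serve only to show how a protection factor converts to a free energy in units of k_B T.

```python
import math

def protection_factor(k_hx0, k_hx):
    """EX2 relation kappa = k_HX0 / k_HX - 1 from peptide (k_hx0) and protein (k_hx) rates."""
    return k_hx0 / k_hx - 1.0

# Hypothetical amide exchanging 1e5-fold slower in the protein than in the model peptide:
kappa = protection_factor(k_hx0=1.0, k_hx=1e-5)
beta_dG = math.log(kappa)  # free energy difference between O and C states, in kB*T
print(f"PF = {kappa:.3g}, beta*dG = {beta_dG:.1f}")  # PF ~ 1e5, beta*dG ~ 11.5
```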

4.
The transacting activator of transduction (TAT) protein plays a key role in the progression of AIDS. Studies have shown that a +8 charged sequence of amino acids in the protein, called the TAT peptide, enables the TAT protein to penetrate cell membranes. To probe mechanisms of binding and translocation of the TAT peptide into the cell, investigators have used phospholipid liposomes as cell membrane mimics. We have used the method of surface potential sensitive second harmonic generation (SHG), which is a label-free and interface-selective method, to study the binding of TAT to anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG) and neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) liposomes. It is the SHG sensitivity to the electrostatic field generated by a charged interface that enabled us to obtain the interfacial electrostatic potential. SHG together with the Poisson–Boltzmann equation yielded the dependence of the surface potential on the density of adsorbed TAT. We obtained the dissociation constants K_d for TAT binding to POPC and POPG liposomes and the maximum number of TATs that can bind to a given liposome surface. For POPC K_d was found to be 7.5 ± 2 μM, and for POPG K_d was 29.0 ± 4.0 μM. As TAT was added to the liposome solution the POPC surface potential changed from 0 mV to +37 mV, and for POPG it changed from −57 mV to −37 mV. A numerical calculation of K_d, which included all terms obtained from application of the Poisson–Boltzmann equation to the TAT liposome SHG data, was shown to be in good agreement with an approximated solution.

The HIV type 1 (HIV-1) transacting activator of transduction (TAT) is an important regulatory protein for viral gene expression (1–3). It has been established that the TAT protein has a key role in the progression of AIDS and is a potential target for anti-HIV vaccines (4). For the TAT protein to carry out its biological functions, it needs to be readily imported into the cell. Studies on the cellular internalization of TAT have led to the discovery of the TAT peptide, a highly cationic 11-aa region (protein transduction domain) of the 86-aa full-length protein that is responsible for the TAT protein translocating across phospholipid membranes (5–8). The TAT peptide is a member of a class of peptides called cell-penetrating peptides (CPPs) that have generated great interest for drug delivery applications (ref. 9 and references therein). The exact mechanism by which the TAT peptide enters cells is not fully understood, but it is likely to involve a combination of energy-independent penetration and endocytosis pathways (8, 10). The first step in the process is high-affinity binding of the peptide to phospholipids and other components on the cell surface such as proteins and glycosaminoglycans (1, 9).

The binding of the TAT peptide to liposomes has been investigated using a variety of techniques, each of which has its own advantages and limitations. Among the techniques are isothermal titration calorimetry (9, 11), fluorescence spectroscopy (12, 13), FRET (12, 14), single-molecule fluorescence microscopy (15, 16), and solid-state NMR (17). Second harmonic generation (SHG), as an interface-selective technique (18–24), does not require a label, and because SHG is sensitive to the interface potential, it is an attractive method to selectively probe the binding of the highly charged (+8) TAT peptide to liposome surfaces.
Although coherent SHG is forbidden in centrosymmetric and isotropic bulk media for reasons of symmetry, it can be generated by a centrosymmetric structure, e.g., a sphere, provided that the object is centrosymmetric over roughly the length scale of the optical coherence, which is a function of the particle size, the wavelength of the incident light, and the refractive indexes at ω and 2ω (25–30). As a second-order nonlinear optical technique SHG has symmetry restrictions such that coherent SHG is not generated by the randomly oriented molecules in the bulk liquid, but can be generated coherently by the much smaller population of oriented interfacial species bound to a particle or planar surfaces. As a consequence the SHG signal from the interface is not overwhelmed by SHG from the much larger populations in the bulk media (25–28).

The total second harmonic electric field, E_2ω, originating from a charged interface in contact with water can be expressed as (31–33)

E_2ω ∝ Σ_i χ_c,i^(2) E_ω E_ω + Σ_j χ_inc,j^(2) E_ω E_ω + χ_H2O^(3) E_ω E_ω Φ, [1]

where χ_c,i^(2) represents the second-order susceptibility of the species i present at the interface; χ_inc,j^(2) represents the incoherent contribution of the second-order susceptibility, arising from density and orientational fluctuations of the species j present in solution, often referred to as hyper-Rayleigh scattering; χ_H2O^(3) is the third-order susceptibility originating chiefly from the polarization of the bulk water molecules polarized by the charged interface; Φ is the potential at the interface that is created by the surface charge; and E_ω is the electric field of the incident light at the fundamental frequency ω. The second-order susceptibility, χ_c,i^(2), can be written as the product of the number of molecules, N, at the surface and the orientational ensemble average of the hyperpolarizability α_i^(2) of surface species i, yielding χ_c,i^(2) = N⟨α_i^(2)⟩ (18). The brackets ⟨ ⟩ indicate an orientational average over the interfacial molecules. The third term in Eq. 1 depicts a third-order process by which a second harmonic field is generated by a charged interface. This term is the focus of our work. The SHG signal is dependent on the surface potential created by the electrostatic field of the surface charges, often called the χ^(3) contribution to the SHG signal. The χ^(3) method has been used to extract the surface charge density of charged planar surfaces and microparticle surfaces, e.g., liposomes, polymer beads, and oil droplets in water (21, 25, 34–39).

In this work, the χ^(3) SHG method is used to explore a biomedically relevant process. The binding of the highly cationic HIV-1 TAT peptide to liposome membranes changes the surface potential, thereby enabling the use of the χ^(3) method to study the binding process in a label-free manner. Two kinds of liposomes, neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG), were investigated. The chemical structures of TAT, POPC, and POPG lipids are shown in Scheme 1.

Scheme 1. Chemical structures of HIV-1 TAT (47–57) peptide and the POPC and POPG lipids.
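The quantitative chain from measured potential to binding constant runs through the Poisson–Boltzmann (Gouy–Chapman) relation and a Langmuir-type adsorption model. The sketch below, with assumed illustrative parameters (monovalent electrolyte at 25 °C; this is not the paper's full numerical treatment), shows both pieces.

```python
import math

e = 1.602e-19           # elementary charge, C
kT = 4.11e-21           # thermal energy at 25 C, J
eps = 78.5 * 8.854e-12  # permittivity of water, F/m
NA = 6.022e23

def surface_potential(sigma, c_molar):
    """Gouy-Chapman: invert sigma = sqrt(8*eps*kT*n0) * sinh(e*phi / (2*kT)) for phi."""
    n0 = c_molar * 1e3 * NA             # bulk ion number density, m^-3
    A = math.sqrt(8.0 * eps * kT * n0)  # charge-density scale, C/m^2
    return (2.0 * kT / e) * math.asinh(sigma / A)

def bound_peptides(c_uM, Kd_uM, n_max):
    """Langmuir isotherm: number of adsorbed TATs at free peptide concentration c."""
    return n_max * c_uM / (Kd_uM + c_uM)

print(f"phi = {1e3 * surface_potential(0.01, 0.01):.0f} mV"
      f" for sigma = 0.01 C/m^2 in 10 mM salt")
print(f"bound = {bound_peptides(7.5, 7.5, 100.0):.0f} (half of n_max at c = Kd)")
```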

5.
Advances in polymer chemistry over the last decade have enabled the synthesis of molecularly precise polymer networks that exhibit homogeneous structure. These precise polymer gels create the opportunity to establish true multiscale, molecular to macroscopic, relationships that define their elastic and failure properties. In this work, a theory of network fracture that accounts for loop defects is developed by drawing on recent advances in network elasticity. This loop-modified Lake–Thomas theory is tested against both molecular dynamics (MD) simulations and experimental fracture measurements on model gels, and good agreement between theory, which does not use an enhancement factor, and measurement is observed. Insight into the local and global contributions to energy dissipated during network failure and their relation to the bond dissociation energy is also provided. These findings enable a priori estimates of fracture energy in swollen gels where chain scission becomes an important failure mechanism.

Models that link materials structure to macroscopic behavior can account for multiple levels of molecular structure. For example, the statistical, affine deformation model connects the elastic modulus E to the molecular structure of a polymer chain,

E_aff = 3ν k_b T (φ_o^(1/3) R_o / (φ^(1/3) R))², [1]

where ν is density of chains, φ is polymer volume fraction, R is end-to-end distance, φ_o and R_o represent the parameters taken in the reference state that is assumed to be the reaction concentration in this work, and k_b T is the available thermal energy, where k_b is Boltzmann's constant and T is temperature (1–6). Refinements to this model that account for network-level structure, such as the presence of trapped entanglements or number of connections per junction, have been developed (7–11). Further refinements to the theory of network elasticity have been developed to account for dynamic processes such as chain relaxation and solvent transport (12–17). Together these refinements link network elasticity to chain-level molecular structure, network-level structure, and the dynamic processes that occur at both size scales.

While elasticity has been connected to multiple levels of molecular structure, models for network fracture have not developed to a similar extent. The fracture energy G_c typically relies upon the large strain deformation behavior of polymer networks, making it experimentally difficult to separate the elastic energy released upon fracture from that dissipated through dynamic processes (18–26). In fact, most fracture theories have been developed at the continuum scale and have focused on modeling dynamic dissipation processes (27). An exception to this is the theory of Lake and Thomas that connects the elastic energy released during chain scission to chain-level structure,

G_c,LT = (chains/area) × (energy dissipated/chain) = ν R_o N U, [2]

where NU is the total energy released when a chain ruptures, in which N represents the number of monomer segments in the chain and U the energy released per monomer (26).

While this model was first introduced in 1967, experimental attempts to verify Lake–Thomas theory as an explicit model, as summarized in SI Appendix, have been unsuccessful. Ahagon and Gent (28) and Gent and Tobias (29) attempted to do this on highly swollen networks at elevated temperature but found that, while the scalings from Eq. 2 work well, an enhancement factor was necessary to observe agreement between theory and experiment. This led many researchers to conclude that Lake–Thomas theory worked only as a scaling argument. In 2008, Sakai et al. (30) introduced a series of end-linked tetrafunctional, star-like poly(ethylene glycol) (PEG) gels. Scattering measurements indicated a lack of nanoscale heterogeneities that are characteristic of most polymer networks (30–32). Fracture measurements on these well-defined networks were performed and it was again observed that an enhancement factor was necessary to realize explicit agreement between experiment and theory (33). Arora et al. (34) recently attempted to address this discrepancy by accounting for loop defects; however, different assumptions were used when inputting U to calculate Lake–Thomas theory values that again required the use of an enhancement factor to achieve quantitative agreement.
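For orientation, Eqs. 1 and 2 can be evaluated directly; the chain and network parameters below are hypothetical order-of-magnitude inputs (and Eq. 1 follows the form reconstructed above), so the outputs are only representative scales, not the paper's results.

```python
kB = 1.380649e-23  # Boltzmann constant, J/K

def E_affine(nu, phi, phi0, R, R0, T=298.0):
    """Eq. 1: affine modulus, Pa; nu in chains/m^3, R and R0 in m."""
    return 3.0 * nu * kB * T * ((phi0 ** (1 / 3) * R0) / (phi ** (1 / 3) * R)) ** 2

def G_lake_thomas(nu, R0, N, U):
    """Eq. 2: G_c = (nu*R0 chains per area) * (N*U energy per chain), J/m^2."""
    return nu * R0 * N * U

nu, R0, N = 1e25, 5e-9, 100  # chain density, strand size, monomers per strand
U = 5.6e-19                  # J/monomer, of order a C-C bond dissociation energy
print(f"E  ~ {E_affine(nu, 0.1, 0.1, 5e-9, R0):.2e} Pa")  # ~1e5 Pa
print(f"Gc ~ {G_lake_thomas(nu, R0, N, U):.2f} J/m^2")    # ~2.8 J/m^2
```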
In this work we demonstrate that refining the Lake–Thomas theory to account for loop defects while using the full bond dissociation energy to represent U yields excellent agreement between the theory and both simulation and experimental data without the use of any adjustable parameters.

PEG gels synthesized via telechelic end-linking reactions create the opportunity to build upon previous theory to establish true multiscale, molecular to macroscopic relationships that define the fracture response of polymer networks. This paper combines pure shear notch tests, molecular dynamics (MD) simulations, and theory to quantitatively extend the concept of network fracture without the use of an enhancement factor. First, the control of molecular-level structure in end-linked gel systems is discussed. Then, the choice of molecular parameters used to estimate chain- and network-level properties is discussed. Experimental and MD simulation methods used when fracturing model end-linked networks are then presented. A theory of network fracture that accounts for loop defects is developed, in the context of other such models that have emerged recently, and tested against data from experiments and MD simulations. Finally, a discussion of the local and global energy dissipated during failure of the network is presented.

6.
7.
Fluids are known to trigger a broad range of slip events, from slow, creeping transients to dynamic earthquake ruptures. Yet, the detailed mechanics underlying these processes and the conditions leading to different rupture behaviors are not well understood. Here, we use a laboratory earthquake setup, capable of injecting pressurized fluids, to compare the rupture behavior for different rates of fluid injection, slow (megapascals per hour) versus fast (megapascals per second). We find that for the fast injection rates, dynamic ruptures are triggered at lower pressure levels and over spatial scales much smaller than the quasistatic theoretical estimates of nucleation sizes, suggesting that such fast injection rates constitute dynamic loading. In contrast, the relatively slow injection rates result in gradual nucleation processes, with the fluid spreading along the interface and causing stress changes consistent with gradually accelerating slow slip. The resulting dynamic ruptures propagating over wetted interfaces exhibit dynamic stress drops almost twice as large as those over the dry interfaces. These results suggest the need to take into account the rate of the pore-pressure increase when considering nucleation processes and motivate further investigation on how friction properties depend on the presence of fluids.

The close connection between fluids and faulting has been revealed by a large number of observations, both in tectonic settings and during human activities, such as wastewater disposal associated with oil and gas extraction, geothermal energy production, and CO2 sequestration (1–11). On and around tectonic faults, fluids also naturally exist and are added at depths due to rock-dehydration reactions (12–15). Fluid-induced slip behavior can range from earthquakes to slow, creeping motion. It has long been thought that creeping and seismogenic fault zones have little to no spatial overlap. Nonetheless, growing evidence suggests that the same fault areas can exhibit both slow and dynamic slip (16–19). The existence of large-scale slow slip in potentially seismogenic areas has been revealed by the presence of transient slow-slip events in subduction zones (16, 18) and proposed by studies investigating the physics of foreshocks (20–22).

Numerical and laboratory modeling has shown that such complex fault behavior can result from the interaction of fluid-related effects with the rate-and-state frictional properties (9, 14, 19, 23, 24); other proposed rheological explanations for complexities in fault stability include combinations of brittle and viscous rheology (25) and friction-to-flow transitions (26). The interaction of frictional sliding and fluids results in a number of coupled and competing mechanisms. The fault shear resistance τ_res is typically described by a friction model that linearly relates it to the effective normal stress σ̂_n via a friction coefficient f:

τ_res = f σ̂_n = f(σ_n − p), [1]

where σ_n is the normal stress acting across the fault and p is the pore pressure. Clearly, increasing pore pressure p would reduce the fault frictional resistance, promoting the insurgence of slip. However, such slip need not be fast enough to radiate seismic waves, as would be characteristic of an earthquake, but can be slow and aseismic. In fact, the critical spatial scale h* for the slipping zone to reach in order to initiate an unstable, dynamic event is inversely proportional to the effective normal stress (27, 28) and hence increases with increasing pore pressure, promoting stable slip. This stabilizing effect of increasing fluid pressure holds for both linear slip-weakening and rate-and-state friction; it occurs because lower effective normal stress results in lower fault weakening during slip for the same friction properties. For example, the general form for two-dimensional (2D) theoretical estimates of this so-called nucleation size, h*, on rate-and-state faults with steady-state, velocity-weakening friction is given by:

h* = μ* D_RS / [F(a,b)(σ_n − p)], [2]

where μ* = μ/(1 − ν) for modes I and II, and μ* = μ for mode III (29); D_RS is the characteristic slip distance; and F(a, b) is a function of the rate-and-state friction parameters a and b. The function F(a, b) depends on the specific assumptions made to obtain the estimate: F_RR(a,b) = 4(b − a)/π (ref. 27, equation 40) for a linearized stability analysis of steady sliding, or F_RA(a,b) = π(b − a)²/(2b), with a/b > 1/2, for quasistatic crack-like expansion of the nucleation zone (ref. 30, equation 42).

Hence, an increase in pore pressure induces a reduction in the effective normal stress, which both promotes slip due to lower frictional resistance and increases the critical length scale h*, potentially resulting in slow, stable fault slip instead of fast, dynamic rupture. Indeed, recent field and laboratory observations suggest that fluid injection triggers slow slip first (4, 9, 11, 31).
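The competition described here is easy to see numerically: in Eq. 2, raising p lowers the frictional resistance but enlarges h*. The sketch below uses illustrative PMMA-like elastic constants and generic rate-and-state parameters (assumptions, not the study's measured values).

```python
import math

def h_star(mu, nu, D_rs, a, b, sigma_n, p, estimate="RR"):
    """Eq. 2 nucleation size (mode II). F(a,b): Rice-Ruina 'RR' or Rubin-Ampuero 'RA'."""
    mu_eff = mu / (1.0 - nu)
    if estimate == "RR":
        F = 4.0 * (b - a) / math.pi
    else:  # 'RA' estimate, valid for a/b > 1/2
        F = math.pi * (b - a) ** 2 / (2.0 * b)
    return mu_eff * D_rs / (F * (sigma_n - p))

for p in (0.0, 2e6, 4e6):  # pore pressure, Pa
    h = h_star(mu=1.7e9, nu=0.35, D_rs=1e-6, a=0.011, b=0.016, sigma_n=5e6, p=p)
    print(f"p = {p / 1e6:.0f} MPa  h* = {100 * h:.1f} cm")  # h* grows as p rises
```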
Numerical modeling based on the effects just described, either by themselves or with an additional stabilizing effect of shear-layer dilatancy and the associated drop in fluid pressure, has been successful in capturing a number of properties of slow-slip events observed on natural faults and in field fluid-injection experiments (14, 24, 32–34). However, understanding the dependence of the fault response on the specifics of pore-pressure increase remains elusive. Several studies suggest that the nucleation size can depend on the loading rate (35–38), which would imply that the nucleation size should also depend on the rate of friction strength change and hence on the rate of change of the pore fluid pressure. The dependence of the nucleation size on evolving pore fluid pressure has also been theoretically investigated (39). However, the commonly used estimates of the nucleation size (Eq. 2) have been developed for faults under spatially and temporally uniform effective stress, which is clearly not the case for fluid-injection scenarios. In addition, the friction properties themselves may change in the presence of fluids (40–42). The interaction between shear and fluid effects can be further affected by fault-gouge dilation/compaction (40, 43–45) and thermal pressurization of pore fluids (42, 46–48).

Recent laboratory investigations have been quite instrumental in uncovering the fundamentals of the fluid-faulting interactions (31, 45, 49–57). Several studies have indicated that fluid-pressurization rate, rather than injection volume, controls slip, slip rate, and stress drop (31, 49, 57). Rapid fluid injection may produce pressure heterogeneities, influencing the onset of slip. The degree of heterogeneity depends on the balance between the hydraulic diffusion rate and the fluid-injection rate, with higher injection rates promoting the transition from drained to locally undrained conditions (31). Fluid pressurization can also interact with friction properties and produce dynamic slip along rate-strengthening faults (50, 51).

In this study, we investigate the relation between the rate of pressure increase on the fault and spontaneous rupture nucleation due to fluid injection by laboratory experiments in a setup that builds on and significantly develops the previous generations of the laboratory earthquake setup of Rosakis and coworkers (58, 59). The previous versions of the setup have been used to study key features of dynamic ruptures, including sub-Rayleigh to supershear transition (60); rupture directionality and limiting speeds due to bimaterial effects (61); pulse-like versus crack-like behavior (62); opening of thrust faults (63); and friction evolution (64). A recent innovation in the diagnostics, featuring ultrahigh-speed photography in conjunction with digital image correlation (DIC) (65), has enabled the quantification of the full-field behavior of dynamic ruptures (66–68), as well as the characterization of the local evolution of dynamic friction (64, 69). In these prior studies, earthquake ruptures were triggered by the local pressure release due to an electrical discharge. This nucleation procedure produced only dynamic ruptures, due to the nearly instantaneous normal stress reduction.

To study fault slip triggered by fluid injection, we have developed a laboratory setup featuring a hydraulic circuit capable of injecting pressurized fluid onto the fault plane of a specimen and a set of experimental diagnostics that enables us to detect both slow and fast fault slip and stress changes.
The range of fluid-pressure time histories produced by this setup results in both quasistatic and dynamic rupture nucleation; the diagnostics allow us to capture the nucleation processes, as well as the resulting dynamic rupture propagation. In particular, here, we explore two injection techniques: procedure 1, a gradual fluid-pressure ramp-up, and procedure 2, a sharp one. An array of strain gauges, placed on the specimen's surface along the fault, can capture the strain (translated into stress) time histories over a wide range of temporal scales, spanning from microseconds to tens of minutes. Once dynamic ruptures nucleate, an ultrahigh-speed camera records images of the propagating ruptures, which are turned into maps of full-field displacements, velocities, and stresses by a tailored DIC analysis. One advantage of using a specimen made of an analog material, such as the poly(methyl methacrylate) (PMMA) used in this study, is its transparency, which allows us to look at the interface through the bulk and observe fluid diffusion over the interface. Another important advantage of using PMMA is that its much lower shear modulus results in much smaller nucleation sizes h* than those for rocks, allowing the experiments to produce both slow and fast slip in samples of manageable sizes.

We start by describing the laboratory setup and the diagnostics monitoring the pressure evolution and the slip behavior. We then present and discuss the different slip responses measured as a result of slow versus fast fluid injection and interpret our measurements by using the rate-and-state friction framework and a pressure-diffusion model.

8.
Macromolecular phase separation is thought to be one of the processes that drives the formation of membraneless biomolecular condensates in cells. The dynamics of phase separation are thought to follow the tenets of classical nucleation theory, and, therefore, subsaturated solutions should be devoid of clusters with more than a few molecules. We tested this prediction using in vitro biophysical studies to characterize subsaturated solutions of phase-separating RNA-binding proteins with intrinsically disordered prion-like domains and RNA-binding domains. Surprisingly, and in direct contradiction to expectations from classical nucleation theory, we find that subsaturated solutions are characterized by the presence of heterogeneous distributions of clusters. The distributions of cluster sizes, which are dominated by small species, shift continuously toward larger sizes as protein concentrations increase and approach the saturation concentration. As a result, many of the clusters encompass tens to hundreds of molecules, while less than 1% of the solutions are mesoscale species that are several hundred nanometers in diameter. We find that cluster formation in subsaturated solutions and phase separation in supersaturated solutions are strongly coupled via sequence-encoded interactions. We also find that cluster formation and phase separation can be decoupled using solutes as well as specific sets of mutations. Our findings, which are concordant with predictions for associative polymers, implicate an interplay between networks of sequence-specific and solubility-determining interactions that, respectively, govern cluster formation in subsaturated solutions and the saturation concentrations above which phase separation occurs.

Phase separation of RNA-binding proteins with disordered prion-like domains (PLDs) and RNA-binding domains (RBDs) is implicated in the formation and dissolution of membraneless biomolecular condensates such as RNA–protein (RNP) granules (1–9). Macroscopic phase separation is a process whereby a macromolecule in a solvent separates into a dilute, macromolecule-deficient phase that coexists with a dense, macromolecule-rich phase (10, 11). In a binary mixture, the soluble phase, comprising dispersed macromolecules that are well mixed with the solvent, becomes saturated at a concentration designated as c_sat. Above c_sat, for total macromolecular concentrations c_tot that are between the binodal and spinodal, phase separation of full-length RNA-binding proteins and PLDs is thought to follow classical nucleation theory (12–15).

In classical nucleation theories, clusters representing incipient forms of the new dense phase form within dispersed phases of supersaturated solutions defined by c_tot > c_sat (16, 17). In the simplest formulation of classical nucleation theory (16–18), the free energy of forming a cluster of radius a is

ΔF = (4π/3)a³Δμρ_n + 4πa²γ.

Here, Δμ is the difference in the chemical potential between the one-phase and two-phase regimes (see discussion in SI Appendix), which is negative in supersaturated solutions and positive in subsaturated solutions; ρ_n is the number of molecules per unit volume, and γ is the interfacial tension between dense and dilute phases. At temperature T, in a seed-free solution, the degree of supersaturation s is defined as s ≡ −Δμ/RT = ln(c_tot/c_sat), where R is the ideal gas constant. Here, s is positive for c_tot > c_sat, and, as s increases, cluster formation becomes more favorable. Above a critical radius a*, the free energy of cluster formation can overcome the interfacial penalty, and the new dense phase grows in a thermodynamically downhill fashion. Ideas from classical nucleation theory have been applied to analyze and interpret the dynamics of phase separation in supersaturated solutions (12, 13, 15). Classical nucleation theories stand in contrast to two-step nucleation theories that predict the existence of prenucleation clusters in supersaturated solutions (19–22). These newer theories hint at the prospect of there being interesting features in subsaturated solutions, where c_tot < c_sat and s < 0.

The subsaturated regime, where s is negative, corresponds to the one-phase regime. Ignoring the interfacial tension, the free energy of realizing clusters with n molecules in subsaturated solutions is ΔF = −nΔμ. Therefore, the probability P(n) of forming a cluster of n molecules in a subsaturated solution is proportional to exp(sn). Accordingly, the relative probability P(n)/P(1) of forming clusters with n molecules will be exp(s(n − 1)). This quantity, which may be thought of as the concentration of clusters with n molecules, is negligibly small for clusters with more than a few molecules. This is true irrespective of the degree of subsaturation, s. Is this expectation from classical nucleation theories valid? We show here that subsaturated solutions feature a rich distribution of species not anticipated by classical nucleation theories. We report results from measurements of cluster size distributions in subsaturated solutions of phase-separating RNA-binding proteins from the FUS-EWSR1-TAF15 (FET) family. We find that these systems form clusters in subsaturated solutions, and that the cluster sizes follow heavy-tailed distributions.
The abundant species are always small clusters. However, as the total macromolecular concentration (c_tot) increases, the distributions of cluster sizes shift continuously toward larger values. We discuss these findings in the context of theories for associative polymers (9, 23–30).
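The classical-nucleation expectation that the paper tests can be stated in two lines: with s = ln(c_tot/c_sat) < 0, the relative concentration of n-molecule clusters is exp(s(n − 1)). The numbers below are illustrative.

```python
import math

def relative_cluster_probability(n, ctot, csat):
    """Classical expectation P(n)/P(1) = exp(s*(n-1)), with s = ln(ctot/csat)."""
    s = math.log(ctot / csat)
    return math.exp(s * (n - 1))

# At twofold subsaturation, large clusters should be vanishingly rare...
for n in (2, 10, 100):
    print(f"n = {n:3d}  P(n)/P(1) = {relative_cluster_probability(n, 0.5, 1.0):.2e}")
# ...which is precisely what the measured heavy-tailed distributions contradict.
```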

9.
10.
Reliable forecasts for the dispersion of oceanic contamination are important for coastal ecosystems, society, and the economy as evidenced by the Deepwater Horizon oil spill in the Gulf of Mexico in 2010 and the Fukushima nuclear plant incident in the Pacific Ocean in 2011. Accurate prediction of pollutant pathways and concentrations at the ocean surface requires understanding ocean dynamics over a broad range of spatial scales. Fundamental questions concerning the structure of the velocity field at the submesoscales (100 m to tens of kilometers, hours to days) remain unresolved due to a lack of synoptic measurements at these scales. Using high-frequency position data provided by the near-simultaneous release of hundreds of accurately tracked surface drifters, we study the structure of submesoscale surface velocity fluctuations in the Northern Gulf of Mexico. Observed two-point statistics confirm the accuracy of classic turbulence scaling laws at 200-m to 50-km scales and clearly indicate that dispersion at the submesoscales is local, driven predominantly by energetic submesoscale fluctuations. The results demonstrate the feasibility and utility of deploying large clusters of drifting instruments to provide synoptic observations of spatial variability of the ocean surface velocity field. Our findings allow quantification of the submesoscale-driven dispersion missing in current operational circulation models and satellite altimeter-derived velocity fields.

The Deepwater Horizon (DwH) incident was the largest accidental oil spill into marine waters in history, with some 4.4 million barrels released into the DeSoto Canyon of the northern Gulf of Mexico (GoM) from a subsurface pipe over ∼84 d in the spring and summer of 2010 (1). Primary scientific questions, with immediate practical implications, arising from such catastrophic pollutant injection events are the path, speed, and spreading rate of the pollutant patch. Accurate prediction requires knowledge of the ocean flow field at all relevant temporal and spatial scales. Whereas ocean general circulation models were widely used during and after the DwH incident (2–6), such models only capture the main mesoscale processes (spatial scale larger than 10 km) in the GoM. The main factors controlling surface dispersion in the DeSoto Canyon region remain unclear. The region lies between the mesoscale eddy-driven deep water GoM (7) and the wind-driven shelf (8) while also being subject to the buoyancy input of the Mississippi River plume during the spring and summer months (9). Images provided by the large amounts of surface oil produced in the DwH incident revealed a rich array of flow patterns (10) showing organization of surface oil not only by mesoscale straining into the loop current "Eddy Franklin," but also by submesoscale processes. Such processes operate at spatial scales and involve physics not currently captured in operational circulation models. Submesoscale motions, where they exist, can directly influence the local transport of biogeochemical tracers (11, 12) and provide pathways for energy transfer from the wind-forced mesoscales to the dissipative microscales (13–15). Dynamics at the submesoscales have been the subject of recent research (16–20). However, the investigation of their effect on ocean transport has been predominantly modeling based (13, 21–23) and synoptic observations, at adequate spatial and temporal resolutions, are rare (24, 25).
The mechanisms responsible for the establishment, maintenance, and energetics of such features in the Gulf of Mexico remain unclear. Instantaneous measurement of all representative spatiotemporal scales of the ocean state is notoriously difficult (26). As previously reviewed (27), traditional observing systems are not ideal for synoptic sampling of near-surface flows at the submesoscale. Owing to the large spacing between ground tracks (28) and along-track signal contamination from high-frequency motions (29), gridded altimeter-derived sea level anomalies only resolve the largest submesoscale motions. Long time-series ship-track current measurements attain similar, larger than 2-km, spatial resolutions, and require averaging the observations over evolving ocean states (30). Simultaneous, two-point acoustic Doppler current profiler measurements from pairs of ships (25) provide sufficient resolution to show the existence of energetic submesoscale fluctuations in the mixed layer, but do not explicitly quantify the scale-dependent transport induced by such motions at the surface. Lagrangian experiments, centered on tracking large numbers of water-following instruments, provide the most feasible means of obtaining spatially distributed, simultaneous measurements of the structure of the ocean's surface velocity field on 100-m to 10-km length scales.

Denoting a trajectory by x(a, t), where x(a, t_0) = a, the relative separation of a particle pair is given by

D(t, D_0) = x(a_1, t) − x(a_2, t) = D_0 + ∫_{t_0}^{t} Δv(t′, D_0) dt′,

where the Lagrangian velocity difference is defined by Δv(t, D_0) = v(a_1, t) − v(a_2, t). The statistical quantities of interest, both practically and theoretically, are the scale-dependent relative dispersion D²(t) = ⟨D ⋅ D⟩ (averaged over particle pairs) and the average longitudinal or separation velocity, Δv(r), at a given separation, r. The velocity scale is defined by the second-order structure function Δv(r) = √⟨δv²⟩, where δv(r) = (v(x + r) − v(x)) ⋅ r/∥r∥ (31, 32), and where the averaging is now conditioned on the pair separation r.

The applicability of classical dispersion theories (32–34), developed in the context of homogeneous, isotropic turbulence with localized spectral forcing, to ocean flows subject to the effects of rotation, stratification, and complex forcing at disparate length and time scales remains unresolved. Turbulence theories broadly predict two distinct dispersion regimes depending upon the shape of the spatial kinetic energy spectrum, E(k) ∼ k^(−β), of the velocity field (35). For sufficiently steep spectra (β ≥ 3) the dispersion is expected to grow exponentially, D ∼ e^(λt), with a scale-independent rate. At the submesoscales (∼100 m–10 km), this nonlocal growth rate will then be determined by the mesoscale motions currently resolved by predictive models. For shallower spectra (1 < β < 3), however, the dispersion is local, D ∼ t^(2/(3−β)), and the growth rate of a pollutant patch is dominated by advective processes at the scale of the patch. Accurate prediction of dispersion in this regime requires resolution of the advecting field at smaller scales than the mesoscale.
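The two regimes translate directly into growth laws for a pollutant patch; this sketch evaluates both with arbitrary prefactors (assumed values, for illustration only).

```python
import math

def dispersion_local(t, beta, c=1.0):
    """Local regime (1 < beta < 3): D ~ t**(2/(3-beta)); Richardson t**1.5 at beta = 5/3."""
    return c * t ** (2.0 / (3.0 - beta))

def dispersion_nonlocal(t, lam=1.0, D0=0.2):
    """Nonlocal regime (beta >= 3): D ~ D0 * exp(lam * t), scale-independent rate lam."""
    return D0 * math.exp(lam * t)

for t in (1.0, 3.0, 10.0):  # time in days; D in km
    print(f"t = {t:4.1f} d  local D ~ {dispersion_local(t, 5.0 / 3.0):6.1f} km,"
          f"  nonlocal D ~ {dispersion_nonlocal(t):8.1f} km")
```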
Koszalka et al. (38), using O(100) drifter pairs with D0 < 2 km launched over 18 mo in the Norwegian Sea, found an exponential fit for D²(t) for a limited time (t = 0.5–2 d), although the observed longitudinal velocity structure function is less clearly fit by a corresponding quadratic. They concluded that a nonlocal dispersion regime could not be identified. In contrast, Lumpkin and Elipot (39) found evidence of local dispersion at 1-km scales using 15-m drogued drifters launched in the wintertime North Atlantic. It is not clear how the accuracy of the Argos positioning system (150–1,000 m) used in these studies affects the submesoscale dispersion estimates. Schroeder et al. (40), specifically targeting a coastal front using a multiscale sampling pattern, obtained results consistent with local dispersion, but the statistical significance (maximum 64 pairs) remained too low to be definitive.  相似文献
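The two-point statistics defined above are straightforward to compute from drifter tracks. Below is a minimal sketch — hypothetical array shapes and bins, not the authors' processing code, which must also handle irregular sampling and position error — of the relative dispersion D²(t) and the second-order longitudinal structure function; in the local regime the latter should scale roughly as ⟨δv²⟩ ∼ r^(β−1) for 1 < β < 3.

```python
import numpy as np

# tracks: hypothetical array of shape (n_drifters, n_times, 2),
# (x, y) positions in meters at a uniform time step.
def relative_dispersion(tracks, pairs):
    """Mean-square pair separation D^2(t), averaged over the listed pairs."""
    i = [a for a, _ in pairs]
    j = [b for _, b in pairs]
    sep = tracks[i] - tracks[j]                    # (n_pairs, n_times, 2)
    return np.mean(np.sum(sep ** 2, axis=-1), axis=0)

def longitudinal_sf2(pos, vel, r_bins):
    """Second-order longitudinal structure function <dv_l^2>(r) at one time."""
    dr = pos[:, None, :] - pos[None, :, :]
    dv = vel[:, None, :] - vel[None, :, :]
    iu = np.triu_indices(len(pos), k=1)            # count each pair once
    r = np.linalg.norm(dr[iu], axis=-1)
    dvl = np.sum(dv[iu] * dr[iu], axis=-1) / np.maximum(r, 1e-12)
    which = np.digitize(r, r_bins)
    return np.array([np.mean(dvl[which == k] ** 2) if np.any(which == k)
                     else np.nan for k in range(1, len(r_bins))])
```

Binning separations logarithmically from a few hundred meters to tens of kilometers and checking the log–log slope against 2/3 (the value for a Kolmogorov β = 5/3 spectrum) mirrors the scaling test described above.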

11.
A continuum of water populations can exist in nanoscale layered materials, which impacts transport phenomena relevant for separation, adsorption, and charge storage processes. Quantification and direct interrogation of water structure and organization are important in order to design materials with molecular-level control for emerging energy and water applications. Through combining molecular simulations with ambient-pressure X-ray photoelectron spectroscopy, X-ray diffraction, and diffuse reflectance infrared Fourier transform spectroscopy, we directly probe hydration mechanisms at confined and nonconfined regions in nanolayered transition-metal carbide materials. Hydrophobic (K+) cations decrease water mobility within the confined interlayer and accelerate water removal at nonconfined surfaces. Hydrophilic cations (Li+) increase water mobility within the confined interlayer and decrease water-removal rates at nonconfined surfaces. Solutes, rather than the surface terminating groups, are shown to be more impactful on the kinetics of water adsorption and desorption. Calculations from grand canonical molecular dynamics demonstrate that hydrophilic cations (Li+) actively aid in water adsorption at MXene interfaces. In contrast, hydrophobic cations (K+) weakly interact with water, leading to higher degrees of water ordering (orientation) and faster removal at elevated temperatures.

Geologic clays are minerals with variable amounts of water trapped within the bulk structure (1) and are routinely used as hydraulic barriers where water and contaminant transport must be controlled (2, 3). These layered materials can exhibit large degrees of swelling when intercalated with a hydrated cation (4). Fundamentally, water adsorption at exposed interfaces and transport in confined channels is dictated by geometry, morphology, and chemistry (e.g., surface chemistry, local solutes, etc.) (5). Understanding water adsorption and swelling in natural clay materials has significant implications for understanding water interactions in nanoscale layered materials. At the nanoscale, the ability to control the interlayer swelling and water adsorption can lead to more precise control over mass and reactant transport, resulting in enhancement of properties necessary for next-generation energy storage (power and capacity) (6–8), membranes (selectivity, salt rejection, and water permeability), catalysis (9–13), and adsorption (14).

Two-dimensional (2D) and multilayered transition-metal carbides and nitrides (MXenes) are a recent addition to the few-atom-thick materials and have been widely studied in their applications to energy storage (6, 9, 15, 16), membranes (13), and adsorption (17). MXenes (Mn+1XnTx) are produced via selective etching of A elements from ceramic MAX (Mn+1AXn) phase materials (11, 18). Removal of the A element results in thin Mn+1Xn nanosheets with negative termination groups (Tx). MXene’s hydrophilic and negatively charged surface properties promote spontaneous intercalation of a wide array of ions and compounds. Cation intercalation properties in MXenes have been vigorously explored due to their demonstrated high volumetric capacitance, which may enable high-rate energy storage (6, 19). In addition, their unique and rich surface chemistry may enable selective ion adsorption, making them promising candidates for water purification and catalytic applications (20–22).

Water and ion transport within multilayered MXenes is governed by the presence of a continuum of water populations. The configuration of water in the confined (interlayer) and nonconfined (surface) states influences the material system’s physical properties (13, 23–27). However, our current understanding of water–surface interactions and water structure at the molecular scale is incomplete due to limited characterization approaches (28). Most modern observations are limited to macroscopic measurements (e.g., transport measurement, contact angle, etc.), which do not capture the impact of local heterogeneity due to surface roughness, surface chemistry, solutes, etc. (29). Herein, we address this gap by combining theory with an ensemble of direct and indirect interrogation techniques. Water structure and sorption properties at MXene interfaces are directly probed by using ambient-pressure X-ray photoelectron spectroscopy (APXPS), X-ray diffraction (XRD), and diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS). APXPS enables detection of local chemically specific signatures and quantitative analysis at near-ambient pressures (30). This technique provides the ability to spatially resolve the impact of surface chemistry and solutes on water sorption/desorption at water–solid interfaces. Model hydrophobic (e.g., K+) and hydrophilic (e.g., Li+) cations were intercalated into the layers via ion exchange to systematically probe the impacts of charged solutes on water orientation and sorption. 
Prior reports suggest that water within the confined interlayer transforms from bulk-like to crystalline when intercalated with bulky cations (31, 32). Furthermore, it has been demonstrated that water ordering is correlated with ion size (33, 34). Here, we expand upon this early work and examine how solute hydrophobicity and hydrophilicity impact water adsorption at solid interfaces. Water mobility within the interlayer is impacted by the hydration energy of the intercalated cation. The results shed light on the intertwined roles that surface counterions and terminating groups play in the dynamics of hydration and dehydration.  相似文献
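Of the probes listed above, XRD gives the most direct handle on interlayer swelling: the spacing follows from the basal-reflection position via Bragg's law. The snippet below is purely illustrative — the wavelength and peak angles are assumed values, not data from this study — but it shows the conversion used to track hydration-induced expansion.

```python
import numpy as np

wavelength = 1.5406  # Cu K-alpha wavelength in angstroms (common lab source)
# Hypothetical (002) peak positions (degrees 2-theta) before/after hydration
two_theta = {"dry": 7.1, "hydrated (Li+)": 6.1}

for state, tt in two_theta.items():
    d = wavelength / (2 * np.sin(np.radians(tt / 2)))  # Bragg's law, n = 1
    print(f"{state:>15s}: interlayer spacing d = {d:.2f} angstroms")
```

With these assumed peaks, a one-degree downshift in 2θ corresponds to roughly 2 Å of swelling, the scale of a single intercalated water layer.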

12.
Knowledge of the dynamical behavior of proteins, and in particular their conformational fluctuations, is essential to understanding the mechanisms underlying their reactions. Here, transient enhancement of the isothermal partial molar compressibility, which is directly related to the conformational fluctuation, during a chemical reaction of a blue light sensor protein from the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (TePixD, Tll0078) was investigated in a time-resolved manner. The UV-Vis absorption spectrum of TePixD did not change with the application of high pressure. In contrast, the transient grating signal intensities representing the volume change depended significantly on the pressure. This result implies that the compressibility changes during the reaction. From the pressure dependence of the amplitude, the compressibility changes of the two short-lived intermediate (I1 and I2) states were determined to be +(5.6 ± 0.6) × 10⁻² cm³·mol⁻¹·MPa⁻¹ for I1 and +(6.6 ± 0.7) × 10⁻² cm³·mol⁻¹·MPa⁻¹ for I2. This result showed that the structural fluctuation of the intermediates was enhanced during the reaction. To clarify the relationship between the fluctuation and the reaction, the compressibility of multiply excited TePixD was investigated. The isothermal compressibility of the I1 and I2 intermediates of TePixD showed a monotonic decrease with increasing excitation laser power, and this tendency correlated with the reactivity of the protein. This result indicates that the TePixD decamer cannot react when its structural fluctuation is small. We concluded that the enhanced compressibility is an important factor for triggering the reaction of TePixD. To our knowledge, this is the first report showing enhanced fluctuations of intermediate species during a protein reaction, supporting the importance of fluctuations.

Proteins often transfer information through changes in domain–domain (or intermolecular) interactions. Photosensor proteins are an important example. They have light-sensing domains and function by using the light-driven changes in domain–domain interactions (1). The sensor of blue light using FAD (BLUF) domain is a light-sensing module found widely among the bacterial kingdom (2). The BLUF domain initiates its photoreaction by the light excitation of the flavin moiety inside the protein, which changes the domain–domain interaction, causing a quaternary structural change and finally transmitting biological signals (3, 4). It has been an important research topic to elucidate how the initial photochemistry occurring in the vicinity of the chromophore leads to the subsequent large conformation change in other domains, which are generally distant from the chromophore.

It may be reasonable to consider that the conformation change in the BLUF domain is the driving force in its subsequent reaction; that is, the change in domain–domain interaction. However, sometimes, clear conformational changes have not been observed for the BLUF domain; its conformation is very similar before and after photo-excitation (5–13). The circular dichroism (CD) spectra of the BLUF proteins AppA and PixD from the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (TePixD) did not change on illumination (5, 13). Similarly, solution NMR studies of AppA and BlrB showed only small chemical shifts on excitation (9, 10). The solution NMR structure of BlrP1 showed a clear change, but this was limited to its C-terminal extension region and not the BLUF core (11). 
Furthermore, the diffusion coefficient (D) of the BLUF domain of YcgF was not changed by photo-excitation (12), although D is sensitive to global conformational changes. These results imply that a minor structural change occurs in the BLUF domain. In such cases, how does the BLUF domain control its interdomain interaction? Recently, a molecular dynamics (MD) simulation on another light-sensing domain, the light-oxygen-voltage (LOV) sensing domain, suggested that fluctuation of the LOV core structure could be a key to understanding the mechanism of information transfer (14–16).

Because proteins work at room temperature, they are exposed to thermal fluctuations. The importance of such structural fluctuations for biomolecular reactions has also been pointed out, for example, for enzymatic activity (17–20). Such conformational fluctuations have been detected experimentally using single-molecule detection (21) or NMR techniques such as hydrogen–deuterium (H–D) exchange, the relaxation dispersion method, and high-pressure NMR (22–24). However, these techniques could not detect the fluctuation of short-lived transient species. Indeed, single-molecule spectroscopy can trace the fluctuation in real time, but it is still rather difficult to detect rapid fluctuations of a short-lived intermediate during a reaction. Therefore, information about the fluctuation of intermediates is thus far limited.

A thermodynamic measurement is another way to characterize the fluctuation of proteins. In particular, the partial molar isothermal compressibility [K̄T = −(∂V̄/∂P)T] is essential, because this property is directly linked to the mean-square fluctuation of the protein partial molar volume by ⟨(V̄ − ⟨V̄⟩)²⟩ ≡ ⟨δV̄²⟩ = kBT·K̄T (25). (Here, ⟨X⟩ denotes the average value of the quantity X.) Therefore, isothermal compressibility is thought to reflect the structural fluctuation of molecules (26). However, experimental measurement of this parameter for proteins in dilute solution is quite difficult. Indeed, this quantity has been determined indirectly from the theoretical equation using the adiabatic compressibility of a protein solution, which was determined from the sound velocity in the solution (26–31). Although the relation between volume fluctuations and isothermal compressibility is rigorously correct only with respect to the intrinsic part of the volume compressibility, and not the partial molar volume compressibility (32), we considered that this partial molar volume compressibility is still useful for characterizing the fluctuation of the protein structure including its interacting water molecules. In fact, the relationship between β̄T and the volume fluctuation has often been used to discuss the fluctuation of proteins (17, 26–28), and a strong correlation of the β̄T of reactants with function has been reported for some enzymes (17, 33, 34). These studies show the functional importance of the structural fluctuation represented by β̄T. However, thermodynamic techniques lack time resolution, and it has been impossible to measure the fluctuations of short-lived intermediate species.

Recently, we developed a time-resolved method for assessing thermodynamic properties using the pulsed-laser-induced transient grating (TG) method. Using this method, we have so far succeeded in measuring the enthalpy change (ΔH) (35–38), partial molar volume change (ΔV̄) (12, 35, 37), thermal expansion change (Δᾱth) (12, 37), and heat capacity change (ΔCp) (36–38) for short-lived species. 
Therefore, in principle, the partial molar isothermal compressibility change (ΔK̄T) of a short-lived intermediate becomes observable if we conduct the TG experiment under high-pressure conditions and detect ΔV̄ while varying the external pressure.

There are several difficulties in applying the traditional high-pressure cell to the TG method to measure thermodynamic parameters quantitatively. The most serious problem is ensuring that the intensity of TG signals measured under the high-pressure condition remains quantitative. On this point, our group has developed a new high-pressure cell specially designed for TG spectroscopy (39) and overcome this problem. In this paper, by applying this high-pressure TG system to the BLUF protein TePixD, we report the first measurement, to our knowledge, of the ΔK̄T of short-lived intermediates, to investigate the mechanism underlying signal transmission by BLUF proteins from the viewpoint of the transient fluctuation.

TePixD is a homolog of the BLUF protein PixD, which regulates the phototaxis of cyanobacteria (40), and exists in the thermophilic cyanobacterium Thermosynechococcus elongatus BP-1 (Tll0078). TePixD is a relatively small (17 kDa) protein that consists only of the BLUF domain with two extended helices in the C-terminal region. In crystals and solutions, it forms a decamer that consists of two pentameric rings (41). The photochemistry of TePixD is typical among BLUF proteins (42–45); on blue light illumination, the absorption spectrum shifts toward red by about 10 nm within a nanosecond. The absorption spectrum does not change further, and the dark state is recovered with a time constant of ∼5 s at room temperature (40, 43). The spectral red shift was explained by the rearrangement of the hydrogen bond network around the chromophore (6, 46–48). The TG method has revealed the dynamic photoreaction mechanism, which cannot be detected by conventional spectroscopic methods. The TG signal of TePixD (Fig. S1) showed that there are two spectrally silent reaction phases: a partial molar volume expansion with a time constant of ∼40 μs and a diffusion coefficient (D) change with a time constant of ∼4 ms. Furthermore, it was reported that the pentamer and decamer states of TePixD are in equilibrium and that the final photoproduct of the decamer is pentamers generated by its dissociation (13, 49). On the basis of these studies, the reaction scheme has been identified as shown in Fig. 1. Here, I1 is the intermediate of the spectrally red-shifted species (generated within a nanosecond), and I2 is the one created in the subsequent volume expansion process of +4 cm³·mol⁻¹ (∼40 μs). Furthermore, an experiment on the excitation laser power dependence of the TG signal revealed that the TePixD decamer undergoes the dissociation reaction only when one monomer in the decamer is excited (50). In this study, we investigated the transient compressibility of the intermediates I1 and I2 of the photoreaction of TePixD and found a direct link between their fluctuation and reactivity.

Fig. 1. Schematic illustration of the photoreaction of TePixD. Yellow circles represent the TePixD monomer in the ground state, which constructs the decamer and pentamer states. In the dark state, these two forms are in equilibrium. The excited, spectrally red-shifted state of the TePixD monomer is indicated by a red circle. The square represents the I2 state of the monomer, which is created by the volume expansion process.  相似文献
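The fluctuation relation ⟨δV̄²⟩ = kBT·K̄T makes the reported numbers easy to translate into an intuitive scale. The sketch below — assuming T = 298 K and using the ΔK̄T of I1 quoted in the abstract; a back-of-envelope illustration, not the authors' analysis — converts the compressibility change into the additional RMS volume fluctuation per protein molecule.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
NA = 6.02214076e23  # Avogadro constant, 1/mol
T = 298.0           # assumed room temperature, K

# Reported change for I1: +5.6e-2 cm^3 mol^-1 MPa^-1 -> SI, per molecule
dK_molar = 5.6e-2 * 1e-6 / 1e6      # m^3 mol^-1 Pa^-1
dK = dK_molar / NA                  # m^3 Pa^-1 per molecule

# <dV^2> = kB * T * K_T, applied here to the *change* in K_T
dV_rms = np.sqrt(kB * T * dK)       # m^3
print(f"additional RMS volume fluctuation ~ {dV_rms * 1e30:.0f} cubic angstroms")
```

The result, on the order of 60 Å³ per molecule, is small next to the volume of a 17-kDa protein but comparable to a few water molecules, which is one way to picture the enhanced breathing of the I1 and I2 states.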

13.
We study the instantaneous normal mode (INM) spectrum of a simulated soft-sphere liquid at different equilibrium temperatures T. We find that the spectrum of eigenvalues ρ(λ) has a sharp maximum near (but not at) λ=0 and decreases monotonically with |λ| on both the stable and unstable sides of the spectrum. The spectral shape strongly depends on temperature. It is rather asymmetric at low temperatures (close to the dynamical critical temperature) and becomes symmetric at high temperatures. To explain these findings we present a mean-field theory for ρ(λ), which is based on a heterogeneous elasticity model, in which the local shear moduli exhibit spatial fluctuations, including negative values. We find good agreement between the simulation data and the model calculations, done with the help of the self-consistent Born approximation (SCBA), when we take the variance of the fluctuations to be proportional to the temperature T. More importantly, we find an empirical correlation of the positions of the maxima of ρ(λ) with the low-frequency exponent of the density of the vibrational modes of the glasses obtained by quenching to T=0 from the temperature T. We discuss the present findings in connection to the liquid to glass transformation and its precursor phenomena.

The investigation of the potential energy surface (PES) V(r1(t), …, rN(t)) of a liquid (made up of N particles with positions r1(t), …, rN(t) at a time instant t) and the corresponding instantaneous normal modes (INMs) of the (Hessian) matrix of curvatures has been a focus of liquid and glass science since the appearance of Goldstein’s seminal article (1) on the relation between the PES and the liquid dynamics in the viscous regime above the glass transition (2–27).

The PES has been shown to form a rather ragged landscape in configuration space (8, 28, 29) characterized by its stationary points. In a glass these points are minima and are called “inherent structures.” The PES is believed to contain important information on the liquid–glass transformation mechanism. For the latter a complete understanding is still missing (28, 30, 31). The existing molecular theory of the liquid–glass transformation is mode-coupling theory (MCT) (32, 33) and its mean-field Potts spin version (28, 34). MCT predicts a sharp transition at a temperature TMCT > Tg, where Tg is the temperature of structural arrest (glass transition temperature). MCT completely misses the heterogeneous activated relaxation processes (dynamical heterogeneities), which are evidently present around and below TMCT and which are related to the unstable (negative-λ) part of the INM spectrum (28, 30).

Near and above TMCT, apparently, there occurs a fundamental change in the PES. Numerical studies of model liquids have shown that minima present below TMCT change into saddles, which then explains the absence of activated processes above TMCT (2–24). Very recently, it was shown that TMCT is related to a localization–delocalization transition of the unstable INM modes (25, 26).

The INM spectrum is obtained in molecular dynamics simulations by diagonalizing the Hessian matrix of the interaction potential, taken at a certain time instant t: Hijαβ(t) = ∂²V/∂xi(α)∂xj(β), evaluated at {r1(t), …, rN(t)}, [1] with ri = (xi(1), xi(2), xi(3)). For large positive values of the eigenvalues λj (j = 1, …, 3N, with N being the number of particles in the system), they are related to the squares of the vibrational frequencies, λj = ωj², and one can consider the Hessian as the counterpart of the dynamical matrix of a solid. In this high-frequency regime one can identify the spectrum with the density of vibrational states (DOS) of the liquid via g(ω) = 2ω·ρ(λ(ω)) = (1/3N) Σj δ(ω − ωj). [2] For small and negative values of λ this identification is not possible. For the unstable part of the spectrum (λ < 0) it has become common practice to introduce the imaginary number √λ = iω̃ and define the corresponding DOS as g(ω̃) ≡ 2ω̃·ρ(λ(ω̃)). [3] This function is plotted on the negative ω axis and the stable g(ω), according to [2], on the positive axis. However, the (as we shall see, very interesting) details of the spectrum ρ(λ) near λ = 0 become almost completely hidden by multiplying the spectrum with |ω|. In fact, it was demonstrated by Sastry et al. (6) and Taraskin and Elliott (7) already 2 decades ago that the INM spectrum of liquids, if plotted as ρ(λ) and not as g(ω) according to [2] and [3], exhibits a characteristic cusp-like maximum at λ = 0. The shape of the spectrum changes strongly with temperature. 
This is what we find as well in our simulation, and it is what we want to explore further here.

In the present contribution we demonstrate that the strong change of the spectrum with temperature can be rather well explained in terms of a model in which the instantaneous harmonic spectrum of the liquid is interpreted to be that of an elastic medium, in which the local shear moduli exhibit strong spatial fluctuations, including a large number of negative values. Because these fluctuations are just a snapshot of thermal fluctuations, we assume that they obey Gaussian statistics, with a variance proportional to the temperature.

Evidence for a characteristic change in the liquid configurations in the temperature range above Tg has been obtained in recent simulation studies of the low-frequency vibrational spectrum of glasses, which have been rapidly quenched from a certain parental temperature T*. If T* is decreased from high temperatures toward TMCT, the low-frequency exponent of the vibrational DOS of the daughter glass (quenched from T* to T = 0) changes from Debye-like g(ω) ∝ ω² to g(ω) ∝ ω^s with s > 2. In our numerical investigation of the INM spectra we show a correlation of some details of the low-eigenvalue features of these spectra with the low-frequency properties of the daughter glasses obtained by quenching from the parental temperatures.

The stochastic Helmholtz equations (Eq. 7) of an elastic model with spatially fluctuating shear moduli can be readily solved for the averaged Green’s functions by field-theoretical techniques (35–37). Via a saddle-point approximation with respect to the resulting effective field theory one arrives at a mean-field theory (self-consistent Born approximation [SCBA]) for the self-energy of the averaged Green’s functions. The SCBA predicts a stable spectrum below a threshold value of the variance. Restricted to this stable regime, this theory, called heterogeneous elasticity theory (HET), was rather successful in explaining several low-frequency anomalies in the vibrational spectrum of glasses, including the so-called boson peak, which is an enhancement at finite frequencies over the Debye behavior of the DOS g(ω) ∝ ω² (37–41). We now explore the unstable regime of this theory and compare it to the INM spectrum of our simulated soft-sphere liquid.

We start Results by presenting a comparison of the simulated spectra of the soft-sphere liquid with those obtained by the unstable version of the HET-SCBA theory. We then concentrate on some specific features of the INM spectra, namely, the low-eigenvalue slopes and the shift of the spectral maximum from λ = 0. Both features are accounted for by HET-SCBA. In particular, we find an interesting law for the difference between the slopes of the unstable and the stable parts of the spectrum, which behaves as T^(2/3) and which, again, is accounted for by HET-SCBA.

In the end we compare the shift of the spectral maximum with the low-frequency exponent of the DOS of the corresponding daughter glasses and find an empirical correlation. We discuss these results in connection with the saddle-to-minimum transformation near TMCT.  相似文献
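For readers unfamiliar with the INM procedure behind Eqs. 1–3, the pipeline is short: freeze a configuration, build the Hessian, diagonalize, and histogram the eigenvalues, keeping the negative ones. The toy sketch below — random rather than equilibrated positions, a generic r⁻¹² soft-sphere pair potential, unit masses, free boundaries, and numerical second derivatives — illustrates the mechanics only; it is not the simulation protocol of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, box = 32, 6.0
pos = rng.uniform(0, box, size=(N, 2))  # stand-in for an equilibrated MD frame

def potential(x):
    """Soft-sphere pair energy, sum of r^-12 over pairs (2D, free boundaries)."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(N, k=1)
    return np.sum(r[iu] ** -12.0)

def hessian(x, h=1e-5):
    """Numerical Hessian of the potential via central differences."""
    f = x.ravel()
    n = f.size
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(a, n):
            def V(da, db):
                g = f.copy()
                g[a] += da
                g[b] += db
                return potential(g.reshape(N, 2))
            H[a, b] = H[b, a] = (V(h, h) - V(h, -h) - V(-h, h) + V(-h, -h)) / (4 * h * h)
    return H

lam = np.linalg.eigvalsh(hessian(pos))                 # INM eigenvalues (unit mass)
rho, edges = np.histogram(lam, bins=50, density=True)  # this histogram is rho(lambda)
print("fraction of unstable modes:", np.mean(lam < 0))
```

In a production calculation the Hessian is assembled analytically from the pair potential and the configurations are drawn from equilibrated runs at each temperature; plotting ρ(λ) directly, rather than g(ω), is what exposes the cusp-like maximum near λ = 0 discussed above.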

14.
It is a widely held belief that people’s choices are less sensitive to changes in value as value increases. For example, the subjective difference between $11 and $12 is believed to be smaller than between $1 and $2. This idea is consistent with applications of the Weber-Fechner Law and divisive normalization to value-based choice and with psychological interpretations of diminishing marginal utility. According to random utility theory in economics, smaller subjective differences predict less accurate choices. Meanwhile, in the context of sequential sampling models in psychology, smaller subjective differences also predict longer response times. Based on these models, we would predict decisions between high-value options to be slower and less accurate. In contrast, some have argued on normative grounds that choices between high-value options should be made with less caution, leading to faster and less accurate choices. Here, we model the dynamics of the choice process across three different choice domains, accounting for both discriminability and response caution. Contrary to predictions, we mostly observe faster and more accurate decisions (i.e., higher drift rates) between high-value options. We also observe that when participants are alerted about incoming high-value decisions, they exert more caution and not less. We rule out several explanations for these results, using tasks with both subjective and objective values. These results cast doubt on the notion that increasing value reduces discriminability.

Are decision-makers sensitive to the average value of their options? For example, when shopping for a car, does the choice process differ at a bargain lot compared to a luxury dealership? Is it easier to choose between two cars valued at $5,000 or $50,000?

To answer this question, we must first define what we mean by “easier.” There are two basic features of easy decisions: they are consistent and fast. For instance, it is well established that choices are inconsistent and slow when the choice options are similar in value to each other, while they are consistent and fast when there is a large difference in the options’ values (1–5). The effect of value difference on the stochasticity of choice is predicted by many popular models, dating back at least to Luce (6), and the effect of value difference on response time (RT) is predicted by sequential sampling models (7–12). In fact, the effect of value difference on both choice frequencies and RT has been documented in many laboratory experiments (10, 13).

In comparison, there has been much less research into the effects of overall value (OV), holding value difference constant. Among conventional stochastic choice models, a common assumption is that OV should be irrelevant. One popular economic model is the additive random utility model (2), which implies that the probability of choosing an option i over another alternative j should be an increasing function of μi − μj, where for any option i the utility assigned to it is μi (before the addition of the random error term). Therefore, a constant utility difference should imply the same choice frequencies regardless of whether μi and μj are two small quantities or two large quantities. The logit (softmax) choice function, commonly used to fit preference models to experimental data, similarly posits choice frequencies of the form P[i ≻ j] = e^(λμi)/(e^(λμi) + e^(λμj)) = (1 + e^(−λ(μi − μj)))^(−1) for some “inverse temperature” parameter λ > 0. This model again implies that only utility differences matter. Finally, choice frequencies and RT are often jointly modeled using sequential sampling models. The most popular of these models, the drift diffusion model (DDM), commonly assumes that the drift rate of the decision variable is proportional to the difference in value between the two options (9, 10). Under this assumption, the DDM predicts that both choice frequencies and mean RT should depend only on the value difference and not on OV.

The aforementioned models imply that OV is irrelevant only under the assumption that value representations (i.e., utilities) are linear, monotonic functions of the values measured by the experimenter. However, there are many theories of value representation that instead posit that utilities are nonlinear functions of the measured values, i.e., μi = μ(Vi). In this case, choice frequencies and RT would depend on more than just the value difference ΔV = Vi − Vj measured by the experimenter.

What form should the function μ(V) take? A natural proposal would be to assume that μ(V) is increasing but strictly concave, so that the marginal utility μ′(V) decreases as V increases. The assumption of diminishing marginal utility is commonplace in economic modeling, dating back to Bernoulli (14). It is typically invoked to explain the imperfect substitutability between different goods in a bundle (15), imperfect substitutability of consumption over time (16), or risk aversion (17)—contexts that might seem orthogonal to stochastic choice or issues of discriminability. 
Nonetheless, one might conjecture that the same mechanisms that generate diminishing marginal utility in these other contexts should also determine the relationship between measured values and utilities in a random utility model of stochastic choice.

Similarly, Prospect Theory is predicated on the assumption that choices are made based on subjective values generated by nonlinear transformations of objective values (17). Notably, this value function is assumed to reflect diminishing marginal sensitivity to increasing values. Kahneman and Tversky use this value function to explain modal choices but do not propose any model of the stochasticity of observed choices or of RT. They motivate their incorporation of diminishing marginal sensitivity based on an analogy to the psychophysics of perceptual judgments, in which objective sensory magnitudes are often mapped onto an internal scale (18) with a nonlinear function that is typically expected to be concave (as with the logarithmic mapping postulated by the Weber-Fechner Law). The key evidence for such nonlinearity is the way in which the discriminability between two stimuli declines with increases in the absolute magnitudes of the two stimuli (holding the difference constant). Kahneman and Tversky also expected this to be true of comparisons involving economic values, and others have formalized this assumption within stochastic versions of Prospect Theory fit to experimental data (19).

Another way to motivate this type of nonlinear function is with the theory of divisive normalization in neural coding. An influential literature in neuroscience has determined that neural firing rates that represent sensory magnitudes are normalized in such a way that a given difference in objective magnitudes results in a smaller difference in the respective firing rates when the two objective magnitudes increase (20–23). Recent work in neuroeconomics has applied divisive normalization to stochastic, value-based choice under the assumption that there is a one-to-one relationship between the neural representation of value in firing rates and the choice behavior it generates (24–30). A theory of stochastic choice predicated on divisive normalization thus predicts that option discriminability will decrease as OV increases (see SI Appendix for details).

Despite the intuitive appeal of diminishing marginal sensitivity and the evidence for it in other sensory domains, there is little direct evidence that OV decreases discriminability once you control for value difference. The behavioral evidence on accuracy rates is controversial (31). Furthermore, the notion that utility differences decrease with OV is typically inferred from the presence of risk-averse behavior, which could arise for other reasons (32–35).

One possible reason for the mixed behavioral evidence is that increasing OV may also increase perceived importance, motivating decision-makers to approach high-value decisions more cautiously (36–40). The well-known speed–accuracy tradeoff (5, 9) implies that more caution could counteract losses in discriminability. On the other hand, there is abundant evidence that high-value decisions tend to be fast (10, 41–45). Even nonhuman primates will choose between juices (including identical ones) faster as the amount of juice increases (46). 
Based on these results, it appears unlikely that high-value decisions are made more cautiously, but we cannot be sure because both discriminability and response caution affect RT (47).

To properly determine how OV influences discriminability while accounting for response caution, we require analyses that consider both accuracy and RT. Using the DDM, we can account for response caution while simultaneously estimating the effect of OV on discriminability (48).

In this paper, we applied the DDM to behavior in three studies, each with the same structure but different types of decisions. Each experiment involved a series of binary choices, separated into blocks with three categories of OV (low, middle, and high). To study OV effects in naturalistic settings, studies 1 and 2 used snack foods and abstract art, respectively. Subjects first rated how much they liked various items, then later chose between them. These tasks are commonly used in the literature, but also come with a drawback: they rely on subjective ratings. Subjective ratings noisily represent subjects’ true values (49), and ratings on different parts of the scale may be more or less noisy (50). To rule out these concerns, study 3 used a paradigm with learned values that were objective and identically distributed in each OV condition.

In each study, we first tested core predictions about discriminability varying with OV in a baseline condition. Specifically, we used the DDM to estimate discriminability (via drift rate) as a function of OV while accounting for response-caution differences (via boundary separation) between OV categories. We tested the hypothesis that discriminability would be reduced in higher OV contexts against the null hypothesis that OV would have no effect on discriminability.

To investigate the impacts of OV on response caution, we included a condition with cues that indicated the value category for the upcoming block. These cues did not provide any additional information. We included the value cues because in the DDM framework, decision-makers adjust their decision boundaries at the block level. Thus, we reasoned that the value cues would allow subjects to set (and reveal to us) their desired level of response caution for each value category. If decision-makers view higher-value decisions as more (less) important, value cues should increase (decrease) boundaries in high-value blocks.

To preview the results, across all three studies (for which studies 2 and 3 were preregistered), we find heightened, not reduced, discriminability as OV increases; we observe both faster and more accurate choices at high OV and a tendency toward slower and less accurate choices at low OV. However, we find that value cues increase response caution for high-value compared to middle-value trials, indicating that decision-makers are motivated to be slower and more accurate for high-value decisions. We find these same effects in all three studies, indicating that they are not due to familiarity/accessibility (51), different uses of the rating scale, or variability within value categories.  相似文献
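The competing predictions are easy to make concrete in simulation. The sketch below — all parameter values assumed; a bare-bones DDM, not the hierarchical model fit in these studies — uses a drift rate proportional to the utility difference. With linear utility, low- and high-OV pairs with the same value difference behave identically; with a concave utility μ(V) = √V, high-OV choices become slower and less accurate, which is the pattern the data above contradict.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(mu_hi, mu_lo, a=1.0, k=2.0, sigma=1.0, dt=0.001, n=20000):
    """Accuracy and mean RT for n trials of a symmetric two-bound DDM."""
    drift = k * (mu_hi - mu_lo)       # drift ~ utility difference
    x = np.zeros(n)                   # decision variable, starts midway
    t = np.zeros(n)
    alive = np.ones(n, bool)
    correct = np.zeros(n, bool)
    while alive.any():
        x[alive] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        t[alive] += dt
        hi, lo = x >= a / 2, x <= -a / 2
        correct |= alive & hi         # upper bound = higher-utility option
        alive &= ~(hi | lo)
    return correct.mean(), t.mean()

for v_lo, v_hi in [(1.0, 2.0), (11.0, 12.0)]:        # same value difference
    linear = simulate_ddm(v_hi, v_lo)
    concave = simulate_ddm(np.sqrt(v_hi), np.sqrt(v_lo))
    print(f"values {v_lo}/{v_hi}: linear {linear}, concave {concave}")
```

Letting the boundary separation a vary by block, as the value cues permit, is what allows the DDM to separate genuine discriminability changes from strategic caution.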

15.
16.
We present transport measurements of bilayer graphene with a 1.38° interlayer twist. As with other devices with twist angles substantially larger than the magic angle of 1.1°, we do not observe correlated insulating states or band reorganization. However, we do observe several highly unusual behaviors in magnetotransport. For a large range of densities around half filling of the moiré bands, magnetoresistance is large and quadratic. Over these same densities, the magnetoresistance minima corresponding to gaps between Landau levels split and bend as a function of density and field. We reproduce the same splitting and bending behavior in a simple tight-binding model of Hofstadter’s butterfly on a triangular lattice with anisotropic hopping terms. These features appear to be a generic class of experimental manifestations of Hofstadter’s butterfly and may provide insight into the emergent states of twisted bilayer graphene.

The mesmerizing Hofstadter butterfly spectrum arises when electrons in a two-dimensional periodic potential are immersed in an out-of-plane magnetic field. When the magnetic flux Φ through a unit cell is a rational multiple p/q of the magnetic flux quantum Φ0 = h/e, each Bloch band splits into q subbands (1). The carrier densities corresponding to gaps between these subbands follow straight lines when plotted as a function of normalized density n/ns and magnetic field (2). Here, ns is the density of carriers required to fill the (possibly degenerate) Bloch band. These lines can be described by the Diophantine equation (n/ns) = t(Φ/Φ0) + s for integers s and t. In experiments, they appear as minima or zeros in longitudinal resistivity coinciding with Hall conductivity quantized at σxy = te²/h (3, 4). Hofstadter originally studied magnetosubbands emerging from a single Bloch band on a square lattice. In the following decades, other authors considered different lattices (5–7), the effect of anisotropy (6, 8–10), next-nearest-neighbor hopping (11–15), interactions (16, 17), density wave states (9), and graphene moirés (18, 19).

It took considerable ingenuity to realize clean systems with unit cells large enough to allow conventional superconducting magnets to reach Φ/Φ0 ∼ 1. The first successful observation of the butterfly in electrical transport measurements was in GaAs/AlGaAs heterostructures with lithographically defined periodic potentials (20–22). These experiments demonstrated the expected quantized Hall conductance in a few of the largest magnetosubband gaps. In 2013, three groups mapped out the full butterfly spectrum in both density and field in heterostructures based on monolayer (23, 24) and bilayer (25) graphene. In all three cases, the authors made use of the 2% lattice mismatch between their graphene and its encapsulating hexagonal boron nitride (hBN) dielectric. With these layers rotationally aligned, the resulting moiré pattern was large enough in area that gated structures studied in available high-field magnets could simultaneously approach normalized carrier densities and magnetic flux ratios of 1. Later work on hBN-aligned bilayer graphene showed that, likely because of electron–electron interactions, the gaps could also follow lines described by fractional s and t (26).

In twisted bilayer graphene (TBG), a slight interlayer rotation creates a similar-scale moiré pattern. Unlike with graphene–hBN moirés, in TBG there is a gap between the lowest and neighboring moiré subbands (27). As the twist angle approaches the magic angle of 1.1°, the isolated moiré bands become flat (28, 29), and strong correlations lead to fascinating insulating (30–37), superconducting (31–33, 35–37), and magnetic (34, 35, 38) states. The strong correlations tend to cause moiré subbands within a fourfold degenerate manifold to move relative to each other as one tunes the density, leading to Landau levels that project only toward higher magnitude of density from charge neutrality and integer filling factors (37, 39). This correlated behavior obscures the single-particle Hofstadter physics that would otherwise be present.

In this work, we present measurements from a TBG device twisted to 1.38°. When we apply a perpendicular magnetic field, a complicated and beautiful fan diagram emerges. In a broad range of densities on either side of charge neutrality, the device displays large, quadratic magnetoresistance. 
Within the magnetoresistance regions, each Landau level associated with ν = ±8, ±12, ±16, … appears to split into a pair, and these pairs follow complicated paths in field and density, very different from those predicted by the usual Diophantine equation. Phenomenology similar in all qualitative respects appears in measurements on several regions of this same device with similar twist angles and in two separate devices, one at 1.59° and the other at 1.70° (see SI Appendix for details).

We reproduce the unusual features of the Landau levels (LLs) in a simple tight-binding model on a triangular lattice with anisotropy and a small energetic splitting between two species of fermions. At first glance, this is surprising, because that model does not represent the symmetries of the experimental moiré structure. We speculate that the unusual LL features we experimentally observe can generically emerge from spectra of Hofstadter models that include the same ingredients we added to the triangular lattice model. With further theoretical work it may be possible to use our measurements to gain insight into the underlying Hamiltonian of TBG near the magic angle.  相似文献
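Because the split-and-bend features are traced to a Hofstadter tight-binding calculation, a compact sketch of how such spectra are generated may be useful. The code below builds the standard magnetic Bloch (Harper) Hamiltonian for a square lattice at flux p/q per plaquette — a simplified stand-in for the paper's triangular-lattice model, in which the anisotropic hoppings and two-species splitting would enter as direction-dependent hop amplitudes and on-site offsets. All parameters are illustrative.

```python
import numpy as np

def harper(p, q, kx, ky, t=1.0):
    """Magnetic Bloch Hamiltonian, square lattice, flux p/q per plaquette."""
    phi = p / q
    H = np.zeros((q, q), complex)
    for m in range(q):
        H[m, m] = 2 * t * np.cos(ky + 2 * np.pi * phi * m)  # y-hopping (Peierls)
        n = (m + 1) % q                                      # x-hopping
        hop = t * (np.exp(1j * q * kx) if n == 0 else 1.0)   # wrap magnetic cell
        H[m, n] += hop
        H[n, m] += np.conj(hop)
    return H

# Sweep rational fluxes; plotting E against p/q traces out the butterfly.
points = []
for q in range(1, 24):
    for p in range(1, q + 1):
        if np.gcd(p, q) == 1:
            for kx in np.linspace(0, np.pi / q, 3):
                for ky in np.linspace(0, np.pi, 3):
                    points += [(p / q, E) for E in np.linalg.eigvalsh(harper(p, q, kx, ky))]
print(len(points), "(flux, energy) points")
```

Gap positions extracted from such a spectrum follow the Diophantine lines described above; reproducing the observed splitting and bending requires the additional anisotropy and species-splitting terms named in the text.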

17.
Thermoelectric power generation is one of the most promising techniques to use the huge amount of waste heat and solar energy. Traditionally, a high thermoelectric figure of merit, ZT, has been the only parameter pursued for high conversion efficiency. Here, we emphasize that a high power factor (PF) is equally important for high power generation, in addition to high efficiency. A new n-type Mg2Sn-based material, Mg2Sn0.75Ge0.25, is a good example meeting the dual requirements in efficiency and output power. It was found that Mg2Sn0.75Ge0.25 has an average ZT of 0.9 and PF of 52 μW·cm⁻¹·K⁻² over the temperature range of 25–450 °C, a peak ZT of 1.4 at 450 °C, and a peak PF of 55 μW·cm⁻¹·K⁻² at 350 °C. By using the energy balance of the one-dimensional heat flow equation, the leg efficiency and output power were calculated, with Th = 400 °C and Tc = 50 °C, to be 10.5% and 6.6 W·cm⁻², respectively, under a temperature gradient of 150 °C·mm⁻¹.

Thermoelectric power generation from waste heat is attracting more and more attention. Potential fuel-efficiency enhancement by recovering the waste heat is beneficial for automobiles and many other applications (1, 2). In addition, a solar thermoelectric generator provides an alternative route to convert solar energy into electrical power besides the photovoltaic conversion (3). A thermoelectric generator (TEG) can be regarded as a heat engine using electrons/holes as the energy carrier. The conversion efficiency of a TEG is related to the Carnot efficiency and the material’s average thermoelectric figure of merit ZT (4): η = [(Th − Tc)/Th] · [√(1 + ZT) − 1]/[√(1 + ZT) + Tc/Th], [1] where ZT = (S²σ/κ)T, and S, σ, κ, and T are the Seebeck coefficient, electrical conductivity, thermal conductivity, and absolute temperature, respectively. Pursuing high ZT has been the focus of the entire thermoelectric community, by applying various phonon engineering via nanostructuring approaches to reduce the thermal conductivity (5–7), or by exploring new compounds with intrinsically low thermal conductivity, such as compounds having complex crystalline structures, local rattlers, liquid-like sublattices, and highly distorted lattices (8–11). However, for practical applications, efficiency is not the only concern, and high output power density is as important as efficiency when the capacity of the heat source is huge (such as solar heat), or the cost of the heat source is not a big factor (such as waste heat from automobiles, the steel industry, etc.). The output power density ω is defined as the output power W divided by the cross-sectional area A of the leg, i.e., ω = W/A, which is related to the power factor PF = S²σ by the following: ω = (1/4) · [(Th − Tc)²/L] · PF. [2] Eq. 2 contains two main parts: the square of the temperature difference divided by the leg length L, and the material power factor PF = S²σ. Clearly, to achieve higher power density for a given heat source, we have to either increase the power factor PF or decrease the leg length. However, decreasing the leg length could cause severe consequences, such as a larger heat flux that increases the cost of heat management at the cold end, a larger percentage of contact resistance in the device that increases the parasitic loss and consequently decreases the energy conversion efficiency, and larger thermal stress due to the steeper thermal gradient, leading to device failure. Therefore, it is better to increase the power factor PF. 
Because PF is a pure material parameter, we can use it as a criterion in searching for new thermoelectric materials for high output power.

A useful thermoelectric material should possess a high ZT value for high efficiency and, also very importantly, a high PF for high output power. Ideally, temperature-independent ZT and PF over the whole temperature range from the cold side to the hot side are desired. However, both the ZT and PF of all materials show strong temperature dependence, usually increasing first with temperature and then decreasing when the bipolar effect starts to play a role. The working temperature of thermoelectric materials is limited by the band energy gap Eg; e.g., Bi2Te3, a well-known thermoelectric material for applications below 200 °C, has an Eg of ∼0.13 eV (12). PbTe and associated materials have much higher peak ZT in the temperature range of 400–600 °C due to their larger Eg of 0.32 eV (13). However, the toxicity of lead, poor mechanical properties, and thermal instability above 400 °C seriously limit the application of Pb-based thermoelectric materials. Even though Mg2Si, skutterudites, and half-Heuslers are promising for thermoelectric power generation at up to 500 °C [Mg2Si (14) and skutterudites (15, 16)] or 600–700 °C [half-Heuslers (17, 18)], the ZT values of these materials below 400 °C are relatively low (ZT < 1). Other materials, such as n-type In4Se3-δ (19), n-type Ba8Ga16Sn30 (20), and p-type Zn4-δSb3 (21), have higher average ZT values below 400 °C. However, their low power factors make them unsuitable for power generation applications below 400 °C. Because both efficiency and output power are equally important, new n- and p-type materials that can work up to 400 °C are more desirable for thermoelectric power generation.

Here, we report a new Mg2Sn-based n-type thermoelectric material that shows promise to work below 400 °C for power generation due to its narrow band gap of ∼0.26 eV. Historically, the Mg2Sn material has been investigated less than its analogous compound Mg2Si for thermoelectric applications due to its lower ZT (22–25). Most of the research has been focused on the alloy of Mg2Si-Mg2Sn, with a peak ZT value of ∼1 at 500 °C (26–28). Recently, different groups have improved the peak ZT value to 1.1–1.3 by adjusting the x value in the Mg2Si1−xSnx solid solution (14, 29, 30). The challenges in preparing and handling these materials were the high vapor pressure and chemical activity of Mg. Methods of direct co-melting with subsequent annealing, solid-state reaction with subsequent annealing, and the Bridgman method have been reported for synthesizing Mg2Si-Mg2Sn alloys (22–30). The powder metallurgy route, e.g., ball milling plus hot pressing, has been widely used to fabricate a variety of high-performance thermoelectric bulk materials such as Bi2Te3 (6, 31), PbTe (32), PbSe (33), and the skutterudite CoSb3 (16, 34). In fact, ball milling was reported to synthesize Mg2Si and its alloys Mg2Si-Mg2Sn (35–38). However, the reported ZT was lower than 0.7 (27, 38), which may be due to the difficulty in avoiding oxidization of Mg. Here, we report a successful synthesis of an Sn-dominated composition, Mg2Sn0.75Ge0.25, through ball milling and hot pressing, to achieve a ZT of 1.4 at 450 °C and a power factor PF of 55 μW·cm⁻¹·K⁻² at 350 °C. 
Calculations show that these could yield a leg efficiency η of 10.5% and an output power density ω of 6.6 W·cm⁻² at Th = 400 °C and Tc = 50 °C, which will be very useful for the vast waste-heat sources at up to 400 °C and for concentrated solar energy conversion applications.  相似文献
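Eqs. 1 and 2 can be checked directly against the reported numbers. The sketch below — assuming the quoted average ZT and PF and the stated 150 °C·mm⁻¹ gradient; a constant-property single-leg estimate, not the full energy-balance calculation used in the paper — reproduces the efficiency and lands close to the output power density.

```python
import numpy as np

Th, Tc = 400.0 + 273.15, 50.0 + 273.15   # hot/cold side, K
ZT = 0.9                                  # reported average over 25-450 C
PF = 52e-6                                # reported average PF, W cm^-1 K^-2
grad = 150.0                              # temperature gradient, K mm^-1
L = (Th - Tc) / grad / 10.0               # leg length in cm (350 K / 150 K per mm)

eta = (Th - Tc) / Th * (np.sqrt(1 + ZT) - 1) / (np.sqrt(1 + ZT) + Tc / Th)  # Eq. 1
omega = 0.25 * (Th - Tc) ** 2 / L * PF                                       # Eq. 2

print(f"leg efficiency ~ {eta:.1%}")          # ~10.6%, vs 10.5% reported
print(f"power density ~ {omega:.1f} W/cm^2")  # ~6.8, vs 6.6 reported
```

The small gap in ω reflects the temperature dependence of the real material's PF, which this constant-property estimate ignores.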

18.
Our study of cholesteric lyotropic chromonic liquid crystals in cylindrical confinement reveals the topological aspects of cholesteric liquid crystals. The double-twist configurations we observe exhibit discontinuous layering transitions, domain formation, metastability, and chiral point defects as the concentration of chiral dopant is varied. We demonstrate that these distinct layer states can be distinguished by chiral topological invariants. We show that changes in the layer structure give rise to a chiral soliton similar to a toron, comprising a metastable pair of chiral point defects. Because the invariants we describe apply to general systems, our work has broad relevance to the study of chiral materials.

Chiral liquid crystals (LCs) are ubiquitous, useful, and rich systems (1–4). From the first discovery of the liquid crystalline phase to the variety of chiral structures formed by biomolecules (5–9), the twisted structure, breaking both mirror and continuous spatial symmetries, is omnipresent. The unique structure also makes the chiral nematic (cholesteric) LC an essential material for applications utilizing the tunable, responsive, and periodic modulation of anisotropic properties.

The cholesteric is also a popular model system to study the geometry and topology of partially ordered matter. The twisted ground state of the cholesteric is often incompatible with confinement and external fields, exhibiting a large variety of frustrated and metastable director configurations accompanying topological defects. Besides the classic example of cholesterics in a Grandjean−Cano wedge (10, 11), examples include cholesteric droplets (12–16), colloids (17–19), shells (20–22), tori (23, 24), cylinders (25–29), microfabricated structures (30, 31), and films between parallel plates with external fields (32–40). These structures are typically understood using a combination of nematic (achiral) topology (41, 42) and energetic arguments, for example, the highly successful Landau−de Gennes approach (43). However, traditional extensions of the nematic topological approach to cholesterics are known to be conceptually incomplete and difficult to apply in regimes where the system size is comparable to the cholesteric pitch (41, 44).

An alternative perspective, chiral topology, can give a deeper understanding of these structures (45–47). In this approach, the key role is played by the twist density, given in terms of the director field n by n · (∇ × n). This choice is not arbitrary; the Frank free energy prefers n · (∇ × n) = −q0 = −2π/p0, with a helical pitch p0, and, from a geometric perspective, n · (∇ × n) ≠ 0 defines a contact structure (48). This allows a number of new integer-valued invariants of chiral textures to be defined (45). A configuration with a single sign of twist is chiral, and two configurations which cannot be connected by a path of chiral configurations are chirally distinct, and hence separated by a chiral energy barrier. Within each chiral class of configuration, additional topological invariants may be defined using methods of contact topology (45–48), such as layer numbers. Changing these chiral topological invariants requires passing through a nonchiral configuration. Cholesterics serve as model systems for the exploration of chirality in ordered media, and the phenomenon we describe here—metastability in chiral systems controlled by chiral topological invariants—has applicability to chiral order generally. This, in particular, includes chiral ferromagnets, where, for example, our results on chiral topological invariants apply to highly twisted nontopological Skyrmions (49, 50) (“Skyrmionium”).

Our experimental model to explore the chiral topological invariants is the cholesteric phase of lyotropic chromonic LCs (LCLCs). The majority of experimental systems hitherto studied are based on thermotropic LCs with typical elastic and surface-anchoring properties. The aqueous LCLCs exhibiting unusual elastic properties, that is, a very small twist modulus K2 and a large saddle-splay modulus K24 (51–56), often leading to chiral symmetry breaking of confined achiral LCLCs (53, 54, 56–61), may enable us to access uncharted configurations and defects of topological interest. 
For instance, in the layer configurations formed by cholesteric LCLCs doped with chiral molecules, the small K2 provides energetic flexibility in the thickness of the cholesteric layer, that is, the repeating structure in which the director n twists by π. The large K24 affords curvature-induced surface interactions in combination with the weak anchoring strength of the lyotropic LCs (62–64).

We present a systematic investigation of the director configuration of cholesteric LCLCs confined in cylinders with degenerate planar anchoring, as a function of the chiral dopant concentration. We show that the structure of cholesteric configurations is controlled by higher-order chiral topological invariants. We focus on two intriguing phenomena observed in cylindrically confined cholesterics. First, the cylindrical symmetry gives the energy landscape multiple local minima and induces a discontinuous increase of the twist angle, that is, a layering transition, as the dopant concentration increases. Additionally, the director configurations of the local minima coexist as metastable domains with point-like defects between them. We demonstrate that a chiral layer-number invariant distinguishes these configurations, protects the distinct layer configurations (45), and explains the existence of the topological defect where the invariant changes.  相似文献
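The twist density underlying these invariants is easy to evaluate numerically for a model director field. The sketch below — a uniform helix with an assumed pitch, on a 1D grid; illustrative only — verifies that n · (∇ × n) comes out constant and equal to −q0 = −2π/p0 for this handedness, the single-signed quantity on which the chiral classification above is built.

```python
import numpy as np

p0 = 5.0                      # assumed helical pitch (arbitrary units)
q0 = 2 * np.pi / p0
z = np.linspace(0.0, 10.0, 2001)

# Director of a uniform cholesteric helix with pitch axis along z
n = np.stack([np.cos(q0 * z), np.sin(q0 * z), np.zeros_like(z)], axis=-1)

# For a field varying only along z: curl n = (-d(ny)/dz, d(nx)/dz, 0)
curl = np.stack([-np.gradient(n[:, 1], z),
                 np.gradient(n[:, 0], z),
                 np.zeros_like(z)], axis=-1)
twist = np.sum(n * curl, axis=-1)    # n . (curl n)
print(twist[1000], "vs -q0 =", -q0)  # agree to finite-difference accuracy
```

On experimental or simulated director fields, the same contraction evaluated voxel by voxel is what identifies single-sign-of-twist (chiral) regions and the layer structure counted by the invariants.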

19.
Despite its importance for forest regeneration, food webs, and human economies, changes in tree fecundity with tree size and age remain largely unknown. The allometric increase with tree diameter assumed in ecological models would substantially overestimate seed contributions from large trees if fecundity eventually declines with size. Current estimates are dominated by overrepresentation of small trees in regression models. We combined global fecundity data, including a substantial representation of large trees. We compared size–fecundity relationships against traditional allometric scaling with diameter and two models based on crown architecture. All allometric models fail to describe the declining rate of increase in fecundity with diameter found for 80% of 597 species in our analysis. The strong evidence of declining fecundity, beyond what can be explained by crown architectural change, is consistent with physiological decline. A downward revision of projected fecundity of large trees can improve the next generation of forest dynamic models.

“Belgium, Luxembourg, and The Netherlands are characterized by “young” apple orchards, where over 60% of the trees are under 10 y old. In comparison, Estonia and the Czech Republic have relatively “old” orchard[s] with almost 60% and 43% over 25 y old” (1).
“The useful lives for fruit and nut trees range from 16 years (peach trees) to 37 years (almond trees)…. The Depreciation Analysis Division believes that 61 years is the best estimate of the class life of fruit and nut trees based on the information available” (2).
When mandated by the 1986 Tax Reform Act to depreciate aging orchards, the Office of the US Treasury found so little information that they ultimately resorted to interviews with individual growers (2). One thing is clear from the age distributions of fruit and nut orchards throughout the world (1, 3, 4): Standard practice often replaces trees long before most ecologists would view them to be in physiological decline, despite the interruption of profits borne by growers as transplants establish and mature. Although seed establishment represents the dominant mode for forest regeneration globally, and the seeds, nuts, and fruits of woody plants make up to 3% of the human diet (5, 6), change in fecundity with tree size and age is still poorly understood. We examine here the relationship between tree fecundity and diameter, which is related to tree age in the sense that trees do not shrink in diameter (cambial layers typically add a new increment annually), but growth rates can range widely. Still, it is important not to ignore the evidence that declines with size may also be caused by aging. Although most analyses do not separate effects of size from age (because age is often unknown and confounded with size), both may contribute to size–fecundity relationships (7). Grafting experiments designed to isolate extrinsic influences (size and/or environment) from age-related gene expression suggest that size alone can sometimes explain declines in growth rate and physiological performance (8–10), consistent with pruning/coppicing practice to extend the reproductive life of commercial fruit trees. Hydraulic limitation can affect physiological function, including reduced photosynthetic gain that might contribute to loss of apical dominance, or “flattening” of the crown with increasing height (11–16). The slowing of height growth relative to diameter growth in large trees is observed in many species (12, 17). At least one study suggests that age by itself may not lead to decline in fecundity of open-grown, generally small-statured bristlecone pine (Pinus longaeva) (18). By contrast, some studies provide evidence of tree senescence, including age-related genetic changes in meristems of grafted scions that cause declines in physiological function (19–22). Koenig et al. (23) found that fecundity declined in the 5 y preceding death in eight Quercus species, although cause of death here, as in most cases, is hard to identify. Fielding (24) found that cone size of Pinus radiata declines with tree age, and smaller cones produce fewer seeds (25). Some studies support age-related fecundity declines in herbaceous species (26–28). Thus, there is evidence to suggest that fecundity schedules might show declines with size, age, or both.

The reproductive potential of trees as they grow and age is of special concern to ecologists because, despite being relatively rare, large trees can contribute disproportionately to forest biomass due to the allometric scaling that amplifies linear growth in diameter to a volume increase that is more closely related to biomass (29, 30). Understanding the role of large trees can also benefit management in recovering forests (31). If allometric scaling applies to fecundity, then these large individuals might determine the species and genetic composition of seeds that compete for dominance in future forests.

Unfortunately, underrepresentation of big trees in forests frustrates efforts to infer how fecundity changes with size. 
Simple allometric relationships between seed production and tree diameter can offer useful predictions for the small- to intermediate-size trees that dominate observational data, so it is not surprising that modeling began with the assumption of allometric scaling (32–36). Extrapolation from these models would predict that seed production by the small trees from which most observations come may be overwhelmed by big trees. Despite the increase with tree size assumed by ecologists (37), evidence for declining reproduction in large trees has continued to accumulate from horticultural practice (3, 4, 38, 39) and at least some ecological (40–45) and forestry literature (46, 47). However, we are unaware of studies that evaluate changes in fecundity with substantial numbers of large trees.

Understanding the role of size and age is further complicated by the fact that tree fecundity ranges over orders of magnitude from tree to tree of the same species and within the same tree from year to year, a phenomenon known as "masting." The variation in seed-production data requires large sample sizes not only to infer the effects of size, but also to account for local habitat and interannual climate variation. For example, a one-time destructive harvest to count seeds in felled trees (48, 49) misses the fact that the same trees would offer a different picture had they been harvested in a different year. An oak that produces 100 acorns this year may produce 10,000 next year. A pine that produces 500 cones this year can produce zero next year. Few datasets offer the sample sizes of trees and tree years needed to estimate effects of size and habitat conditions in the face of this high intertree and interyear variability (43).

We begin this analysis by extending allometric scaling to better reflect the geometry of fecundity with tree size. We then reexamine the size–fecundity relationship using data from the Masting Inference and Forecasting (MASTIF) project (50), which includes substantial representation of large trees, and a modeling framework that allows for the possibility that fecundity plateaus or even declines in large trees. Unlike previous studies, we account for the nonallometric influences that come through competition and climate. We demonstrate that fecundity–diameter relationships depart substantially from allometric scaling in ways that are consistent with physiological senescence.

Continuous increase with size has been assumed in most models of tree fecundity, supported in part by allometric regressions against diameter, typically of the form

$$\log M_f = \beta_0 + \beta_D \log D \qquad [1]$$

for fecundity mass $M_f = m \times f$ (48, 51), where $D$ is tree diameter, $m$ is mass per seed, and fecundity $f$ is seeds per tree per year. Of course, this model cannot be used to determine whether or how fecundity changes with tree diameter unless expanded to include additional quadratic or higher-order terms (52).

The assumption of continual increase in fecundity was interpreted from early seed-trap studies, which initially assumed that $\beta_D = 2$, i.e., fecundity proportional to stem basal area (33, 34, 51). Models subsequently became more flexible, first with $\beta_D$ values fitted, rather than fixed, yielding estimates in the range (0.3, 0.9) in one study (ref. 52, 18 species) and (0, 4.1) in another (ref. 56, 4 species).
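As a concrete, minimal sketch (not the MASTIF model), the snippet below fits Eq. 1 by ordinary least squares on simulated data; the diameters, noise level, and the assumed true $\beta_D = 2$ are hypothetical values chosen for illustration, not quantities from any dataset cited here.

```python
# Minimal sketch of fitting Eq. 1, log M_f = beta_0 + beta_D log D, by
# ordinary least squares on simulated (hypothetical) data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diameters (cm) and a "true" beta_D of 2 (basal-area scaling),
# with lognormal noise standing in for intertree and interyear variability.
D = rng.uniform(5, 100, size=500)
log_Mf = 0.5 + 2.0 * np.log(D) + rng.normal(0.0, 1.0, size=500)

# OLS fit with design matrix [1, log D].
X = np.column_stack([np.ones_like(D), np.log(D)])
(beta_0, beta_D), *_ = np.linalg.lstsq(X, log_Mf, rcond=None)
print(f"beta_0 = {beta_0:.2f}, beta_D = {beta_D:.2f}")  # beta_D should be near 2
```

With diameters spread evenly across size classes, the fit recovers the slope cleanly; the difficulty discussed next arises because real diameter distributions are anything but even.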
However, underrepresentation of large trees in typical datasets means that model fitting is dominated by the abundant small size classes.

To understand why data and models could fail to accurately represent change in fecundity with size, consider that allometric scaling in Eq. 1 can be maintained dynamically only if changes in fecundity and diameter adhere to a strict proportionality

$$\frac{1}{f}\frac{df}{dt} \propto \frac{1}{D}\frac{dD}{dt} \qquad [2]$$

(57). For allometric scaling, any variable that affects diameter growth has to simultaneously affect change in fecundity, and in the same, proportionate way. In other words, allometric scaling cannot hold if there are selective forces on fecundity that do not operate through diameter growth and vice versa.

On top of this awkward constraint that demands proportionate responses of growth and fecundity, consider further that standard arguments for allometric scaling are not directly relevant for tree fecundity. Allometry is invoked for traits that maintain relationships between body parts as an organism changes size (29). For example, a diameter increment translates to an increase in volume throughout the tree (58, 59). Because the cambial layer essentially blankets the tree, a volume increment cannot depart much from a simple allometric relationship with diameter. However, the same cannot be said for all plant parts, many of which clearly do not scale allometrically; for example, seed size does not scale with leaf size (60), presumably because structural constraints are not the dominant forces that relate them (61).

To highlight why selective forces might not generate strict allometric scaling for reproduction, consider that a tree allocates a small fraction of potential buds to reproduction in a given year (62, 63). Still, if the number of buds on a tree bears some direct relationship to crown dimensions and, thus, diameter, there might be allometric scaling. However, the fraction of buds allocated to reproduction and their subsequent development to seed is affected by interannual weather and other selective forces (e.g., bud abortion, pollen limitation) in ways that diameter growth is not (64–66). In fact, weather might have opposing effects on growth and reproduction (67). Furthermore, resources can change the relationship between diameter and fecundity, including light levels (52, 68–70) and atmospheric CO2 (71).

Some arguments based on carbon balance anticipate a decline in fecundity with tree size (72). Increased stomatal limitation (11) and reduced leaf turgor pressure (14, 73) from increasing hydraulic path length could reduce carbon gains in large trees. Assimilation rates on a leaf-area basis can decline with tree size (74), while respiration rate per leaf area can increase [Sequoia sempervirens (75), Liquidambar styraciflua (76), and Pinus sylvestris (77)], consistent with the notion that whole-plant respiration rate may roughly scale with biomass (78). Maintenance respiration costs scale with diameter in some tropical species (79) but perhaps not in Pinus contorta and Picea engelmannii (80). Self-pruning of lower branches can reduce maintenance costs (81), but the ratio of carbon gain to respiration cost can still decline with size, especially where leaf area plateaus and per-area assimilation rates of leaves decline in large trees.
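One step left implicit above is why Eq. 1 forces the strict proportionality of Eq. 2. A short derivation makes it explicit, under the illustrative assumption (not stated in the text) that per-seed mass $m$ is constant within a species:

```latex
% Differentiating Eq. 1 in time shows why allometric scaling forces Eq. 2.
% Assumption (for illustration): per-seed mass m is constant, so M_f = m f
% and d(log M_f)/dt = d(log f)/dt.
\begin{align*}
  \log M_f = \beta_0 + \beta_D \log D
  \;\Longrightarrow\;
  \frac{d}{dt}\log M_f &= \beta_D \,\frac{d}{dt}\log D \\
  \frac{1}{f}\frac{df}{dt} &= \beta_D \,\frac{1}{D}\frac{dD}{dt}.
\end{align*}
```

Read this way, Eq. 2 is not an extra assumption but a consequence of Eq. 1 holding at all times, with proportionality constant $\beta_D$: relative change in fecundity must track relative diameter growth exactly, which is what makes the constraint so restrictive.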
The question of size–fecundity relationships is related indirectly to the large literature on interannual variation in growth–fecundity allocation (3, 4, 43, 67, 82–87). The frequency and timing of mast years and species differences in the volatility of seed production can be related to short-term changes in physiological state and pollen limitation that might not predict the long-term relationships between size and reproductive effort. The interannual covariance in diameter growth and reproductive effort can range from strong in some species to weak in others (70, 87, 88). Understanding the relationships between short-term allocation and size–fecundity differences will be an important focus of future research.

Estimating effects of size on fecundity depends on the distribution of diameter data, $[D]$, where the bracket notation indicates a distribution or density. For some early-successional species, the size distribution changes from dominance by small trees in young stands to absence of small trees in old stands. If our goal were to describe the population represented by a forest inventory plot, we would typically think about the joint distribution of fecundity and diameter values, $[f, D] = [f \mid D]\,[D]$, that is represented by the sample. The size–fecundity relationship estimated for a stand at different successional stages would diverge simply due to the distribution of diameters, i.e., differences in $[D]$. For example, application of Eq. 1 to harvested trees selected to balance size classes (uniform $[D]$) (48) overpredicts fecundity for large trees (49), but the relevance of such regressions for natural stands, where large trees are often rare, is unclear. Studies that expand Eq. 1 to allow for changing relationships with tree size now provide increasing evidence for a departure from allometric scaling in large trees (43, 70), despite dominance by small- to intermediate-size trees in these datasets. Here our goal is to understand the size–fecundity relationship $[f \mid D]$ as an attribute of a species, i.e., not tied to a specific distribution of size classes observed in a particular stand. A simulation sketch after the next paragraph illustrates how a skewed $[D]$ can produce exactly this kind of overprediction.

The well-known weak relationship between tree size and age that comes from variable growth histories makes it important to clarify the implications of any finding of fecundity that declines with tree size: Can it happen if there are not also fecundity declines with tree age? The only argument for continuing increase in fecundity with age in the face of observed decreases with size would have to assume that the biggest trees are also the youngest trees. Of course, a large individual can be younger than a small individual. However, integrating over populations sampled widely, mean diameter increases with age at the species level, so declines with size also imply declines with age. Estimating accurate species-level size effects requires distributed data and large sample sizes. The analysis here fits species-level parameters, with 585,670 trees and 10,542,239 tree years across 597 species.
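The sketch below simulates a stand whose diameter distribution $[D]$ is dominated by small trees and whose true fecundity saturates above a hypothetical 60 cm diameter; fitting Eq. 1 to these data recovers a clean allometric slope yet overpredicts fecundity for a 100 cm tree. All numbers are invented for illustration and are not drawn from the studies cited here.

```python
# Minimal sketch: a skewed diameter distribution [D] dominated by small trees
# can hide a fecundity plateau in large trees when fitting Eq. 1.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Skewed [D]: most trees are small (shifted exponential, in cm).
D = 5.0 + rng.exponential(scale=15.0, size=n)

# "True" fecundity: allometric rise (slope 2) that saturates above 60 cm.
log_f = 2.0 * np.log(np.minimum(D, 60.0)) + rng.normal(0.0, 0.8, size=n)

# Fit the simple allometric model log f = b0 + bD log D to all trees.
X = np.column_stack([np.ones(n), np.log(D)])
b0, bD = np.linalg.lstsq(X, log_f, rcond=None)[0]

# Extrapolate to a 100 cm tree and compare with the true plateau value.
pred_100 = np.exp(b0 + bD * np.log(100.0))
true_100 = 60.0 ** 2
print(f"fitted beta_D = {bD:.2f}; Eq. 1 predicts f(100 cm) ~ {pred_100:.0f} "
      f"vs. true plateau ~ {true_100:.0f}")
```

Because small trees dominate the likelihood, the fitted slope reflects the rising part of the curve, and the plateau in the sparse large-tree classes leaves almost no imprint on the estimate.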
Phylogenetic analysis might provide insight into the pervasiveness of fecundity declines with size. Inferring change in fecundity with size necessarily requires more information than is needed to fit a single slope parameter $\beta_D$ in the simple allometric model. The noisier the data, the more difficult it becomes to estimate the additional parameters that are needed to describe changes in the fecundity relationship with size. We thus expect that noise alone will preclude finding size-related change in some species, depending on sample size and non–size-related variation. If the vagaries of noisy data and the distribution of diameters preclude estimation of declines in some species, then we do not expect phylogeny to explain which species do and do not show these declines; the explanation would instead be tied to sample size and the distribution of diameter data. Conversely, phylogenetic conservatism, i.e., a tendency for declines to be clustered in related species, could suggest that fecundity declines are real.

To understand how seed production changes with tree size, our approach combines theory and data to evaluate allometric scaling against the alternative that fecundity may decline in large trees, consistent with physiological decline and senescence. We exploit two advances that are needed to determine how fecundity scales with tree size. First, datasets are needed that include large trees, because studies in the literature often include few or none (85, 89, 90). Second, methods are introduced that allow for the possibility that fecundity either continues to increase with size or does not. We begin with a reformulation of allometric scaling, recognizing that change in fecundity could be regulated by size without taking the form of Eq. 1 (Materials and Methods and SI Appendix, section S2). In other words, there could be allometric scaling with diameter, but not the relationship that has been used for structural quantities like biomass. We then analyze the relationships in data using a model that not only allows for potential changes in fecundity with size, but at the same time accounts for self-shading and shading by neighbors and for environmental variables that can affect fecundity and growth (Materials and Methods and SI Appendix, section S3). The fitted model is compared with our expanded allometric model to identify potential agreement. Finally, we examine phylogenetic trends in the species that do and do not show declines.
