Similar Articles
20 similar articles found (search time: 31 ms)
1.
We study the instantaneous normal mode (INM) spectrum of a simulated soft-sphere liquid at different equilibrium temperatures T. We find that the spectrum of eigenvalues ρ(λ) has a sharp maximum near (but not at) λ=0 and decreases monotonically with |λ| on both the stable and unstable sides of the spectrum. The spectral shape strongly depends on temperature. It is rather asymmetric at low temperatures (close to the dynamical critical temperature) and becomes symmetric at high temperatures. To explain these findings we present a mean-field theory for ρ(λ), which is based on a heterogeneous elasticity model, in which the local shear moduli exhibit spatial fluctuations, including negative values. We find good agreement between the simulation data and the model calculations, done with the help of the self-consistent Born approximation (SCBA), when we take the variance of the fluctuations to be proportional to the temperature T. More importantly, we find an empirical correlation of the positions of the maxima of ρ(λ) with the low-frequency exponent of the density of the vibrational modes of the glasses obtained by quenching to T=0 from the temperature T. We discuss the present findings in connection with the liquid–glass transformation and its precursor phenomena.

The investigation of the potential energy surface (PES) $V(\mathbf{r}_1(t),\dots,\mathbf{r}_N(t))$ of a liquid (made up of $N$ particles with positions $\mathbf{r}_1(t),\dots,\mathbf{r}_N(t)$ at a time instant $t$) and the corresponding instantaneous normal modes (INMs) of the (Hessian) matrix of curvatures has been a focus of liquid and glass science since the appearance of Goldstein’s seminal article (1) on the relation between the PES and the liquid dynamics in the viscous regime above the glass transition (2–27). The PES has been shown to form a rather ragged landscape in configuration space (8, 28, 29) characterized by its stationary points. In a glass these points are minima and are called “inherent structures.” The PES is believed to contain important information on the liquid–glass transformation mechanism, for which a complete understanding is still missing (28, 30, 31). The existing molecular theory of the liquid–glass transformation is mode-coupling theory (MCT) (32, 33) and its mean-field Potts spin version (28, 34). MCT predicts a sharp transition at a temperature $T_{\mathrm{MCT}}>T_g$, where $T_g$ is the temperature of structural arrest (glass transition temperature). MCT completely misses the heterogeneous activated relaxation processes (dynamical heterogeneities), which are evidently present around and below $T_{\mathrm{MCT}}$ and which are related to the unstable (negative-$\lambda$) part of the INM spectrum (28, 30). Near and above $T_{\mathrm{MCT}}$, apparently, a fundamental change in the PES occurs. Numerical studies of model liquids have shown that minima present below $T_{\mathrm{MCT}}$ change into saddles, which then explains the absence of activated processes above $T_{\mathrm{MCT}}$ (2–24). Very recently, it was shown that $T_{\mathrm{MCT}}$ is related to a localization–delocalization transition of the unstable INM modes (25, 26).

The INM spectrum is obtained in molecular dynamics simulations by diagonalizing the Hessian matrix of the interaction potential, taken at a certain time instant $t$:

$$H_{ij}^{\alpha\beta}(t)=\frac{\partial^2}{\partial x_i^{(\alpha)}\,\partial x_j^{(\beta)}}\,V\{\mathbf{r}_1(t)\dots\mathbf{r}_N(t)\},\qquad[1]$$

with $\mathbf{r}_i=(x_i^{(1)},x_i^{(2)},x_i^{(3)})$. Large positive eigenvalues $\lambda_j$ ($j=1\dots 3N$, $N$ being the number of particles in the system) are related to the squares of vibrational frequencies, $\lambda_j=\omega_j^2$, and one can consider the Hessian as the counterpart of the dynamical matrix of a solid. In this high-frequency regime one can identify the spectrum with the density of vibrational states (DOS) of the liquid via

$$g(\omega)=2\omega\,\rho\big(\lambda(\omega)\big)=\frac{1}{3N}\sum_j\delta(\omega-\omega_j).\qquad[2]$$

For small and negative values of $\lambda$ this identification is not possible. For the unstable part of the spectrum ($\lambda<0$) it has become common practice to introduce the imaginary number $\sqrt{\lambda}=i\tilde{\omega}$ and define the corresponding DOS as

$$g(\tilde{\omega})\equiv 2\tilde{\omega}\,\rho\big(\lambda(\tilde{\omega})\big).\qquad[3]$$

This function is plotted on the negative $\omega$ axis and the stable $g(\omega)$, according to [2], on the positive axis. However, the (as we shall see, very interesting) details of the spectrum $\rho(\lambda)$ near $\lambda=0$ become almost completely hidden by multiplying the spectrum with $|\omega|$. In fact, it was demonstrated by Sastry et al. (6) and Taraskin and Elliott (7) already two decades ago that the INM spectrum of liquids, if plotted as $\rho(\lambda)$ and not as $g(\omega)$ according to [2] and [3], exhibits a characteristic cusp-like maximum at $\lambda=0$. The shape of the spectrum changes strongly with temperature.
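As a quick illustration of the bookkeeping in [2] and [3], the following minimal Python sketch converts a set of Hessian eigenvalues into both the ρ(λ) and the folded g(ω) representations; the eigenvalue sample is synthetic and stands in for an actual diagonalized Hessian, not the analysis code used in the study.

```python
import numpy as np

def inm_dos(eigvals, bins=200):
    """Convert Hessian eigenvalues {lambda_j} into (i) rho(lambda) and
    (ii) the folded DOS g(omega) of Eqs. 2 and 3: stable modes are plotted
    at omega = +sqrt(lambda), unstable ones at omega = -sqrt(-lambda).
    Binning omega with density=True applies the 2*omega Jacobian implicitly."""
    lam = np.asarray(eigvals)
    rho, lam_edges = np.histogram(lam, bins=bins, density=True)
    omega = np.sign(lam) * np.sqrt(np.abs(lam))
    g, w_edges = np.histogram(omega, bins=bins, density=True)
    mid = lambda e: 0.5 * (e[:-1] + e[1:])
    return (mid(lam_edges), rho), (mid(w_edges), g)

# synthetic eigenvalues standing in for a diagonalized 3N x 3N Hessian
rng = np.random.default_rng(0)
lam_sample = rng.normal(loc=5.0, scale=20.0, size=3 * 1000)
(lam, rho), (w, g) = inm_dos(lam_sample)
# rho retains the structure near lambda = 0 that g(omega) suppresses by |omega|
```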
This is what we find as well in our simulation, and it is what we want to explore further in the present contribution.

In the present contribution we demonstrate that the strong change of the spectrum with temperature can be rather well explained in terms of a model in which the instantaneous harmonic spectrum of the liquid is interpreted as that of an elastic medium, in which the local shear moduli exhibit strong spatial fluctuations, including a large number of negative values. Because these fluctuations are just a snapshot of thermal fluctuations, we assume that they obey Gaussian statistics, with a variance proportional to the temperature.

Evidence for a characteristic change in the liquid configurations in the temperature range above $T_g$ has been obtained in recent simulation studies of the low-frequency vibrational spectrum of glasses that have been rapidly quenched from a certain parental temperature $T^*$. If $T^*$ is decreased from high temperatures toward $T_{\mathrm{MCT}}$, the low-frequency exponent of the vibrational DOS of the daughter glass (quenched from $T^*$ to $T=0$) changes from Debye-like $g(\omega)\propto\omega^2$ to $g(\omega)\propto\omega^s$ with $s>2$. In our numerical investigation of the INM spectra we show a correlation of some details of the low-eigenvalue features of these spectra with the low-frequency properties of the daughter glasses obtained by quenching from the parental temperatures.

The stochastic Helmholtz equations (Eq. 7) of an elastic model with spatially fluctuating shear moduli can be readily solved for the averaged Green’s functions by field-theoretical techniques (35–37). Via a saddle-point approximation with respect to the resulting effective field theory one arrives at a mean-field theory (self-consistent Born approximation [SCBA]) for the self-energy of the averaged Green’s functions. The SCBA predicts a stable spectrum below a threshold value of the variance. Restricted to this stable regime, this theory, called heterogeneous elasticity theory (HET), was rather successful in explaining several low-frequency anomalies in the vibrational spectrum of glasses, including the so-called boson peak, which is an enhancement at finite frequencies over the Debye behavior of the DOS $g(\omega)\propto\omega^2$ (37–41). We now explore the unstable regime of this theory and compare it to the INM spectrum of our simulated soft-sphere liquid.

We start Results by presenting a comparison of the simulated spectra of the soft-sphere liquid with those obtained by the unstable version of HET-SCBA. We then concentrate on some specific features of the INM spectra, namely, the low-eigenvalue slopes and the shift of the spectral maximum away from $\lambda=0$. Both features are accounted for by HET-SCBA. In particular, we find an interesting law for the difference between the slopes of the unstable and the stable parts of the spectrum, which behaves as $T^{2/3}$ and, again, is accounted for by HET-SCBA. In the end we compare the shift of the spectral maximum with the low-frequency exponent of the DOS of the corresponding daughter glasses and find an empirical correlation. We discuss these results in connection with the saddle-to-minimum transformation near $T_{\mathrm{MCT}}$.
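The SCBA step can be made concrete with a schematic scalar fixed-point iteration for the self-energy. The kernel, cutoff, and normalizations below are simplifying assumptions of this sketch (and gamma is simply a stand-in for the temperature-proportional variance); they are not the published HET-SCBA equations.

```python
import numpy as np

def scba_spectrum(lams, gamma, mu0=1.0, kD=1.0, nk=400, eta=1e-3,
                  n_iter=500, mix=0.5):
    """Schematic scalar SCBA for an elastic medium whose shear modulus has
    Gaussian spatial fluctuations of variance ~ gamma. Iterates
    Sigma(z) = gamma * <k^2 G(k,z)> with G(k,z) = 1/(k^2*(mu0 - Sigma) - z)
    and returns rho(lambda) = (1/pi) Im<G> at z = lambda + i*eta."""
    k = np.linspace(kD / nk, kD, nk)
    w = k**2                                  # 3D measure ~ k^2 dk, unnormalized
    rho = np.empty_like(lams)
    for i, lam in enumerate(lams):
        z = lam + 1j * eta
        sig = 0j
        for _ in range(n_iter):               # damped fixed-point iteration
            G = 1.0 / (k**2 * (mu0 - sig) - z)
            sig = (1 - mix) * sig + mix * gamma * np.sum(w * k**2 * G) / np.sum(w)
        G = 1.0 / (k**2 * (mu0 - sig) - z)
        rho[i] = (np.sum(w * G) / np.sum(w)).imag / np.pi
    return rho

# larger gamma (hotter liquid) -> broader unstable (lambda < 0) wing
lams = np.linspace(-0.5, 1.5, 200)
rho_cold, rho_hot = scba_spectrum(lams, 0.1), scba_spectrum(lams, 0.4)
```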

2.
We present transport measurements of bilayer graphene with a 1.38° interlayer twist. As with other devices with twist angles substantially larger than the magic angle of 1.1°, we do not observe correlated insulating states or band reorganization. However, we do observe several highly unusual behaviors in magnetotransport. For a large range of densities around half filling of the moiré bands, magnetoresistance is large and quadratic. Over these same densities, the magnetoresistance minima corresponding to gaps between Landau levels split and bend as a function of density and field. We reproduce the same splitting and bending behavior in a simple tight-binding model of Hofstadter’s butterfly on a triangular lattice with anisotropic hopping terms. These features appear to be a generic class of experimental manifestations of Hofstadter’s butterfly and may provide insight into the emergent states of twisted bilayer graphene.

The mesmerizing Hofstadter butterfly spectrum arises when electrons in a two-dimensional periodic potential are immersed in an out-of-plane magnetic field. When the magnetic flux Φ through a unit cell is a rational multiple p/q of the magnetic flux quantum Φ₀ = h/e, each Bloch band splits into q subbands (1). The carrier densities corresponding to gaps between these subbands follow straight lines when plotted as a function of normalized density n/n_s and magnetic field (2). Here, n_s is the density of carriers required to fill the (possibly degenerate) Bloch band. These lines can be described by the Diophantine equation (n/n_s) = t(Φ/Φ₀) + s for integers s and t. In experiments, they appear as minima or zeros in longitudinal resistivity coinciding with Hall conductivity quantized at σ_xy = te²/h (3, 4). Hofstadter originally studied magnetosubbands emerging from a single Bloch band on a square lattice. In the following decades, other authors considered different lattices (5–7), the effect of anisotropy (6, 8–10), next-nearest-neighbor hopping (11–15), interactions (16, 17), density wave states (9), and graphene moirés (18, 19).

It took considerable ingenuity to realize clean systems with unit cells large enough to allow conventional superconducting magnets to reach Φ/Φ₀ ∼ 1. The first successful observation of the butterfly in electrical transport measurements was in GaAs/AlGaAs heterostructures with lithographically defined periodic potentials (20–22). These experiments demonstrated the expected quantized Hall conductance in a few of the largest magnetosubband gaps. In 2013, three groups mapped out the full butterfly spectrum in both density and field in heterostructures based on monolayer (23, 24) and bilayer (25) graphene. In all three cases, the authors made use of the 2% lattice mismatch between their graphene and its encapsulating hexagonal boron nitride (hBN) dielectric. With these layers rotationally aligned, the resulting moiré pattern was large enough in area that gated structures studied in available high-field magnets could simultaneously approach normalized carrier densities and magnetic flux ratios of 1. Later work on hBN-aligned bilayer graphene showed that, likely because of electron–electron interactions, the gaps could also follow lines described by fractional s and t (26).

In twisted bilayer graphene (TBG), a slight interlayer rotation creates a similar-scale moiré pattern. Unlike with graphene–hBN moirés, in TBG there is a gap between the lowest and neighboring moiré subbands (27). As the twist angle approaches the magic angle of 1.1°, the isolated moiré bands become flat (28, 29), and strong correlations lead to fascinating insulating (30–37), superconducting (31–33, 35–37), and magnetic (34, 35, 38) states. The strong correlations tend to cause moiré subbands within a fourfold degenerate manifold to move relative to each other as one tunes the density, leading to Landau levels that project only toward higher magnitude of density from charge neutrality and integer filling factors (37, 39). This correlated behavior obscures the single-particle Hofstadter physics that would otherwise be present.

In this work, we present measurements from a TBG device twisted to 1.38°. When we apply a perpendicular magnetic field, a complicated and beautiful fan diagram emerges. In a broad range of densities on either side of charge neutrality, the device displays large, quadratic magnetoresistance.
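The Diophantine gap structure is easy to enumerate directly. The following minimal sketch tabulates the straight lines n/n_s = t(Φ/Φ₀) + s of the Wannier diagram; the (s, t) ranges and the density window are arbitrary choices for illustration.

```python
import numpy as np

def wannier_lines(s_range=range(-4, 5), t_range=range(-6, 7), phi_max=1.0):
    """Enumerate the straight gap trajectories n/ns = t*(Phi/Phi0) + s
    predicted by the Diophantine equation; each gap carries sigma_xy = t e^2/h."""
    phi = np.linspace(0.0, phi_max, 201)           # Phi / Phi0
    lines = {}
    for s in s_range:
        for t in t_range:
            n = t * phi + s                        # n / ns
            mask = (n >= -1.0) & (n <= 1.0)        # keep a physical filling window
            if mask.any():
                lines[(s, t)] = (phi[mask], n[mask])
    return lines

lines = wannier_lines()
# e.g., the (s, t) = (0, 4) line emanating from charge neutrality:
phi, n = lines[(0, 4)]
```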
Within the magnetoresistance regions, each Landau level associated with ν = ±8, ±12, ±16, … appears to split into a pair, and these pairs follow complicated paths in field and density, very different from those predicted by the usual Diophantine equation. Phenomenology similar in all qualitative respects appears in measurements on several regions of this same device with similar twist angles and in two separate devices, one at 1.59° and the other at 1.70° (see SI Appendix for details).

We reproduce the unusual features of the Landau levels (LLs) in a simple tight-binding model on a triangular lattice with anisotropy and a small energetic splitting between two species of fermions. At first glance, this is surprising, because that model does not represent the symmetries of the experimental moiré structure. We speculate that the unusual LL features we experimentally observe can generically emerge from spectra of Hofstadter models that include the same ingredients we added to the triangular lattice model. With further theoretical work it may be possible to use our measurements to gain insight into the underlying Hamiltonian of TBG near the magic angle.
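A minimal version of such a Hofstadter model can be assembled by Peierls substitution on a finite triangular lattice with three independent hopping amplitudes and a small onsite splitting between two fermion species. The gauge choice, open boundaries, and all parameter values below are assumptions of this sketch, not the authors' model.

```python
import numpy as np

def triangular_hofstadter(f, L=16, t=(1.0, 1.0, 0.8), split=0.05):
    """Spectrum of an L x L triangular-lattice tight-binding cluster with
    anisotropic hoppings t = (t1, t2, t3) and flux f = Phi/Phi0 per rhombic
    unit cell; 'split' shifts two fermion species by +/- split/2."""
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3) / 2])
    cell_area = np.sqrt(3) / 2                    # |a1 x a2|
    B = 2 * np.pi * f / cell_area                 # flux f per unit cell
    sites = [n * a1 + m * a2 for n in range(L) for m in range(L)]
    idx = {(n, m): n * L + m for n in range(L) for m in range(L)}
    H = np.zeros((L * L, L * L), dtype=complex)
    bonds = [((1, 0), t[0]), ((0, 1), t[1]), ((-1, 1), t[2])]
    for (n, m), i in idx.items():
        ri = sites[i]
        for (dn, dm), tb in bonds:
            j = idx.get((n + dn, m + dm))
            if j is None:
                continue                          # open boundaries
            rj = sites[j]
            # Peierls phase in Landau gauge A = B*(0, x, 0); the midpoint
            # rule is exact for a linear vector potential on a straight bond
            phase = B * 0.5 * (ri[0] + rj[0]) * (rj[1] - ri[1])
            H[i, j] = -tb * np.exp(1j * phase)
            H[j, i] = np.conj(H[i, j])
    E = np.linalg.eigvalsh(H)
    return np.concatenate([E - split / 2, E + split / 2])

# sweep flux to trace the butterfly
spectra = {f: triangular_hofstadter(f) for f in np.linspace(0.0, 1.0, 51)}
```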

3.
4.
The transacting activator of transduction (TAT) protein plays a key role in the progression of AIDS. Studies have shown that a +8 charged sequence of amino acids in the protein, called the TAT peptide, enables the TAT protein to penetrate cell membranes. To probe mechanisms of binding and translocation of the TAT peptide into the cell, investigators have used phospholipid liposomes as cell membrane mimics. We have used the method of surface-potential-sensitive second harmonic generation (SHG), which is a label-free and interface-selective method, to study the binding of TAT to anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG) and neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) liposomes. It is the SHG sensitivity to the electrostatic field generated by a charged interface that enabled us to obtain the interfacial electrostatic potential. SHG together with the Poisson–Boltzmann equation yielded the dependence of the surface potential on the density of adsorbed TAT. We obtained the dissociation constants K_d for TAT binding to POPC and POPG liposomes and the maximum number of TATs that can bind to a given liposome surface. For POPC, K_d was found to be 7.5 ± 2 μM, and for POPG, K_d was 29.0 ± 4.0 μM. As TAT was added to the liposome solution, the POPC surface potential changed from 0 mV to +37 mV, whereas for POPG it changed from −57 mV to −37 mV. A numerical calculation of K_d, which included all terms obtained from application of the Poisson–Boltzmann equation to the TAT–liposome SHG data, was shown to be in good agreement with an approximated solution.

The HIV type 1 (HIV-1) transacting activator of transduction (TAT) is an important regulatory protein for viral gene expression (1–3). It has been established that the TAT protein plays a key role in the progression of AIDS and is a potential target for anti-HIV vaccines (4). For the TAT protein to carry out its biological functions, it needs to be readily imported into the cell. Studies on the cellular internalization of TAT have led to the discovery of the TAT peptide, a highly cationic 11-aa region (protein transduction domain) of the 86-aa full-length protein that is responsible for the TAT protein translocating across phospholipid membranes (5–8). The TAT peptide is a member of a class of peptides called cell-penetrating peptides (CPPs) that have generated great interest for drug-delivery applications (ref. 9 and references therein). The exact mechanism by which the TAT peptide enters cells is not fully understood, but it is likely to involve a combination of energy-independent penetration and endocytosis pathways (8, 10). The first step in the process is high-affinity binding of the peptide to phospholipids and other components on the cell surface, such as proteins and glycosaminoglycans (1, 9).

The binding of the TAT peptide to liposomes has been investigated using a variety of techniques, each of which has its own advantages and limitations. Among the techniques are isothermal titration calorimetry (9, 11), fluorescence spectroscopy (12, 13), FRET (12, 14), single-molecule fluorescence microscopy (15, 16), and solid-state NMR (17). Second harmonic generation (SHG), as an interface-selective technique (18–24), does not require a label, and because SHG is sensitive to the interface potential, it is an attractive method to selectively probe the binding of the highly charged (+8) TAT peptide to liposome surfaces.
Although coherent SHG is forbidden in centrosymmetric and isotropic bulk media for reasons of symmetry, it can be generated by a centrosymmetric structure, e.g., a sphere, provided that the object is centrosymmetric over roughly the length scale of the optical coherence, which is a function of the particle size, the wavelength of the incident light, and the refractive indexes at ω and 2ω (25–30). As a second-order nonlinear optical technique, SHG has symmetry restrictions such that coherent SHG is not generated by the randomly oriented molecules in the bulk liquid, but it can be generated coherently by the much smaller population of oriented interfacial species bound to a particle or planar surface. As a consequence, the SHG signal from the interface is not overwhelmed by SHG from the much larger populations in the bulk media (25–28).

The total second harmonic electric field, $E_{2\omega}$, originating from a charged interface in contact with water can be expressed as (31–33)

$$E_{2\omega}\propto\sum_i\chi_{c,i}^{(2)}E_\omega E_\omega+\sum_j\chi_{inc,j}^{(2)}E_\omega E_\omega+\chi_{\mathrm{H_2O}}^{(3)}E_\omega E_\omega\,\Phi,\qquad[1]$$

where $\chi_{c,i}^{(2)}$ represents the second-order susceptibility of the species $i$ present at the interface; $\chi_{inc,j}^{(2)}$ represents the incoherent contribution of the second-order susceptibility, arising from density and orientational fluctuations of the species $j$ present in solution, often referred to as hyper-Rayleigh scattering; $\chi_{\mathrm{H_2O}}^{(3)}$ is the third-order susceptibility originating chiefly from the polarization of the bulk water molecules polarized by the charged interface; $\Phi$ is the potential at the interface that is created by the surface charge; and $E_\omega$ is the electric field of the incident light at the fundamental frequency $\omega$. The second-order susceptibility, $\chi_{c,i}^{(2)}$, can be written as the product of the number of molecules, $N$, at the surface and the orientational ensemble average of the hyperpolarizability $\alpha_i^{(2)}$ of surface species $i$, yielding $\chi_{c,i}^{(2)}=N\langle\alpha_i^{(2)}\rangle$ (18). The brackets $\langle\,\rangle$ indicate an orientational average over the interfacial molecules. The third term in Eq. 1 depicts a third-order process by which a second harmonic field is generated by a charged interface. This term is the focus of our work. The SHG signal is dependent on the surface potential created by the electrostatic field of the surface charges, often called the χ(3) contribution to the SHG signal. The χ(3) method has been used to extract the surface charge density of charged planar surfaces and microparticle surfaces, e.g., liposomes, polymer beads, and oil droplets in water (21, 25, 34–39).

In this work, the χ(3) SHG method is used to explore a biomedically relevant process. The binding of the highly cationic HIV-1 TAT peptide to liposome membranes changes the surface potential, thereby enabling the use of the χ(3) method to study the binding process in a label-free manner. Two kinds of liposomes, neutral 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and anionic 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-1′-rac-glycerol (POPG), were investigated. The chemical structures of TAT, POPC, and POPG lipids are shown in Scheme 1.

Scheme 1. Chemical structures of the HIV-1 TAT (47–57) peptide and the POPC and POPG lipids.
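To illustrate how the χ(3) method connects adsorption to the measured potential, the sketch below combines a Langmuir isotherm with the Grahame relation of Poisson–Boltzmann theory for a 1:1 electrolyte; the site density, salt concentration, and all other parameter values are hypothetical placeholders, not the fitted values of this study.

```python
import numpy as np
from scipy.optimize import brentq

kB_T = 4.11e-21          # J at 298 K
e = 1.602e-19            # C
eps = 78.5 * 8.854e-12   # permittivity of water, F/m
N_A = 6.022e23

def grahame_potential(sigma, c_molar):
    """Invert the Grahame equation sigma = sqrt(8*eps*kBT*c)*sinh(e*phi/2kBT)
    (1:1 electrolyte, Poisson-Boltzmann) for the surface potential phi [V]."""
    c = c_molar * 1e3 * N_A                        # ions per m^3
    pref = np.sqrt(8 * eps * kB_T * c)
    return brentq(lambda phi: pref * np.sinh(e * phi / (2 * kB_T)) - sigma,
                  -0.5, 0.5)

def tat_surface_potential(tat_uM, Kd_uM, n_max, sigma0, z=8, c_salt=0.01):
    """Langmuir coverage theta = c/(c + Kd); adsorbed +z TAT charge adds to
    the bare lipid charge density sigma0 [C/m^2]. n_max = sites per m^2.
    All numbers used below are illustrative, not fitted parameters."""
    theta = tat_uM / (tat_uM + Kd_uM)
    sigma = sigma0 + z * e * n_max * theta
    return grahame_potential(sigma, c_salt)

# e.g., a neutral POPC-like surface (sigma0 = 0) with a hypothetical site density
phis = [tat_surface_potential(c, Kd_uM=7.5, n_max=2e15, sigma0=0.0)
        for c in (0.1, 1.0, 5.0, 20.0)]
```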

5.
Fluids are known to trigger a broad range of slip events, from slow, creeping transients to dynamic earthquake ruptures. Yet, the detailed mechanics underlying these processes and the conditions leading to different rupture behaviors are not well understood. Here, we use a laboratory earthquake setup, capable of injecting pressurized fluids, to compare the rupture behavior for different rates of fluid injection, slow (megapascals per hour) versus fast (megapascals per second). We find that for the fast injection rates, dynamic ruptures are triggered at lower pressure levels and over spatial scales much smaller than the quasistatic theoretical estimates of nucleation sizes, suggesting that such fast injection rates constitute dynamic loading. In contrast, the relatively slow injection rates result in gradual nucleation processes, with the fluid spreading along the interface and causing stress changes consistent with gradually accelerating slow slip. The resulting dynamic ruptures propagating over wetted interfaces exhibit dynamic stress drops almost twice as large as those over the dry interfaces. These results suggest the need to take into account the rate of the pore-pressure increase when considering nucleation processes and motivate further investigation on how friction properties depend on the presence of fluids.

The close connection between fluids and faulting has been revealed by a large number of observations, both in tectonic settings and during human activities, such as wastewater disposal associated with oil and gas extraction, geothermal energy production, and CO₂ sequestration (1–11). On and around tectonic faults, fluids also naturally exist and are added at depth due to rock-dehydration reactions (12–15). Fluid-induced slip behavior can range from earthquakes to slow, creeping motion. It has long been thought that creeping and seismogenic fault zones have little to no spatial overlap. Nonetheless, growing evidence suggests that the same fault areas can exhibit both slow and dynamic slip (16–19). The existence of large-scale slow slip in potentially seismogenic areas has been revealed by the presence of transient slow-slip events in subduction zones (16, 18) and proposed by studies investigating the physics of foreshocks (20–22).

Numerical and laboratory modeling has shown that such complex fault behavior can result from the interaction of fluid-related effects with the rate-and-state frictional properties (9, 14, 19, 23, 24); other proposed rheological explanations for complexities in fault stability include combinations of brittle and viscous rheology (25) and friction-to-flow transitions (26). The interaction of frictional sliding and fluids results in a number of coupled and competing mechanisms. The fault shear resistance $\tau_{\mathrm{res}}$ is typically described by a friction model that linearly relates it to the effective normal stress $\bar{\sigma}_n$ via a friction coefficient $f$:

$$\tau_{\mathrm{res}}=f\,\bar{\sigma}_n=f(\sigma_n-p),\qquad[1]$$

where $\sigma_n$ is the normal stress acting across the fault and $p$ is the pore pressure. Clearly, increasing the pore pressure $p$ would reduce the fault frictional resistance, promoting the onset of slip. However, such slip need not be fast enough to radiate seismic waves, as would be characteristic of an earthquake, but can be slow and aseismic. In fact, the critical spatial scale $h^*$ that the slipping zone must reach in order to initiate an unstable, dynamic event is inversely proportional to the effective normal stress (27, 28) and hence increases with increasing pore pressure, promoting stable slip. This stabilizing effect of increasing fluid pressure holds for both linear slip-weakening and rate-and-state friction; it occurs because lower effective normal stress results in lower fault weakening during slip for the same friction properties. For example, the general form for two-dimensional (2D) theoretical estimates of this so-called nucleation size, $h^*$, on rate-and-state faults with steady-state, velocity-weakening friction is given by:

$$h^*=\frac{\mu^* D_{RS}}{F(a,b)\,(\sigma_n-p)},\qquad[2]$$

where $\mu^*=\mu/(1-\nu)$ for modes I and II, and $\mu^*=\mu$ for mode III (29); $D_{RS}$ is the characteristic slip distance; and $F(a,b)$ is a function of the rate-and-state friction parameters $a$ and $b$. The function $F(a,b)$ depends on the specific assumptions made to obtain the estimate: $F_{RR}(a,b)=4(b-a)/\pi$ (ref. 27, equation 40) for a linearized stability analysis of steady sliding, or $F_{RA}(a,b)=\pi(b-a)^2/(2b)$, with $a/b>1/2$, for quasistatic crack-like expansion of the nucleation zone (ref. 30, equation 42).

Hence, an increase in pore pressure induces a reduction in the effective normal stress, which both promotes slip due to lower frictional resistance and increases the critical length scale $h^*$, potentially resulting in slow, stable fault slip instead of fast, dynamic rupture. Indeed, recent field and laboratory observations suggest that fluid injection triggers slow slip first (4, 9, 11, 31).
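Eq. 2 can be evaluated directly to see how rising pore pressure enlarges the nucleation size. In the sketch below, both F(a, b) estimates are implemented; the material and friction numbers are illustrative PMMA-like guesses, not measured values.

```python
import numpy as np

def nucleation_size(mu, nu, d_rs, a, b, sigma_n, p, mode="II", estimate="RA"):
    """2D rate-and-state nucleation size h* (Eq. 2) at effective stress sigma_n - p.
    estimate='RR': F = 4(b-a)/pi       (linearized stability of steady sliding)
    estimate='RA': F = pi*(b-a)^2/(2b) (quasistatic crack-like expansion, a/b > 1/2)"""
    mu_star = mu if mode == "III" else mu / (1.0 - nu)
    if estimate == "RR":
        F = 4.0 * (b - a) / np.pi
    else:
        assert a / b > 0.5, "crack-like estimate assumes a/b > 1/2"
        F = np.pi * (b - a) ** 2 / (2.0 * b)
    return mu_star * d_rs / (F * (sigma_n - p))

# illustrative (not measured) numbers: h* grows as pore pressure p rises
for p in (0.0e6, 2.0e6, 4.0e6):
    h = nucleation_size(mu=2.0e9, nu=0.35, d_rs=1e-6, a=0.011, b=0.016,
                        sigma_n=5.0e6, p=p)
    print(f"p = {p/1e6:.0f} MPa -> h* = {h*100:.1f} cm")
```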
Numerical modeling based on these effects, either by themselves or with an additional stabilizing effect of shear-layer dilatancy and the associated drop in fluid pressure, has been successful in capturing a number of properties of slow-slip events observed on natural faults and in field fluid-injection experiments (14, 24, 32–34). However, understanding the dependence of the fault response on the specifics of the pore-pressure increase remains elusive. Several studies suggest that the nucleation size can depend on the loading rate (35–38), which would imply that the nucleation size should also depend on the rate of frictional strength change and hence on the rate of change of the pore fluid pressure. The dependence of the nucleation size on evolving pore fluid pressure has also been theoretically investigated (39). However, the commonly used estimates of the nucleation size (Eq. 2) have been developed for faults under spatially and temporally uniform effective stress, which is clearly not the case for fluid-injection scenarios. In addition, the friction properties themselves may change in the presence of fluids (40–42). The interaction between shear and fluid effects can be further affected by fault-gouge dilation/compaction (40, 43–45) and thermal pressurization of pore fluids (42, 46–48).

Recent laboratory investigations have been quite instrumental in uncovering the fundamentals of fluid–faulting interactions (31, 45, 49–57). Several studies have indicated that the fluid-pressurization rate, rather than the injection volume, controls slip, slip rate, and stress drop (31, 49, 57). Rapid fluid injection may produce pressure heterogeneities, influencing the onset of slip. The degree of heterogeneity depends on the balance between the hydraulic diffusion rate and the fluid-injection rate, with higher injection rates promoting the transition from drained to locally undrained conditions (31). Fluid pressurization can also interact with friction properties and produce dynamic slip along rate-strengthening faults (50, 51).

In this study, we investigate the relation between the rate of pressure increase on the fault and spontaneous rupture nucleation due to fluid injection by laboratory experiments in a setup that builds on and significantly develops the previous generations of the laboratory earthquake setup of Rosakis and coworkers (58, 59). The previous versions of the setup have been used to study key features of dynamic ruptures, including sub-Rayleigh to supershear transition (60); rupture directionality and limiting speeds due to bimaterial effects (61); pulse-like versus crack-like behavior (62); opening of thrust faults (63); and friction evolution (64). A recent innovation in the diagnostics, featuring ultrahigh-speed photography in conjunction with digital image correlation (DIC) (65), has enabled the quantification of the full-field behavior of dynamic ruptures (66–68), as well as the characterization of the local evolution of dynamic friction (64, 69). In these prior studies, earthquake ruptures were triggered by the local pressure release due to an electrical discharge. This nucleation procedure produced only dynamic ruptures, due to the nearly instantaneous normal stress reduction.

To study fault slip triggered by fluid injection, we have developed a laboratory setup featuring a hydraulic circuit capable of injecting pressurized fluid onto the fault plane of a specimen and a set of experimental diagnostics that enables us to detect both slow and fast fault slip and stress changes.
The range of fluid-pressure time histories produced by this setup results in both quasistatic and dynamic rupture nucleation; the diagnostics allow us to capture the nucleation processes, as well as the resulting dynamic rupture propagation. In particular, here we explore two injection techniques: procedure 1, a gradual fluid-pressure ramp-up, and procedure 2, a sharp one. An array of strain gauges, placed on the specimen’s surface along the fault, captures the strain (translated into stress) time histories over a wide range of temporal scales, spanning from microseconds to tens of minutes. Once dynamic ruptures nucleate, an ultrahigh-speed camera records images of the propagating ruptures, which are turned into maps of full-field displacements, velocities, and stresses by a tailored DIC analysis. One advantage of using a specimen made of an analog material, such as the poly(methyl methacrylate) (PMMA) used in this study, is its transparency, which allows us to look at the interface through the bulk and observe fluid diffusion over the interface. Another important advantage of using PMMA is that its much lower shear modulus results in much smaller nucleation sizes h* than those for rocks, allowing the experiments to produce both slow and fast slip in samples of manageable sizes.

We start by describing the laboratory setup and the diagnostics monitoring the pressure evolution and the slip behavior. We then present and discuss the different slip responses measured as a result of slow versus fast fluid injection and interpret our measurements by using the rate-and-state friction framework and a pressure-diffusion model.

6.
Advances in polymer chemistry over the last decade have enabled the synthesis of molecularly precise polymer networks that exhibit homogeneous structure. These precise polymer gels create the opportunity to establish true multiscale, molecular to macroscopic, relationships that define their elastic and failure properties. In this work, a theory of network fracture that accounts for loop defects is developed by drawing on recent advances in network elasticity. This loop-modified Lake–Thomas theory is tested against both molecular dynamics (MD) simulations and experimental fracture measurements on model gels, and good agreement between theory, which does not use an enhancement factor, and measurement is observed. Insight into the local and global contributions to energy dissipated during network failure and their relation to the bond dissociation energy is also provided. These findings enable a priori estimates of fracture energy in swollen gels where chain scission becomes an important failure mechanism.

Models that link materials structure to macroscopic behavior can account for multiple levels of molecular structure. For example, the statistical, affine deformation model connects the elastic modulus $E$ to the molecular structure of a polymer chain,

$$E_{\mathrm{aff}}=3\nu k_bT\left(\frac{\phi_o^{1/3}R_o}{\phi^{1/3}R}\right)^{2},\qquad[1]$$

where $\nu$ is the density of chains, $\phi$ is the polymer volume fraction, $R$ is the end-to-end distance, $\phi_o$ and $R_o$ represent the parameters taken in the reference state, which in this work is assumed to be the reaction concentration, and $k_bT$ is the available thermal energy, where $k_b$ is Boltzmann’s constant and $T$ is temperature (1–6). Refinements to this model that account for network-level structure, such as the presence of trapped entanglements or the number of connections per junction, have been developed (7–11). Further refinements to the theory of network elasticity have been developed to account for dynamic processes such as chain relaxation and solvent transport (12–17). Together these refinements link network elasticity to chain-level molecular structure, network-level structure, and the dynamic processes that occur at both size scales.

While elasticity has been connected to multiple levels of molecular structure, models for network fracture have not developed to a similar extent. The fracture energy $G_c$ typically depends on the large-strain deformation behavior of polymer networks, making it experimentally difficult to separate the elastic energy released upon fracture from that dissipated through dynamic processes (18–26). In fact, most fracture theories have been developed at the continuum scale and have focused on modeling dynamic dissipation processes (27). An exception to this is the theory of Lake and Thomas, which connects the elastic energy released during chain scission to chain-level structure,

$$G_{c,\mathrm{LT}}=\frac{\text{Chains}}{\text{Area}}\times\frac{\text{Energy Dissipated}}{\text{Chain}}=\nu R_o N U,\qquad[2]$$

where $NU$ is the total energy released when a chain ruptures, in which $N$ represents the number of monomer segments in the chain and $U$ the energy released per monomer (26).

While this model was first introduced in 1967, experimental attempts to verify Lake–Thomas theory as an explicit model, as summarized in SI Appendix, have been unsuccessful. Ahagon and Gent (28) and Gent and Tobias (29) attempted to do this on highly swollen networks at elevated temperature but found that, while the scalings from Eq. 2 work well, an enhancement factor was necessary to observe agreement between theory and experiment. This led many researchers to conclude that Lake–Thomas theory worked only as a scaling argument. In 2008, Sakai et al. (30) introduced a series of end-linked, tetrafunctional, star-like poly(ethylene glycol) (PEG) gels. Scattering measurements indicated a lack of the nanoscale heterogeneities that are characteristic of most polymer networks (30–32). Fracture measurements on these well-defined networks were performed, and it was again observed that an enhancement factor was necessary to realize explicit agreement between experiment and theory (33). Arora et al. (34) recently attempted to address this discrepancy by accounting for loop defects; however, different assumptions were used when inputting $U$ to calculate Lake–Thomas theory values, which again required the use of an enhancement factor to achieve quantitative agreement.
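Eq. 2 is simple enough to evaluate numerically. The sketch below does so with order-of-magnitude stand-in values for the chain density, strand length, and bond dissociation energy; these are hypothetical inputs, not the parameters of the gels studied here.

```python
def lake_thomas_gc(nu, R0, N, U):
    """Lake-Thomas fracture energy (Eq. 2): Gc = (nu * R0) * (N * U), where
    nu*R0 is the areal density of strands crossing the crack plane and N*U
    is the energy released when one strand of N monomers ruptures."""
    return nu * R0 * N * U

# order-of-magnitude stand-ins for a PEG-like gel (hypothetical, not fitted):
# nu ~ 1e25 strands/m^3, R0 ~ 5 nm, N = 200 monomers, U ~ 5.6e-19 J/monomer
Gc = lake_thomas_gc(nu=1e25, R0=5e-9, N=200, U=5.6e-19)
print(f"Gc ~ {Gc:.1f} J/m^2")  # a few J/m^2, the right scale for simple gels
```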
In this work we demonstrate that refining the Lake–Thomas theory to account for loop defects, while using the full bond dissociation energy to represent U, yields excellent agreement between the theory and both simulation and experimental data without the use of any adjustable parameters.

PEG gels synthesized via telechelic end-linking reactions create the opportunity to build upon previous theory to establish true multiscale, molecular-to-macroscopic relationships that define the fracture response of polymer networks. This paper combines pure shear notch tests, molecular dynamics (MD) simulations, and theory to quantitatively extend the concept of network fracture without the use of an enhancement factor. First, the control of molecular-level structure in end-linked gel systems is discussed. Then, the choice of molecular parameters used to estimate chain- and network-level properties is discussed. Experimental and MD simulation methods used when fracturing model end-linked networks are then presented. A theory of network fracture that accounts for loop defects is developed, in the context of other such models that have emerged recently, and tested against data from experiments and MD simulations. Finally, a discussion of the local and global energy dissipated during failure of the network is presented.

7.
Mechanical properties are fundamental to structural materials, where dislocations play a decisive role in describing their mechanical behavior. Although the high yield stresses of multiprincipal element alloys (MPEAs) have received extensive attention in the last decade, their mechanistic origins remain elusive. Our multiscale study of density functional theory, atomistic simulations, and high-resolution microscopy shows that the excellent mechanical properties of MPEAs have diverse origins. The strengthening effects through Shockley partials and stacking faults can be decoupled in MPEAs, breaking the conventional wisdom that low stacking fault energies are coupled with wide partial dislocations. This study clarifies the mechanistic origins of the strengthening effects, laying the foundation for physics-informed predictive models for materials design.

Multiprincipal element alloys (MPEAs) have triggered ever-increasing interest from the physics and materials science community due to their huge unexplored compositional space and superior physical, mechanical, and functional properties (1–12). They also provide an ideal platform to study fundamental physical mechanisms (6, 9, 13, 14). With the rise of MPEAs, understanding their mechanical properties has become a central topic in materials science in the last decade. In face-centered cubic (fcc) MPEAs, the motion of partial dislocations (Shockley partials) and their associated stacking faults (SFs) defines their mechanical properties. Alloys with low SF energies (SFEs) have more extended SFs, which are generally believed to provide more strength and ductility through twinning-induced plasticity (TWIP) and transformation-induced plasticity (TRIP) mechanisms (15–17).

Although extensive endeavors have been made, the commonalities in the origins of the high yield stresses shared by many MPEAs remain elusive. Among the most common intrinsic contributions to the yield stress are the lattice friction (or Peierls stress) and solid-solution strengthening (18–22). Since the birth of MPEAs, the relative importance of the Peierls stress among the other contributions to the yield stress, including the solid-solution strengthening effect, has been a matter of controversy (18, 21–23). Many researchers assume small Peierls stresses based on the common wisdom of conventional alloys and pure metals (24, 25) and on the low SFEs in MPEAs; low SFEs usually accompany small Peierls stresses. Overall, this controversy originates from the lack of accurate dislocation geometry in MPEAs, which would allow for a direct, critical evaluation of the Peierls stress. There are reports on the dislocation geometry in MPEAs, but almost all of them focused on the widths of SFs (26–28). In contrast, the core widths of Shockley partials are rarely reported for MPEAs, partly due to the difficulty of the measurements and partly due to unawareness of their importance. To address this issue, we need a very accurate determination of the core width of the Shockley partials. It is an important input parameter for mechanical simulations and various theories and models (21, 29–31). Here, we adopt three of the most extensively studied MPEAs, NiCoCr, VCoNi, and CoCrFeNiMn, and their only common fcc element, Ni, to address the above issues.

The commonalities in the origins of the high yield stresses shared by the MPEAs can be indicated by the minimum energy profile along the dislocation motion path, i.e., the increased energies introduced by generalized SFEs (GSFEs; Fig. 1A). The local minima of the curves are the SFEs, and the maxima are the theoretical energy barriers for pure shearing, which is a good indicator of changes in the Peierls stresses. Assisted by accurate density functional theory (DFT), we compute GSFE curves for several representative MPEAs and their common fcc component Ni. This identifies a surprising fact: One of the representative MPEAs, NiCoCr, has a decoupled strengthening effect, i.e., it has a narrower Shockley-partial dislocation core than pure Ni, although its SF is much wider than in Ni. Usually, in fcc alloys, when the SFE is lower, the unstable SFE (USFE; the maximal GSFE) is also lower; the two are coupled. Examples include the two other MPEAs, VCoNi and CoCrFeNiMn, as well as many Mg alloys (basal-plane dislocations) (25) and Al alloys (32). However, NiCoCr does not follow this convention.
The understanding from multiscale simulations, atomistic simulations, and the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images rationalizes the narrow core of the Shockley partials. These results clearly reveal the diverse and decoupled mechanistic origins of the strengthening effects in the MPEAs with excellent mechanical properties.

Fig. 1. GSFEs of three representative MPEAs and pure Ni. (A) The schematic for the generation of GSFs along the slip direction. The displacement 0.75 is equivalent to −0.25 due to the adopted periodic boundary condition. (B) The atom models at two representative displacements for GSFs. (C) The dashed lines are fits of the data points to the equation γ = γ₀sin²(πx) + (γ_u − γ₀/2)sin²(2πx) (64, 65). (D) The GSFEs in C are along the path indicated by the white arrows on the gamma surface, i.e., the minimum energy projected along the path denoted by the orange arrow. The GSFE curves reveal the origin of the wide SF and the smaller half-width of the Shockley partial in NiCoCr compared with Ni. One needs to decrease the SFE while increasing γ_u in order to optimize the mechanical properties.
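The GSFE fit quoted in Fig. 1C can be reproduced with a standard least-squares routine. In the sketch below the "DFT data" are synthetic points generated for illustration; only the fitting function itself comes from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gsfe(x, gamma0, gamma_u):
    """Two-term sinusoidal GSFE model along the fcc slip path (displacement x
    in units of the partial Burgers vector): the local minimum at x = 0.5
    gives the stable SFE (gamma0), and the value at x = 0.25 equals gamma_u."""
    return gamma0 * np.sin(np.pi * x) ** 2 + \
           (gamma_u - gamma0 / 2) * np.sin(2 * np.pi * x) ** 2

# hypothetical data points (mJ/m^2) standing in for a computed GSFE curve
x_dft = np.linspace(0.0, 1.0, 11)
g_dft = gsfe(x_dft, 20.0, 280.0) + np.random.default_rng(1).normal(0, 3, 11)

popt, pcov = curve_fit(gsfe, x_dft, g_dft, p0=(50.0, 300.0))
gamma0_fit, gamma_u_fit = popt   # stable SFE and unstable SFE from the fit
```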

8.
The Earth's inner core started forming when molten iron cooled below the melting point. However, the nucleation mechanism, which is a necessary step of crystallization, has not been well understood. Recent studies have found that it requires an unrealistic degree of undercooling to nucleate the stable, hexagonal, close-packed (hcp) phase of iron that is unlikely to be reached under core conditions and age. This contradiction is referred to as the inner core nucleation paradox. Using a persistent embryo method and molecular dynamics simulations, we demonstrate that the metastable, body-centered, cubic (bcc) phase of iron has a much higher nucleation rate than does the hcp phase under inner core conditions. Thus, the bcc nucleation is likely to be the first step of inner core formation, instead of direct nucleation of the hcp phase. This mechanism reduces the required undercooling of iron nucleation, which provides a key factor in solving the inner core nucleation paradox. The two-step nucleation scenario of the inner core also opens an avenue for understanding the structure and anisotropy of the present inner core.

The core plays a key role in the Earth’s evolution. The present core contains two major parts, a solid inner core and a liquid outer core. Iron dominates both parts, with a small amount of light elements (1). The solid core is generally believed to be hexagonal close-packed (hcp) iron, while the possible existence of body-centered cubic (bcc) iron has also been suggested (2–5). The growth of the solid inner core is believed to be the major driving force of the present geodynamo, providing the main power source for convection in the liquid core (6, 7). Despite its importance, the initial formation of the solid core, which directly relates to its thermal evolution and Earth’s history, is far from being completely understood (8–12). Most of Earth’s thermal history models assume that the inner core started to crystallize when molten iron cooled right below its melting temperature at the Earth’s center (7). However, in practice, nucleation does not happen at the melting point but requires some undercooling because of the formation of the solid–liquid interface (SLI) that accompanies it. While the bulk solid phase is thermodynamically favored, the SLI costs energy. These two factors lead to a nucleation barrier ΔG, which is described in classical nucleation theory (CNT) (13) as

$$\Delta G=N\Delta\mu+A\gamma,\qquad[1]$$

where $N$ is the nucleus size, $\Delta\mu$ (<0) is the free energy difference between the bulk solid and liquid, $\gamma$ (>0) is the SLI free energy, and $A$ is the SLI area. The liquid must be cooled sufficiently below the melting temperature to overcome the free-energy barrier during thermal fluctuations. After considering this mechanism, it was found that a very large undercooling of ∼1,000 K is required for the nucleation of hcp iron in the Earth’s core (14). However, considering the slow cooling rate of ∼100 K/Gyr throughout the core’s history (15), it is impossible to reach such a large degree of undercooling inside the Earth within the inner core’s age. This “inner core nucleation paradox,” recently described by Huguet et al. (14), strongly challenges the current understanding of the inner core formation process. While Huguet et al.’s argument relies on a few estimations of thermodynamic quantities, Davies et al. also confirmed the paradox with atomic-scale simulations (16). Even considering the effect of light elements on the nucleation process, it still requires 675 K of undercooling to nucleate hcp iron, nearly impossible to reach in the Earth’s core (16).

CNT was proposed more than a century ago, and its formalism is the most widely used to describe nucleation phenomena nowadays. The simplest scenario in CNT assumes a single nucleation pathway where only the nucleus of the thermodynamically stable phase forms and grows toward the bulk phase. This was the situation considered in refs. 14 and 15, in which the authors assumed that the melt in the Earth’s core crystallized directly into the hcp phase. Recent studies have shown that nucleation can be a multistep process that includes multiple intermediate stages and phases (17–19). While the CNT concept of nucleus formation is still valid under these situations, phase competition must be considered (18, 19). Therefore, instead of the single-pathway scenario, we can consider a complex process in which nucleation is facilitated by forming an intermediate phase with a high nucleation rate. For example, it has been observed that the bcc phase can nucleate before the face-centered cubic (fcc) or hcp phases in a few alloys in which the fcc/hcp phase is the most stable one (20–24).
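For a spherical nucleus, Eq. 1 yields the familiar critical radius and barrier. The sketch below evaluates both as a function of undercooling, with Δµ linearized near the melting point; the latent heat, interfacial energy, and density used here are illustrative stand-ins, not iron-at-core-conditions values.

```python
import numpy as np

def cnt_barrier(dT, Tm, L, gamma, rho_n):
    """Classical nucleation theory for a spherical nucleus:
    dG(r) = (4pi/3) r^3 rho_n dmu + 4pi r^2 gamma, with dmu ~ -L*dT/Tm per atom
    (linearized near the melting point Tm). Returns (r*, dG*).
    Inputs are illustrative stand-ins, not core-condition iron values."""
    dmu = -L * dT / Tm                       # J per atom, < 0 when undercooled
    r_star = 2.0 * gamma / (-rho_n * dmu)    # from dG'(r*) = 0
    dG_star = (16.0 * np.pi / 3.0) * gamma**3 / (rho_n * dmu) ** 2
    return r_star, dG_star

kB = 1.380649e-23
for dT in (200.0, 500.0, 1000.0):            # undercooling in K
    r, dG = cnt_barrier(dT, Tm=6000.0, L=3e-20, gamma=1.0, rho_n=8e28)
    print(f"dT = {dT:5.0f} K: r* = {r*1e9:.2f} nm, dG* = {dG/(kB*6000):.0f} kBT")
# even at dT = 1000 K the barrier is ~10^3 kBT with these stand-in numbers,
# illustrating why homogeneous hcp nucleation is so strongly suppressed
```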
Could the bcc phase also facilitate hcp iron nucleation and relate to the inner core nucleation paradox? Making a quantitative prediction about such complex nucleation processes is a challenging problem. In addition to the extreme conditions in the core, nucleation involves microscopic length scales that are extremely hard to probe in real time, even with state-of-the-art measurements (25). Hence, it requires computer simulations, particularly large-scale molecular dynamics (MD), to reproduce the temporal evolution of the liquid into the crystal (26). Unfortunately, nucleation under Earth’s core conditions is a rare event that occurs on the geological time scale, far beyond the reach of conventional MD simulations. Besides, large-scale MD simulations require semiempirical potentials to describe atomic interactions, and the outcome may depend heavily on the potential’s quality (27). In this work, we assess the inner core nucleation process, accounting for the competition between the bcc and hcp phases during nucleation, using the persistent embryo method (PEM) (28) to overcome the significant time limitation of conventional MD simulations of nucleation.

9.
10.
We have combined ultrasensitive force-based spin detection with high-fidelity spin control to achieve NMR diffraction (NMRd) measurement of ~2 million ³¹P spins in a (50 nm)³ volume of an indium-phosphide (InP) nanowire. NMRd is a technique originally proposed for studying the structure of periodic arrangements of spins, with complete access to the spectroscopic capabilities of NMR. We describe two experiments that realize NMRd detection with subangstrom precision. In the first experiment, we encode a nanometer-scale spatial modulation of the z-axis magnetization of ³¹P spins and detect the period and position of the modulation with a precision of <0.8 Å. In the second experiment, we demonstrate an interferometric technique, utilizing NMRd, to detect an angstrom-scale displacement of the InP sample with a precision of 0.07 Å. The diffraction-based techniques developed in this work extend the Fourier-encoding capabilities of NMR to the angstrom scale and demonstrate the potential of NMRd as a tool for probing the structure and dynamics of nanocrystalline materials.

Scattering techniques that employ coherent sources, such as X-rays, neutrons, and electrons, are universal tools in many branches of natural science for exploring the structure of matter. In crystalline materials, these approaches provide a direct and efficient means of characterizing the periodicity of charge and magnetic order. MRI, like other scattering approaches, is a reciprocal-space technique, in which the measured signal is proportional to the Fourier transform of the spin density. This similarity between MRI and scattering was recognized very early in the development of MRI and led Mansfield and Grannell in 1973 to propose NMR “diffraction” (NMRd) as a method for determining the lattice structure of crystalline solids (1–3), taking advantage of the chemical specificity of NMR.

The main challenge to achieving atomic-scale NMRd lies in the difficulty of generating a sufficiently large wavenumber k, capable of encoding a relative phase difference as large as 2π between adjacent spins on a lattice separated by angstrom-scale distances. For example, the largest encoding wavenumbers achieved in clinical high-resolution MRI scanners are of order k/(2π) ∼ 10⁴ m⁻¹, more than a factor of 10⁵ smaller than what is needed to measure typical atomic spacings in condensed-matter systems (4). Consequently, while MRI has become a transformative technique in medical science, earning Sir Peter Mansfield and Paul Lauterbur the Nobel Prize in Physiology or Medicine, the original vision of NMRd as a method for exploring material structure has not yet been realized.

The realization of atomic-scale NMRd would be a powerful tool for characterizing periodic nuclear spin structures, combining the spectroscopic capabilities of NMR with spatial encoding at condensed matter’s fundamental length scale. NMRd is a phase-sensitive technique that permits real-space reconstruction of the spin density, without the loss of phase information common to scattering techniques, such as X-rays, that measure the scattered-field intensity (5). Being nondestructive and particularly sensitive to hydrogen, NMRd could be of great importance in the study of ordered biological systems, such as protein nanocrystals that are of great interest in structural biology (6, 7). Furthermore, the combination of scattering with NMR’s rich repertoire of spectroscopic tools opens additional avenues for spatially resolved studies of nuclear-spin Hamiltonians (e.g., chemical shifts or spin–spin interactions), which are currently achieved only through increasingly complex and indirect methods (8). Finally, NMRd could be used to study quantum many-body dynamics on the atomic scale. NMR scattering experiments have previously been used in the direct measurement of spin diffusion in CaF₂ on the micrometer scale (9). Experiments on many-body dynamics have also been conducted in engineered quantum simulators, such as ultracold atoms (10–12), trapped ions (13–15), superconducting circuits (16–18), and quantum dots (19). However, these measurements have thus far been limited to small-scale quantum systems of at most hundreds of qubits. Angstrom-scale NMRd measurements would permit studying the dynamics of complex large-scale spin networks in condensed-matter systems on length scales as short as the lattice spacing.

Over the past two decades, the principal technologies needed to encode nuclear spin states with wavenumbers of order 1 Å⁻¹ have been developed in the context of force-detected nanoMRI (20–26).
In this work, we report two experiments that utilize key advances in nanoMRI technology—namely, the ability to generate large time-dependent magnetic-field gradients and the ability to detect and coherently control nanoscale ensembles of nuclear spins (27–31)—to generate encoding wavenumbers as large as k/(2π) ∼ 0.48 Å⁻¹.

Our first experiment demonstrates the use of spatial spin-state modulation to encode position information the way it was envisioned in the initial NMRd proposal. Phase-sensitive NMRd detection enables us to determine the position and period of a “diffraction grating” with a precision of <0.8 Å. The diffraction grating itself is a z-axis ³¹P spin-magnetization modulation, the mean period of which is 4.5 nm in our (50 nm)³ detection volume. Our second experiment utilizes the spatially modulated spin phase in an alternative way—as a label for the physical displacement of the spins. Our interferometric technique detects an angstrom-scale displacement of the indium-phosphide (InP) sample with a precision of 0.07 Å.
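The phase-sensitive displacement readout can be illustrated with a one-dimensional toy model: a rigid shift δ of spins whose magnetization is encoded at wavenumber k multiplies the complex signal by exp(ikδ). The lattice spacing and wavenumber below are assumptions chosen for illustration, not the experimental values.

```python
import numpy as np

def encoded_signal(positions, k, delta=0.0):
    """Complex 'diffraction' signal from spins whose z-magnetization was
    modulated at wavenumber k: S = sum_j exp(i k (x_j + delta)). A rigid
    displacement delta multiplies S by exp(i k delta), so the phase of the
    signal reads out delta directly (for k*delta < pi)."""
    x = np.asarray(positions) + delta
    return np.exp(1j * k * x).sum()

# hypothetical 1D chain of 31P spins with an InP-like spacing (assumed)
a = 5.87e-10
x = np.arange(100) * a
k = 2 * np.pi * 0.048e10                   # k/(2pi) ~ 0.048 A^-1, below the max
S0 = encoded_signal(x, k)
S1 = encoded_signal(x, k, delta=0.5e-10)   # 0.5 A rigid displacement
measured_delta = np.angle(S1 / S0) / k     # recovers ~0.5e-10 m from the phase
```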

11.
Our study of cholesteric lyotropic chromonic liquid crystals in cylindrical confinement reveals the topological aspects of cholesteric liquid crystals. The double-twist configurations we observe exhibit discontinuous layering transitions, domain formation, metastability, and chiral point defects as the concentration of chiral dopant is varied. We demonstrate that these distinct layer states can be distinguished by chiral topological invariants. We show that changes in the layer structure give rise to a chiral soliton similar to a toron, comprising a metastable pair of chiral point defects. Through the applicability of the invariants we describe to general systems, our work has broad relevance to the study of chiral materials.

Chiral liquid crystals (LCs) are ubiquitous, useful, and rich systems (1–4). From the first discovery of the liquid crystalline phase to the variety of chiral structures formed by biomolecules (5–9), the twisted structure, breaking both mirror and continuous spatial symmetries, is omnipresent. This unique structure also makes the chiral nematic (cholesteric) LC an essential material for applications utilizing the tunable, responsive, and periodic modulation of anisotropic properties.

The cholesteric is also a popular model system to study the geometry and topology of partially ordered matter. The twisted ground state of the cholesteric is often incompatible with confinement and external fields, exhibiting a large variety of frustrated and metastable director configurations accompanied by topological defects. Besides the classic example of cholesterics in a Grandjean−Cano wedge (10, 11), examples include cholesteric droplets (12–16), colloids (17–19), shells (20–22), tori (23, 24), cylinders (25–29), microfabricated structures (30, 31), and films between parallel plates with external fields (32–40). These structures are typically understood using a combination of nematic (achiral) topology (41, 42) and energetic arguments, for example, the highly successful Landau−de Gennes approach (43). However, traditional extensions of the nematic topological approach to cholesterics are known to be conceptually incomplete and difficult to apply in regimes where the system size is comparable to the cholesteric pitch (41, 44).

An alternative perspective, chiral topology, can give a deeper understanding of these structures (45–47). In this approach, the key role is played by the twist density, given in terms of the director field n by n · (∇ × n). This choice is not arbitrary; the Frank free energy prefers n · (∇ × n) = −q₀ = −2π/p₀ with a helical pitch p₀, and, from a geometric perspective, n · (∇ × n) ≠ 0 defines a contact structure (48). This allows a number of new integer-valued invariants of chiral textures to be defined (45). A configuration with a single sign of twist is chiral, and two configurations which cannot be connected by a path of chiral configurations are chirally distinct, and hence separated by a chiral energy barrier. Within each chiral class of configuration, additional topological invariants may be defined using methods of contact topology (45–48), such as layer numbers. Changing these chiral topological invariants requires passing through a nonchiral configuration. Cholesterics serve as model systems for the exploration of chirality in ordered media, and the phenomenon we describe here—metastability in chiral systems controlled by chiral topological invariants—has applicability to chiral order generally. This includes, in particular, chiral ferromagnets, where, for example, our results on chiral topological invariants apply to highly twisted nontopological Skyrmions (49, 50) (“Skyrmionium”).

Our experimental model for exploring the chiral topological invariants is the cholesteric phase of lyotropic chromonic LCs (LCLCs). The majority of experimental systems studied hitherto are based on thermotropic LCs with typical elastic and surface-anchoring properties. The aqueous LCLCs exhibit unusual elastic properties, that is, a very small twist modulus K₂ and a large saddle-splay modulus K₂₄ (51–56), often leading to chiral symmetry breaking of confined achiral LCLCs (53, 54, 56–61), which may enable us to access uncharted configurations and defects of topological interest.
For instance, in the layer configuration formed by cholesteric LCLCs doped with chiral molecules, the small K₂ provides energetic flexibility in the thickness of the cholesteric layer, that is, the repeating structure in which the director n twists by π. The large K₂₄ affords curvature-induced surface interactions, in combination with the weak anchoring strength of the lyotropic LCs (62–64).

We present a systematic investigation of the director configuration of cholesteric LCLCs confined in cylinders with degenerate planar anchoring, as a function of the chiral dopant concentration. We show that the structure of cholesteric configurations is controlled by higher-order chiral topological invariants. We focus on two intriguing phenomena observed in cylindrically confined cholesterics. First, the cylindrical symmetry gives rise to multiple local minima in the energy landscape and induces a discontinuous increase of twist angles, that is, a layering transition, as the dopant concentration increases. Additionally, the director configurations of local minima coexist as metastable domains with point-like defects between them. We demonstrate that a chiral layer-number invariant distinguishes these configurations, protects the distinct layer configurations (45), and explains the existence of the topological defect where the invariant changes.
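The twist density n · (∇ × n) that underlies these invariants is straightforward to evaluate numerically. The sketch below does so for an ideal cholesteric helix, for which the exact answer is −q₀ everywhere; the sign of the resulting field is the chirality diagnostic that generalizes to arbitrary configurations.

```python
import numpy as np

def twist_density(q0=1.0, z=None):
    """Twist density n . (curl n) for the ideal cholesteric helix
    n(z) = (cos(q0 z), sin(q0 z), 0), evaluated on a numerical grid.
    For a director depending on z only, curl n = (-dny/dz, dnx/dz, 0)."""
    if z is None:
        z = np.linspace(0, 4 * np.pi / q0, 2001)
    nx, ny = np.cos(q0 * z), np.sin(q0 * z)
    curl_x = -np.gradient(ny, z)
    curl_y = np.gradient(nx, z)
    return nx * curl_x + ny * curl_y      # ~ -q0 everywhere for this texture

tw = twist_density(q0=2.0)
print(tw.min(), tw.max())                 # both close to -2.0: single-sign twist
```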

12.
Macromolecular phase separation is thought to be one of the processes that drives the formation of membraneless biomolecular condensates in cells. The dynamics of phase separation are thought to follow the tenets of classical nucleation theory, and, therefore, subsaturated solutions should be devoid of clusters with more than a few molecules. We tested this prediction using in vitro biophysical studies to characterize subsaturated solutions of phase-separating RNA-binding proteins with intrinsically disordered prion-like domains and RNA-binding domains. Surprisingly, and in direct contradiction to expectations from classical nucleation theory, we find that subsaturated solutions are characterized by the presence of heterogeneous distributions of clusters. The distributions of cluster sizes, which are dominated by small species, shift continuously toward larger sizes as protein concentrations increase and approach the saturation concentration. As a result, many of the clusters encompass tens to hundreds of molecules, while fewer than 1% of the clusters in solution are mesoscale species that are several hundred nanometers in diameter. We find that cluster formation in subsaturated solutions and phase separation in supersaturated solutions are strongly coupled via sequence-encoded interactions. We also find that cluster formation and phase separation can be decoupled using solutes as well as specific sets of mutations. Our findings, which are concordant with predictions for associative polymers, implicate an interplay between networks of sequence-specific and solubility-determining interactions that, respectively, govern cluster formation in subsaturated solutions and the saturation concentrations above which phase separation occurs.

Phase separation of RNA-binding proteins with disordered prion-like domains (PLDs) and RNA-binding domains (RBDs) is implicated in the formation and dissolution of membraneless biomolecular condensates such as RNA–protein (RNP) granules (1–9). Macroscopic phase separation is a process whereby a macromolecule in a solvent separates into a dilute, macromolecule-deficient phase that coexists with a dense, macromolecule-rich phase (10, 11). In a binary mixture, the soluble phase, comprising dispersed macromolecules that are well mixed with the solvent, becomes saturated at a concentration designated as csat. Above csat, for total macromolecular concentrations ctot that are between the binodal and spinodal, phase separation of full-length RNA-binding proteins and PLDs is thought to follow classical nucleation theory (12–15).

In classical nucleation theories, clusters representing incipient forms of the new dense phase form within dispersed phases of supersaturated solutions defined by ctot > csat (16, 17). In the simplest formulation of classical nucleation theory (16–18), the free energy of forming a cluster of radius a is ΔF = −(4π/3)a³Δµρn + 4πa²γ. Here, Δµ is the difference in the chemical potential between the one-phase and two-phase regimes (see discussion in SI Appendix), which is positive in supersaturated solutions and negative in subsaturated solutions; ρn is the number of molecules per unit volume, and γ is the interfacial tension between the dense and dilute phases. At temperature T, in a seed-free solution, the degree of supersaturation s is defined as s ≡ Δµ/(RT) = ln(ctot/csat), where R is the ideal gas constant. Here, s is positive for ctot > csat, and, as s increases, cluster formation becomes more favorable. Above a critical radius a*, the bulk free energy gain of cluster formation overcomes the interfacial penalty, and the new dense phase grows in a thermodynamically downhill fashion. Ideas from classical nucleation theory have been applied to analyze and interpret the dynamics of phase separation in supersaturated solutions (12, 13, 15). Classical nucleation theories stand in contrast to two-step nucleation theories, which predict the existence of prenucleation clusters in supersaturated solutions (19–22). These newer theories hint at the prospect of there being interesting features in subsaturated solutions, where ctot < csat and s < 0.

The subsaturated regime, where s is negative, corresponds to the one-phase regime. Ignoring the interfacial tension, the free energy of realizing clusters with n molecules in subsaturated solutions is ΔF = −nΔµ. Therefore, the probability P(n) of forming a cluster of n molecules in a subsaturated solution is proportional to exp(sn). Accordingly, the relative probability P(n)/P(1) of forming clusters with n molecules will be exp(s(n − 1)). This quantity, which may be thought of as the concentration of clusters with n molecules, is negligibly small for clusters with more than a few molecules, irrespective of the degree of subsaturation s (a numerical sketch of these expressions follows this abstract). Is this expectation from classical nucleation theories valid? We show here that subsaturated solutions feature a rich distribution of species not anticipated by classical nucleation theories. We report results from measurements of cluster size distributions in subsaturated solutions of phase-separating RNA-binding proteins from the FUS-EWSR1-TAF15 (FET) family. We find that these systems form clusters in subsaturated solutions and that the cluster sizes follow heavy-tailed distributions.
The abundant species are always small clusters. However, as the total macromolecular concentration (ctot) increases, the distributions of cluster sizes shift continuously toward larger values. We discuss these findings in the context of theories for associative polymers (9, 23–30).
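The classical-nucleation-theory expressions quoted above are simple enough to evaluate directly. The following sketch (illustrative, made-up parameter values; not the paper's data) computes the critical radius a* = 2γ/(Δµρn) implied by ΔF(a), and the relative cluster concentrations P(n)/P(1) = exp(s(n − 1)) in a subsaturated solution.

```python
import numpy as np

R_GAS = 8.314        # J/(mol K)
KB = 1.380649e-23    # J/K

def cnt_free_energy(a, dmu, rho_n, gamma):
    """DF(a) = -(4pi/3) a^3 dmu rho_n + 4pi a^2 gamma (per cluster, SI units)."""
    return -(4.0 * np.pi / 3.0) * a**3 * dmu * rho_n + 4.0 * np.pi * a**2 * gamma

T = 300.0                      # K
# Supersaturated example: ctot = 2 csat, so s = dmu/RT = ln 2 > 0
s = np.log(2.0)
dmu = s * R_GAS * T            # J/mol
rho_n = 1.0                    # mol/m^3 (~1 mM dense phase; illustrative)
gamma = 1e-4                   # J/m^2 (illustrative interfacial tension)

a_star = 2.0 * gamma / (dmu * rho_n)             # radius where dDF/da = 0
barrier = cnt_free_energy(a_star, dmu, rho_n, gamma)
print(f"a* = {a_star * 1e9:.0f} nm, barrier = {barrier / (KB * T):.0f} kT")

# Subsaturated example: ctot = csat/2, so s < 0 and P(n)/P(1) = exp(s (n-1))
s_sub = np.log(0.5)
n = np.arange(1, 21)
rel = np.exp(s_sub * (n - 1))
print(f"P(20)/P(1) = {rel[-1]:.1e}")
```

With these illustrative numbers, CNT assigns 20-molecule clusters in a twofold-subsaturated solution a relative concentration of roughly 2 × 10⁻⁶, which is precisely the expectation that the measured heavy-tailed cluster distributions contradict.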

13.
14.
Molecular, polymeric, colloidal, and other classes of liquids can exhibit very large, spatially heterogeneous alterations of their dynamics and glass transition temperature when confined to nanoscale domains. Considerable progress has been made in understanding the related problem of near-interface relaxation and diffusion in thick films. However, the origin of “nanoconfinement effects” on the glassy dynamics of thin films, where gradients from different interfaces interact and genuine collective finite size effects may emerge, remains a longstanding open question. Here, we combine molecular dynamics simulations, probing 5 decades of relaxation, and the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory, addressing 14 decades in timescale, to establish a microscopic and mechanistic understanding of the key features of altered dynamics in freestanding films spanning the full range from ultrathin to thick films. Simulations and theory are in qualitative and near-quantitative agreement without use of any adjustable parameters. For films of intermediate thickness, the dynamical behavior is well predicted to leading order using a simple linear superposition of thick-film exponential barrier gradients, including a remarkable suppression and flattening of various dynamical gradients in thin films. However, in sufficiently thin films the superposition approximation breaks down due to the emergence of genuine finite size confinement effects. ECNLE theory extended to treat thin films captures the phenomenology found in simulation, without invocation of any critical-like phenomena, on the basis of interface-nucleated gradients of local caging constraints, combined with interfacial and finite size-induced alterations of the collective elastic component of the structural relaxation process.

Spatially heterogeneous dynamics in glass-forming liquids confined to nanoscale domains (1–7) play a major role in determining the properties of molecular, polymeric, colloidal, and other glass-forming materials (8), including thin films of polymers (9, 10) and small molecules (11–15), small-molecule liquids in porous media (2, 4, 16, 17), semicrystalline polymers (18, 19), polymer nanocomposites (20–22), ionomers (23–25), self-assembled block and layered copolymers (26–33), and vapor-deposited ultrastable molecular glasses (34–36). Intense interest in this problem over the last 30 y has also been motivated by the expectation that its understanding could reveal key insights concerning the mechanism of the bulk glass transition.

Considerable progress has been made for near-interface altered dynamics in thick films, as recently critically reviewed (1). Large-amplitude gradients of the structural relaxation time, τ(z,T), converge to the bulk value, τbulk(T), in an intriguing double-exponential manner with distance, z, from a solid or vapor interface (1–3, 37–42). This implies that the corresponding effective activation barrier, Ftotal(z,T,H) (where H is the film thickness), varies exponentially with z, as does the glass transition temperature, Tg (37). Thus, the fractional reduction in the activation barrier, ε(z,H), obeys the equation ε(z,H) ≡ 1 − Ftotal(z,T,H)/Ftotal,bulk(T) = ε0 exp(−z/ξF), where Ftotal,bulk(T) is the bulk temperature-dependent barrier and ξF a length scale of modest magnitude. Although the gradient of the reduction in absolute activation barriers becomes stronger with cooling, the amplitude of the fractional barrier reduction, quantified by ε0, and the range ξF of this gradient exhibit a weak or absent temperature dependence at the lowest temperatures accessed by simulations (typically with the strength of the temperature dependence of ξF decreasing rather than increasing on cooling), which extend to relaxation timescales of order 10⁵ ps. This finding raises questions regarding the relevance of critical-phenomena-like ideas for nanoconfinement effects (1). Partially due to this temperature invariance, coarse-grained and all-atom simulations (1, 37, 42, 43) have found a striking empirical fractional power-law decoupling relation between τ(z,T) and τbulk(T):

τ(z,T) ≈ τbulk(T)·[τbulk(T)]^(−ε(z)). [1]

Recent theoretical analysis suggests (44) that this behavior is consistent with a number of experimental data sets as well (45, 46). Eq. 1 also corresponds to a remarkable factorization of the temperature and spatial-location dependences of the barrier:

Ftotal(z,T) = [1 − ε(z)]·Ftotal,bulk(T). [2]

This finding indicates that the activation barrier for near-interface relaxation can be factored into two contributions: a z-dependent, but T-independent, "decoupling exponent," ε(z), and a temperature-dependent, but position-insensitive, bulk activation barrier, Ftotal,bulk(T). Eq. 2 further emphasizes that ε(z) is equivalent to an effective fractional barrier reduction factor (for a vapor interface), 1 − Ftotal(z,T,H)/Ftotal,bulk(T), that can be extracted from relaxation data.

In contrast, the origin of "nanoconfinement effects" in thin films, and how much of the rich thick-film physics survives when dynamic gradients from two interfaces overlap, is not well understood. The distinct theoretical efforts addressing aspects of the thick-film phenomenology (44, 47–50) mostly assume an additive summation of one-interface effects in thin films, thereby ignoring possibly crucial cooperative and whole-film finite-size confinement effects.
If the latter involve phase-transition-like physics, as per recent speculations (14, 51), one can ask the following: do new length scales emerge that might be truncated by the finite film size? Alternatively, does ultrathin-film phenomenology arise from a combination of two-interface superposition of the thick-film gradient physics and noncritical cooperative effects, perhaps in a property-, temperature-, and/or thickness-dependent manner?

Here, we answer these questions and establish a mechanistic understanding of thin-film dynamics for the simplest and most universal case: a symmetric freestanding film with two vapor interfaces. We focus on small molecules (modeled theoretically as spheres) and low to medium molecular weight unentangled polymers, which empirically exhibit quite similar alterations in dynamics under "nanoconfinement." We do not address anomalous phenomena [e.g., much longer gradient ranges (29), sporadic observations of two distinct glass transition temperatures (52, 53)] that are sometimes reported in experiments with very high molecular weight polymers and that may be associated with poorly understood chain-connectivity effects distinct from general glass-formation physics (54–56).

We employ a combination of molecular dynamics simulations and a zero-parameter extension to thin films of the Elastically Cooperative Nonlinear Langevin Equation (ECNLE) theory (57, 58). This theory has previously been shown to predict well both bulk activated relaxation over up to 14 decades (44–46) and the full single-gradient phenomenology in thick films (1). Here, we extend this theory to treat films of finite thickness, accounting for coupled interface and geometric confinement effects. We compare predictions of ECNLE theory to our previously reported (37, 43) and new simulations, which focus on the translational dynamics of films composed of a standard Kremer–Grest-like bead-spring polymer model (see SI Appendix). These simulations cover a wide range of film thicknesses (H, from 4 to over 90 segment diameters σ) and extend to low temperatures where the bulk alpha time is ∼0.1 μs (10⁵ Lennard-Jones time units τLJ).

The generalized ECNLE theory is found to be in agreement with simulation for all levels of nanoconfinement. We emphasize that this theory does not a priori assume any of the empirically established behaviors discovered using simulation (e.g., fractional power-law decoupling, double-exponential barrier gradient, gradient flattening) but rather predicts these phenomena based upon interfacial modifications of the two coupled contributions to the underlying activation barrier: local caging constraints and a long-ranged collective elastic field. It is notable that this strong agreement is found despite the fact that the dynamical ideas are approximate and that a simple hard-sphere fluid model is employed, in contrast to the bead-spring polymers used in simulation.
The basic units of length in simulation (bead size σ) and theory (hard-sphere diameter d) are expected to be proportional to within a prefactor of order unity, which we neglect in making comparisons.

As an empirical matter, we find from simulation that many features of thin-film behavior can be described to leading order by a linear superposition of the thick-film gradients in the activation barrier, that is,

ε(z,H) = 1 − Ftotal(z,T,H)/Ftotal,bulk(T) ≈ ε0[exp(−z/ξF) + exp(−(H − z)/ξF)], [3]

where the intrinsic decay length ξF is unaltered from its thick-film value and ε0 is a constant that, in the hypothesis of literal gradient additivity, is invariant to temperature and film thickness. We employ this functional form [originally suggested by Binder and coworkers (59)], based on a simple superposition of the two single-interface gradients, as a null hypothesis throughout this study: this form is what one expects if no new finite-size physics enters the thin-film problem relative to the thick film (a numerical sketch of Eqs. 1 and 3 follows this abstract).

However, we find that the superposition approximation progressively breaks down, and eventually fails entirely, in ultrathin films as a consequence of the emergence of a finite-size confinement effect. The ECNLE theory predicts that this failure is not tied to a phase-transition-like mechanism but rather is a consequence of two key coupled physical effects: 1) transfer of the surface-induced reduction of local caging constraints into the film, and 2) interfacial truncation and nonadditive modification of the collective elastic contribution to the activation barrier.
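To make Eqs. 1–3 concrete, the following sketch (illustrative values of ε0, ξF, and τbulk; not fit to the simulations discussed above) evaluates the two-interface superposition of Eq. 3 and propagates it through the fractional power-law decoupling of Eq. 1 for several film thicknesses.

```python
import numpy as np

def eps_film(z, H, eps0=0.4, xi=2.0):
    """Eq. 3: two-interface superposition of the thick-film barrier gradient."""
    return eps0 * (np.exp(-z / xi) + np.exp(-(H - z) / xi))

def tau(z, H, tau_bulk, **kw):
    """Eqs. 1 and 2: tau(z,T) = tau_bulk^(1 - eps(z,H))."""
    return tau_bulk ** (1.0 - eps_film(z, H, **kw))

tau_bulk = 1e5  # bulk alpha time in LJ time units (illustrative)
for H in (40.0, 8.0, 4.0):  # film thickness in segment diameters
    ratio = tau(H / 2.0, H, tau_bulk) / tau_bulk
    print(f"H = {H:4.1f}: tau(center)/tau_bulk = {ratio:.3g}")
# Thick film: the center recovers the bulk value. As H shrinks to a few xi,
# the two exponential tails overlap and even the film center is accelerated;
# the text identifies where this additive null hypothesis itself breaks down.
```

With these toy numbers the center of a 40σ film relaxes at essentially the bulk rate, while a 4σ film is faster by roughly a factor of 30, illustrating how strongly the two overlapping gradients can suppress the barrier even before genuinely nonadditive finite-size effects enter.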

15.
16.
Lyotropic chromonic liquid crystals are water-based materials composed of self-assembled cylindrical aggregates. Their behavior under flow is poorly understood, and quantitatively resolving the optical retardance of the flowing liquid crystal has so far been limited by the imaging speed of current polarization-resolved imaging techniques. Here, we employ a single-shot quantitative polarization imaging method, termed polarized shearing interference microscopy, to quantify the spatial distribution and the dynamics of the structures emerging in nematic disodium cromoglycate solutions in a microfluidic channel. We show that pure-twist disclination loops nucleate in the bulk flow over a range of shear rates. These loops are elongated in the flow direction and exhibit a constant aspect ratio that is governed by the nonnegligible splay-bend anisotropy at the loop boundary. The size of the loops is set by the balance between nucleation forces and annihilation forces acting on the disclination. The fluctuations of the pure-twist disclination loops reflect the tumbling character of nematic disodium cromoglycate. Our study, including experiment, simulation, and scaling analysis, provides a comprehensive understanding of the structure and dynamics of pressure-driven lyotropic chromonic liquid crystals and might open new routes for using these materials to control assembly and flow of biological systems or particles in microfluidic devices.

Lyotropic chromonic liquid crystals (LCLCs) are aqueous dispersions of organic disk-like molecules that self-assemble into cylindrical aggregates, which form nematic or columnar liquid crystal phases under appropriate conditions of concentration and temperature (1–6). These materials have gained increasing attention in both fundamental and applied research over the past decade, due to their distinct structural properties and biocompatibility (4, 7–14). Used as a replacement for isotropic fluids in microfluidic devices, nematic LCLCs have been employed to control the behavior of bacteria and colloids (13, 15–20).

Nematic liquid crystals form topological defects under flow, which gives rise to complex dynamical structures that have been extensively studied in thermotropic liquid crystals (TLCs) and liquid crystal polymers (LCPs) (21–29). In contrast to lyotropic liquid crystals, which are dispersed in a solvent and whose phase can be tuned by either concentration or temperature, TLCs do not need a solvent to possess a liquid-crystalline state, and their phase depends only on temperature (30). Most TLCs are shear-aligning nematics, in which the director evolves toward an equilibrium out-of-plane polar angle. Defects nucleate beyond a critical Ericksen number due to the irreconcilable alignment of the directors from surface anchoring and from shear alignment in the bulk flow (24, 31–33). With an increase in shear rate, the defect type can transition from π-walls (domain walls that separate regions whose director orientation differs by an angle of π) to ordered disclinations and then to a disordered chaotic regime (34). Recent efforts have aimed to tune and control the defect structures by understanding the relation between the selection of topological defect type and the flow field in flowing TLCs. Strategies to do so include tuning the geometry of microfluidic channels, inducing defect nucleation through the introduction of isotropic phases, or designing inhomogeneities in the surface anchoring (35–39). LCPs are typically tumbling nematics, for which α2α3 < 0, where α2 and α3 are the Leslie viscosities. This leads to a nonzero viscous torque for any orientation of the director, which allows the director to rotate in the shear plane (22, 29, 30, 40). The tumbling character of LCPs facilitates the nucleation of singular topological defects (22, 40). Moreover, the molecular rotational relaxation times of LCPs are longer than those of TLCs, and they can exceed the timescales imposed by the shear rate. As a result, the rheological behavior of LCPs is governed not only by spatial gradients of the director field through the Frank elasticity but also by changes in the molecular order parameter (25, 41–43). With increasing shear rate, topological defects in LCPs have been shown to transition from disclinations to rolling cells and then to worm-like patterns (25, 26, 43).

Topological defects occurring in the flow of nematic LCLCs have so far received much more limited attention (44, 45). At rest, LCLCs exhibit unique properties distinct from those of TLCs and LCPs (1, 2, 4–6, 44). In particular, LCLCs have significant elastic anisotropy compared to TLCs; the twist Frank elastic constant, K2, is much smaller than the splay and bend Frank elastic constants, K1 and K3. The resulting relative ease with which twist deformations can occur can lead to spontaneous symmetry breaking and the emergence of chiral structures in static LCLCs under spatial confinement, despite the achiral nature of the molecules (4, 46–51).
When driven out of equilibrium by an imposed flow, the average director field of LCLCs has been reported to align predominantly along the shear direction under strong shear but to reorient perpendicular to the shear direction below a critical shear rate (52–54). A recent study has revealed a variety of complex textures that emerge in simple shear flow of the nematic LCLC disodium cromoglycate (DSCG) (44). The tumbling nature of this liquid crystal leads to an enhanced sensitivity to shear rate. At shear rates γ̇ < 1 s⁻¹, the director realigns perpendicular to the flow direction, adopting the so-called log-rolling state characteristic of tumbling nematics. For 1 s⁻¹ < γ̇ < 10 s⁻¹, polydomain textures form due to the nucleation of pure-twist disclination loops, for which the rotation vector is parallel to the loop normal, and mixed wedge-twist disclination loops, for which the rotation vector is perpendicular to the loop normal (44, 55). For γ̇ > 10 s⁻¹, the disclination loops gradually transform into periodic stripes in which the director aligns predominantly along the flow direction (44) (these shear-rate regimes are summarized in the sketch following this abstract).

Here, we report on the structure and dynamics of topological defects occurring in the pressure-driven flow of nematic DSCG. A quantitative evaluation of such dynamics has so far remained challenging, in particular for fast flow velocities, because of the slow image acquisition rates of current quantitative polarization-resolved imaging techniques. Quantitative polarization imaging traditionally relies on three commonly used techniques: fluorescence confocal polarization microscopy, polarizing optical microscopy, and LC-Polscope imaging. Fluorescence confocal polarization microscopy can provide accurate maps of birefringence and orientation angle, but the fluorescent labeling may perturb the flow properties (56). Polarizing optical microscopy requires a mechanical rotation of the polarizers and multiple measurements, which severely limits the imaging speed. The LC-Polscope, an extension of conventional polarization optical microscopy, uses liquid crystal universal compensators to replace the compensator used in conventional polarization microscopes (57). This leads to an enhanced imaging speed and better compensation for polarization artifacts of the optical system. The need for multiple measurements to quantify retardance, however, still limits the acquisition rate of LC-Polscopes.

We overcome these challenges by using a single-shot quantitative polarization microscopy technique, termed polarized shearing interference microscopy (PSIM). PSIM combines circularly polarized light excitation with off-axis shearing interferometry detection. Using a custom polarization retrieval algorithm, we achieve single-shot mapping of the retardance, which allows us to reach imaging speeds limited only by the camera frame rate while preserving a large field of view and micrometer spatial resolution. We provide a brief discussion of the optical design of PSIM in Materials and Methods; further details of the measurement accuracy and imaging performance of PSIM are reported in ref. 58.

Using a combination of experiments, numerical simulations, and scaling analysis, we show that in the pressure-driven flow of nematic DSCG solutions in a microfluidic channel, pure-twist disclination loops emerge over a certain range of shear rates. These loops are elongated in the flow with a fixed aspect ratio.
We demonstrate that the disclination loops nucleate at the boundary between regions where the director aligns predominantly along the flow direction, close to the channel walls, and regions where the director aligns predominantly perpendicular to the flow direction, in the center of the channel. The large elastic stresses generated by the director gradient at this boundary are then released by the formation of disclination loops. We show that both the characteristic size and the fluctuations of the pure-twist disclination loops can be tuned by controlling the flow rate.
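The shear-rate regimes for nematic DSCG quoted above (from the simple-shear study cited as ref. 44) amount to a simple lookup; the sketch below encodes them, with the caveat that the 1 s⁻¹ and 10 s⁻¹ boundaries are approximate crossovers rather than sharp transitions.

```python
def dscg_texture(shear_rate_per_s: float) -> str:
    """Texture regimes reported for nematic DSCG under simple shear.

    Thresholds follow the study cited in the text (ref. 44); the
    boundaries are approximate crossovers, not sharp transitions.
    """
    if shear_rate_per_s < 1.0:
        return "log-rolling: director realigns perpendicular to flow"
    if shear_rate_per_s < 10.0:
        return "polydomain: pure-twist and wedge-twist disclination loops"
    return "striped: director aligns predominantly along flow"

for rate in (0.3, 3.0, 30.0):
    print(f"{rate:5.1f} 1/s -> {dscg_texture(rate)}")
```

The pressure-driven channel flow studied here samples a range of local shear rates across the channel at once, which is why coexisting textures and the boundary-nucleated loops described above can appear in a single experiment.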

17.
18.
If dark energy is a form of quintessence driven by a scalar field ϕ evolving down a monotonically decreasing potential V(ϕ) that passes sufficiently below zero, the universe is destined to undergo a series of smooth transitions. The currently observed accelerated expansion will cease; soon thereafter, expansion will come to an end altogether; and the universe will pass into a phase of slow contraction. In this paper, we consider how short the remaining period of expansion can be, given current observational constraints on dark energy. We also discuss how this scenario fits naturally with cyclic cosmologies and recent conjectures about quantum gravity.

In the Λ cold dark matter (ΛCDM) model, dark energy takes the form of a positive cosmological constant, in which case the current period of accelerated expansion will endure indefinitely into the future (1). An alternative is that the current vacuum is metastable and has positive energy density. If it is separated by an energy barrier from a true vacuum phase with zero or negative vacuum density, then accelerated expansion will be ended by the nucleation of a bubble of true vacuum that grows to encompass us. Until that moment, cosmological observations will be indistinguishable from the ΛCDM picture. Without extreme fine-tuning, the timescale before a bubble nucleates (2) and passes our location can be exponentially many Hubble times in the future (for example, refs. 3 and 4). (Here and throughout, "Hubble time" refers to H0⁻¹ ≈ 14 Gy, where H0 is the current Hubble expansion rate.) Also, the ultrarelativistic bubble wall will likely destroy all observers in its path, so there will be no surviving witnesses to the end of accelerated expansion (2).

A third possibility, considered here, is that the dark energy is a type of quintessence due to a scalar field ϕ evolving down a monotonically decreasing potential V(ϕ) (5). Since the current value V(ϕ0) is extraordinarily small as measured in Planck mass units, there is a wide range of forms for V(ϕ) that pass through zero and continue to large negative values where |V(ϕ)| ≫ V(ϕ0). In this case, the equations of motion of Einstein's general theory of relativity dictate that the universe is destined to undergo a remarkable series of smooth transitions (6–8).

First, as the positive potential energy density decreases and the kinetic energy density comes to exceed it, the current phase of accelerated expansion will end and smoothly transition to a period of decelerated expansion. Next, as the scalar field continues to evolve down the potential, the potential energy density will become sufficiently negative that the total energy density [∝ H²(t)], and consequently the Hubble parameter H(t), will reach zero. Expansion (H > 0) will then stop altogether and smoothly change to contraction (H < 0). More precisely, the transition will be to a phase of slow contraction (7, 8), in which the Friedmann–Robertson–Walker (FRW) scale factor a(t) ∝ |H⁻¹|^α, with α < 1/3.

In this paper, we consider how soon these transitions could begin. That is, what is the minimal time, beginning from the present (t = t0), before expansion ends and contraction begins, given current observational constraints on dark energy and without introducing extreme fine-tuning? One might imagine the answer is one or more Hubble times, given how well ΛCDM is claimed to fit current cosmological data (a toy numerical integration of this scenario follows this abstract).
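The sequence of transitions described above can be reproduced with a toy integration of the flat FRW equations for a single scalar field. The sketch below is illustrative only: the linear potential, its slope, and the initial data are assumptions, not the paper's fitted model. It works in reduced Planck units (8πG = 1), measures time in units of H0⁻¹, and integrates until the total energy density, and hence H, reaches zero, marking the end of expansion.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flat FRW + scalar field, reduced Planck units (8*pi*G = 1):
#   H^2 = rho / 3, with rho = phidot^2/2 + V(phi)
#   phiddot + 3 H phidot + V'(phi) = 0
# Time in units of 1/H0, so rho(today) = 3 (i.e., H = 1 today).
slope = 1.5                    # |dV/dphi| in Planck units (illustrative)
phidot0 = 0.3                  # small kinetic energy today (w close to -1)
V0 = 3.0 - 0.5 * phidot0**2    # fixes H(today) = 1

V = lambda phi: V0 - slope * phi   # toy monotonic potential passing below zero
dV = lambda phi: -slope

def rhs(t, y):
    phi, phidot = y
    rho = 0.5 * phidot**2 + V(phi)
    H = np.sqrt(max(rho, 0.0) / 3.0)        # expanding branch, H >= 0
    return [phidot, -3.0 * H * phidot - dV(phi)]

def expansion_ends(t, y):                   # rho = 0  <=>  H = 0
    return 0.5 * y[1]**2 + V(y[0])
expansion_ends.terminal = True

sol = solve_ivp(rhs, [0.0, 50.0], [0.0, phidot0],
                events=expansion_ends, rtol=1e-8, max_step=0.01)
print(f"expansion ends ~{sol.t_events[0][0]:.2f} Hubble times from now (toy model)")
```

Because ρ decreases monotonically once the field rolls (dρ/dt = −3Hφ̇² ≤ 0), the event is guaranteed to fire for any potential that becomes sufficiently negative; steepening the slope or adding kinetic energy today shortens the remaining expansion time, which is the dependence the paper quantifies against observational constraints.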

19.
20.
It is a widely held belief that people’s choices are less sensitive to changes in value as value increases. For example, the subjective difference between $11 and $12 is believed to be smaller than between $1 and $2. This idea is consistent with applications of the Weber-Fechner Law and divisive normalization to value-based choice and with psychological interpretations of diminishing marginal utility. According to random utility theory in economics, smaller subjective differences predict less accurate choices. Meanwhile, in the context of sequential sampling models in psychology, smaller subjective differences also predict longer response times. Based on these models, we would predict decisions between high-value options to be slower and less accurate. In contrast, some have argued on normative grounds that choices between high-value options should be made with less caution, leading to faster and less accurate choices. Here, we model the dynamics of the choice process across three different choice domains, accounting for both discriminability and response caution. Contrary to predictions, we mostly observe faster and more accurate decisions (i.e., higher drift rates) between high-value options. We also observe that when participants are alerted about incoming high-value decisions, they exert more caution and not less. We rule out several explanations for these results, using tasks with both subjective and objective values. These results cast doubt on the notion that increasing value reduces discriminability.

Are decision-makers sensitive to the average value of their options? For example, when shopping for a car, does the choice process differ at a bargain lot compared to a luxury dealership? Is it easier to choose between two cars valued at $5,000 or $50,000?

To answer this question, we must first define what we mean by "easier." There are two basic features of easy decisions: they are consistent and fast. For instance, it is well established that choices are inconsistent and slow when the choice options are similar in value to each other, while they are consistent and fast when there is a large difference in the options' values (1–5). The effect of value difference on the stochasticity of choice is predicted by many popular models, dating back at least to Luce (6), and the effect of value difference on response time (RT) is predicted by sequential sampling models (7–12). In fact, the effect of value difference on both choice frequencies and RT has been documented in many laboratory experiments (10, 13).

In comparison, there has been much less research into the effects of overall value (OV), holding value difference constant. Among conventional stochastic choice models, a common assumption is that OV should be irrelevant. One popular economic model is the additive random utility model (2), which implies that the probability of choosing an option i over another alternative j should be an increasing function of μi − μj, where for any option i the utility assigned to it is μi (before the addition of the random error term). Therefore, a constant utility difference should imply the same choice frequencies, regardless of whether μi and μj are two small quantities or two large quantities. The logit (softmax) choice function, commonly used to fit preference models to experimental data, similarly posits choice frequencies of the form

P[i ≻ j] = e^(λμi)/(e^(λμi) + e^(λμj)) = [1 + e^(−λ(μi − μj))]^(−1)

for some "inverse temperature" parameter λ > 0. This model again implies that only utility differences matter (a numerical illustration follows this abstract). Finally, choice frequencies and RT are often jointly modeled using sequential sampling models. The most popular of these, the drift diffusion model (DDM), commonly assumes that the drift rate of the decision variable is proportional to the difference in value between the two options (9, 10). Under this assumption, the DDM predicts that both choice frequencies and mean RT should depend only on the value difference and not on OV.

The aforementioned models imply that OV is irrelevant only under the assumption that value representations (i.e., utilities) are linear, monotonic functions of the values measured by the experimenter. However, many theories of value representation instead posit that utilities are nonlinear functions of the measured values, i.e., μi = μ(Vi). In this case, choice frequencies and RT would depend on more than just the value difference ΔV = Vi − Vj measured by the experimenter.

What form should the function μ(V) take? A natural proposal is that μ(V) is increasing but strictly concave, so that the marginal utility μ′(V) decreases as V increases. The assumption of diminishing marginal utility is commonplace in economic modeling, dating back to Bernoulli (14). It is typically invoked to explain the imperfect substitutability between different goods in a bundle (15), the imperfect substitutability of consumption over time (16), or risk aversion (17), contexts that might seem orthogonal to stochastic choice or issues of discriminability.
Nonetheless, one might conjecture that the same mechanisms that generate diminishing marginal utility in these other contexts should also determine the relationship between measured values and utilities in a random utility model of stochastic choice.

Similarly, Prospect Theory is predicated on the assumption that choices are made based on subjective values generated by nonlinear transformations of objective values (17). Notably, this value function is assumed to reflect diminishing marginal sensitivity to increasing values. Kahneman and Tversky use this value function to explain modal choices but do not propose any model of the stochasticity of observed choices or of RT. They motivate their incorporation of diminishing marginal sensitivity by an analogy to the psychophysics of perceptual judgements, in which objective sensory magnitudes are often mapped onto an internal scale (18) with a nonlinear function that is typically expected to be concave (as with the logarithmic mapping postulated by the Weber-Fechner Law). The key evidence for such nonlinearity is the way in which the discriminability of two stimuli declines as their absolute magnitudes increase (holding the difference constant). Kahneman and Tversky expected this to be true of comparisons involving economic values as well, and others have formalized this assumption within stochastic versions of Prospect Theory fit to experimental data (19).

Another way to motivate this type of nonlinear function is with the theory of divisive normalization in neural coding. An influential literature in neuroscience has determined that neural firing rates representing sensory magnitudes are normalized in such a way that a given difference in objective magnitudes results in a smaller difference in the respective firing rates when the two objective magnitudes increase (20–23). Recent work in neuroeconomics has applied divisive normalization to stochastic, value-based choice under the assumption that there is a one-to-one relationship between the neural representation of value in firing rates and the choice behavior it generates (24–30). A theory of stochastic choice predicated on divisive normalization thus predicts that option discriminability will decrease as OV increases (see SI Appendix for details).

Despite the intuitive appeal of diminishing marginal sensitivity and the evidence for it in other sensory domains, there is little direct evidence that OV decreases discriminability once one controls for value difference. The behavioral evidence on accuracy rates is controversial (31). Furthermore, the notion that utility differences decrease with OV is typically inferred from the presence of risk-averse behavior, which could arise for other reasons (32–35).

One possible reason for the mixed behavioral evidence is that increasing OV may also increase perceived importance, motivating decision-makers to approach high-value decisions more cautiously (36–40). The well-known speed–accuracy tradeoff (5, 9) implies that more caution could counteract losses in discriminability. On the other hand, there is abundant evidence that high-value decisions tend to be fast (10, 41–45). Even nonhuman primates choose between juices (including identical ones) faster as the amount of juice increases (46).
Based on these results, it appears unlikely that high-value decisions are made more cautiously, but we cannot be sure, because both discriminability and response caution affect RT (47).

To properly determine how OV influences discriminability while accounting for response caution, we require analyses that consider both accuracy and RT. Using the DDM, we can account for response caution while simultaneously estimating the effect of OV on discriminability (48).

In this paper, we applied the DDM to behavior in three studies, each with the same structure but different types of decisions. Each experiment involved a series of binary choices, separated into blocks with three categories of OV (low, middle, and high). To study OV effects in naturalistic settings, studies 1 and 2 used snack foods and abstract art, respectively. Subjects first rated how much they liked various items and then later chose between them. These tasks are commonly used in the literature but come with a drawback: they rely on subjective ratings. Subjective ratings noisily represent subjects' true values (49), and ratings on different parts of the scale may be more or less noisy (50). To rule out these concerns, study 3 used a paradigm with learned values that were objective and identically distributed in each OV condition.

In each study, we first tested core predictions about discriminability varying with OV in a baseline condition. Specifically, we used the DDM to estimate discriminability (via the drift rate) as a function of OV while accounting for response-caution differences (via the boundary separation) between OV categories. We tested the hypothesis that discriminability would be reduced in higher-OV contexts against the null hypothesis that OV would have no effect on discriminability.

To investigate the impact of OV on response caution, we included a condition with cues that indicated the value category of the upcoming block. These cues did not provide any additional information. We included the value cues because, in the DDM framework, decision-makers adjust their decision boundaries at the block level. Thus, we reasoned that the value cues would allow subjects to set (and reveal to us) their desired level of response caution for each value category. If decision-makers view higher-value decisions as more (less) important, value cues should increase (decrease) boundaries in high-value blocks.

To preview the results, across all three studies (of which studies 2 and 3 were preregistered), we find heightened, not reduced, discriminability as OV increases; we observe both faster and more accurate choices at high OV and a tendency toward slower and less accurate choices at low OV. However, we find that value cues increase response caution for high-value compared to middle-value trials, indicating that decision-makers are motivated to be slower and more accurate for high-value decisions. We find these same effects in all three studies, indicating that they are not due to familiarity/accessibility (51), different uses of the rating scale, or variability within value categories.
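The predictions being tested above can be made concrete with a toy drift-diffusion simulation. In the sketch below, all parameter values, the log utility, and the Euler-style simulation are illustrative assumptions, not the authors' fitted model. Drift is taken proportional to the utility difference; with a concave μ(V) = log V, a fixed $1 value difference yields a smaller drift at high OV, and hence the slower, less accurate choices that the three studies fail to observe. The last line checks that the logit rule depends only on the utility difference.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_choice(mu_i, mu_j, lam=1.0):
    """P[i over j] = [1 + exp(-lam (mu_i - mu_j))]^(-1): only the difference matters."""
    return 1.0 / (1.0 + np.exp(-lam * (mu_i - mu_j)))

def ddm_trial(drift, boundary=1.0, dt=1e-3, noise=1.0):
    """One drift-diffusion trial: evidence starts at 0, absorbs at +/- boundary."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x > 0), t   # (chose the higher-valued option?, response time)

mu = np.log  # concave utility: diminishing marginal sensitivity (an assumption)
for v_lo, v_hi in [(1.0, 2.0), (11.0, 12.0)]:  # same $1 difference, low vs. high OV
    drift = mu(v_hi) - mu(v_lo)                # drift rate ~ utility difference
    trials = [ddm_trial(drift) for _ in range(1000)]
    acc = np.mean([correct for correct, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    print(f"values ({v_lo:4.1f}, {v_hi:4.1f}): drift = {drift:.3f}, "
          f"accuracy = {acc:.2f}, mean RT = {mean_rt:.2f}")

# The logit rule is invariant to adding a constant to both utilities:
print(logit_choice(12.0, 11.0) == logit_choice(2.0, 1.0))  # True
```

Under these assumptions the ($11, $12) pair produces a drift of about 0.09 versus 0.69 for ($1, $2), giving visibly lower accuracy and longer RTs at high OV; boundary separation plays the role of the response-caution parameter that the cue manipulation in the studies is designed to probe.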
