Similar Articles
1.
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve—from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on the coverage and optical properties of the planet's clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4–0.5.

Phase curves provide unique insight into the atmosphere of a planet, a fact well known and tested in solar system exploration (1–3). Disentangling the information encoded in a phase curve is a complex process, however, and interpretations can be faced with degeneracies. The potential of phase curves to characterize exoplanet atmospheres, particularly in combination with other techniques, is tantalizing. Phase curves observed over all orbital phases (OPs) are available for a few close-in planets in the optical (passband central wavelengths λ < 0.8 μm) (4–15) and the infrared (1 μm ≤ λ ≤ 24 μm) (16–19). At infrared wavelengths the measured flux from hot planets is typically dominated by thermal emission.
In the optical, both thermal emission and reflected starlight contribute, with the relative size of the contributions dependent on the measurement wavelength as well as on the temperature of the atmosphere and the occurrence of condensates (20–25).

Kepler-7b (26) is one of the ∼1,000 planets discovered by the Kepler mission. Its inferred mass Mp (= 0.44 MJ; J for Jupiter) and radius Rp (= 1.61 RJ) result in an unusually low bulk density (0.14 g⋅cm−3) that is inconsistent with current models of giant planet interiors (27, 28). Kepler-7b orbits a quiet G-type star of effective temperature T⋆ = 5,933 K every 4.89 d (orbital distance a = 0.062 astronomical units) (6, 7), and tidal forces have likely synchronized its orbit and spin motions. Taken together, these properties set a planet equilibrium temperature Teq ≤ 1,935 K.

Kepler photometry (0.4–0.9 μm) of the star–planet system has enabled the optical study of Kepler-7b (4–7, 10, 14). The inferred geometric albedo, Ag = 0.25–0.38 (4, 6, 7, 10, 14), reveals a planet of reflectivity comparable to the solar system giants (Ag = 0.4–0.5), which is unexpectedly high for a close-in gas planet. Theory indeed predicts that the strong stellar irradiation that a planet in such an orbit experiences strips off reflective clouds, rendering the planet dark (Ag < 0.1) (22, 25). The prediction is largely consistent with empirical evidence, and dark planets dominate the sample of known close-in giant planets (8, 13, 21, 29, 30). Exceptions exist, and other planets [51 Peg b, Ag = 0.5 × (1.9/(Rp/RJ))2 at 0.38–0.69 μm (31); HD 189733b, Ag = 0.40 ± 0.12 at 0.29–0.45 μm (32); and KOI-196b, Ag = 0.30 ± 0.08 at 0.4–0.9 μm (33)] with elevated albedos suggest that we are beginning to sample the diversity of exoplanet atmospheres.
Potentially compensating for strong stellar irradiation, Kepler-7b’s low surface gravity (417 cm s−2) may help sustain reflective condensates lofted in the upper atmosphere that would increase the planet albedo (25).

Brightness temperatures for Kepler-7b inferred from occultations at 3.6 μm and 4.5 μm with Spitzer [<1,700 K and 1,840 K, respectively (7)] are well below the equivalent brightness temperature deduced from Kepler data (∼2,600 K). This key constraint, placed in the framework of heat recirculation in the atmospheres of close-in giants, is evidence that the Kepler optical phase curve is dominated by reflected starlight rather than by thermal emission (7, 21, 34). Interestingly, the peak of the optical phase curve occurs after secondary eclipse (OP > 0.5), when the planet as viewed from Earth is not fully illuminated and longitudes westward of the substellar point are preferentially probed. This asymmetry hints at a spatial structure in Kepler-7b’s envelope caused by horizontally inhomogeneous clouds (7, 21, 34). Subsequent investigations have identified other planets that show a similar offset between occultation and peak brightness (4, 10). However, the lack of infrared measurements for these planets means that it has not been possible to rule out contamination in the optical by a thermal component as the cause of the asymmetry.

Recent work has used the optical phase curve of Kepler-7b to build brightness maps (7, 34), investigate the prevalence of reflected starlight over thermal emission (34), and explore plausible cloud configurations (35). No previous study has systematically connected the extent, location, and optical thickness of the cloud, or the composition and size of the suspended particles, to the measured phase curve. That exercise is the objective of this paper.
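As a rough illustration of how reflected starlight scales into a phase-curve signal, the sketch below uses a Lambert-sphere phase function rather than the paper's multiple-scattering model; the geometric albedo plugged in is an assumed mid-range value from the abstract, not a fitted result:

```python
import math

def lambert_phase(alpha):
    """Lambert-sphere phase function, normalized to 1 at full phase (alpha = 0 rad)."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def reflected_flux_ratio(ag, rp_m, a_m, alpha):
    """Planet-to-star flux ratio for reflected light: Fp/Fs = Ag * (Rp/a)^2 * Phi(alpha)."""
    return ag * (rp_m / a_m) ** 2 * lambert_phase(alpha)

R_JUP = 7.1492e7     # Jupiter equatorial radius, m
AU = 1.495979e11     # astronomical unit, m

# Values quoted in the text: Rp = 1.61 RJ, a = 0.062 AU; Ag = 0.3 is an assumption.
ratio = reflected_flux_ratio(0.3, 1.61 * R_JUP, 0.062 * AU, 0.0)
print(f"{ratio * 1e6:.0f} ppm at full phase")  # tens of ppm, the scale of the Kepler signal
```

Evaluating the same expression over a grid of phase angles produces a synthetic phase curve that can be compared against the observed brightness modulation.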

5.
Increased exposure to extreme heat from both climate change and the urban heat island effect—total urban warming—threatens the sustainability of rapidly growing urban settlements worldwide. Extreme heat exposure is highly unequal and severely impacts the urban poor. While previous studies have quantified global exposure to extreme heat, the lack of a globally accurate, fine-resolution temporal analysis of urban exposure crucially limits our ability to deploy adaptations. Here, we estimate daily urban population exposure to extreme heat for 13,115 urban settlements from 1983 to 2016. We harmonize global, fine-resolution (0.05°), daily temperature maxima and relative humidity estimates with geolocated and longitudinal global urban population data. We measure the average annual rate of increase in exposure (person-days/year) at the global, regional, national, and municipality levels, separating the contribution to exposure trajectories from urban population growth versus total urban warming. Using a daily maximum wet bulb globe temperature threshold of 30 °C, global exposure increased nearly 200% from 1983 to 2016. Total urban warming elevated the annual increase in exposure by 52% compared to urban population growth alone. Exposure trajectories increased for 46% of urban settlements, which together in 2016 comprised 23% of the planet’s population (1.7 billion people). However, how total urban warming and population growth drove exposure trajectories is spatially heterogeneous. This study reinforces the importance of employing multiple extreme heat exposure metrics to identify local patterns and compare exposure trends across geographies. Our results suggest that previous research underestimates extreme heat exposure, highlighting the urgency for targeted adaptations and early warning systems to reduce harm from urban extreme heat exposure.

Increased exposure to extreme heat from both climate change (1–5) and the urban heat island (UHI) effect (6–9) threatens the sustainability of rapidly growing urban settlements worldwide. Exposure to dangerously high temperatures endangers urban health and development, driving reductions in labor productivity and economic output (10, 11) and increases in morbidity (1) and mortality (2, 3, 12). Within urban settlements, extreme heat exposure is highly unequal and most severely impacts the urban poor (13, 14). Despite the harmful and inequitable risks, we presently lack a globally comprehensive, fine-resolution understanding of where urban population growth intersects with increases in extreme heat (2, 6, 15). Without this knowledge, we have limited ability to tailor adaptations to reduce extreme heat exposure across the planet’s diverse urban settlements (6, 15, 16).

Reducing the impacts of extreme heat exposure on urban populations requires globally consistent, accurate, and high-resolution measurement of both the climate and demographic conditions that drive exposure (5, 15, 17). Such analysis provides decision makers with information to develop locally tailored interventions (7, 18, 19) and is also sufficiently broad in spatial coverage to transfer knowledge across urban geographies and climates (6). Information about exposures and interventions from diverse contexts is vital for the development of functional early warning systems (20) and can help guide risk assessments and inform future scenario planning (21). Existing global extreme heat exposure assessments (1, 2), however, do not meet these criteria (SI Appendix, Table S1) and are insufficient for decision makers.
These studies are coarse grained (>0.5° spatial resolution), employ disparate or single metrics that do not capture the complexities of heat–health outcomes (22), do not separate urban from rural exposure (19), and rely on climate reanalysis products that can be substantially (∼1 to 3 °C) cooler than in situ observations (5, 23, 24). In fact, widely cited benchmarks (25) that estimate extreme heat with version 5 of the European Centre for Medium-Range Weather Forecasts Reanalysis (ERA5) (26) may greatly underestimate total global exposure to extreme heat (5, 23, 24). Using a 40.6 °C daily maximum 2-m air temperature threshold (Tmax), a recent analysis found that ERA5 Tmax drastically underestimated the number of extreme heat days per year compared to in situ observations (23). Finally, few studies (2, 18) have assessed urban extreme heat exposure across data-sparse (23), rapidly urbanizing regions, such as sub-Saharan Africa, the Middle East, and Southern Asia (27), that may be most impacted by increased extreme heat events due to climate change (3, 5, 28).

Here, we present a globally comprehensive, fine-resolution, and longitudinal estimate of urban population exposure to extreme heat––referred to henceforth as exposure––for 13,115 urban settlements from 1983 to 2016. To accomplish this, we harmonize global, fine-grained (0.05° spatial resolution) Tmax estimates (23) with global urban population and spatial extent data (29). For each urban settlement, we calculate area-averaged daily wet bulb globe temperature (WBGTmax) (30) and heat index (HImax) (31) maxima using Climate Hazards Center InfraRed Temperature with Stations Daily (CHIRTS-daily) Tmax (23) and down-scaled daily minimum relative humidity (RHmin) estimates (32). CHIRTS-daily is better suited to measure urban extreme heat exposure than the other gridded temperature datasets used in recent global extreme heat studies (SI Appendix, Table S1) for two reasons.
First, it is more accurate, especially at long distances from station observations (refer to figure 3 in ref. 23), than the widely used gridded temperature datasets at estimating urban temperature signals worldwide (SI Appendix, Figs. S1 and S2). Second, it better captures the spatial heterogeneity of Tmax across diverse urban contexts (SI Appendix, Fig. S3). These factors are key for measuring extreme heat exposure in rapidly urbanizing, data-sparse regions.

As discussed in refs. 23 and 24, the number of in situ temperature observations is far too low across rapidly urbanizing (27) regions to resolve spatial and temporal urban extreme heat fluctuations, which can vary dramatically over small distances and time periods. For example, of the more than 3,000 urban settlements in India (29), only 111 have reliable station observations (SI Appendix, Fig. S3). While climate reanalyses can help overcome these limitations, they are coarse grained (SI Appendix, Table S1) and suffer from mean bias and, to a lesser degree, limited temporal fidelity. ERA5 has been shown to substantially underestimate the increasing frequencies of heat extremes (figure 4 in ref. 23), while the Modern-Era Retrospective analysis for Research and Applications Version 2 (MERRA2) fails to represent the substantial increase in recent monthly Tmax values (figure 8 in ref. 24). These datasets dramatically underestimate increases in warming. CHIRTS-daily overcomes these limitations by coherently stacking information from a high-resolution (0.05°) climatology-derived surface emission temperature (24), interpolated in situ observations, and ERA5 reanalysis to produce a product that has been explicitly developed to monitor and assess temperature-related hazards (23). As such, CHIRTS-daily is best suited to capture variation in exposure across urban settlements in rapidly urbanizing (27), data-sparse regions such as sub-Saharan Africa, the Middle East, and Southern Asia (SI Appendix, Fig. S3) (24).

We measure exposure in person-days/year—the number of days per year that exceed a heat exposure threshold multiplied by the total urban population exposed (5). We then estimate annual rates of increase in exposure at the global (Fig. 1), regional (SI Appendix, Table S2), national (SI Appendix, Table S3), and municipality levels from 1983 to 2016 (SI Appendix, Table S4). At each spatial scale, we separate the contribution to exposure trajectories from total urban warming and population growth (5). For clarity, total urban warming refers to the combined increase of extreme heat in urban settlements from both the UHI effect and anthropogenic climate change. We do not decouple these two forcing agents (33, 34). However, we identify which urban settlements have warmed the fastest by measuring the rate of increase in the number of days per year that exceed the two extreme heat thresholds described below (15). Our main findings use an extreme heat exposure threshold defined as WBGTmax > 30 °C, the International Standards Organization (ISO) occupational heat stress threshold for risk of heat-related illness among acclimated persons at low metabolic rates (100 to 115 W) (30). WBGTmax is a widely used heat stress metric (35) that captures the biophysical response (36) to hot temperature–humidity combinations (3, 17) that reduce labor output (36), lead to heat-related illness (36), and can cause death (23). In using a threshold of WBGTmax > 30 °C, which has been associated with higher mortality rates among vulnerable populations (37), we aim to identify truly extreme temperature–humidity combinations (17) that can harm human health and well-being. We recognize, however, that strict exposure thresholds do not account for individual-level risks and vulnerabilities related to acclimatization, socioeconomic or health status, or local infrastructure (18, 19, 38).
We also note that there is a range of definitions of exposure, and we provide further analysis identifying 2-d or longer periods during which the maximum heat index (HImax) (31) exceeded 40.6 °C (SI Appendix, Figs. S4–S6), following the US National Weather Service’s definition for an excessive heat warning (39).

Fig. 1. Global urban population exposure to extreme heat, defined by 1-d or longer periods when WBGTmax > 30 °C, from 1983 to 2016 (A), with the contributions from population growth (B) and total urban warming (C) decoupled.
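The person-days metric and the warming-versus-growth decomposition can be sketched as follows. The settlement record below is entirely synthetic, and holding one driver at its 1983 baseline is a simplified stand-in for the decomposition method the study cites, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single settlement: daily WBGTmax (deg C) for each year of the
# 1983-2016 record, plus annual population. A slow warming trend and steady
# population growth are baked in for illustration.
years = np.arange(1983, 2017)
n_years = years.size
wbgt_max = 27 + 0.05 * np.arange(n_years)[:, None] + rng.normal(0, 2, (n_years, 365))
population = 1e5 * 1.03 ** np.arange(n_years)   # 3%/yr growth

THRESHOLD = 30.0  # WBGTmax threshold used in the study (ISO heat-stress limit)

hot_days = (wbgt_max > THRESHOLD).sum(axis=1)   # days per year above threshold
exposure = hot_days * population                # person-days per year

# Decompose the exposure trajectory by holding one driver at its 1983 baseline.
warming_only = hot_days * population[0]   # population fixed -> warming contribution
growth_only = hot_days[0] * population    # climate fixed -> population contribution

print(f"exposure 1983: {exposure[0]:.3g}, 2016: {exposure[-1]:.3g} person-days")
```

Summing such per-settlement series over all 13,115 settlements, and fitting an annual rate of increase to each, mirrors the global and municipality-level trend estimates described above.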

6.
Widespread tree mortality caused by outbreaks of native bark beetles (Curculionidae: Scolytinae) in recent decades has raised concern among scientists and forest managers about whether beetle outbreaks fuel more ecologically severe forest fires and impair postfire resilience. To investigate this question, we collected extensive field data following multiple fires that burned subalpine forests in 2011 throughout the Northern Rocky Mountains across a spectrum of prefire beetle outbreak severity, primarily from mountain pine beetle (Dendroctonus ponderosae). We found that recent (2001–2010) beetle outbreak severity was unrelated to most field measures of subsequent fire severity, which was instead driven primarily by extreme burning conditions (weather) and topography. In the red stage (0–2 y following beetle outbreak), fire severity was largely unaffected by prefire outbreak severity, with few effects detected only under extreme burning conditions. In the gray stage (3–10 y following beetle outbreak), fire severity was largely unaffected by prefire outbreak severity under moderate conditions, but several measures related to surface fire severity increased with outbreak severity under extreme conditions. Initial postfire tree regeneration of the primary beetle host tree [lodgepole pine (Pinus contorta var. latifolia)] was not directly affected by prefire outbreak severity but was instead driven by the presence of a canopy seedbank and by fire severity.
Recent beetle outbreaks in subalpine forests affected few measures of wildfire severity and did not hinder the ability of lodgepole pine forests to regenerate after fire, suggesting that resilience in subalpine forests is not necessarily impaired by recent mountain pine beetle outbreaks.

Natural disturbances (e.g., wildfires, floods, storms, insect outbreaks) play a central role in structuring ecosystems worldwide (1, 2), but multiple disturbances can potentially interact in synergistic (i.e., compound) ways that alter ecosystem resilience (the capacity to tolerate disturbance without shifting to a different state) (3, 4). Understanding these potential interactions and their consequences is critical for conserving and managing ecosystems in a period of increasing climate-driven disturbance activity (5, 6). Widespread outbreaks of native bark beetles (Curculionidae: Scolytinae) during the last decade have caused extensive tree mortality over tens of millions of hectares of conifer forests in North America (7, 8) and Eurasia (9, 10). Forest fire activity (occurrence, area burned) has also increased in these regions during this time (11), and concern has grown about whether the recent pulse of beetle-killed trees will increase the ecological severity of subsequent wildfires and/or decrease postfire forest resilience (12, 13).

Most tree mortality in the recent North American beetle outbreaks is attributable to mountain pine beetles (Dendroctonus ponderosae; MPB), primarily attacking lodgepole pine (Pinus contorta var. latifolia) (8).
Severe MPB outbreaks can result in up to 90% mortality of tree basal area (14–18), which could compromise postfire resilience by increasing the severity of subsequent wildfires, decreasing seed sources (thus diminishing postfire tree regeneration), or both.

Tree mortality caused by MPB outbreaks alters the fuel structure of forests (i.e., the quantity, quality, and distribution of biomass) (14–17) in ways that could affect fire severity (defined as the degree of short-term ecological change caused by a fire, typically measured by the proportion of biomass lost, or vegetation killed by fire) (19). Increases in dead and flammable fuels in postoutbreak forests can influence fire behavior (e.g., energy release and spread rate; see ref. 12 for a recent review) and present operational challenges for wildland firefighting (20, 21). However, less is known about whether wildfires that burn postoutbreak forests are more ecologically severe and have important consequences for ecosystem function compared with forests unaffected by recent outbreaks, despite heightened concern among scientists and forest managers (12, 13).

In contrast to studies of fire behavior, studies of fire severity use retrospective (i.e., postfire) data, as ecological effects of fire (e.g., vegetation mortality, biomass loss) manifest after the fire has ended (19). Studies that have evaluated effects of MPB outbreaks on fire severity have typically compared the presence (or absence) of either disturbance or used remotely sensed indices of disturbance severity (22–24). Most studies have not assessed wildfire severity across the spectrum of beetle outbreak severity (amount of basal area or trees killed by beetles), limiting the ability to detect complex disturbance interactions.
Other studies (22, 24) have lacked controls (i.e., stands of similar structure that were unaffected by recent prefire outbreaks and burned under similar conditions), making it difficult to separate effects of beetle outbreaks from other factors that affect fire severity, such as topography, weather, fuels, and prefire vegetation adaptations to fire (19). Recent case studies near Yellowstone National Park have begun to assess single fires using detailed field data on outbreak and fire severity (25), but consistent trends across many fire events remain untested.

By killing large mature trees in a forest stand, MPB outbreaks may also limit the availability of key seed sources that would otherwise contribute to postfire tree establishment, therefore reducing forest resilience. For example, lodgepole pine is adapted to high-severity wildfires by storing seeds in serotinous (i.e., closed) cones until heat from fire opens the cones, leading to abundant postfire tree regeneration soon after fires (26–28). If forests do not regenerate naturally following wildfire in areas where prefire trees are killed by MPB outbreaks, postfire planting or seeding may be needed to recover carbon stocks and prevent transitions to nonforest (13). Regional-scale field measures of prefire outbreak severity, wildfire severity, and postfire response are needed in wildfires that occurred in recent beetle-affected forests to resolve key uncertainties and contribute to a more general understanding of disturbance interactions (12).

In this study, we used field data to ask whether recent bark-beetle outbreaks affected wildfire severity (canopy, forest floor, and tree mortality; Methods and SI Text) or initial postfire tree regeneration in six wildfires that burned a total of >30,000 ha during summer 2011 in the Northern Rocky Mountains (Fig. S1 and Table S1).
The study fires included variation in prefire beetle-outbreak severity [0–84% of tree basal area killed by bark beetles, primarily MPB-attacked lodgepole pine and to a lesser degree whitebark pine (Pinus albicaulis); Tables S2 and S3], typical of the range observed in many North American forests (8). Such variation allowed us to assess fire severity across the spectrum of recent prefire outbreak severity, including stands unaffected by the recent outbreaks (effectively serving as a control). Three fires burned forests where most attacked stands were in the red postoutbreak stage (0–2 y after beetle attack, ∼50% retention of largely red needles on beetle-killed trees) (12, 14, 15), considered to be most vulnerable to increased crown fire because canopy fuels are drier and more flammable (21, 29). Three fires burned forests where most attacked stands were in the gray postoutbreak stage (3–10 y after beetle attack, <5% needle retention on beetle-killed trees, most beetle-killed trees still standing) (12, 14, 15). Gray-stage forests are considered less vulnerable to increased crown fire because canopy fuels are substantially reduced (14–16, 30), although increased surface fuels from needle and branch fall could increase surface fire severity (15–17). Portions of fires burned during moderate (low temperature and wind and high relative humidity) or extreme (high temperature and wind and low relative humidity) weather conditions, and across a range of slope positions, allowing us to test for effects of MPB outbreaks while accounting for other factors known to affect fire severity (Table S4 and SI Text).

Using established protocols (Tables S3–S7 and SI Text) (25, 31), we sampled burned areas in 2012 (1 y after fire). We reconstructed prefire forest structure and outbreak severity and measured fire severity in 0.07-ha plots (n = 105).
In plots (n = 70) of stand-replacing fire (i.e., all live prefire trees were killed by fire), we also measured postfire tree seedling establishment. To test whether prefire beetle outbreaks affected fire severity, we regressed eight field measures of fire severity [char height, bole scorch, fine fuels (needles and small branches) remaining in the canopy for trees that were alive at the time of fire, percentage of tree basal area with deep charring into the crown and <5% of branches remaining, tree mortality (basal area and number of trees), postfire litter + duff depth, and charred surface cover] against prefire outbreak severity (percentage of stand basal area killed by bark beetles before fire) using general linear mixed models that accounted for topography and burning conditions. To test whether the compound effects of beetle outbreaks and fire reduced postfire regeneration (thus decreasing resilience) in areas of stand-replacing fire, we used nonparametric analyses (random forests and regression trees, Spearman’s rank correlations) to assess the relationship between prefire outbreak severity and postfire lodgepole pine seedling density. Because our field study captured wide natural variability across stands, we considered P < 0.05 as strong evidence of effects and P < 0.10 as suggestive/moderate evidence of effects in all models and statistical tests. See Methods and SI Text for further details on field measurements and analyses.
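The regression design can be illustrated with a simplified sketch: ordinary least squares with an outbreak-by-weather interaction, standing in for the paper's mixed models. All plot data below are synthetic, constructed so that the outbreak effect appears only under extreme conditions, loosely echoing the gray-stage result:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 105  # number of plots, matching the study design

# Hypothetical plot-level predictors (all synthetic).
outbreak = rng.uniform(0, 84, n)    # % of basal area killed prefire
extreme = rng.integers(0, 2, n)     # 0 = moderate, 1 = extreme burning conditions
slope = rng.uniform(0, 40, n)       # topography proxy (degrees)

# Simulated surface-severity measure: driven by weather and topography, with an
# outbreak effect only under extreme conditions.
severity = (20 + 15 * extreme + 0.3 * slope
            + 0.1 * outbreak * extreme + rng.normal(0, 3, n))

# OLS with an outbreak x weather interaction term.
X = np.column_stack([np.ones(n), outbreak, extreme, slope, outbreak * extreme])
beta, *_ = np.linalg.lstsq(X, severity, rcond=None)
print("outbreak main effect:", round(beta[1], 3),
      "| outbreak x extreme interaction:", round(beta[4], 3))
```

A near-zero main effect with a positive interaction coefficient is the signature described in the abstract: outbreak severity matters only when burning conditions are extreme.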

7.
Active nitrifiers and rapid nitrification are major contributing factors to nitrogen losses in global wheat production. Suppressing nitrifier activity is an effective strategy to limit N losses from agriculture. Production and release of nitrification inhibitors from plant roots is termed “biological nitrification inhibition” (BNI). Here, we report the discovery of a chromosome region that controls BNI production in the “wheat grass” Leymus racemosus (Lam.) Tzvelev, located on the short arm of chromosome “Lr#3Nsb” (Lr#n), which can be transferred to wheat as T3BL.3NsbS (denoted Lr#n-SA), where the 3BS arm of chromosome 3B of wheat was replaced by 3NsbS of L. racemosus. We successfully introduced T3BL.3NsbS into the wheat cultivar “Chinese Spring” (CS-Lr#n-SA, referred to as “BNI-CS”), which resulted in the doubling of its BNI capacity. T3BL.3NsbS from BNI-CS was then transferred to several elite high-yielding hexaploid wheat cultivars, leading to a near doubling of BNI production in “BNI-MUNAL” and “BNI-ROELFS.” Laboratory incubation studies with root-zone soil from field-grown BNI-MUNAL confirmed BNI trait expression, evident from suppression of soil nitrifier activity, reduced nitrification potential, and reduced N2O emissions. Changes in N metabolism included reductions in both leaf nitrate and nitrate reductase activity, and enhanced glutamine synthetase activity, indicating a shift toward ammonium nutrition. Nitrogen uptake from soil organic matter mineralization improved under low-N conditions. Biomass production, grain yields, and N uptake were significantly higher in BNI-MUNAL across N treatments. Grain protein levels and breadmaking attributes were not negatively impacted. Wide use of BNI functions in wheat breeding may not only combat nitrification in high-N-input intensive farming but also improve adaptation to low-N-input marginal areas.

Nitrification and denitrification are critical soil biological processes, which, left unchecked, can accelerate the generation of harmful reactive nitrogen (N) forms (NO3−, N2O, and NOx) that trigger a “nitrogen cascade,” damaging ecosystems, water systems, and soil fertility (1–8). Excessive nitrifier activity and a rapid generation of soil nitrates plague modern cereal production systems. This has led to shifting crop N nutrition toward an “all nitrate form,” which is largely responsible for N losses and a decline in agronomic nitrogen-use efficiency (NUE) (6, 7, 9–11).

Wheat, one of the three founding crops for food security (12), consumes nearly a fifth of factory-produced N fertilizers, and it has an average NUE of 33%, which has remained unchanged for the last two decades (13–15). Regulating soil nitrifier activity to slow the rate of soil nitrate formation should provide more balanced N forms (NH4+ and NO3−) for plant uptake (rather than nearly “all NO3−” at present), reduce N losses, and facilitate the assimilation of dual N forms. This optimizes the utilization of the biochemical machinery for N assimilation, improving stability and possibly enhancing yield potential (16). In addition, the assimilation of NH4+ is energetically more efficient (requiring 40% less metabolic energy) than NO3− assimilation (16). Often, a stimulatory growth response is observed in wheat when 15 to 30% of NO3− is replaced with NH4+ in nutrient solutions (17, 18).

Synthetic nitrification inhibitors (SNIs) have been shown to suppress N2O emissions, reduce N losses, and improve agronomic NUE in several cereal crops including wheat (6, 19–21).
However, the lack of cost effectiveness, inconsistency in field performance, inability to function in tropical environments, and concerns about SNIs entering food chains have limited their adoption in production agriculture (6, 7, 19, 20).

Biological nitrification inhibition (BNI) is a plant function whereby nitrification inhibitors (BNIs) are produced by root systems to suppress soil nitrifier activity (22–26). Earlier, we reported that the BNI capacity in the root systems of cultivated wheat lacks adequate strength to effectively suppress soil nitrifier activity in the rhizosphere (24, 25). Leymus racemosus (hereafter referred to as “wild grass”), a perennial Triticeae evolutionarily related to wheat, produces extensive root systems (SI Appendix, Fig. S1) and was discovered to have a BNI capacity several times higher than that of cultivated wheat. It was also effective in suppressing soil nitrifier activity and in reducing soil nitrate formation (SI Appendix, Fig. S2) (25). Subsequently, the chromosome Lr#n = 3Nsb was found to control a major part of the BNI capacity in wild grass, and it is the focus of our current research (25, 27, 28). Earlier, we reported that Lr#I and Lr#J had a minor impact on BNI capacity, but they are not the focus of this research (25).

We transferred the Lr#n chromosome (Lr#n-SA = T3BL.3NsbS) controlling BNI capacity (hereafter referred to as the BNI trait) into the cultivated wheat Chinese Spring (CS). The results of the transfer of this BNI trait into several elite wheat types with a grain-yield (GY) potential >10 t ha−1, resulting in substantial improvements of BNI capacity in root systems, are reported in this paper.

9.
Coffinite, USiO4, is an important U(IV) mineral, but its thermodynamic properties are not well-constrained. In this work, two different coffinite samples were synthesized under hydrothermal conditions and purified from a mixture of products. The enthalpy of formation was obtained by high-temperature oxide melt solution calorimetry. Coffinite is energetically metastable with respect to a mixture of UO2 (uraninite) and SiO2 (quartz) by 25.6 ± 3.9 kJ/mol. Its standard enthalpy of formation from the elements at 25 °C is −1,970.0 ± 4.2 kJ/mol. Decomposition of the two samples was characterized by X-ray diffraction and by thermogravimetry and differential scanning calorimetry coupled with mass spectrometric analysis of evolved gases. Coffinite slowly decomposes to U3O8 and SiO2 starting around 450 °C in air and thus has poor thermal stability in the ambient environment. The energetic metastability explains why coffinite cannot be synthesized directly from uraninite and quartz but can be made by low-temperature precipitation in aqueous and hydrothermal environments. These thermochemical constraints are in accord with observations of the occurrence of coffinite in nature and are relevant to spent nuclear fuel corrosion.

In many countries with nuclear energy programs, spent nuclear fuel (SNF) and/or vitrified high-level radioactive waste will be disposed of in an underground geological repository. Demonstrating the long-term (10⁶–10⁹ y) safety of such a repository system is a major challenge. The potential release of radionuclides into the environment strongly depends on the availability of water and the subsequent corrosion of the waste form, as well as the formation of secondary phases, which control the radionuclide solubility. Coffinite (1), USiO4, is expected to be an important alteration product of SNF in contact with silica-enriched groundwater under reducing conditions (2–8).
It is also found, accompanied by thorium orthosilicate and uranothorite, in igneous and metamorphic rocks and ore minerals from uranium and thorium sedimentary deposits (2, 4, 5, 8–16). Under reducing conditions in the repository system, the very low uranium solubility in aqueous solutions is typically derived from the solubility product of UO2. Stable U(IV) minerals, which could form as secondary phases, would impart lower uranium solubility to such systems. Thus, knowledge of coffinite thermodynamics is needed to constrain the solubility of U(IV) in natural environments and would be useful in repository assessment. In natural uranium deposits such as Oklo (Gabon) (4, 7, 11, 12, 14, 17, 18) and Cigar Lake (Canada) (5, 13, 15), coffinite has been suggested to coexist with uraninite, based on electron probe microanalysis (EPMA) (4, 5, 7, 11, 13, 17, 19, 20) and transmission electron microscopy (TEM) (8, 15). However, it is not clear whether such apparent replacement of uraninite by a coffinite-like phase is a direct solid-state process or is mediated by dissolution and reprecipitation. The precipitation of USiO4 as a secondary phase should be favored in contact with silica-rich groundwater (21) [silica concentration >10⁻⁴ mol/L (22, 23)]. Natural coffinite samples are often fine-grained (4, 5, 8, 11, 13, 15, 24), owing to long exposure to alpha-decay event irradiation (4, 6, 25, 26), and are associated with other minerals and organic matter (6, 8, 12, 18, 27, 28). Hence the determination of accurate thermodynamic data from natural samples is not straightforward. However, the synthesis of pure coffinite also has challenges. It appears not to form by reacting the oxides under dry high-temperature conditions (24, 29). Synthesis from aqueous solutions usually produces UO2 and amorphous SiO2 impurities, with coffinite sometimes being only a minor phase (24, 30–35).
It is not clear whether these difficulties arise from kinetic factors (slow reaction rates) or reflect intrinsic thermodynamic instability (33). Thus, there are only a few reported estimates of the thermodynamic properties of coffinite (22, 36–40), and some of them are inconsistent. To resolve these uncertainties, we directly investigated the energetics of synthetic coffinite by high-temperature oxide melt solution calorimetry to obtain a reliable enthalpy of formation and explored its thermal decomposition.
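The reported numbers close a simple thermodynamic cycle. A minimal sketch, assuming standard tabulated enthalpies of formation for uraninite and quartz (ΔHf ≈ −1085.0 and −910.7 kJ/mol, respectively; these reference values are assumptions, not results from this work):

```python
# Hedged consistency check of the reported coffinite energetics.
# Assumed standard enthalpies of formation from the elements (kJ/mol):
dHf_UO2 = -1085.0    # UO2, uraninite (tabulated reference value)
dHf_SiO2 = -910.7    # SiO2, quartz (tabulated reference value)

# Reported: USiO4 is metastable vs. UO2 + SiO2 by +25.6 +/- 3.9 kJ/mol
dH_metastability = 25.6

# Enthalpy of formation of coffinite from the elements via the cycle
# U + Si + 2 O2 -> UO2 + SiO2 -> USiO4:
dHf_USiO4 = dHf_UO2 + dHf_SiO2 + dH_metastability
print(f"dHf(USiO4) = {dHf_USiO4:.1f} kJ/mol")
```

With these assumed reference values the cycle gives about −1970.1 kJ/mol, in agreement with the reported −1,970.0 ± 4.2 kJ/mol.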

10.
11.
In a fundamental process throughout nature, reduced iron unleashes the oxidative power of hydrogen peroxide into reactive intermediates. However, notwithstanding much work, the mechanism by which Fe2+ catalyzes H2O2 oxidations and the identity of the participating intermediates remain controversial. Here we report the prompt formation of O=FeIVCl3 and chloride-bridged di-iron O=FeIV·Cl·FeIICl4 and O=FeIV·Cl·FeIIICl5 ferryl species, in addition to FeIIICl4, on the surface of aqueous FeCl2 microjets exposed to gaseous H2O2 or O3 beams for <50 μs. The unambiguous identification of such species in situ via online electrospray mass spectrometry allowed us to investigate their individual dependences on Fe2+, H2O2, O3, and H+ concentrations, and their responses to tert-butanol (an ·OH scavenger) and DMSO (an O-atom acceptor) cosolutes. We found that (i) mass spectra are not affected by excess tert-butanol, i.e., the detected species are primary products whose formation does not involve ·OH radicals, and (ii) the di-iron ferryls, but not O=FeIVCl3, can be fully quenched by DMSO under the present conditions. We infer that interfacial Fe(H2O)n²⁺ ions react with H2O2 and O3 >10³ times faster than Fe(H2O)6²⁺ in bulk water via a process that favors inner-sphere two-electron O-atom over outer-sphere one-electron transfers. The higher reactivity of di-iron ferryls vs. O=FeIVCl3 as O-atom donors implicates the electronic coupling of mixed-valence iron centers in the weakening of the FeIV–O bond in poly-iron ferryl species. High-valent FeIV=O (ferryl) species participate in a wide range of key chemical and biological oxidations (1–4). Such species, along with ·OH radicals, have long been deemed putative intermediates in the oxidation of FeII by H2O2 (Fenton’s reaction) (5, 6), O3, or HOCl (7, 8).
The widespread availability of FeII and peroxides in vivo (9–12), in natural waters and soils (13), and in the atmosphere (14–18) makes Fenton chemistry and FeIV=O groups ubiquitous features in diverse systems (19). A lingering issue regarding Fenton’s reaction is how the relative yields of ferryls vs. ·OH radicals depend on the medium. For example, by assuming unitary ·OH radical yields, some estimates suggest that Fenton’s reaction might account for ∼30% of the ·OH radical production in fog droplets (20). Conversely, if Fenton’s reaction mostly led to FeIV=O species, atmospheric chemistry models predict that their steady-state concentrations would be ∼10⁴ times larger than [·OH], thereby drastically affecting the rates and course of oxidative chemistry in such media (20). FeIV=O centers are responsible for the versatility of the family of cytochrome P450 enzymes in catalyzing the oxidative degradation of a vast range of xenobiotics in vivo (21–28), and the selective functionalization of saturated hydrocarbons (29). The bactericidal action of antibiotics has been linked to their ability to induce Fenton chemistry in vivo (9, 30–34). Oxidative damage from exogenous Fenton chemistry is likely responsible for acute and chronic pathologies of the respiratory tract (35–38). Despite its obvious importance, the mechanism of Fenton’s reaction is not fully understood.
What is at stake is how the coordination sphere of Fe2+ (39–46) under specific conditions affects the competition between the one-electron transfer producing ·OH radicals (the Haber–Weiss mechanism) (47), reaction R1, and the two-electron oxidation via O-atom transfer (the Bray–Gorin mechanism) into FeIVO2+, reaction R2 (6, 23, 26, 27, 45, 48–51):

Fe2+ + H2O2 → Fe3+ + OH− + ·OH [R1]
Fe2+ + H2O2 → FeIVO2+ + H2O [R2]

Ozone reacts with Fe2+ via analogous pathways leading to (formally) the same intermediates, reactions R3a, R3b, and R4 (8, 49, 52, 53):

Fe2+ + O3 → Fe3+ + O3·− [R3a]
O3·− + H2O → ·OH + OH− + O2 [R3b]
Fe2+ + O3 → FeIVO2+ + O2 [R4]

At present, experimental evidence about these reactions is indirect, being largely based on the analysis of reaction products in bulk water in conjunction with various assumptions. Given the complex speciation of aqueous Fe2+/Fe3+ solutions, which includes diverse poly-iron species both as reagents and products, it is not surprising that classical studies based on the identification of reaction intermediates and products via UV-absorption spectra and the use of specific scavengers have fallen short of fully unraveling the mechanism of Fenton’s reaction. Herein we address these issues, focusing particularly on the critically important interfacial Fenton chemistry that takes place at boundaries between aqueous and hydrophobic media, such as those present in atmospheric clouds (16), living tissues, biomembranes, bio-microenvironments (38, 54, 55), and nanoparticles (56, 57). We exploited the high sensitivity, surface selectivity, and unambiguous identification capabilities of a newly developed instrument based on online electrospray mass spectrometry (ES-MS) (58–62) to identify the primary products of reactions R1–R4 on aqueous FeCl2 microjets exposed to gaseous H2O2 and O3 beams under ambient conditions [in N2(g) at 1 atm at 293 ± 2 K].
Our experiments are conducted by intersecting the continuously refreshed, uncontaminated surfaces of free-flowing aqueous microjets with reactive gas beams for τ ∼10–50 μs, immediately followed (within 100 μs; see below) by in situ detection of primary interfacial anionic products and intermediates via ES-MS (Methods, SI Text, and Figs. S1 and S2). We have previously demonstrated that online mass spectrometric sampling of liquid microjets under ambient conditions is a surface-sensitive technique (58, 62–67).
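Because the ferryl and ferric species are assigned from their anion masses in negative-ion ES-MS, their expected monoisotopic m/z values are easy to sketch. The snippet below is illustrative only (most-abundant isotopes assumed, electron mass neglected); it is not the authors' analysis code:

```python
# Hedged sketch: monoisotopic m/z of the anionic iron-chloride species
# named in the text, as singly charged anions in negative-ion ES-MS.
# Assumes 56Fe, 35Cl, 16O; the electron mass (~0.00055 u) is neglected.
MASS = {"Fe": 55.93494, "Cl": 34.96885, "O": 15.99491}

def mz(formula):
    """formula: dict of element counts; singly charged anion assumed."""
    return sum(MASS[el] * n for el, n in formula.items())

species = {
    "FeCl4-":          {"Fe": 1, "Cl": 4},          # ferric product
    "O=FeCl3-":        {"Fe": 1, "Cl": 3, "O": 1},  # ferryl
    "O=Fe.Cl.FeCl4-":  {"Fe": 2, "Cl": 5, "O": 1},  # di-iron ferryl
}
for name, f in species.items():
    print(f"{name}: m/z = {mz(f):.2f}")
```

Peaks near these m/z values (and their characteristic chlorine isotope patterns, not modeled here) are what distinguish the mono- and di-iron ferryls in the spectra.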

12.
Aeolian sand beds exhibit regular patterns of ripples resulting from the interaction between topography and sediment transport. Their characteristics have so far been related to reptation transport caused by the impacts on the ground of grains entrained by the wind into saltation. By means of direct numerical simulations of grains interacting with a wind flow, we show that the instability turns out to be driven by resonant grain trajectories, whose length is close to a ripple wavelength and whose splash leads to a mass displacement toward the ripple crests. The pattern selection results from a compromise between this destabilizing mechanism and a diffusive downslope transport which stabilizes small wavelengths. The initial wavelength is set by the ratio of the sediment flux and the erosion/deposition rate, a ratio which increases linearly with the wind velocity. We show that this scaling law, in agreement with experiments, originates from an interfacial layer separating the saltation zone from the static sand bed, where momentum transfers are dominated by midair collisions. Finally, we provide quantitative support for the use of the propagation of these ripples as a proxy for remote measurements of sediment transport. Observers have long recognized that wind ripples (1, 2) do not form via the same dynamical mechanism as dunes (3). Current explanations ascribe their emergence to a geometrical effect of solid angle acting on sediment transport. The motion of grains transported in saltation is composed of a series of asymmetric trajectories (4–7) during which they are accelerated by the wind. These grains, in turn, decelerate the airflow inside the transport layer (1, 7–12). On hitting the sand bed, they release a splash-like shower of ejected grains that make small hops from the point of impact (1, 13, 14). This process is called reptation.
Previous wind ripple models assume that saltation is insensitive to the sand bed topography and forms a homogeneous rain of grains approaching the bed at a constant oblique angle (15–20). Upwind-sloping portions of the bed would then be submitted to a higher impacting flux than downslopes (1). With a number of ejecta proportional to the number of impacting grains, this effect would produce a screening instability with an emergent wavelength λ determined by the typical distance over which ejected grains are transported (15–17), a few grain diameters d. However, observed sand ripple wavelengths are about 1,000 times larger than d on Earth. The discrepancy is even more pronounced on Mars, where regular ripples are 20–40 times larger than those on a typical Earth sand dune (21, 22). Moreover, the screening scenario predicts a wavelength independent of the wind shear velocity u∗, in contradiction with field and wind tunnel measurements that exhibit a linear dependence of λ on u∗ (23–25).
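The selection argument reduces to a simple scaling: λ is set by the ratio of sediment flux to erosion/deposition rate, and that ratio grows linearly with shear velocity u∗ above the transport threshold. A toy sketch of that linear law (all coefficients below are hypothetical placeholders, not fitted values from this work):

```python
# Illustrative sketch of the reported scaling lambda ~ Q/phi, taken to
# grow linearly with shear velocity u* above the transport threshold.
# u_th and coeff are hypothetical, chosen only to give cm-scale output.
def ripple_wavelength(u_star, u_th=0.2, coeff=0.10):
    """Initial ripple wavelength (m) vs. shear velocity u_star (m/s)."""
    if u_star <= u_th:
        return 0.0  # no saltation transport, hence no pattern
    return coeff * (u_star - u_th)  # linear in u*, per the text

for u in (0.3, 0.4, 0.5):
    print(f"u* = {u:.1f} m/s -> lambda ~ {100 * ripple_wavelength(u):.0f} cm")
```

The screening scenario, by contrast, would make `ripple_wavelength` a constant of order a few grain diameters, independent of u∗, which is the contradiction with measurements noted above.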

13.
Atoms and molecules are too small to act as efficient antennas for their own emission wavelengths. An external optical antenna can shift the balance: spontaneous emission could become faster than stimulated emission, which is handicapped by practically achievable pump intensities. In our experiments, InGaAsP nanorods emitting at ∼200 THz optical frequency show a spontaneous emission intensity enhancement of 35×, corresponding to a spontaneous emission rate speedup of ∼115×, for an antenna gap spacing d = 40 nm. Classical antenna theory predicts ∼2,500× spontaneous emission speedup at d ∼ 10 nm, proportional to 1/d². Unfortunately, at d < 10 nm, antenna efficiency drops below 50%, owing to optical spreading resistance, exacerbated by the anomalous skin effect (electron surface collisions). Quantum dipole oscillations in the emitter excited state produce an optical ac equivalent circuit current, Io = qω|xo|/d, feeding the antenna-enhanced spontaneous emission, where q|xo| is the dipole matrix element. Despite the quantum-mechanical origin of the drive current, antenna theory makes no reference to the Purcell effect nor to local density of states models. Moreover, plasmonic effects are minor at 200 THz, producing only a small shift of the antenna resonance frequency. Antennas emerged at the dawn of radio, concentrating electromagnetic energy within a small volume ≪λ³, enabling nonlinear radio detection. Such coherent detection is essential for radio receivers and has been used since the time of Hertz (1). Conversely, an antenna can efficiently extract radiation from a subwavelength source, such as a small cellphone. Despite the importance of radio antennas, 100 y went by before optical antennas began to be used to help extract optical frequency radiation from very small sources such as dye molecules (2–10) and quantum dots (11–14). In optics, spontaneous emission is caused by dipole oscillations in the excited state of atoms, molecules, or quantum dots.
The main problem is that a molecule is far too small to act as an efficient antenna for its own electromagnetic radiation. Antenna length, l, makes a huge difference in radiation rate. An ideal antenna would be λ/2, a half-wavelength, in size. To the degree that an atomic dipole of length l is smaller than λ/2, the antenna radiation rate Δω is proportional to ω(l/λ)³, as given by the Wheeler limit (15). Spontaneous emission from molecular-sized radiators is thus slowed by many orders of magnitude, because radiation wavelengths are much larger than the atoms themselves. Therefore, the key to speeding up spontaneous emission is to couple the radiating molecule to a proper antenna of sufficient size. Since the emergence of lasers in 1960, stimulated emission has been faster than spontaneous emission. Now the opposite is possible. In the right circumstances, antenna-enhanced spontaneous emission could become faster than stimulated emission. Theoretically, very large bandwidth >100 GHz or >1 THz is possible when the light emitter is coupled to a proper optical antenna (16). Metal optics have been able to shrink lasers to the nanoscale (17–20), but high losses in metal-based cavities make it increasingly difficult to achieve desirable performance. Metal structures have also been used to enhance the spontaneous emission rate, such as by coupling excited material to flat surface plasmon waves (21–28). Flat metal surfaces are far from ideal antennas, resulting in low radiation efficiencies and large ohmic losses. Semiconductor emitters have been further limited by large surface recombination losses and by processing difficulties at the extremely small dimensions. Semiconductor experiments (29, 30) show weak antenna–emitter coupling, with the antenna enhancement sometimes masked by metal-induced elastic scattering that enhances light extraction from the semiconductor substrate.
Light extraction alone can increase optical emission by 4n², as often used in commercial light-emitting diodes (LEDs), without necessarily modifying the spontaneous emission rate (31, 32). In this article, we elucidate the physics of antenna-enhanced spontaneous emission, using a traditional antenna circuit model, not the Purcell effect (33) nor a local density-of-states model (34). We use the circuit approach to analyze the maximum possible spontaneous emission enhancement in the presence of spreading resistance losses (35) and the nonlocal anomalous skin effect (36) in the metal. We experimentally tested an optical dipole antenna coupled to a “free-standing” 40-nm nanorod of semiconductor material. Thus far, optical emission measurements show a >115× antenna spontaneous emission rate enhancement factor compared with no antenna at all. At smaller dimensions, circuit theory predicts a spontaneous emission rate enhancement >10⁴×, but at the penalty of decreased antenna efficiency. Nonetheless, we will derive that >2,500× rate enhancement should be possible while still maintaining antenna efficiency >50%.
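The quoted figures can be tied together by the 1/d² gap scaling. A hedged sketch, anchoring the classical-antenna prediction of ∼2,500× at d = 10 nm and ignoring the efficiency roll-off from spreading resistance and the anomalous skin effect that sets in below ∼10 nm:

```python
# Hedged sketch of the 1/d^2 gap-spacing scaling quoted in the text.
# Anchored to the ~2,500x classical-antenna prediction at d = 10 nm;
# loss mechanisms that cap the useful enhancement are deliberately ignored.
def rate_enhancement(d_nm, ref_d_nm=10.0, ref_enh=2500.0):
    """Spontaneous-emission speedup vs. antenna gap d (nm), ~ 1/d^2."""
    return ref_enh * (ref_d_nm / d_nm) ** 2

print(f"d = 40 nm -> ~{rate_enhancement(40.0):.0f}x")
print(f"d = 10 nm -> ~{rate_enhancement(10.0):.0f}x")
```

At d = 40 nm this lossless scaling gives ∼156×, the same order as the ∼115× speedup measured for the nanorod devices, with the shortfall consistent with the neglected losses.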

14.
The dorsal root ganglia–localized voltage-gated sodium (Nav) channel Nav1.8 represents a promising target for developing next-generation analgesics. A prominent characteristic of Nav1.8 is the requirement of more depolarized membrane potential for activation. Here we present the cryogenic electron microscopy structures of human Nav1.8 alone and bound to a selective pore blocker, A-803467, at overall resolutions of 2.7 to 3.2 Å. The first voltage-sensing domain (VSDI) displays three different conformations. Structure-guided mutagenesis identified the extracellular interface between VSDI and the pore domain (PD) to be a determinant for the high-voltage dependence of activation. A-803467 was clearly resolved in the central cavity of the PD, clenching S6IV. Our structure-guided functional characterizations show that two nonligand binding residues, Thr397 on S6I and Gly1406 on S6III, allosterically modulate the channel’s sensitivity to A-803467. Comparison of available structures of human Nav channels suggests the extracellular loop region to be a potential site for developing subtype-specific pore-blocking biologics.

Voltage-gated sodium (Nav) channels govern membrane excitability in neurons and muscles (1, 2). Despite a high degree of sequence and architectural similarity, different subtypes of Nav channels have specific tissue distributions and distinct voltage dependence and kinetics for activation, inactivation, and recovery (3). Among the nine mammalian Nav channels (SI Appendix, Fig. S1), Nav1.8, a tetrodotoxin (TTX)-resistant subtype encoded by SCN10A, is primarily expressed in sensory neurons, exemplified by the dorsal root ganglia (DRG) neurons (4–6). Compared with other Nav subtypes, Nav1.8 has several unique biophysical properties, such as activation at more depolarized voltages and slower inactivation with persistent current, which enable the hyperexcitability of DRG neurons (4, 5, 7–11). Nav1.8 functions in pain sensation (12–16). Proexcitatory mutations of Nav1.8 have been identified in patients with painful small fiber neuropathy (17–19). On the other hand, a natural variant, A1073V, that shifts the voltage dependence of activation in a more depolarized direction appeared to ameliorate pain symptoms (20). Specific inhibition of peripheral Nav1.8 thus represents a potential strategy for developing nonaddictive painkillers (21, 22). Several Nav1.8-selective blockers, such as VX-150 and PF-06305591, have been tested in clinical trials. However, most of the drug candidates failed to meet the endpoint(s) of phase II trials for various reasons, such as unsatisfactory efficacy or selectivity (21, 23–26). Structures of Nav1.8 bound to lead compounds will shed light on drug optimization for improving potency and selectivity. We focused on A-803467, a Nav1.8-selective blocker, for structural analysis. A-803467 was shown to inhibit Nav1.8 in both the resting and inactivated states.
Despite half-maximal inhibitory concentrations (IC50) ranging from several nanomolar to 1 μM as measured by different groups, A-803467 consistently shows a higher affinity for the inactivated channel (27–31). In this study, we report the structures of full-length human Nav1.8 alone and bound to A-803467. The first voltage-sensing domain (VSDI) was resolved in multiple conformations. Based on the structural and electrophysiological characterizations, we attempt to address two questions: What underlies the high-voltage activation of Nav1.8, and what determines the subtype specificity of A-803467?

15.
Tropical forests are the global cornerstone of biological diversity and store 55% of the global forest carbon stock, yet sustained provisioning of these forest ecosystem services may be threatened by hunting-induced extinctions of plant–animal mutualisms that maintain long-term forest dynamics. Large-bodied Atelinae primates and tapirs in particular offer nonredundant seed-dispersal services for many large-seeded Neotropical tree species, which on average have higher wood density than smaller-seeded and wind-dispersed trees. We used field data and models to project the spatial impact of hunting on large primates by ∼1 million rural households throughout the Brazilian Amazon. We then used a unique baseline dataset on 2,345 1-ha tree plots arrayed across the Brazilian Amazon to model changes in aboveground forest biomass under different scenarios of hunting-induced large-bodied frugivore extirpation. We project that defaunation of the most harvest-sensitive species will lead to losses in aboveground biomass of 2.5–5.8% on average, with some losses as high as 26.5–37.8%. These findings highlight an urgent need to manage the sustainability of game hunting in both protected and unprotected tropical forests, and place full biodiversity integrity, including populations of large frugivorous vertebrates, firmly on the agenda of reducing emissions from deforestation and forest degradation (REDD+) programs. Tropical forests worldwide store >460 billion tons of carbon—over half of the total atmospheric storage (1)—and tropical forest conversion and degradation account for as much as 20% of global anthropogenic greenhouse gas emissions (2). Tropical forests are also the most species-rich ecosystems on Earth, yet the role of species interactions in stabilizing tropical forest dynamics and maintaining the flow of natural ecosystem services, including long-term forest carbon pools, remains poorly understood.
Over 80–96% of all woody plant species in tropical forests produce vertebrate-dispersed fleshy fruits (3, 4), yet many large-bodied frugivore populations in tropical forest regions have already been severely overhunted (5), resulting in functionally “empty” or “half-empty” forests with subsequent disruptions in seed-dispersal mutualisms (6). Indeed, the total forest area degraded by unsustainable hunting in the largest remaining tropical forest regions may exceed the combined extent of deforestation, selective logging, and wildfires (7, 8). Even formally decreed forest reserves in remote areas have succumbed to population declines and local extinctions of large vertebrates (9, 10), yet the consequences of this pervasive defaunation process for the persistence of tropical forest ecosystem services remain poorly explored. Overhunting can amplify dispersal limitation in many large-seeded plant species relying primarily or exclusively on harvest-sensitive large-bodied frugivores. The causal mechanism through which hunting leads to altered phytodemographics—recruitment bottlenecks resulting from the replacement of seedlings of species dispersed by large frugivores with those dispersed by wind, small birds, and bats—has been established in many parts of the humid tropics (e.g., refs. 3 and 11–16). Because stem wood density is a strong predictor of aboveground forest biomass (AGB) across stands with similar basal areas (17–19), overhunting could eventually lead to reduced forest carbon stocks if nonrandom compositional turnover penalizes large-seeded, heavy-wooded species that are primarily dispersed by megafrugivores susceptible to overhunting, thereby favoring wind-dispersed or small-seeded species associated with lower wood density (20–23), as hypothesized by Brodie and Gibbs (24).
Evolutionary selection pressure on wood density, or wood-specific gravity (WSG), operates on a trade-off whereby high-WSG trees can achieve a competitive advantage by supporting crowns with greater lateral spread for canopy space, but fast-growing low-WSG trees can reach the canopy and reproduce more quickly (25). Such a competition-colonization trade-off is related to seed dispersal mode because small-seeded, often wind-dispersed, trees that are efficient gap colonizers have lower WSG than those bearing large animal-dispersed seeds experiencing greater dispersal limitation (20, 23). However, the trophic cascade between overhunting and reduced stand-scale carbon storage capacity remains controversial in both disturbed and undisturbed tropical forests because: (i) volumetric compensation by species unaffected by this form of dispersal limitation can have the opposite effect (11, 26), (ii) hunting can suppress both plant mutualists (e.g., effective seed dispersers) and antagonists (e.g., seed predators and seedling herbivores) (11), and (iii) several exceptionally large-seeded, heavy-wooded species may continue to be successfully dispersed by large scatter-hoarding rodents that are able to persist in large tracts of overhunted forests (5, 27; but see refs. 28 and 29). Amazonian forests store ∼125 Pg C in live biomass, or nearly half of the global terrestrial carbon in tropical forests (30), and contribute ∼15% of global terrestrial photosynthesis (31). These forests also sustain the highest diversity of fruiting plants (32, 33) and associated mutualists. Harvest-sensitive large-bodied seed-dispersal agents have been extirpated in most tropical forest areas through the combined effects of overhunting, habitat fragmentation, and wildfires (7, 34).
However, the degree to which local extinctions of plant–frugivore interactions will destabilize long-term forest ecosystem services, such as high carbon stocks, is yet to be assessed at large spatial scales. Here we use 166 line-transect surveys throughout the Amazon basin to quantitatively assess the degree to which unregulated subsistence hunting affects a key group of forest frugivores (arboreal primates) throughout lowland Amazonia. Based on a spatially explicit biodemographic model (35, 36), we then predict the spatial footprint of hunting-induced population depletion envelopes for a large primate throughout the Brazilian Amazon. Next, we simulate the impact of large-bodied frugivore extirpation on changes in AGB throughout Amazonian forests based on one of the largest tree plot networks available in the tropics, where 129,720 trees ≥ 100 cm in circumference at breast height (CBH) [or ≥31.8 cm in diameter at breast height (DBH)] were inventoried. These large canopy and emergent trees comprise the most important component of tropical forests in terms of phytomass and carbon storage (37). We further model the geographic variation in stand-scale basal area of the most sensitive morphological guild of fruiting trees that is largely dispersed by a small group of large-bodied frugivores, and explain these changes based on a number of physical and floristic variables across the region. 
Our approach involves multiscale steps ranging from primate population-density estimates derived from the largest standardized series of line-transect censuses ever conducted in a tropical forest region; mapping of population depletion and extinction envelopes throughout the entire Brazilian Amazon; stand-scale inventories (conducted by Projeto RADAMBRASIL since the early 1970s) of tree species composition and size structure across 2,345 1-ha plots distributed throughout the Brazilian Amazon; to simulations of changes in aboveground biomass and carbon stocks throughout this large tree-plot network. These combined approaches provide spatially explicit projections of how aboveground carbon densities may change in overhunted Amazonian forests (SI Results).
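The biomass effect hinges on the arithmetic of wood-specific gravity: at fixed basal area, AGB scales roughly with stand-mean WSG, so shifting composition toward low-WSG, wind- or small-seeded species lowers it. A toy illustration (guild fractions and WSG values are hypothetical placeholders, not this study's model):

```python
# Illustrative sketch, not the paper's biodemographic model: percent AGB
# change when a fraction of the large-seeded, high-WSG guild's basal area
# is taken over by lower-WSG species, holding total basal area fixed
# (AGB ~ stand-mean wood-specific gravity x basal area).
def agb_change_pct(replaced, f_large=0.4, wsg_large=0.72, wsg_other=0.58):
    before = f_large * wsg_large + (1 - f_large) * wsg_other
    after = (f_large * (1 - replaced) * wsg_large
             + (f_large * replaced + 1 - f_large) * wsg_other)
    return 100 * (after - before) / before

for r in (0.3, 0.6, 1.0):
    print(f"{int(100 * r)}% of guild replaced -> AGB change {agb_change_pct(r):+.1f}%")
```

With these placeholder numbers, partial replacement yields losses of a few percent, the same order as the 2.5–5.8% average projected above; the extreme tail depends on local guild dominance, which this sketch does not capture.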

16.
Deciphering the origin of seismic velocity heterogeneities in the mantle is crucial to understanding internal structures and processes at work in the Earth. The spin crossover in iron in ferropericlase (Fp), the second most abundant phase in the lower mantle, introduces unfamiliar effects on seismic velocities. First-principles calculations indicate that anticorrelation between shear velocity (VS) and bulk sound velocity (Vφ) in the mantle, usually interpreted as compositional heterogeneity, can also be produced in homogeneous aggregates containing Fp. The spin crossover also suppresses thermally induced heterogeneity in longitudinal velocity (VP) at certain depths, but not in VS. This effect is observed in tomography models at conditions where the spin crossover in Fp is expected in the lower mantle. In addition, the one-of-a-kind signature of this spin crossover in the RS/P (∂ln VS/∂ln VP) heterogeneity ratio might be a useful fingerprint to detect the presence of Fp in the lower mantle. Ferropericlase (Fp) is believed to be the second most abundant phase in the lower mantle (1, 2). Since the discovery of the high-spin (HS) to low-spin (LS) crossover in iron in Fp (3), this phenomenon has been investigated extensively both experimentally and theoretically (4–14). Most of its properties are affected by the spin crossover. In particular, thermodynamics (14) and thermal elastic properties (15–20) are modified in unusual ways that can profoundly change our understanding of the Earth’s mantle. However, this is a broad and smooth crossover that takes place throughout most of the lower mantle and might not produce obvious signatures in radial velocity or density profiles (20, 21) (see Figs. S1 and S2). Therefore, its effects on aggregates are more elusive and indirect. For instance, the associated density anomaly can invigorate convection, as demonstrated by geodynamics simulations in a homogeneous mantle (22–24).
The bulk modulus anomaly may decrease creep activation parameters and lower mantle viscosity (10, 24, 25), promoting mantle homogenization in the spin crossover region (24), and anomalies in elastic coefficients can enhance anisotropy in the lower mantle (16). Less understood are its effects on seismic velocities produced by lateral temperature variations. The present analysis is based on our understanding of thermal elastic anomalies caused by the spin crossover. It has been challenging for both experiments (15–19) and theory (20) to reach a consensus on this topic. Measurements often seemed to include extrinsic effects, making it difficult to confirm the spin crossover signature by different techniques and across laboratories. A theoretical framework had to be developed to address these effects. However, a consistent interpretation of data and results has recently emerged (20). With increasing pressure, nontrivial behavior is observed in all elastic coefficients, aggregate moduli, and density throughout the spin crossover—the mixed spin (MS) state. In an ideal crystal or aggregate, the bulk modulus (KS), C11, and C12 are considerably reduced in the MS state, whereas the shear modulus (G), C44, and density (ρ) are enhanced. The pressure range of these anomalies broadens with increasing temperature, whereas their magnitude decreases. With respect to the HS state, all these properties are enhanced in the LS state.
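The anticorrelation follows directly from the standard velocity relations VP = √((KS + 4G/3)/ρ), VS = √(G/ρ), and Vφ = √(KS/ρ): a softened KS in the mixed-spin state pulls Vφ (and VP) down while an enhanced G pushes VS up. A minimal sketch with illustrative lower-mantle-like numbers (not computed elastic data from this work):

```python
# Standard seismic-velocity relations; input moduli/density are
# illustrative placeholders, not first-principles results.
from math import sqrt

def velocities(K, G, rho):
    """K, G in GPa; rho in kg/m^3; returns (VP, VS, Vphi) in km/s."""
    K, G = K * 1e9, G * 1e9
    VP = sqrt((K + 4 * G / 3) / rho) / 1e3   # longitudinal
    VS = sqrt(G / rho) / 1e3                 # shear
    Vphi = sqrt(K / rho) / 1e3               # bulk sound
    return VP, VS, Vphi

# High-spin-like vs. mixed-spin-like: K_S reduced, G and rho enhanced
print(velocities(650, 290, 5300))   # HS-like
print(velocities(560, 300, 5350))   # MS-like: Vphi drops while VS rises
```

The opposite signs of the Vφ and VS responses in a single homogeneous aggregate are exactly the anticorrelation that would otherwise be read as compositional heterogeneity.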

17.
Two-dimensional (2D) optical spectroscopy contains cross-peaks that are helpful features for determining molecular structure and monitoring energy transfer, but they can be difficult to resolve from the much more intense diagonal peaks. Transient absorption (TA) spectra contain transitions similar to cross-peaks in 2D spectroscopy, but in most cases they are obscured by the bleach and stimulated emission peaks. We report a polarization scheme, <0°,0°,+θ2(t2),−θ2(t2)>, that can be easily implemented in the pump-probe beam geometry used most frequently in 2D and TA spectroscopy. This scheme removes the diagonal peaks in 2D spectroscopies and the intense bleach/stimulated emission peaks in TA spectroscopies, thereby resolving the cross-peak features. At zero pump-probe delay, θ2 = 60° destructively interferes two Feynman paths, eliminating all signals generated by field interactions with four parallel transition dipoles, and with them the intense diagonal and bleach/stimulated emission peaks. At later delay times, θ2(t2) is adjusted to compensate for the anisotropy caused by rotational diffusion. When implemented with TA spectroscopy or microscopy, the pump-probe spectrum is dominated by the cross-peak features. The local oscillator is also attenuated, which enhances the signal twofold. This overlooked polarization scheme reduces spectral congestion by eliminating diagonal peaks in 2D spectra and enables TA spectroscopy to measure information similar to that given by cross-peaks in 2D spectroscopy.

Transient absorption (TA) spectroscopy and microscopy are ubiquitously used for measuring kinetics in the chemical, biological, and material sciences. TA spectroscopy initiates excited-state dynamics with a pump pulse and tracks their evolution with a probe pulse, yielding kinetic information. The polarization of the pump and probe fields strongly affects the utility and interpretation of TA data (1–5). The choice of pulse polarization can be employed to ease interpretation or extract particular information. For example, under three-dimensional (3D) isotropic conditions, kinetics measured at a 54.7° relative angle (magic angle) between pump and probe polarizations are insensitive to molecular rotation (6–11). Alternatively, the anisotropy can be calculated after independently measuring parallel and perpendicularly polarized pulses, giving a signal that depends on rotational diffusion and not population relaxation (6–11). Magic angle and anisotropy measurements are textbook experiments. A technique closely related to TA spectroscopy is two-dimensional (2D) spectroscopy, such as 2D infrared (IR) and 2D electronic spectroscopy. TA and 2D spectroscopies are alike in that they both measure a signal created by three electric field interactions from the pulse sequence, which makes them third-order nonlinear techniques (3, 12, 13). Because TA and 2D spectroscopy are both third-order techniques, the polarization dependence of their spectra is identical. However, the way in which the experiments are implemented puts physical limitations on the polarizations that can be applied. For TA spectroscopy, the first two interactions (E1 and E2) are created by the pump pulse and the third by the probe pulse (E3), followed by the emitted field (Eemit) that ultimately becomes the signal. For time-domain 2D spectroscopy, there are also three interactions, one each from two separate pump pulses (E1 and E2) and the third from the probe pulse (E3), followed by Eemit.
The signals of both experiments depend on the orientational average of the four electric fields with the sample, which is often written as the four-point orientational average <E1,E2,E3,Eemit>.

Since TA spectroscopy only uses two laser pulses, polarization control is traditionally limited to the relative angle between the pump and probe polarizations. When 2D spectroscopy was first developed, it was implemented in a four-wave mixing geometry that allowed all three pulse polarizations to be set individually, along with the polarization of the emitted field (14–16). This new capability led to the derivation of the full fourth-order orientational correlation function (4, 10, 17) and to more advanced polarization schemes that determined the angle between coupled oscillators (17), suppressed peaks (18), and enhanced signal-to-noise (19, 20). One of the most distinctive polarization schemes was <E1,E2,E3,Eemit> = <−45°,+45°,90°,0°>, with the angles defined in the laboratory-fixed frame. This polarization scheme eliminates the diagonal peaks from the 2D spectrum under 3D isotropic conditions, isolating the desired cross-peak features (18). The method works by destructively interfering the <0°,90°,0°,90°> and <0°,90°,90°,0°> signals in situ. In the same publication, Hochstrasser and coworkers proposed <−60°,+60°,0°,0°>, which removes diagonal peaks but does not compensate for rotational diffusion (18, 21). Ginsberg et al. and Read et al. implemented <−60°,+60°,0°,0°> in the visible (22, 23). One can independently measure and subtract spectra collected at each of these polarizations (24), but subtraction afterward is usually less accurate and sometimes very difficult, such as when measuring the kinetics of protein aggregation.
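The diagonal-peak cancellation can be checked numerically. For a single transition dipole interacting with all four fields (the all-parallel case that produces diagonal peaks), the 3D isotropic four-point average reduces to the standard fourth-rank result (1/15)(cosθ12 cosθ34 + cosθ13 cosθ24 + cosθ14 cosθ23). The sketch below (function names are ours, not from the paper) evaluates that closed form and cross-checks it with a brute-force Monte Carlo orientation average:

```python
import numpy as np

def four_point_parallel(a1, a2, a3, a4):
    """Isotropic average <(e1.u)(e2.u)(e3.u)(e4.u)> for a single dipole u,
    with lab-frame polarization angles given in degrees.  Standard
    fourth-rank tensor result for four parallel transition dipoles."""
    t = np.radians([a1, a2, a3, a4])
    c = lambda i, j: np.cos(t[i] - t[j])
    return (c(0, 1) * c(2, 3) + c(0, 2) * c(1, 3) + c(0, 3) * c(1, 2)) / 15.0

def monte_carlo(a1, a2, a3, a4, n=400_000, seed=0):
    """Brute-force check: average the four projections over random 3D
    dipole orientations; polarizations lie in the lab x-y plane."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit dipoles
    e = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a)), 0.0]
                  for a in (a1, a2, a3, a4)])
    proj = u @ e.T                                   # (n, 4) dot products
    return proj.prod(axis=1).mean()

# The all-parallel (diagonal-peak) signal vanishes for both schemes:
print(four_point_parallel(-45, 45, 90, 0))   # ~0
print(four_point_parallel(0, 0, 60, -60))    # ~0
# ...but not for an all-parallel polarization set, which retains population:
print(four_point_parallel(0, 0, 0, 0))       # 1/5
print(monte_carlo(0, 0, 60, -60))            # ~0 within sampling noise
```

The same evaluation shows why <0°,90°,0°,90°> minus <0°,90°,90°,0°> also cancels: each term individually gives 1/15 for parallel dipoles, so their in situ difference is zero.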
Besides polarization schemes, peaks can be removed by fitting or by isolating interstate coherences (25–27).

Diagonal-peak suppression was widely implemented in 2D spectroscopy (18, 22, 23, 28–38) until pump-probe beam geometries began replacing the four-wave mixing geometry. The most common pump-probe implementation of 2D spectroscopy uses a pulse shaper to create the two pump pulses (39, 40). Pulse-shaping 2D spectroscopy has many advantages over four-wave mixing 2D spectroscopy, such as phase stability, shot-to-shot readout, and absorptive line shapes (41, 42). One drawback has been that, as in TA spectroscopy, the pump pulses are collinear, so their polarizations are difficult to control independently (43). As a result, the <−45°,+45°,90°,0°> polarization scheme that was so useful for visualizing cross-peaks is now less utilized. We note that <−45°,+45°,90°,0°> can be implemented in pump-probe 2D spectroscopies that use interferometers (44–46), birefringent wedges (37, 47), or polarization pulse shapers (43).

In this paper we report a polarization scheme that can be implemented in the pump-probe geometry used by 2D and TA spectroscopies. Spectra collected in this polarization scheme contain only features from nonparallel transition dipoles. For 2D spectroscopy, this scheme eliminates the diagonal peaks so that only cross-peaks remain in the spectra. For TA spectroscopy, this scheme means that TA spectra provide the same information as the cross-peaks in 2D spectra, as we demonstrate. The polarization scheme we implement is <0°,0°,+60°,−60°>, or more generally, <0°,0°,+θ2(t2),−θ2(t2)> [where θ2(t2) depends on the pump-probe delay, t2]. Permutational symmetry holds for the fourth-rank orientational response at t2 = 0, so <0°,0°,+60°,−60°> gives a 2D spectrum equivalent to that of <−60°,+60°,0°,0°>.
What has been overlooked is that <0°,0°,+60°,−60°> can be experimentally implemented in the pump-probe geometry by adding two polarizers in the probe beam (SI Appendix, Fig. S1), whereas <−60°,+60°,0°,0°> cannot. As a result, the <0°,0°,+60°,−60°> polarization allows TA spectroscopy to obtain coupling information that, until now, could only be resolved by 2D spectroscopy. The method promises to revive 2D spectra of cross-peaks, enable TA spectroscopy to measure couplings, and permit new experiments such as TA imaging of coupled modes. In what follows, we first qualitatively describe the method and experimentally demonstrate it, and then present the theoretical underpinnings along with a discussion of its strengths, weaknesses, and potential uses.
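The t2 dependence of θ2 can be rationalized with a simple model (our sketch under an assumed isotropic rotational-diffusion picture, not the authors' derivation). If the anisotropic part of the parallel-dipole four-point correlation decays as exp(−6·D·t2), the <0°,0°,+θ2,−θ2> signal for parallel dipoles is proportional to (1/9)cos(2θ2) + (E/15)[2cos²θ2 − (2/3)cos(2θ2)] with E = exp(−6·D·t2), and setting it to zero gives cos²θ2 = (5 − 2E)/(10 + 2E). This reproduces θ2 = 60° at t2 = 0 and relaxes toward 45° once rotational diffusion has fully randomized the orientations:

```python
import numpy as np

def theta2(t2, D):
    """Angle (degrees) that nulls the parallel-dipole signal in the
    <0, 0, +theta2, -theta2> scheme, assuming isotropic rotational
    diffusion with constant D (anisotropy decay exp(-6*D*t2)).
    Solving S = 0 gives cos^2(theta2) = (5 - 2E) / (10 + 2E)."""
    E = np.exp(-6.0 * D * np.asarray(t2, dtype=float))
    return np.degrees(np.arccos(np.sqrt((5 - 2 * E) / (10 + 2 * E))))

# Hypothetical 100 ps orientational correlation time (illustrative only)
D = 1.0 / (6.0 * 100.0)
for t in (0.0, 50.0, 100.0, 1000.0):          # pump-probe delays in ps
    print(f"t2 = {t:6.1f} ps -> theta2 = {theta2(t, D):.2f} deg")
```

At t2 = 0 the two limits of the formula recover the fixed-angle result of the abstract (60°), and at long delays the compensation angle approaches 45°.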

Neutrophils sense, and migrate through, an enormous range of chemoattractant gradients by means of adaptation. Here, we reveal that in human neutrophils, the calcium-promoted Ras inactivator (CAPRI) locally controls GPCR-stimulated Ras adaptation. Human neutrophils lacking CAPRI (caprikd) exhibit chemoattractant-induced, nonadaptive Ras activation; significantly increased phosphorylation of AKT, GSK-3α/3β, and cofilin; and excessive actin polymerization. caprikd cells display defective chemotaxis in response to high-concentration gradients but, as a result of their enhanced sensitivity, improved chemotaxis in low- or subsensitive-concentration gradients of various chemoattractants. Taken together, our data reveal that CAPRI controls GPCR activation-mediated Ras adaptation and lowers the sensitivity of human neutrophils so that they are able to chemotax through a higher concentration range of chemoattractant gradients.

Neutrophils provide first-line host defense and play pivotal roles in innate and adaptive immunity (1–3). The inappropriate recruitment and dysregulated activation of neutrophils contribute to tissue damage and cause autoimmune and inflammatory diseases (1, 4). Neutrophils sense chemoattractants and migrate to sites of inflammation using G protein–coupled receptors (GPCRs). To accurately navigate gradients of various chemoattractants spanning an enormous concentration range (10−9 to ∼10−5 M; SI Appendix, Fig. S1), neutrophils employ a mechanism called adaptation, in which they no longer respond to present stimuli but remain sensitive to stronger stimuli. Homogeneous, sustained chemoattractant stimuli trigger transient, adaptive responses in many steps of the GPCR-mediated signaling pathway downstream of the heterotrimeric G proteins (5, 6). Adaptation provides a fundamental strategy for eukaryotic cell chemotaxis through large concentration-range gradients of chemoattractants. Abstract models and computational simulations have proposed mechanisms that generate the temporal dynamics of adaptation: an increase in receptor occupancy activates two antagonistic signaling processes, namely, a rapid "excitation" that triggers cellular responses and a temporally delayed "inhibition" that terminates the responses and results in adaptation (5, 7–13). Many excitatory components have been identified during the last two decades; however, the inhibitor(s) have only begun to be revealed (11, 14–17). It has recently been shown that elevated Ras activity increases sensitivity and changes migration behavior (18, 19). However, the molecular connection between GPCR-mediated adaptation and cell sensitivity remains unknown.

The small GTPase Ras mediates multiple signaling pathways that control directional cell migration in both neutrophils and Dictyostelium discoideum (17, 20–24). In D. discoideum, Ras activation is the first signaling event that displays GPCR-mediated adaptation (20).
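The excitation-inhibition picture of adaptation is often written as a pair of first-order equations in which a fast excitatory process and a slow inhibitory process both track receptor occupancy. The toy simulation below (rate constants and step amplitudes are illustrative, not measured values) shows the hallmark behavior: a sustained step stimulus produces a transient response that decays back to baseline, while a later, stronger step still elicits a fresh response.

```python
import numpy as np

def simulate(stimulus, dt=0.01, ke=5.0, ki=0.5):
    """Toy adaptation model: fast excitation E and slow inhibition I both
    relax toward the stimulus s; the response R = E - I is transient even
    though s is sustained (illustrative rate constants only)."""
    E = I = 0.0
    out = []
    for s in stimulus:
        E += dt * ke * (s - E)   # rapid "excitation"
        I += dt * ki * (s - I)   # temporally delayed "inhibition"
        out.append(E - I)
    return np.array(out)

t = np.arange(0.0, 40.0, 0.01)
# step stimulus at t = 5, then a stronger step at t = 20
S = np.where(t < 20.0, np.where(t < 5.0, 0.0, 1.0), 3.0)
R = simulate(S)
# R shows a transient peak after each step and returns to ~0 in between,
# i.e., the cell adapts to sustained stimuli but remains responsive to
# stronger ones.
```

Because both E and I converge to the stimulus level at steady state, the response adapts perfectly regardless of the stimulus amplitude, which is the property that lets cells remain sensitive across a wide concentration range.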
Ras signaling is mainly regulated through its activators, guanine nucleotide exchange factors (GEFs), and its inactivators, GTPase-activating proteins (GAPs) (16, 17, 25). In D. discoideum, the roles of DdNF1 and of an F-actin-dependent negative-feedback mechanism have been previously reported (14, 17). We have previously demonstrated the involvement of locally recruited inhibitors that act upstream of PI3K in the sensing of chemoattractant gradients (11, 26). Recently, we identified a locally recruited RasGAP protein, C2GAP1, that is essential for F-actin-independent Ras adaptation and long-range chemotaxis in Dictyostelium (16). Active Ras proteins are enriched at the leading edge in both D. discoideum cells and neutrophils (17, 27, 28). It has been reported that a RasGEF, RasGRP4, plays a critical role in Ras activation in murine neutrophil chemotaxis (21, 29). However, the components involved in the GPCR-mediated deactivation of Ras and their function in neutrophil chemotaxis are still not known.

In the present study, we show that the calcium-promoted Ras inactivator (CAPRI) locally controls GPCR-mediated Ras adaptation in human neutrophils. In response to high-concentration stimuli, cells lacking CAPRI (caprikd) exhibit nonadaptive Ras activation; significantly increased activation of AKT, GSK-3α/3β, and cofilin; excessive actin polymerization; and subsequently defective chemotaxis. Unexpectedly, caprikd cells display enhanced sensitivity toward chemoattractants and improved chemotaxis in low- or subsensitive-concentration gradients. Taken together, our findings show that CAPRI functions as an inhibitory component of Ras signaling, plays a critical role in controlling the concentration range of chemoattractant sensing, and is important for proper adaptation during chemotaxis.
