1.
NASA’s current mandate is to land humans on Mars by 2033. Here, we demonstrate an approach to produce ultrapure H2 and O2 from liquid-phase Martian regolithic brine at ∼−36 °C. Utilizing a Pb2Ru2O7−δ pyrochlore O2-evolution electrocatalyst and a Pt/C H2-evolution electrocatalyst, we demonstrate a brine electrolyzer with >25× the O2 production rate of the Mars Oxygen In Situ Resource Utilization Experiment (MOXIE) from NASA’s Mars 2020 mission for the same input power under Martian terrestrial conditions. Given the Phoenix lander’s observation of an active water cycle on Mars and the extensive presence of perchlorate salts that depress water’s freezing point to ∼−60 °C, our approach provides a unique pathway to life-support and fuel production for future human missions to Mars.

Life-support O2 and fuel (e.g., H2) are indispensable for human space exploration. The electrolysis of extraterrestrial liquid water can be a significant concurrent source of H2 and O2. NASA’s Phoenix lander has found evidence of an active water cycle (1), extensive subsurface ice (2), and the presence of soluble perchlorates (3) on the Martian surface (SI Appendix, section S1). Spectral evidence from the Mars Odyssey Gamma Ray Spectrometer points to the existence of large quantities of water-ice in the northern polar region of Mars (4), and the Mars Reconnaissance Orbiter has also found indications of contemporary local flows of liquid regolithic brines shaping Martian geography (5). Martian regolithic brines with dissolved perchlorates (see “Martian regolith composition” in SI Appendix, Table S1) can exist in the liquid phase since perchlorates significantly depress the freezing point of water (6). Based on compositional analysis by the wet chemistry instrument on the Phoenix lander, Mg(ClO4)2 is reported to be a major constituent of the Martian regolith, and its concentrated solutions remain in the liquid phase down to ∼−70 °C. This offers a temperature window for the existence of liquid brine on the Martian surface and subsurface, as the mean annual surface temperature on Mars is ∼−63 °C (7) with a wide (>100 °C) average diurnal variation (8). The hygroscopic nature of these perchlorates also enables the entrainment of atmospheric water vapor to produce concentrated brine solutions (9).
Recently published data obtained by the Mars Advanced Radar for Subsurface and Ionosphere Sounding instrument onboard the Mars Express spacecraft show that multiple subglacial water bodies presently exist underneath the Martian south polar deposits at Ultimi Scopuli (10). In support of NASA’s mandate to send humans to Mars by 2033 (11), we demonstrate that the electrolysis of these brines at ultralow temperatures is a route to the concurrent production of H2 as fuel and O2 for life-support in practical quantities and rates under Martian conditions. NASA has incorporated the Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) (12) as a part of its Mars 2020 mission (13), as a feasibility study of the electrolysis of CO2 into CO and O2 (SI Appendix, section S2). As an alternative, we show that regolithic brine electrolysis under Martian conditions will enable the production of ultrapure O2 for life-support and H2 for energy production (SI Appendix, section S3), with no additional purification requirement for CO removal. The H2 produced in tandem can serve as a clean-burning fuel with a superior calorific value to CO (SI Appendix, section S2). Our electrolyzer system has a 25-fold higher production rate of O2 than MOXIE for the same input power (equivalently, it consumes one twenty-fifth of the power for the same O2 production rate).
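The relation between electrolyzer current and gas yield follows from Faraday's law; as a rough sketch (the 10 A operating current below is an illustrative assumption, not a value reported for this electrolyzer):

```python
# Gas yield of a water electrolyzer from Faraday's law.
# O2 evolution transfers 4 electrons per molecule; H2 evolution, 2.
F = 96485.0    # Faraday constant, C/mol
M_O2 = 32.0    # molar mass of O2, g/mol
M_H2 = 2.016   # molar mass of H2, g/mol

def electrolysis_rates(current_a, hours=1.0):
    """Grams of (O2, H2) produced by `current_a` amperes over `hours`,
    assuming 100% Faradaic efficiency."""
    charge = current_a * hours * 3600.0   # total charge, coulombs
    mol_o2 = charge / (4.0 * F)           # 4 e- per O2 molecule
    mol_h2 = charge / (2.0 * F)           # 2 e- per H2 molecule
    return mol_o2 * M_O2, mol_h2 * M_H2

# A hypothetical 10 A cell run for 1 h yields ~3 g of O2 and ~0.38 g of H2
g_o2, g_h2 = electrolysis_rates(10.0)
```

At a fixed input power, gas yield scales with current, so catalysts that lower the cell overpotential (here the Pb2Ru2O7−δ pyrochlore and Pt/C pair) directly raise the O2 produced per watt.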

2.
The 2020 fire season punctuated a decades-long trend of increased fire activity across the western United States, nearly doubling the total area burned in the central Rocky Mountains since 1984. Understanding the causes and implications of such extreme fire seasons, particularly in subalpine forests that have historically burned infrequently, requires a long-term perspective not afforded by observational records. We place 21st century fire activity in subalpine forests in the context of climate and fire history spanning the past 2,000 y using a unique network of 20 paleofire records. Largely because of extensive burning in 2020, the 21st century fire rotation period is now 117 y, reflecting nearly double the average rate of burning over the past 2,000 y. More strikingly, contemporary rates of burning are now 22% higher than the maximum rate reconstructed over the past two millennia, during the early Medieval Climate Anomaly (MCA) (770 to 870 Common Era), when Northern Hemisphere temperatures were ∼0.3 °C above the 20th century average. The 2020 fire season thus exemplifies how extreme events are demarcating newly emerging fire regimes as climate warms. With 21st century temperatures now surpassing those during the MCA, fire activity in Rocky Mountain subalpine forests is exceeding the range of variability that shaped these ecosystems for millennia.

The 2020 fire season punctuated a trend of increasing wildfire activity throughout the 21st century across the western United States (“the West”). This trend is closely linked to increasingly fire-conducive climate conditions (1) and anthropogenic climate change (2), and it is coming with devastating human impacts (3). Across different ecosystems and regions of the West, the causes of increasing fire activity vary (4–6), and thus so too do potential management and policy solutions (7, 8). Over a century of policies have limited Indigenous fire stewardship and emphasized fire suppression, leading to significant fire deficits in low- and mid-elevation forests that historically burned frequently in low-intensity surface fires (9, 10). This differs from high-elevation subalpine forests, where fire history records show that large, stand-replacing fires typically burned once every one to several centuries over recent millennia (11–17). Continued 21st century warming in these high-elevation forests is predicted to increase fire activity beyond the historical range of variability (18, 19). Detecting if and when such changes emerge, however, and understanding the magnitude of ongoing change requires placing contemporary burning in the context of the past. Here, we use a unique network of paleofire records spanning the past 2,000 y to test the hypothesis that 21st century climate change has led to unprecedented fire activity in Rocky Mountain subalpine forests. These high-elevation forests are useful sentinels of climate change impacts because their typically cool, moist climate limits frequent fire, and they have historically experienced less land-use change and fire suppression than lower-elevation forests. To place late 20th and 21st century wildfire activity in a millennial-scale context, we draw on existing tree-ring and lake sediment records of fire history from subalpine forests in a ∼30,000 km2 region in the central Rocky Mountains of Colorado and Wyoming (Fig.
1A), similar in size to the Greater Yellowstone Ecosystem.

Fig. 1. Wildfire and climate in the central Rocky Mountains. (A) Central Rocky Mountains (“Ecoregion”) and focal study area, with fire perimeters from 1984 to 2019 (thin, light red lines) and 2020 (thick, dark red lines; see also SI Appendix, Fig. S3). The 20 lakes with published paleofire records are shown with white circles; lakes recording fire events during the early MCA, c. 770 to 870 CE, are shown in red. The general locations of published tree-ring–based stand-age and fire-scar records used to reconstruct fire extent are shown with black plus symbols; the geographic extent represented by each study exceeds the extent of the symbols. (B) Ecoregion-wide area burned for fire perimeters displayed in A and average May to September VPD. Percentages above red bars are the proportion of total area burned (from 1984 to 2020) contributed by the given year.
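The fire rotation period quoted above is the time needed for cumulative burning to equal the area of the study region. A minimal sketch (the region size matches the ∼30,000 km2 study area; the annual burned areas are made-up values chosen only to illustrate the arithmetic):

```python
# Fire rotation period: years for cumulative area burned to equal the
# region's area, at the record's mean annual rate of burning.
def fire_rotation_period(region_area_km2, annual_burned_km2):
    """Region area divided by mean annual area burned."""
    mean_annual = sum(annual_burned_km2) / len(annual_burned_km2)
    return region_area_km2 / mean_annual

# Illustrative record: mean burning of 256 km2/y over a 30,000 km2 region
frp = fire_rotation_period(30_000.0, [200.0, 300.0, 268.0])  # ≈117 y
```

Because the rotation period is inversely proportional to the mean annual area burned, a single extreme season like 2020 that nearly doubles the cumulative burned area sharply shortens the period computed over the observational record.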

3.
On April 20, 2010, the Deepwater Horizon (DWH) blowout occurred, releasing more oil than any accidental spill in history. Oil release continued for 87 d and much of the oil and gas remained in, or returned to, the deep sea. A coral community significantly impacted by the spill was discovered in late 2010 at 1,370 m depth. Here we describe the discovery of five previously unknown coral communities near the Macondo wellhead and show that at least two additional coral communities were impacted by the spill. Although the oil-containing flocculent material that was present on corals when the first impacted community was discovered was largely gone, a characteristic patchy covering of hydrozoans on dead portions of the skeleton allowed recognition of impacted colonies at the more recently discovered sites. One of these communities was 6 km south of the Macondo wellhead and over 90% of the corals present showed the characteristic signs of recent impact. The other community, 22 km southeast of the wellhead between 1,850 and 1,950 m depth, was more lightly impacted. However, the discovery of this site considerably extends the distance from Macondo and the depth range of significant impact to benthic macrofaunal communities. We also show that most known deep-water coral communities in the Gulf of Mexico do not appear to have been acutely impacted by the spill, although two of the newly discovered communities near the wellhead apparently not impacted by the spill have been impacted by deep-sea fishing operations.

The explosion of the Deepwater Horizon (DWH) drilling rig at the Macondo wellhead site created an oil spill with characteristics unlike those of previous major oil spills, where the release occurred either on the ocean surface or at shallow depths (1, 2). Because of the physics of the release, as well as the extensive use of dispersants, much of the oil and gas remained at depth (3–6).
In addition, weathering, burning, and the application of dispersants to surface slicks resulted in a return of additional hydrocarbons to the deep sea (5, 7, 8). These potentially toxic hydrocarbons and dispersants could have impacted numerous deep-sea communities that are inherently difficult to assess. In October 2010, beginning 90 d after the wellhead was capped, we visited 13 deep-water coral sites spread over a depth range of 350–2,600 m and from 87.31° to 93.60° W in the Gulf of Mexico (GoM), and did not detect visual indications of acute effects on coral communities at any of these sites. However, on November 2, 2010, we discovered a previously unknown coral community 13 km away from the Macondo wellhead that had clearly suffered a recent severe adverse impact, and oil forensics indicated that hydrocarbons found on corals at the site originated from the Macondo wellhead (9, 10). Following that discovery, we made a systematic effort to discover additional communities in the vicinity of the wellhead and then determine the status of the corals in these communities. Locating deep-water coral communities in the GoM is a laborious process: these communities are rare and relatively small, and there is no known remote-sensing method to unambiguously locate them. Most corals require a stable, hard substrate upon which to settle and grow (11). However, most of the sea floor in the deep GoM is soft sediment. The primary exception in the deep northern Gulf is authigenic carbonate, which forms as an indirect byproduct of anaerobic hydrocarbon degradation by bacteria in areas with hydrocarbon seepage (12, 13). Authigenic carbonates form hardgrounds that are often suitable for a variety of attached megafauna and associated biological communities, including, in some cases, corals (14).

4.
Despite receiving just 30% of the Earth’s present-day insolation, Mars had water lakes and rivers early in the planet’s history, due to an unknown warming mechanism. A possible explanation for the >10²-y-long lake-forming climates is warming by water ice clouds. However, this suggested cloud greenhouse explanation has proved difficult to replicate and has been argued to require unrealistically optically thick clouds at high altitudes. Here, we use a global climate model (GCM) to show that a cloud greenhouse can warm a Mars-like planet to a global average annual-mean temperature (T̄) of ∼265 K, which is warm enough for low-latitude lakes, and stay warm for centuries or longer, but only if the planet has spatially patchy surface water sources. Warm, stable climates involve surface ice (and low clouds) only at locations much colder than the average surface temperature. At locations horizontally distant from these surface cold traps, clouds are found only at high altitudes, which maximizes warming. Radiatively significant clouds persist because ice particles sublimate as they fall, moistening the subcloud layer so that modest updrafts can sustain relatively large amounts of cloud. The resulting climates are arid (area-averaged surface relative humidity ∼25%). In a warm, arid climate, lakes could be fed by groundwater upwelling, or by melting of ice following a cold-to-warm transition. Our results are consistent with the warm and arid climate favored by interpretation of geologic data, and support the cloud greenhouse hypothesis.

Mars is cold today, but early Mars was warm enough for lakes (e.g., ref. 1) that were habitable (2). These early (4 to <3 Ga) warm climates cannot be explained by basic models of the early Mars greenhouse effect (involving only CO2 and H2O vapor), because these predict climates that are too cold (3, 4). Hypotheses for solving this problem have difficulty explaining the geologic evidence for >10²-y-long lake-forming climates that persisted as late as <3 Ga (e.g., refs. 2, 4, and 5 and references therein). One hypothesis for reconciling models with data is greenhouse warming by H2–CO2 collision-induced absorption (e.g., refs. 6–8). Here, we demonstrate that a different mechanism can explain warm paleoclimates. Recently, warm (T̄ ∼265 K, where T̄ = annual mean temperature) early Mars climates were found in one three-dimensional (3D) global climate model (GCM) simulation of the greenhouse effect of high-altitude water ice clouds (9). However, Urata and Toon (9) did not check for steady-state mass balance of surface H2O reservoirs, and the high clouds produced by this model require an imposed cloud lifetime, adjusted to be longer than that of Earth clouds by a factor of 10². This and other choices have been described as “not physically reasonable” by subsequent work (10), and other studies also reached similarly pessimistic conclusions about the potential of water ice clouds to explain warm paleoclimates (3, 11). For example, Wordsworth et al. (12) found in their model that even if cloud precipitation was (unrealistically) disabled, cloud radiative effects gave only a 1- to 2-y-long rise in surface temperature, too brief to explain geologic data. High clouds are needed for strong cloud warming because high clouds are cold relative to the surface, and the greenhouse-warming potential of clouds increases as the temperature difference between the cloud-forming altitudes and the surface increases (e.g., refs. 10 and 11).
In one-dimensional (1D) models, strong H2O ice-cloud warming occurs if—and only if—radiatively significant clouds are located at high altitudes (11). Nevertheless, 1D calculations have consistently shown the potential of a tiny quantity of water (just ∼0.01 kg/m2 of H2O in the form of cloud ice) to raise planet temperature by ∼50 K (10, 11). For comparison, Mars today has an average of ∼3 × 10⁴ kg/m2 of surface H2O ice and ∼0.01 kg/m2 of atmospheric H2O vapor. This motivates new 3D simulations in order to understand the discrepant results from earlier studies and test the cloud greenhouse hypothesis. Here, we present cloud greenhouse simulations run from geologically reasonable initial conditions, with physically based cloud microphysics, and run for long enough for the atmosphere to reach equilibrium with surface water reservoirs. To test the cloud greenhouse hypothesis, we use the MarsWRF GCM (13, 14), modified to include radiatively active water ice clouds. MarsWRF is the Martian implementation of the Planet Weather Research and Forecasting (PlanetWRF) GCM (15), itself derived from the terrestrial Weather Research and Forecasting (WRF) model (16, 17). For early Mars, with static cloud locations, we find maximum warming when cloud optical depth is of order unity and clouds are high (∼30 km altitude) (SI Appendix, Fig. S2). These 3D results with static clouds are consistent with the 1D results of refs. 10 and 11 (SI Appendix). In the remainder of this paper, we use a dynamic water cycle (dynamic clouds), including sedimentation of individual cloud particles, rapid snow-out above an autoconversion threshold, and exchange with surface water ice (Methods). We assume that relative humidity is buffered to ≲1 by rapid condensation of cloud particles; although air parcels can reach saturation at any temperature in our model, condensation occurs at ≳190 K in our output, consistent with our assumption that supersaturation is minor.
Individual cloud particles undergo Stokes settling (gas viscosity 10⁻⁵ Pa·s) at a rate set by their modal sizes, with a Cunningham slip correction. Our model has two types of particles: cloud particles, which do not fall very fast, and snow, which does fall fast because the particles are much larger. Cloud particles can be converted to fast-settling snow if the cloud-particle number density exceeds a threshold; this threshold represents the dependence of cloud-particle coalescence on cloud-particle number density. Specifically, in order to conservatively represent the cloud-depleting effect of mass transfer from slow-settling (cm/s) cloud particles to fast-settling snow (autoconversion), we increase the settling velocity to a fast value (1 m/s) when the cloud-particle density exceeds a conservatively low threshold (3 × 10⁻⁵ kg/kg). In other words, conversion of cloud ice to snow does not occur until the cloud-particle number density reaches a certain magnitude. Falling particles either reach the ground as snow or reevaporate when they descend into dry air. Consistent with output from another GCM (11), we find that snow descending into unsaturated air at >30 km will evaporate well before reaching the ground.
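The settling scheme described above can be sketched as follows. The viscosity, fast fall speed, and autoconversion threshold are taken from the text; the particle radius, ice density, Mars gravity, and gas mean free path are illustrative assumptions:

```python
import math

MU = 1e-5         # gas dynamic viscosity, Pa*s (from the text)
G_MARS = 3.71     # Mars surface gravity, m/s^2 (assumed)
RHO_ICE = 917.0   # density of water ice, kg/m^3 (assumed)
SNOW_V = 1.0      # fast snow fall speed, m/s (from the text)
AUTOCONV = 3e-5   # autoconversion threshold, kg/kg (from the text)

def settling_velocity(radius_m, mixing_ratio_kgkg, mean_free_path_m=6e-6):
    """Fall speed of a cloud ice particle.

    Above the autoconversion threshold the particle is treated as
    fast-settling snow; below it, Stokes settling with a Cunningham
    slip correction (Kn = mean free path / radius).
    """
    if mixing_ratio_kgkg > AUTOCONV:
        return SNOW_V
    kn = mean_free_path_m / radius_m
    slip = 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))
    v_stokes = 2.0 * RHO_ICE * G_MARS * radius_m ** 2 / (9.0 * MU)
    return v_stokes * slip

# A 5 um particle below the threshold falls at cm/s speeds, as in the text
v_cloud = settling_velocity(5e-6, 1e-5)
```

The two-orders-of-magnitude jump between the cm/s cloud regime and the 1 m/s snow regime is what lets radiatively significant cloud persist at high altitude until the autoconversion threshold is crossed.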

5.
Sleep is homeostatically regulated in all animal species that have been carefully studied so far. The best characterized marker of sleep homeostasis is slow wave activity (SWA), the EEG power between 0.5 and 4 Hz during nonrapid eye movement (NREM) sleep. SWA reflects the accumulation of sleep pressure as a function of duration and/or intensity of prior wake: it increases after spontaneous wake and short-term (3–24 h) sleep deprivation and decreases during sleep. However, recent evidence suggests that during chronic sleep restriction (SR) sleep may be regulated by both allostatic and homeostatic mechanisms. Here, we performed continuous, almost completely artifact-free EEG recordings from frontal, parietal, and occipital cortex in freely moving rats (n = 11) during and after 5 d of SR. During SR, rats were allowed to sleep during the first 4 h of the light period (4S+) but not during the following 20 h (20S). During the daily 20S most sleep was prevented, whereas the number of short (<20 s) sleep attempts increased. Low-frequency EEG power (1–6 Hz) in both sleep and wake also increased during 20S, most notably in the occipital cortex. In all animals NREM SWA increased above baseline levels during the 4S+ periods and in post-SR recovery. The SWA increase was more pronounced in frontal cortex, and its magnitude was determined by the efficiency of SR. Analysis of cumulative slow wave energy demonstrated that the loss of SWA during SR was compensated by the end of the second recovery day. Thus, the homeostatic regulation of sleep is preserved under conditions of chronic SR.
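SWA as defined above is simply spectral power in the 0.5–4 Hz band of the NREM EEG. A hedged sketch on a synthetic signal (the 128 Hz sampling rate and the test waveform are illustrative, not the study's recording parameters):

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean periodogram power of `eeg` between `lo` and `hi` Hz."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * len(eeg))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic 30 s epoch: a 2 Hz slow wave plus weaker 20 Hz activity
fs = 128.0
t = np.arange(0.0, 30.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.sin(2 * np.pi * 20.0 * t)
swa = band_power(eeg, fs, 0.5, 4.0)   # dominated by the slow wave
```

Tracking this quantity epoch by epoch across the 4S+ and recovery periods, and summing it over time, gives a cumulative slow-wave-energy measure of the kind used to show that SWA lost during SR was repaid by the second recovery day.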

6.
In recent years, the Northern Hemisphere has suffered several devastating regional summer weather extremes, such as the European heat wave in 2003, the Russian heat wave and the Indus river flood in Pakistan in 2010, and the heat wave in the United States in 2011. Here, we propose a common mechanism for the generation of persistent longitudinal planetary-scale high-amplitude patterns of the atmospheric circulation in the Northern Hemisphere midlatitudes. Those patterns—with zonal wave numbers m = 6, 7, or 8—are characteristic of the above extremes. We show that these patterns might result from trapping within midlatitude waveguides of free synoptic waves with zonal wave numbers k ≈ m. Usually, the quasistationary dynamical response with the above wave numbers m to climatological mean thermal and orographic forcing is weak. Such midlatitude waveguides, however, may favor a strong magnification of that response through quasiresonance.

7.
Between September 2014 and February 2015, the number of Ebola virus disease (EVD) cases reported in Sierra Leone declined in many districts. During this period, a major international response was put in place, with thousands of treatment beds introduced alongside other infection control measures. However, assessing the impact of the response is challenging, as several factors could have influenced the decline in infections, including behavior changes and other community interventions. We developed a mathematical model of EVD transmission, and measured how transmission changed over time in the 12 districts of Sierra Leone with sustained transmission between June 2014 and February 2015. We used the model to estimate how many cases were averted as a result of the introduction of additional treatment beds in each area. Examining epidemic dynamics at the district level, we estimated that 56,600 (95% credible interval: 48,300–84,500) Ebola cases (both reported and unreported) were averted in Sierra Leone up to February 2, 2015 as a direct result of additional treatment beds being introduced. We also found that if beds had been introduced 1 month earlier, a further 12,500 cases could have been averted. Our results suggest the unprecedented local and international response led to a substantial decline in EVD transmission during 2014–2015. In particular, the introduction of beds had a direct impact on reducing EVD cases in Sierra Leone, although the effect varied considerably between districts.

The 2013–2015 Ebola virus disease (EVD) epidemic in West Africa has seen more cases than all past outbreaks combined (1), and has triggered a major international response.
In Sierra Leone, where there have been over 8,600 confirmed cases reported as of August 1, 2015, the Sierra Leone and UK governments and nongovernmental organizations have supported the gradual introduction of over 1,500 beds in Ebola Holding Centers (EHCs) and Community Care Centers (CCCs), as well as over 1,200 beds in larger-scale Ebola Treatment Units (ETUs) (2, 3). Beyond the humanitarian value of providing treatment and care to sick patients, there is a secondary benefit to expanding bed capacity that is more difficult to quantify: by isolating the ill and removing them from the community, further infections might be prevented. Since the peak of the epidemic in Sierra Leone in November 2014, when there were over 500 confirmed EVD cases reported per week, the level of infection has dropped, with fewer than 100 confirmed cases reported per week in February 2015. Although the nationwide decline in cases coincided with an increase in the number of beds available (4), as well as improved case detection, tracing of contacts, and safe burials of patients who had died (3, 5), there has been criticism of the timing and focus of the international response in Sierra Leone (6, 7). To properly evaluate the control efforts, and to plan for future outbreaks of EVD, it is therefore crucial to understand how many cases were likely averted as a result of the response. Mathematical models have been used prospectively to estimate the potential impact of additional beds (8–11). However, evaluating the effect of control measures retrospectively is more challenging, because a model must disentangle the reduction in transmission due to improved bed capacity from other factors. Behavior changes (12), community engagement, improved case finding, and an increase in safe burials (5) could all have contributed to a reduction in transmission.
Indeed, many Ebola facilities were designed to be part of a package of interventions, combining treatment beds with community-based infection control (3). To estimate how EVD transmission changed as interventions were introduced, we developed a stochastic mathematical model of Ebola transmission in Sierra Leone. The model was stratified by district, and incorporated available data on bed capacity in ETUs, EHCs, and CCCs (13). As beds were not the only control measure in place, we also included a time-varying transmission rate in the model (4, 14) to capture any variation in transmission that was not explained by the introduction of beds. As not all new cases in Sierra Leone occurred among known contacts of EVD patients (15), we accounted for potential underreporting in our model. In our main analysis, we assumed that 60% of infectious individuals would be ascertained (i.e., would be reported and seek treatment), and that it took an average of 4.5 d for these individuals to be reported (16). We also included the possibility of variability in the accuracy of reporting, with weekly reported cases following a negative binomial distribution. In the model, stochasticity could therefore be generated by both the transmission process and the reporting process. We assumed infectious individuals who were ascertained attended EHCs/CCCs if beds were available (16); the average time between onset and attendance declined over time, based on reported values for Sierra Leone (SI Appendix, Fig. S1). Once test results were received, patients were transferred to an available ETU; we assumed this took 2 d on average. If no beds were available at any facility, cases remained in the community. The model structure is shown in Fig. 1, and the full set of parameter values in SI Appendix, Table S1.

Fig. 1. Model structure. Individuals start off susceptible to infection (S).
Upon infection with Ebola they enter an incubation period (E), then at symptom onset they become infectious; these individuals either eventually become ascertained (IA) or do not (IM). Individuals who are ascertained initially seek health care in EHCs/CCCs (or ETUs if these are full); if no beds are available, they remain infectious in the community until the infection is resolved (R), i.e., they have recovered, or are dead and buried. Patients in EHCs/CCCs are transferred to ETUs once they have been tested for Ebola, which takes an average of 2 d. Patients remain in ETUs until the infection is resolved. We assume the latent period is 9.4 d, the average time from onset to EHC/CCC attendance declines from an initial value of 4.6 d (SI Appendix, Fig. S1), and individuals who do not seek treatment are infectious for 10.9 d on average (details in SI Appendix). To allow for a time-varying community transmission rate, we used a flexible sigmoid function (14, 17); depending on parameter values, transmission could be constant over time, or increase or decline. Our model structure therefore made it possible to separate the reduction in infection resulting from additional treatment beds from variation resulting from other effects, such as behavior changes and implementation of safe burials. We used a Bayesian approach to fit the model to weekly confirmed and probable EVD case data reported in each district of Sierra Leone (18, 19), and to estimate how community transmission varied over time. We then used the fitted model to simulate multiple stochastic epidemic trajectories, and measured the number of cases that could have occurred in each district had additional beds not been introduced.
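A drastically simplified, deterministic sketch of the compartmental structure in Fig. 1: S → E → infectious (ascertained IA or missed IM), with ascertained cases moving into a finite bed pool where onward transmission is assumed to stop. The latent period, community infectious period, and onset-to-attendance time follow values stated in the text; the transmission rate, ascertainment fraction, bed count, and population are illustrative, and the stochastic, district-stratified, negative binomial reporting machinery of the actual model is omitted:

```python
# Toy bed-constrained SEIR-like model, integrated with daily Euler steps.
def simulate(days=200, beds=100.0, beta=0.32, ascert=0.6, n=1e6):
    sigma = 1.0 / 9.4      # incubation period, d (from the text)
    gamma = 1.0 / 10.9     # community infectious period, d (from the text)
    bed_rate = 1.0 / 4.6   # onset-to-attendance rate, 1/d (from the text)
    s, e, ia, im, h = n - 1.0, 1.0, 0.0, 0.0, 0.0
    total = 0.0            # cumulative infections
    for _ in range(days):
        inf = beta * s * (ia + im) / n                   # bedded cases isolated
        to_bed = min(bed_rate * ia, max(beds - h, 0.0))  # finite bed pool
        de = inf - sigma * e
        dia = ascert * sigma * e - gamma * ia - to_bed
        dim = (1.0 - ascert) * sigma * e - gamma * im
        dh = to_bed - gamma * h
        s -= inf
        e += de
        ia += dia
        im += dim
        h += dh
        total += inf
    return total
```

Differencing two runs, `simulate(beds=0.0) - simulate(beds=100.0)`, is the toy analogue of the paper's averted-case calculation, which reruns the fitted model without the additional beds.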

8.
The ability to hold multiple objects in memory is fundamental to intelligent behavior, but its neural basis remains poorly understood. It has been suggested that multiple items may be held in memory by oscillatory activity across neuronal populations, yet there is little direct evidence. Here, we show that neuronal information about two objects held in short-term memory is enhanced at specific phases of underlying oscillatory population activity. We recorded neuronal activity from the prefrontal cortices of monkeys remembering two visual objects over a brief interval. We found that during this memory interval prefrontal population activity was rhythmically synchronized at frequencies around 32 and 3 Hz and that spikes carried the most information about the memorized objects at specific phases. Further, according to their order of presentation, optimal encoding of the first presented object occurred significantly earlier in the 32 Hz cycle than that of the second object. Our results suggest that oscillatory neuronal synchronization mediates phase-dependent coding of memorized objects in the prefrontal cortex. Encoding at distinct phases may play a role in disambiguating information about multiple objects in short-term memory.
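Phase-of-firing analyses of this kind assign each spike the instantaneous phase of the band-limited population signal, typically via the analytic signal. A hedged sketch on a synthetic 32 Hz oscillation (the LFP, sampling rate, and spike times are all fabricated for illustration; a real analysis would band-pass filter the recorded signal first):

```python
import numpy as np

fs = 1024.0
t = np.arange(0.0, 1.0, 1.0 / fs)
lfp = np.cos(2 * np.pi * 32.0 * t)   # idealized 32 Hz population signal

def spike_phases(spike_times_s, signal, fs):
    """Instantaneous phase (radians) of `signal` at each spike time,
    using an FFT-based analytic signal (Hilbert transform)."""
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # Nyquist bin for even-length signals
    analytic = np.fft.ifft(spec * h)
    idx = np.round(np.asarray(spike_times_s) * fs).astype(int)
    return np.angle(analytic[idx])

# Spikes at oscillation peaks land at phase ~0; at troughs, ~±pi
peak_ph = spike_phases([0.25, 0.5, 0.75], lfp, fs)
```

Binning spikes by this phase and computing stimulus information per bin would then reveal whether the two memorized objects are best encoded at distinct phases of the cycle, as reported.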

9.
We conducted randomized clinical trials to examine the impact of direct-to-consumer advertisements on the efficacy of a branded drug. We compared the objectively measured, physiological effect of Claritin (Merck & Co.), a leading antihistamine medication, across subjects randomized to watch a movie spliced with advertisements for Claritin or with advertisements for Zyrtec (McNeil), a competitor antihistamine. Among subjects who tested negative for common allergies, exposure to Claritin advertisements rather than Zyrtec advertisements increased the efficacy of Claritin. We conclude that the efficacy of branded drugs can interact with exposure to television advertisements.

10.
We describe a miniaturized head-mounted multiphoton microscope and its use for recording Ca2+ transients from the somata of layer 2/3 neurons in the visual cortex of awake, freely moving rats. Images contained up to 20 neurons and were stable enough to record continuously for >5 min per trial and 20 trials per imaging session, even as the animal was running at velocities of up to 0.6 m/s. Neuronal Ca2+ transients were readily detected, and responses to various static visual stimuli were observed during free movement on a running track. Neuronal activity was sparse and increased when the animal swept its gaze across a visual stimulus. Neurons showing preferential activation by specific stimuli were observed in freely moving animals. These results demonstrate that the multiphoton fiberscope is suitable for functional imaging in awake and freely moving animals.

11.
To develop effective environmental policies, we must understand the mechanisms through which the policies affect social and environmental outcomes. Unfortunately, empirical evidence about these mechanisms is limited, and little guidance for quantifying them exists. We develop an approach to quantifying the mechanisms through which protected areas affect poverty. We focus on three mechanisms: changes in tourism and recreational services; changes in infrastructure in the form of road networks, health clinics, and schools; and changes in regulating and provisioning ecosystem services and foregone production activities that arise from land-use restrictions. Nearly two-thirds of the poverty reduction associated with the establishment of Costa Rican protected areas is causally attributable to opportunities afforded by tourism. Although protected areas reduced deforestation and increased regrowth, these land cover changes neither reduced nor exacerbated poverty, on average. Protected areas did not, on average, affect our measures of infrastructure and thus did not contribute to poverty reduction through this mechanism. We attribute the remaining poverty reduction to unobserved dimensions of our mechanisms or to other mechanisms. Our study empirically estimates previously unidentified contributions of ecotourism and other ecosystem services to poverty alleviation in the context of a real environmental program.
We demonstrate that, with existing data and appropriate empirical methods, conservation scientists and policymakers can begin to elucidate the mechanisms through which ecosystem conservation programs affect human welfare.

Scholars and practitioners have begun to more carefully assess the causal effects of ecosystem conservation programs on environmental and social outcomes (e.g., land cover and local livelihoods; reviews in refs. 1–5) and how these effects vary spatially (6, 7). However, we still know very little about why these effects occur or fail to occur (8).

Consider, for example, a bulwark of ecosystem conservation: the creation of protected area networks, like parks and reserves. Governments often establish these networks on marginal lands in rural areas where poor households reside (9–12). The effects of protection on poverty in neighboring communities are thus a subject of much concern and debate (text and references in refs. 12 and 13). Recent studies have estimated that protected areas reduced poverty in neighboring communities in Bolivia, Costa Rica, and Thailand (12, 14, 15). These studies, however, do not elucidate the specific mechanisms through which the protected areas reduced poverty.

Understanding the mechanisms through which environmental programs work is crucial for sustainability science and practice. Armed with such knowledge, decision makers can design programs that foster the mechanisms that alleviate poverty and mitigate the mechanisms that exacerbate poverty. The ecosystem conservation literature, however, offers little guidance on how to empirically estimate the impacts of these mechanisms.
To show how the causal mechanisms of protected areas (or of any environmental program) can be identified, we use data from Costa Rica and quantify the proportion of Andam et al.’s (12) estimated poverty reduction that can be attributed to changes in infrastructure, tourism services, and other ecosystem services.

Ecosystem services are important in the lives of the rural poor (16, 17), and some have proposed that there may be strong links between protecting ecosystem services and sustainable development (18). In an essay on the relationship between ecosystem conservation and the Millennium Development Goals, the authors argue that, “[a]ction is urgently needed to identify and quantify the links between biodiversity and ecosystem services on the one hand, and poverty reduction on the other” (ref. 19, p. 1502). We agree, but argue that the focus should not be on poverty’s links to biodiversity and ecosystem services per se, but rather on poverty’s links to programs that aim to maintain or enhance biodiversity and ecosystem services. Although studies have tried to estimate the value of ecosystem services to the poor (for example, references in refs. 17 and 20), these studies do not measure the impacts of changes in ecosystem services that result from actual policies and programs.

Conservationists cannot induce changes in ecosystem services by magic; they must use policies and programs. There is a difference between the statements “poor people depend on ecosystem services” and “poor people would be better off with a specific conservation program” (ref. 21, p. 1137). Poor people may indeed derive value from ecosystem services, but a protected area program, for example, may affect the poor very differently than a payment for environmental services program would affect them. The effects may differ because the programs operate through different mechanisms or affect the same mechanisms to different degrees.
Our study seeks to measure the poverty impacts of changes in ecosystem services that result from an actual conservation program. Our study also measures the contribution to poverty alleviation from protected area-based ecotourism at a national scale. In December 2010, the United Nations General Assembly unanimously adopted a resolution stating that “ecotourism can … contribute to the fight against poverty, the protection of the environment and the promotion of sustainable development.” [Resolution 65/173, entitled “Promotion of ecotourism for poverty eradication and environment protection” (http://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/65/173).] However, the hypothesis that nature-based tourism can benefit the rural poor has been vigorously debated (9, 10, 22–25), and the empirical evidence for or against it is weak. In one review, the authors note that “there is no way to know the extent of changes in poverty … that can be attributed to a specific ecotourism project because none of the studies provided baseline measures or established specific causal mechanisms to relate the implemented program with observed outcomes” (ref. 8, p. 21). Our study does both.
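The decomposition described above, apportioning a program's total poverty impact among candidate mechanisms, can be illustrated with a textbook product-of-coefficients mediation estimate on simulated data. This is only a toy sketch: the study itself uses matching-based causal estimators, and every variable name and effect size below is invented for illustration.

```python
import numpy as np

# Simulated data: a binary "protected area" treatment, a tourism mediator,
# and a poverty-reduction outcome. All coefficients are invented.
rng = np.random.default_rng(0)
n = 5000
treat = rng.integers(0, 2, n).astype(float)
tourism = 0.8 * treat + rng.normal(size=n)                 # treatment -> mediator
outcome = 0.5 * treat + 1.0 * tourism + rng.normal(size=n)  # direct + mediated paths

# Stage 1: effect of treatment on the mediator (coefficient a)
X1 = np.column_stack([np.ones(n), treat])
a = np.linalg.lstsq(X1, tourism, rcond=None)[0][1]

# Stage 2: outcome on treatment and mediator (direct effect c', coefficient b)
X2 = np.column_stack([np.ones(n), treat, tourism])
_, c_direct, b = np.linalg.lstsq(X2, outcome, rcond=None)[0]

total_effect = c_direct + a * b
mediated_share = a * b / total_effect  # fraction of the effect flowing through tourism
```

By construction, the synthetic mediated share here lands near 0.8/1.3 ≈ 0.62; it echoes the paper's "nearly two-thirds through tourism" finding only because the simulated coefficients were chosen that way.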

12.
Collecting and removing ocean plastics can mitigate their environmental impacts; however, ocean cleanup will be a complex and energy-intensive operation that has not been fully evaluated. This work examines the thermodynamic feasibility and subsequent implications of hydrothermally converting this waste into a fuel to enable self-powered cleanup. A comprehensive probabilistic exergy analysis demonstrates that hydrothermal liquefaction has the potential to generate sufficient energy to power both the process and the ship performing the cleanup. Self-powered cleanup reduces the number of roundtrips to port of a waste-laden ship, eliminating the need for fossil fuel use for most plastic concentrations. Several cleanup scenarios are modeled for the Great Pacific Garbage Patch (GPGP), corresponding to 230 t to 11,500 t of plastic removed yearly; the range corresponds to uncertainty in the surface concentration of plastics in the GPGP. Estimated cleanup times depend mainly on the number of booms that can be deployed in the GPGP without sacrificing collection efficiency. Self-powered cleanup may be a viable approach for removal of plastics from the ocean, and gaps in our understanding of GPGP characteristics should be addressed to reduce uncertainty.

An estimated 4.8 million to 12.7 million tons of plastic enter the ocean each year, distributing widely across the ocean’s surface and water column, settling into sediments, and accumulating in marine life (1–3). Numerous studies have shown that plastics cause significant damage to marine life and birds, motivating the introduction of effective mitigation and removal measures (4). Reducing or eliminating the amount of plastic waste generated is critically important, especially when the current loading may persist for years to even decades (1, 5, 6).

As a highly visible part of an integrated approach for removing plastics from the environment (1, 5, 6), efforts are underway to collect oceanic plastic from accumulation zones in gyres formed by ocean currents (3, 7). Present approaches to remove plastic from the open ocean utilize a ship that must store plastic on board until it returns to port, often thousands of kilometers away, to unload the plastic, refuel, and resupply. Optimistic evaluation of cleanup time using the harvest–return approach indicates that at least 50 y will be required for full plastic removal (7), with an annual cost of $36.2 million (8); more conservative estimates suggest that partial removal will require more than 130 y (7, 9). Cleanup times of decades mean that environmental degradation may have already reduced the existing plastics to microscopic and smaller forms that can no longer be harvested before cleanup is completed (1, 4, 9). These considerations underscore the massive challenge of removing plastics from the ocean and naturally raise the following question: Can any approach remove plastics from the ocean faster than they degrade?

Some current plastic removal strategies involve accumulation via a system of booms, consisting of semicircular buoys fit with a fine mesh extending below the ocean surface (7, 10). These booms are positioned so that prevailing currents bring plastic to the boom, where it then accumulates.
The currently envisioned approach is for a ship to steam to the boom system, collect plastic, and then return to port to offload and refuel before resuming collection activities. The time required for recovering plastics could be reduced if return trips to refuel and unload plastic were eliminated. Indeed, the harvested plastic has an energy density similar to hydrocarbon fuels; harnessing this energy to power the ship could thereby eliminate the need to refuel or unload plastic from the ship, reducing fossil fuel usage and potentially cleanup times.

Self-powered harvesting may provide a way to accomplish cleanup using the passive boom collection approach on timescales shorter than those of environmental degradation. Unfortunately, cleanup itself is a moving target, as technology improves (7) and especially as plastic continues to accumulate. What is required, therefore, is a framework to evaluate the impact of self-powered harvesting on cleanup time and fuel usage. The framework can then be updated as more data become available.

To be valuable, the cleanup framework must be reducible to practice using actual technology. A viable technology for converting plastics into a usable fuel is hydrothermal liquefaction (HTL), which utilizes high temperature (300 °C to 550 °C) and high pressure (250 bar to 300 bar) to transform plastics into monomers and other small molecules suitable as fuels (11–13). Oil yields from HTL are typically >90% even in the absence of catalysts and, unlike pyrolysis, yields of solid byproducts—which would need to be stored or burned in a special combustor—are less than 5% (11–13), thus conferring certain comparative advantages to HTL. Ideally, a vessel equipped with an HTL-based plastic conversion system could fuel itself, creating its fuel from recovered materials.
The result could be termed “blue diesel,” referencing its marine origin, in contrast with both traditional marine diesel and “green diesel,” derived from land-based renewable resources (14).

To make the HTL approach feasible, the work produced from the plastic must exceed that required by the process and, ideally, the ship’s engines, so that fuel can be stockpiled during collection for later use. Exergy analysis provides a framework to determine the maximum amount of work that a complex process is capable of producing without violating the fundamental laws of thermodynamics (15). The reliability of an exergy analysis depends on the reliability of the data it uses as inputs, and key parameters describing HTL performance and ocean surface plastic concentration are currently not known with certainty. A rigorous and statistically meaningful analysis of shipboard plastic processing must therefore integrate uncertainty (16). Here, the Monte Carlo (MC) simulation method, which has proven its usefulness for similar types of analyses, is an appropriate tool for handling the uncertainties inherent in the current application (17), and it allows for the integration of new information and data as further study of oceanic surface plastic is completed.

Accordingly, the thermodynamic performance of a shipboard HTL process was evaluated to determine whether (and when) the process could provide sufficient energy to power itself plus the ship. A framework was then developed to evaluate the implications of shipboard plastic conversion on fuel use and cleanup times. The results provide valuable insight into the potential use of shipboard conversion technologies for accelerating removal of plastics from the ocean, and the framework should prove useful for guiding future work in this area.
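The energy-balance question at the heart of the MC analysis can be sketched in a few lines: sample the uncertain parameters, check whether HTL fuel output exceeds process plus propulsion demand, and report the fraction of trials with a surplus. Every parameter range below is an illustrative assumption, not a value from the study.

```python
import random

def simulate_net_energy(n_trials=20_000, seed=0):
    """Toy Monte Carlo sketch: in what fraction of sampled scenarios does
    HTL of harvested plastic yield enough fuel energy to cover process
    heat plus ship propulsion? All parameter ranges are assumed."""
    rng = random.Random(seed)
    surplus = 0
    for _ in range(n_trials):
        plastic_rate = rng.uniform(100, 500)    # kg/h of plastic collected (assumed)
        oil_yield = rng.uniform(0.85, 0.95)     # HTL oil mass fraction (assumed)
        lhv = rng.uniform(35, 43)               # MJ/kg lower heating value of oil (assumed)
        process_demand = rng.uniform(2, 6)      # MJ/kg plastic for heating/pumping (assumed)
        ship_demand = rng.uniform(2000, 6000)   # MJ/h propulsion demand (assumed)

        fuel_energy = plastic_rate * oil_yield * lhv       # MJ/h produced
        net = fuel_energy - plastic_rate * process_demand  # MJ/h after process load
        if net > ship_demand:
            surplus += 1
    return surplus / n_trials
```

The real analysis replaces these uniform guesses with exergy balances and measured parameter distributions, but the structure (sample, evaluate, tally) is the same, which is why new GPGP concentration data can be folded in by simply updating the sampled ranges.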

13.
Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (total n = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.

Human faces convey a rich amount of social information beyond their identity (1–3). We are able to rapidly evaluate the age (4), gender (5, 6), and emotional expression (7) of the faces of individuals, even if they are not known to us, in addition to more abstract traits, such as trustworthiness and aggressiveness (8, 9). Although these judgements are based on visual information, biases have been identified that suggest that both perceptual and cognitive factors are involved in face evaluation (10–13). For example, people tend to judge faces as closer to their own age (10, 13), and damage to the amygdala is associated with perceiving unfamiliar faces as more trustworthy and approachable (12). Biases in face perception have important implications for understanding the neural processing of faces and their role in complex social behaviors (3). However, it is still unknown to what extent these behavioral biases arise from the tuning of the underlying face-processing mechanisms or, alternatively, from the nature of the experimental stimuli and task (10, 11). Here we approach this question from a new angle by examining face evaluation for a different class of faces: illusory faces in inanimate objects.

Face pareidolia is the spontaneous perception of illusory facial features in inanimate objects (Fig. 1), and can be thought of as a natural error of our face detection system (14–18). It has recently been shown that nonhuman primates also experience face pareidolia (14, 15), and that illusory faces engage similar neural mechanisms to real faces in the human brain (18). However, it is unclear to what degree higher-level social perception beyond the detection of a face occurs in pareidolia. Investigation of face evaluation in illusory faces has the potential to reveal new insight into the underlying mechanisms of face perception.
A key feature of face pareidolia is that it involves the spontaneous perception of a face in an inanimate object, and consequently it is an example of face perception that is divorced from many characteristics that typically accompany the faces of living organisms, such as the motion of facial muscles (e.g., to form emotional expressions), chronological age, and biological sex. The primary question we address here is whether illusory faces are perceived to have these traits even in the absence of their biological specification. As there is no a priori reason why an illusory face should be perceived to have a specific age, gender or expression, any reliable perception of these attributes would be informative about inherent properties of the underlying system.

[Fig. 1. Distribution of face ratings for 10 representative illusory face images from the set of 256 images used in Exp. 1a. Each image [n(images) = 256] received 100 ratings [total n(participants) = 800] in Exp. 1a. Below each image is the median (x̃) face-rating score, and a frequency plot of the distribution of face ratings for each image. Note: the scale of the y axis differs across frequency plots.]

Studies using human faces have suggested potential biases in the perceived characteristics of human faces along dimensions such as age (10, 13) and gender (10, 11, 19) under conditions of visual uncertainty. However, determining the potential origin and generality of these biases has proven difficult and highlights the fundamental challenges inherent in understanding how the perception of specific traits is linked to face processing. Human faces are visually complex, and our brains are incredibly well-adapted to processing faces as a cohesive whole (20). Consequently, it is challenging to empirically isolate particular aspects of a human face (e.g., biological sex) from other interdependencies (e.g., identity).
Additionally, since human faces have a biologically specified age and gender, it is necessary to introduce uncertainty via deliberate experimental manipulation of the stimuli. Studies of human faces have used various forms of image manipulation, including removing hair (21, 22), showing silhouettes of faces in profile (23), adding visual noise (24), and synthetically generating faces by morphing along stimulus dimensions, such as gender (10, 11, 19). A critical advantage of using pareidolia to probe the tuning of the face-processing system is that no decisions about stimulus manipulation need to be made, as attributes such as gender and age are unspecified for illusory faces: there is no ground truth. This circumvents the concern that any observable biases are due to choices made in stimulus manipulation (10, 11), and instead any biases observed in the characteristics perceived for these faces are likely to be reflective of the underlying tuning of the face-processing system.

In a series of large-scale behavioral experiments (total n = 3,815) we show that illusory faces in objects are perceived to have a distinct emotional expression, age, and gender.* Furthermore, we discovered a clear bias to perceive illusory faces as male rather than female, at a ratio of ∼4:1. This male bias for pareidolia is highly robust across images and people (Exps. 1a, 3a, and 3b), and cannot be explained by the corresponding object identity (Exps. 1b and 4), object label (Exp. 1b), color (Exps. 2 and 3), or object image content (Exp. 4) of the illusory face images. In contrast, using the same paradigm, we find that human face morphs created from an equal contribution of male and female faces are more likely to be perceived as female than male, although the female bias is smaller in magnitude than the male bias observed for pareidolia (Exp. 5).
Together, these results demonstrate that gender evaluation is inextricably linked to face detection, and reveal that these mechanisms are engaged not only by human faces, but also by stimuli containing only the minimal visual information required for face detection. It is important to emphasize that no assignment of gender is necessary for illusory faces, as they do not have a biological sex. The existence of a compelling and biased categorization of gender for illusory faces suggests a broadly tuned face evaluation system in which the features that are sufficient for face detection are not generally sufficient for the perception of a female face.
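A bias of this magnitude is easy to verify statistically: at a ~4:1 response ratio, even modest samples decisively reject a 50/50 null. The sketch below runs an exact two-sided binomial test on hypothetical counts (80 "male" responses out of 100); the counts are illustrative, not the paper's data.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    def pmf(i):
        return comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    # Small tolerance guards against float ties when comparing probabilities
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed * (1 + 1e-12))

# Hypothetical counts reflecting a ~4:1 male:female response ratio
p_val = binom_two_sided_p(80, 100)
```

With these counts, p_val is far below any conventional threshold, which is why a bias like the one reported is robust across images and observers rather than a sampling artifact.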

14.
15.
The global terrestrial carbon sink offsets one-third of the world’s fossil fuel emissions, but the strength of this sink is highly sensitive to large-scale extreme events. In 2012, the contiguous United States experienced exceptionally warm temperatures and the most severe drought since the Dust Bowl era of the 1930s, resulting in substantial economic damage. It is crucial to understand the dynamics of such events because warmer temperatures and a higher prevalence of drought are projected in a changing climate. Here, we combine an extensive network of direct ecosystem flux measurements with satellite remote sensing and atmospheric inverse modeling to quantify the impact of the warmer spring and summer drought on biosphere-atmosphere carbon and water exchange in 2012. We consistently find that earlier vegetation activity increased spring carbon uptake and compensated for the reduced uptake during the summer drought, which mitigated the impact on net annual carbon uptake. The early phenological development in the Eastern Temperate Forests played a major role in the continental-scale carbon balance in 2012. The warm spring also depleted soil water resources earlier, and thus exacerbated water limitations during summer. Our results show that the detrimental effects of severe summer drought on ecosystem carbon storage can be mitigated by warming-induced increases in spring carbon uptake. However, the results also suggest that the positive carbon cycle effect of warm spring enhances water limitations and can increase summer heating through biosphere–atmosphere feedbacks.

An increase in the intensity and duration of drought (1, 2), along with warmer temperatures, is projected for the 21st century (3). Warmer and drier summers can substantially reduce photosynthetic activity and net carbon uptake (4).
In contrast, warmer temperatures during spring and autumn prolong the period of vegetation activity and increase net carbon uptake in temperate ecosystems (5), sometimes even during spring drought (6). Atmospheric CO2 concentrations suggest that warm-spring–induced increases in carbon uptake could be cancelled out by the effects of warmer and drier summers (7). However, the extent and variability of potential compensation on net annual uptake using direct observations of ecosystem carbon exchange have not yet been examined for specific climate anomalies.

In addition to perturbations of the carbon cycle, warmer spring temperatures can have an impact on the water cycle by increasing evaporation from the soil and plant transpiration (8–10), which reduces soil moisture. Satellite observations suggest that warmer spring and longer nonfrozen periods enhance summer drying via hydrological shifts in soil moisture status (11). Climate model simulations also indicate a soil moisture–temperature feedback between early vegetation green-up in spring and extreme temperatures in summer (12, 13). Soil water deficits during drought impose a reduction in stomatal conductance, thereby reducing evaporative cooling and thus increasing near-surface temperatures (14). Stomatal closure also has a positive (enhancing) feedback with atmospheric water demand by increasing the vapor pressure deficit (VPD) of the atmosphere (15). The vegetation response thus plays a crucial role for temperature feedbacks during drought (16).

Given the opposing effects of concurrent warmer spring and summer drought, and an increased frequency of these anomalies projected until the end of this century (SI Appendix, Fig.
S1), it is imperative to understand (i) the response of the terrestrial carbon balance and (ii) the interaction of carbon uptake with water and energy fluxes that are associated with these seasonal climate anomalies.

The year 2012 was among the warmest on record for the contiguous United States (CONUS), which experienced one of the most severe droughts since the Dust Bowl era of the 1930s (17, 18). The drought caused substantial economic damage, particularly for agricultural production (SI Appendix). Annual mean temperatures were 1.8 °C above average, with the warmest spring (+2.9 °C) and second warmest summer (+1.4 °C) in the period of 1895–2012 (19). Precipitation deficits started to evolve in May across the Great Plains and the Midwest (17), but eventually affected more than half of the United States (20). By July, 62% of the United States experienced moderate to exceptional drought, which was the largest spatial extent of drought for the United States since the Dust Bowl era (19). Severe drought conditions with depleted soil moisture persisted throughout summer, and unprecedented precipitation deficits of 47% below normal for May through August were observed in the central Great Plains (17).

Here, we analyze the response of land-atmosphere carbon and water exchange for major ecosystems in the United States during the concurrent warmer spring and summer drought of 2012 at the ecosystem, regional, and continental scales. We combine direct measurements of land-atmosphere CO2, water vapor, and energy fluxes from 22 eddy-covariance (EC) towers across the United States (SI Appendix, Fig. S2 and Table S1) with large-scale satellite remote-sensing observations of gross primary production (GPP), evapotranspiration (ET), and enhanced vegetation index (EVI) derived from the space-borne Moderate Resolution Imaging Spectroradiometer (MODIS), and estimates of net ecosystem production (NEP; i.e., net carbon uptake) from an atmospheric CO2 inversion (CarbonTracker, CTE2014).
This comprehensive suite of standardized analyses across sites and data streams was crucial to constrain the impact of such a large-scale drought event with bottom-up and top-down approaches (21), and is something only a few synthesis studies have achieved so far (4, 22).

We test the hypothesis that increased carbon uptake due to the warm spring offset the negative impacts of severe summer drought during 2012, and examine the relationship between early-spring–induced soil water depletion and increased summer temperatures. When using the term “drought,” we refer to precipitation deficits that resulted in soil moisture deficiencies (9).

16.
Two fundamental constraints limit the number of characters in text that can be displayed at one time—print size and display size. These dual constraints conflict in two important situations—when people with normal vision read text on small digital displays, and when people with low vision read magnified text. Here, we describe a unified framework for evaluating the joint impact of these constraints on reading performance. We measured reading speed as a function of print size for three digital formats (laptop, tablet, and cellphone) for 30 normally sighted and 10 low-vision participants. Our results showed that a minimum number of characters per line is required to achieve a criterion of 80% of maximum reading speed: 13 characters for normally sighted and eight characters for low-vision readers. This critical number of characters is nearly constant across font and display format. Possible reasons for this required number of characters are discussed. Combining these character count constraints with the requirements for adequate print size reveals that an individual’s use of a small digital display or the need for magnified print can shrink or entirely eliminate the range of print size necessary for achieving maximum reading speed.

Two fundamental constraints limit the number of characters that can be displayed in text at one time—print size and display format. The print size must be legible for the reader, and the size of the display (or page) limits the amount of text that can be rendered at this print size. As the print size gets larger, the amount of displayable text (number of characters per line and number of lines per page or screen) shrinks. These dual constraints conflict in two important situations—when magnification is required for people with low vision, and when people with normal vision read text on small digital displays. In this paper, we provide a unified analysis of the joint impact of these constraints. We also present empirical evidence showing how these constraints limit reading performance in cases of reduced acuity and small displays.

A widely used measure of text legibility is reading speed, measured in words per minute (1–3). Reading speed is straightforward to measure, is sensitive to changes in both eye condition and text properties, and is functionally significant to readers (4). The relationship between print size and reading speed has been studied in detail, reviewed by Legge and Bigelow (5). Numerous studies have shown that as angular print size (i.e., the visual angle subtended by text letters) increases from the reader’s acuity limit, reading speed increases until a critical print size (CPS) is reached and then levels off at a maximum reading speed (MRS) for print sizes larger than the CPS. An example of reading speed as a function of print size is shown in Fig. 1A. This typical reading speed curve has been verified by various studies, and the idea of CPS is widely used by researchers and clinicians. For normally sighted readers, the CPS is ∼0.2°, and reading speed remains maximum for a factor of 10 in print size from 0.2° to 2° (5).

[Fig. 1. Illustration of the impact of print size and display size on reading speed. (A) A typical reading curve illustrating the impact of character print size on reading speed (3, 4, and 22). (B) A hypothetical reading curve illustrating the impact of display format on reading speed. (C and D) The hypothesized reading curves showing the joint impact of print size and display format (C: laptop, D: phone) on reading speed. The number of characters per line is expressed in terms of angular print size; the conversion is described in SI Appendix, Appendix 2. The CPS and the print size corresponding to the CCC determine the lower and upper bounds of the range of recommended print size that allows near-MRS. In C, a recommended print size range exists for the laptop format, as indicated by the gray zone. However, as shown in D, near-maximum reading is not possible for the phone format because the CPS exceeds the CCC.]

Early studies by Tinker and Paterson showed that when print size is fixed, the length of text lines, measured in picas (1 pica = 0.167 in), affects reading speed, indicating physical line length is an important factor to be considered when deciding on typographical layout (6). Several later studies examined the effect on reading speed of “window size” or “field of view” of magnifiers (7, 8) and provided estimates of the minimum field size in terms of the number of characters visible in the magnifier’s field of view. These prior findings are suggestive of the impact of display format on reading speed but do not show how print size and display size jointly constrain reading performance for continuous text. In the current study, we first examined the hypothesis that, for an individual to achieve maximum reading speed, lines of text must include, at least, a critical number of characters. We term this hypothetical number the critical character count (CCC). Our hypothetical curve of reading speed as a function of character count per line is shown in Fig.
1B: The reading speed stays at its maximum for large character counts but drops for character counts below the CCC. We hypothesize that the CCC (green vertical line) determines the minimum size of displays for effective reading.

Why would the number of characters per line affect reading speed? Some property of the text, unrelated to the reader’s vision status, might impose a constraint. For example, the distribution of word lengths might be crucial; reading speed may be unaffected as long as the line length can accommodate most or all of the words in the text, but may slow down when some individual words occupy more than one line. If an intrinsic text property is the limiting factor, we might expect similar CCC values for participants with both normal and low vision. Alternatively, the character count impact on reading speed might be determined by the perceptual span. McConkie and Rayner defined the perceptual span as the region around fixation in which printed information facilitates reading behavior (9). They introduced a “moving window” method to measure the perceptual span in which gaze-contingent eye tracking was used to distort text at varying distances from the point of fixation. Studies have shown that the perceptual span in normal vision includes three or four characters to the left of fixation and 14 to 15 characters to the right of fixation (10, 11). It is measured in terms of character spaces because it is independent of angular print size over a wide range (12–14) and is not font dependent (15). Recent studies have shown that the perceptual spans of low-vision participants with macular degeneration are substantially smaller than in normal vision (16–18). It is plausible that, if lines of text have fewer characters than the extent of a reader’s perceptual span, reading would slow down because less information is available on each eye fixation.
Moreover, if the size of the perceptual span determines the critical character count, we would expect the CCC to be lower in low vision than in normal vision.

Interest in the text capacity of small displays emerged with the advent of digital displays on microwave ovens and other appliances, and then with mobile devices such as cellphones and smart watches. Similar concerns exist with traffic displays and other electronic message signs viewed at a distance. For a given print size, the screen size determines the number of characters per line and the number of lines on the display and, therefore, the total text capacity of the display. When the display capacity is small, many pages are required to render lengthy texts, with associated time costs in line and page switching.

The tradeoff between print size and screen size becomes particularly acute for people with low vision who require large print to read. By a recent estimate, there are 5.7 million Americans with impaired vision, a number expected to increase to 9.6 million by the year 2050 as the population ages (19). Most people with impaired vision are not blind but have low vision. They continue to read visually but require substantial magnification of print. There is an important need to enhance the accessibility of websites and other digital displays for low-vision users by providing text formats that are customizable in terms of the number of characters per line and lines per screen. The flexibility of digital displays for customizing print size, page layout, and other properties of text has substantial advantages for people with low vision (20, 21). However, digital displays on small mobile devices pose challenges for people with low vision. For example, suppose a small display can fit 10 lines of 60 characters per line at the CPS of a normally sighted reader.
The same display might accommodate only one line of six characters for a person with 20/200 acuity.

The major goal of the research presented in this paper was to establish how display format interacts with the need for adequate print size in constraining reading performance for people with both normal and low vision. Our hypothesized unified framework for evaluating the joint impact of these constraints is shown in Fig. 1 C and D. Critically, for a given display format and font, the angular print size (lower axis) determines the character count per line (top axis); as the print size increases, the character count decreases. This reciprocal relationship enables the independent constraints on reading speed of print size (red curve from Fig. 1A) and character count (green curve from Fig. 1B) to be represented in a unified framework (Fig. 1 C and D). The black curves show the impact of the joint constraints, indicating that reading speed is expected to fall on or below the red and green curves. The CPS (red vertical line) determines the smallest print, the CCC (green vertical line) determines the largest print, and the gray zone in between represents the range of print sizes for achieving near-maximum reading speed (Fig. 1C). When a large CPS is required on a small display, the gray zone will shrink, and it disappears entirely if the CPS exceeds the print size associated with the CCC (Fig. 1D). In this case, readers cannot achieve their maximum reading speed.

To examine this unified framework, we measured reading speed as a function of print size for participants with both normal and low vision. They were tested with eight print sizes and three display configurations simulating typical sizes for cellphones, tablets, and laptops. The eight print sizes were selected to approximately match the character counts across the three display formats (see Fig. 2 and Methods).
Participants were instructed to read silently as quickly and accurately as possible while retaining good comprehension. Reading speed was calculated as the total number of words read within a 1-min period. We compared how the joint impact of print size and character count on reading speed changes with vision status, display format, and font.

Fig. 2. Two sets of sample stimuli. The upper panel shows a story excerpt with equal character count per page across the three display formats. The lower panel shows excerpts with similar print size across the three display formats. Display formats and print sizes in the figure are scaled in size to fit journal requirements. Here, the same story was used across the displays for demonstration purposes. In the actual experiment, no story was presented more than once to a given subject.
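The reciprocal relationship between print size and character count described above lends itself to a simple numeric sketch. The following Python illustration is hypothetical: the display widths, CPS, and CCC values are invented for demonstration and are not data or code from the study.

```python
def chars_per_line(display_width_deg, print_size_deg, char_aspect=1.0):
    """Approximate characters fitting on one line when each character cell
    is roughly char_aspect * print_size_deg wide. All sizes are angular
    (degrees of visual angle); the result is rounded to a whole count."""
    return round(display_width_deg / (char_aspect * print_size_deg))

def recommended_range(display_width_deg, cps_deg, ccc, char_aspect=1.0):
    """Return (min, max) print sizes allowing near-maximum reading speed:
    the lower bound is the CPS; the upper bound is the print size at which
    only CCC characters fit per line. Returns None when the CPS exceeds
    that upper bound (the 'gray zone' disappears, as in Fig. 1D)."""
    max_print = display_width_deg / (char_aspect * ccc)
    if cps_deg > max_print:
        return None
    return (cps_deg, max_print)

# A 12-degree-wide line: 60 characters at a small print size, but only
# 6 characters when print must be 10x larger (cf. the 20/200 example).
print(chars_per_line(12.0, 0.2))   # -> 60
print(chars_per_line(12.0, 2.0))   # -> 6

# Wide display, small CPS: a broad recommended range exists.
print(recommended_range(display_width_deg=30.0, cps_deg=0.2, ccc=15))
# Narrow display, large CPS: no print size permits near-maximum speed.
print(recommended_range(display_width_deg=6.0, cps_deg=2.0, ccc=15))
```

The sketch makes the framework's logic concrete: for a fixed display, raising print size lowers character count, so the two constraints squeeze the usable range from opposite sides.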

17.
In many mammals, early social experience is critical to developing species-appropriate adult behaviors. Although mother–infant interactions play an undeniably significant role in social development, other individuals in the social milieu may also influence infant outcomes. Additionally, the social skills necessary for adult success may differ between the sexes. In chimpanzees (Pan troglodytes), adult males are more gregarious than females and rely on a suite of competitive and cooperative relationships to obtain access to females. In fission–fusion species, including humans and chimpanzees, subgroup composition is labile and individuals can vary the number of individuals with whom they associate. Thus, mothers in these species have a variety of social options. In this study, we investigated whether wild chimpanzee maternal subgrouping patterns differed based on infant sex. Our results show that mothers of sons were more gregarious than mothers of daughters; differences were especially pronounced during the first 6 mo of life, when infant behavior is unlikely to influence maternal subgrouping. Furthermore, mothers with sons spent significantly more time in parties containing males during the first 6 mo. These early differences foreshadow the well-documented sex differences in adult social behavior, and maternal gregariousness may provide sons with important observational learning experiences and social exposure early in life. The presence of these patterns in chimpanzees raises questions concerning the evolutionary history of differential social exposure and its role in shaping sex-typical behavior in humans.

Early socialization is critical to developing social competency later in life. In mammals, mothers have enormous influence on their offspring’s early social experience, with implications for adult social behavior.
In humans, the relative contribution of parental and others’ social influence on the development of sex-typical behavior receives considerable attention and is often debated (1–4). Comparative research provides insight into the origins and development of sex differences in the absence of human cultural sex socialization. Decades of research in rodent and primate models have demonstrated that social deprivation curtails the development of species-appropriate behaviors (5–8) and cognition (9, 10). Primate mothers are critical for the normal social development of their infants (11–13), with classic studies demonstrating that maternal deprivation is associated with intense anxiety (14), inappropriate aggression (15), and an inability to form social relationships (16). The mother–infant relationship is therefore critical to proper social development; however, research also demonstrates the importance of the larger social milieu (17–20). For example, a recent study in mice found that early interactions with mothers and peers independently shape adult behavior (21). Likewise, some negative impacts of maternal deprivation are attenuated in macaques that are raised in peer groups (22).

Most primates rear their offspring in a stable social group in which mothers can influence infant interactions with conspecifics. Restrictive or protective mothering styles have been observed in some Old World monkey species, and style correlates with maternal rank, parity, offspring sex, and perceived risk to the infant (23–25). These patterns demonstrate maternal influence on the early social experiences of infants in species that live in cohesive groups. However, much less is known about how mothers influence infant social opportunities in species that live in fission–fusion social groups.
Fission–fusion species, particularly those with high fission–fusion dynamics, such as humans and chimpanzees (26), provide an excellent paradigm in which to consider individual differences in infant social exposure, as subgroup size and composition vary over time. This dynamic social system allows flexibility in the amount of social exposure that infants experience.

Here, we examine how maternal gregariousness varies by infant sex in wild chimpanzees (Pan troglodytes). Chimpanzees live in permanent communities, or unit groups, with multiple males and females. Temporary subgroups, or parties, form within the community and often change composition over the course of the day. Studies have demonstrated that party size is related to food availability and the presence of fertile females (27–29). The social flexibility characteristic of chimpanzees may allow mothers and infants to associate in parties of different sizes depending upon the optimal strategy for infant development.

Given well-documented differences between adult males and females, particularly regarding social behavior, it seems possible that maternal subgrouping patterns vary by infant sex in a manner that foreshadows adult sex-specific behavioral strategies. Adult male east African chimpanzees (Pan troglodytes schweinfurthii) are more gregarious and aggressive than females, as they compete for high dominance rank, which affords greater access to estrous females (28, 30). Males also cooperate with each other to form coalitions for dominance rank acquisition, community defense, and communal hunting (31), with some male–male relationships enduring for years (32). Although intersite variation exists (e.g., refs. 33 and 34), east African female chimpanzees are generally less gregarious than males (35), and at Gombe National Park, Tanzania, Kasekela community females spend ∼40–70% of their time alone or with adult daughters and dependents (36–38).
Females also exhibit comparatively low levels of physical aggression (28, 39–41). Emerging evidence suggests that sex differences in social behavior appear early in life. For example, male infants have significantly more social partners than their female counterparts when they first spend the majority of their time out of reach of their mother (30–36 mo) (42). Specifically, males in this age class interact more with unrelated individuals, particularly adult males, than do females. However, it is unclear to what extent mothers mediate social exposure based on infant sex. Differences in maternal gregariousness may predispose male and female infants to different levels of independence and sociability very early in life, which may influence development. In chimpanzees and other fission–fusion species, mothers may join or leave parties, thereby affording or restricting social exposure. Once infants are able to travel without being carried by their mother, they may also influence maternal patterns by leading mothers to join or remain in parties (43).

As in most other primates, the chimpanzee mother–infant relationship is primary in early life. Infants are in almost constant contact with their mothers for the first 4–6 mo of life (28), when they have low levels of social interaction with nonmothers (this study). Mothers with infants later form nursery groups that contain several mothers (44–47). Only around the age of 3.5 y do infants begin traveling under their own power more than being carried by their mother (48). Infants begin eating solid foods by 6 mo of age (49) yet remain nutritionally dependent upon their mother until they are weaned between 4 and 5 y old (50, 51).

In this study, we investigated differences in maternal subgrouping patterns based on infant sex. We hypothesized that maternal gregariousness varies in ways that would foster sex-appropriate social development.
Because male offspring will need to integrate into the adult male hierarchy and rely more on social skills and bonds for success as adults, whereas females will ultimately spend more of their time alone with dependent offspring, we predicted that mothers with male offspring would be more gregarious than mothers with female offspring. We tested this prediction with 37 y of data on maternal subgrouping among the wild chimpanzees at Gombe National Park, Tanzania. We considered three measures of gregariousness for mothers who were observed with both sons and daughters. The first measure was the time spent with another adult who is not an immediate female family member (mother or adult daughter). This measure allowed mother–adult daughter pairs to count as nonsocial time, given that some study females do not emigrate and frequently associate with their mothers; excluding their time together allows us to address the question of infant exposure to the larger social milieu. Second, we examined average party size and composition. Specifically, we examined the average adult party size, as well as the average number of maternal kin (individual adults related through the matriline) and maternal nonkin (individual adults not related through the matriline) present in a mother’s party over the course of a day. Third, we investigated subgrouping preferences by comparing the proportion of time mothers spent in mixed-sex parties (at least one adult male in the party) and female-only parties (at least one additional adult female, but no adult males, in the party). The second and third measures of gregariousness include mothers and adult daughters in the count of adults present, to yield the actual adult party size and composition. We compared each of our metrics by infant sex during two time periods: the first 6 mo of life and from 6 mo to 3.5 y of age.
Infants are in nearly constant contact with their mother during the first 6 mo of life, such that they are unlikely to directly influence their mother’s subgrouping patterns, whereas patterns at older ages may reflect both maternal and infant social preferences and interactions. The first 6 mo postpartum is also distinct in that it corresponds to the period when mothers experience the highest metabolic costs of lactation (52), which may influence behavior. Females with infants less than 3.5 y of age in our study very rarely exhibited sexual swellings, which are known to influence female gregariousness (53). Finally, we investigated infant interactions with nonmothers across the entire infancy (infants aged ≤3.5 y) using a complementary 24-y dataset on infant behavior, to examine how maternal gregariousness relates to infant social interactions.
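The three gregariousness measures described above can be sketched in code. The following Python illustration is hypothetical: the record layout, field names, and example values are invented for demonstration and do not reflect the study's actual data pipeline.

```python
from dataclasses import dataclass

@dataclass
class Adult:
    ident: str
    sex: str                 # "M" or "F"
    matriline_kin: bool      # related to the focal mother through the matriline
    immediate_family: bool   # the mother's own mother or an adult daughter

# Each record: (observation hours, other adults present in the mother's party).
def social_time_fraction(records):
    """Measure 1: fraction of time spent with at least one adult who is
    not an immediate female family member (mother or adult daughter)."""
    total = sum(h for h, _ in records)
    social = sum(h for h, adults in records
                 if any(not a.immediate_family for a in adults))
    return social / total

def mean_party_composition(records):
    """Measure 2: time-weighted mean numbers of matriline kin and nonkin
    adults in the mother's party (the focal mother herself excluded)."""
    total = sum(h for h, _ in records)
    kin = sum(h * sum(a.matriline_kin for a in adults)
              for h, adults in records) / total
    nonkin = sum(h * sum(not a.matriline_kin for a in adults)
                 for h, adults in records) / total
    return kin, nonkin

def party_type_fractions(records):
    """Measure 3: fractions of time in mixed-sex parties (at least one
    adult male) and in female-only parties (at least one other adult
    female, no adult males)."""
    total = sum(h for h, _ in records)
    mixed = sum(h for h, adults in records
                if any(a.sex == "M" for a in adults)) / total
    female_only = sum(h for h, adults in records
                      if adults and all(a.sex == "F" for a in adults)) / total
    return mixed, female_only

# Invented example: 4 h of observation for one mother.
records = [
    (2.0, [Adult("AD1", "F", True, True)]),    # with her adult daughter only
    (1.0, [Adult("AM1", "M", False, False)]),  # party containing an adult male
    (1.0, []),                                 # alone with dependents
]
print(social_time_fraction(records))   # daughter-only time counts as nonsocial
print(mean_party_composition(records))
print(party_type_fractions(records))
```

Note how the first measure treats mother–adult daughter time as nonsocial, while the second and third count the daughter toward party size and composition, mirroring the distinction drawn in the text.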

18.
Chronic media multitasking is quickly becoming ubiquitous, although processing multiple incoming streams of information is considered a challenge for human cognition. A series of experiments addressed whether there are systematic differences in information processing styles between chronically heavy and light media multitaskers. A trait media multitasking index was developed to identify groups of heavy and light media multitaskers. These two groups were then compared along established cognitive control dimensions. Results showed that heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely due to reduced ability to filter out interference from the irrelevant task set. These results demonstrate that media multitasking, a rapidly growing societal trend, is associated with a distinct approach to fundamental information processing.

19.
Chemical analyses of ancient organic compounds absorbed into the pottery fabrics of imported Etruscan amphoras (ca. 500–475 B.C.) and into a limestone pressing platform (ca. 425–400 B.C.) at the ancient coastal port site of Lattara in southern France provide the earliest biomolecular archaeological evidence for grape wine and viniculture from this country, which is crucial to the later history of wine in Europe and the rest of the world. The data support the hypothesis that export of wine by ship from Etruria in central Italy to southern Mediterranean France fueled an ever-growing market and interest in wine there, which, in turn, as evidenced by the winepress, led to transplantation of the Eurasian grapevine and the beginning of a Celtic industry in France. Herbal and pine resin additives to the Etruscan wine point to the medicinal role of wine in antiquity, as well as a means of preserving it during marine transport.

20.
All living systems perpetuate themselves via growth in or on the body, followed by splitting, budding, or birth. We find that synthetic multicellular assemblies can also replicate kinematically by moving and compressing dissociated cells in their environment into functional self-copies. This form of perpetuation, previously unseen in any organism, arises spontaneously over days rather than evolving over millennia. We also show how artificial intelligence methods can design assemblies that postpone loss of replicative ability and perform useful work as a side effect of replication. This suggests other unique and useful phenotypes can be rapidly reached from wild-type organisms without selection or genetic engineering, thereby broadening our understanding of the conditions under which replication arises, phenotypic plasticity, and how useful replicative machines may be realized.

Like the other necessary abilities life must possess to survive, replication has evolved into many diverse forms: fission, budding, fragmentation, spore formation, vegetative propagation, parthenogenesis, sexual reproduction, hermaphroditism, and viral propagation. These diverse processes, however, share a common property: all involve growth within or on the body of the organism. In contrast, a non–growth-based form of self-replication dominates at the subcellular level: molecular machines assemble material in their external environment into functional self-copies directly, or in concert with other machines. Such kinematic replication has never been observed at higher levels of biological organization, nor was it known whether multicellular systems were even capable of it.

Despite this lack, organisms do possess deep reservoirs of adaptive potential at all levels of organization, allowing for manual or automated interventions that deflect development toward biological forms and functions different from wild type (1), including the growth and maintenance of organs independent of their host organism (2–4), or unlocking regenerative capacity (5–7). Design, if framed as morphological reconfiguration, can reposition biological tissues or redirect self-organizing processes to new stable forms without recourse to genomic editing or transgenes (8). Recent work has shown that individual, genetically unmodified prospective skin (9) and heart muscle (10) cells, when removed from their native embryonic microenvironments and reassembled, can organize into stable forms and behaviors not exhibited by the organism from which the cells were taken, at any point in its natural life cycle. We show here that if cells are similarly liberated, compressed, and placed among more dissociated cells that serve as feedstock, they can exhibit kinematic self-replication, a behavior not only absent from the donating organism but from every other known plant or animal.
Furthermore, replication does not evolve in response to selection pressures, but arises spontaneously over 5 d given appropriate initial and environmental conditions.
