Similar Documents
20 similar documents found.
1.
El Niño Southern Oscillation (ENSO) is the most dominant interannual signal of climate variability and has a strong influence on climate over large parts of the world. In turn, it strongly influences many natural hazards (such as hurricanes and droughts) and their resulting socioeconomic impacts, including economic damage and loss of life. However, although ENSO is known to influence hydrology in many regions of the world, little is known about its influence on the socioeconomic impacts of floods (i.e., flood risk). To address this, we developed a modeling framework to assess ENSO’s influence on flood risk at the global scale, expressed in terms of affected population and gross domestic product and economic damages. We show that ENSO exerts strong and widespread influences on both flood hazard and risk. Reliable anomalies of flood risk exist during El Niño or La Niña years, or both, in basins spanning almost half (44%) of Earth’s land surface. Our results show that climate variability, especially from ENSO, should be incorporated into disaster-risk analyses and policies. Because ENSO has some predictive skill with lead times of several seasons, the findings suggest the possibility to develop probabilistic flood-risk projections, which could be used for improved disaster planning. The findings are also relevant in the context of climate change. If the frequency and/or magnitude of ENSO events were to change in the future, this finding could imply changes in flood-risk variations across almost half of the world’s terrestrial regions.El Niño Southern Oscillation (ENSO) is the most dominant interannual signal of climate variability on Earth (1) and influences climate over large parts of the Earth’s surface. In turn, ENSO is known to strongly influence many physical processes and societal risks, including droughts, food production, hurricane damage, and tropical tree cover (24). For decision makers it is essential to have information on the possible impacts of this climate variability on society. Such information can be particularly useful when the climate variability can be anticipated in advance, thus allowing for early warning and disaster planning (5). For example, projections carried out in September 2013 already suggested a 75% likelihood that El Niño conditions would develop in late 2014 (6). According to the ENSO forecast of the International Research Institute for Climate and Society and the Climate Prediction Center/NCEP/NWS, dated 9 October 2014, observed ENSO conditions did indeed move to those of a borderline El Niño during September and October 2014, with indications of weak El Niño conditions during the northern hemisphere winter 2014–2015 (iri.columbia.edu/our-expertise/climate/forecasts/enso/current/).However, to date little is known on ENSO’s influence on flood risk, whereby risk is defined as a function of hazard, exposure, and vulnerability (7) and is expressed in terms of socioeconomic indicators such as economic damage or affected people. Although global-scale flood-risk assessments have recently become a hot topic in both the scientific and policy communities, assessments to date have focused on current risks (711) or future risks under long-term mean climate change (12, 13). Meanwhile, other recent research suggests that ENSO-related variations of precipitation are likely to intensify in the future (14, 15) and that extreme El Niño events may increase in frequency (16). 
Hence, an understanding of ENSO’s influence on flood risk is vital both for understanding the possible impacts of upcoming ENSO events and for planning for the potential socioeconomic impacts of changes in future ENSO frequency. In this paper, we show for the first time to our knowledge that ENSO has a very strong influence on flood risk in large parts of the world. These findings build on previous studies, especially in Australia and the United States, which show that ENSO and other forms of climate variability are strongly related to flood hazard in some regions (17–25). To do this, we developed a modeling framework to specifically assess ENSO’s influence on global flood risk. The modeling framework involves using a cascade of hydrological, hydraulic, and impact models (10, 11). Using this model cascade, we assessed flood impacts in terms of three indicators: (i) exposed population, (ii) exposed gross domestic product (GDP), and (iii) urban damage (Materials and Methods). A novel aspect of the framework is that we are able to calculate flood risk conditioned on the climatology of all years, El Niño years only, and La Niña years only. This allows us, for the first time to our knowledge, to simulate the impacts of ENSO on flood risk. The hydrological and impact models have previously been validated for the period 1958–2000 (11). Here, we carried out further validation to assess the specific ability of the model cascade to simulate year-to-year fluctuations in peak river flows and flood impacts, as well as anomalies in peak flows and impacts during El Niño and La Niña years (SI Discussion, Validation of Hydrological and Hydraulic Models).
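As a concrete illustration of the conditioning step described above, the sketch below computes the anomaly of a basin's flood impacts in El Niño and La Niña years relative to the all-year climatology, and uses a permutation test to judge whether the anomaly is reliable. It is a minimal sketch on synthetic data: the impact series, the ENSO-phase labels, and the use of the median as the climatological statistic are placeholder assumptions, not the authors' model cascade or methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs for one river basin: 43 years of simulated flood impacts
# (e.g., exposed population or GDP) and the ENSO phase of each year. In the real
# framework these would come from the model cascade and an ENSO index.
years = np.arange(1958, 2001)
impacts = rng.lognormal(mean=10.0, sigma=0.8, size=years.size)
phase = rng.choice(["el_nino", "la_nina", "neutral"], size=years.size, p=[0.3, 0.3, 0.4])

def phase_anomaly(impacts, phase, which, n_boot=2000):
    """Median impact anomaly in `which` years relative to the all-year climatology,
    with a permutation test of how often an anomaly this large arises by chance."""
    clim = np.median(impacts)
    anom = np.median(impacts[phase == which]) / clim - 1.0
    null = np.empty(n_boot)
    for i in range(n_boot):
        shuffled = rng.permutation(phase)          # break any ENSO-impact link
        null[i] = np.median(impacts[shuffled == which]) / clim - 1.0
    p_value = np.mean(np.abs(null) >= abs(anom))
    return anom, p_value

for which in ("el_nino", "la_nina"):
    anom, p_value = phase_anomaly(impacts, phase, which)
    print(f"{which}: median impact anomaly {anom:+.1%} (permutation p = {p_value:.2f})")
```

In the actual framework, the impact series for each basin would come from the hydrological, hydraulic, and impact models, and the phase labels from an ENSO index.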

2.
With the majority of the global human population living in coastal regions, correctly characterizing the climate risk that ocean-dependent communities and businesses are exposed to is key to prioritizing the finite resources available to support adaptation. We apply a climate risk analysis across the European fisheries sector to identify the most at-risk fishing fleets and coastal regions and then link the two analyses together. We employ an approach combining biological traits with physiological metrics to differentiate climate hazards between 556 populations of fish and use these to assess the relative climate risk for 380 fishing fleets and 105 coastal regions in Europe. Countries in southeast Europe as well as the United Kingdom have the highest risks to both fishing fleets and coastal regions overall, while in other countries, the risk-profile is greater at either the fleet level or at the regional level. European fisheries face a diversity of challenges posed by climate change; climate adaptation, therefore, needs to be tailored to each country, region, and fleet’s specific situation. Our analysis supports this process by highlighting where and what adaptation measures might be needed and informing where policy and business responses could have the greatest impact.

The ocean provides human societies with a wide variety of goods and services, ranging from food and employment to climate regulation and cultural nourishment (1). Climate change is already shifting the abundance, distribution, productivity, and phenology of living marine resources (2–4), thereby impacting many of the ecosystem services upon which society depends (5). These impacts, however, are not being experienced uniformly by human society but depend on the characteristics and context of the community or business affected. Raising awareness and understanding the risk to human systems is therefore a critical first step (6) to developing and prioritizing appropriate adaptation options in response to the challenges of the climate crisis (7). Over the past decades, climate risk assessments (CRAs) and climate vulnerability assessments (CVAs) have been developed to identify and prioritize adaptation needs. The approach, developed by the Intergovernmental Panel on Climate Change (IPCC), has shifted over time from a focus on “vulnerability” to a focus on “risk” (8), in part due to criticisms of the negative framing that “vulnerability” implies (9). The modern CRA framework (10) considers risk as the intersection of hazard, exposure, and vulnerability (Table 1) and has been applied at the national level in individual countries (11), including Kenya (12) and the United States (13), across coastal areas of the United States (14, 15) and Australia (16, 17), across regions such as Pacific island nations (18, 19), and globally (6, 20, 21). Several “best practice” guides have also been developed (7, 22).
Table 1. Definitions of terms, as used in the context of this climate risk analysis
Term: Definition used here
Climate risk: The potential for climate change to have adverse consequences for human systems, specifically for European coastal regions and fishing fleets.
Hazard: The potential for and severity of climate change impacts on the unit of interest (i.e., fish and shellfish populations). Here, we focus explicitly on negative impacts, following from the definition of risk as being an adverse consequence.
Exposure: The sensitivity of a region or fishing fleet to the climate hazard (i.e., the likelihood of being affected by changes in the living marine resources).
Vulnerability: The ability of a region or fleet to anticipate or respond to changes induced by climate hazards and to minimize, cope with, and recover from the consequences. High adaptive capacity gives low vulnerability.
These definitions are adapted for the present study from those used in the most recent IPCC report (5). CRAs and CVAs covering European waters are, however, notable by their absence from this list. This is surprising given that European waters provide over one-eighth of the world’s total marine fisheries catches (23) and have witnessed many well-documented changes in fish abundance and distribution in response to climate change (24–26). The lack of attention to climate risk in European fisheries may be due, in part, to the previous results of global CVAs (6) that ranked European countries as having low vulnerabilities (their relative affluence giving high “adaptive capacity” in these analyses). Yet the European region poses unique challenges when assessing climate risks due to the wide range of species, biogeographical zones, and habitats linked by intertwined management structures. Fishing techniques and the scale of fisheries also vary widely, from large fleets of small vessels in the Mediterranean Sea (27) to some of the largest fishing vessels in the world (e.g., the 144-m-long Annelies Ilena). Furthermore, although fisheries contribute very little to national gross domestic product (GDP), food, or income security for most European countries (25), in specific communities and regions, fishing is the mainstay of employment (28). Adapting European fisheries to a changing climate, therefore, requires the development of robust analyses capable of assessing the climate risk across this extremely diverse continent. We conducted a CRA across the European marine fisheries sector that is globally unprecedented in its span and detail, estimating the climate risk of 1) coastal regions and 2) fishing fleets in linked analyses. Our analyses spanned more than 50° of latitude from the Black Sea to the Arctic and encompass the United Kingdom, Norway, Iceland, and Turkey in addition to the 22 coastal nations of the European Union. We developed an approach that distinguishes fine-scale geographical differences in the climate hazard of fish and shellfish populations, and hence the climate risk to both European coastal regions and fishing fleets. Uniquely, since both CRAs were based on the same underlying climate hazard, these analyses could be combined to compare the relative importance of this hazard to fleets and coastal regions within a country.
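To make the hazard, exposure, and vulnerability framing in Table 1 concrete, here is a minimal sketch of how per-unit scores might be combined into a relative risk ranking for fleets and coastal regions. The unit names, scores, and the multiplicative aggregation rule are illustrative assumptions, not the scoring scheme used in the study.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """A fishing fleet or coastal region with scores on the three risk components
    (all 0-1; the names and numbers below are made up for illustration)."""
    name: str
    hazard: float         # severity of expected climate impacts on the target stocks
    exposure: float       # dependence of the fleet/region on the affected stocks
    vulnerability: float  # inverse of adaptive capacity

def relative_risk(u: Unit) -> float:
    # Multiplicative aggregation: risk is high only if all three components are high.
    # This is an assumed convention, not the scoring used in the study.
    return u.hazard * u.exposure * u.vulnerability

units = [
    Unit("Fleet A (small-scale, Mediterranean)", 0.8, 0.7, 0.6),
    Unit("Fleet B (pelagic trawl, North Sea)",   0.5, 0.6, 0.3),
    Unit("Region C (fishing-dependent coast)",   0.7, 0.9, 0.5),
]

for u in sorted(units, key=relative_risk, reverse=True):
    print(f"{u.name}: relative risk = {relative_risk(u):.2f}")
```

A multiplicative rule means a unit ranks as high-risk only when hazard, exposure, and vulnerability are all elevated; additive or weighted schemes are equally defensible and would change the ranking.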

3.
Assessing temporal variability in extreme rainfall events before the historical era is complicated by the sparsity of long-term “direct” storm proxies. Here we present a 2,200-y-long, accurate, and precisely dated record of cave flooding events from the northwest Australian tropics that we interpret, based on an integrated analysis of meteorological data and sediment layers within stalagmites, as representing a proxy for extreme rainfall events derived primarily from tropical cyclones (TCs) and secondarily from the regional summer monsoon. This time series reveals substantial multicentennial variability in extreme rainfall, with elevated occurrence rates characterizing the twentieth century, 850–1450 CE (Common Era), and 50–400 CE; reduced activity marks 1450–1650 CE and 500–850 CE. These trends are similar to reconstructed numbers of TCs in the North Atlantic and Caribbean basins, and they form temporal and spatial patterns best explained by secular changes in the dominant mode of the El Niño/Southern Oscillation (ENSO), the primary driver of modern TC variability. We thus attribute long-term shifts in cyclogenesis in both the central Australian and North Atlantic sectors over the past two millennia to entrenched El Niño or La Niña states of the tropical Pacific. The influence of ENSO on monsoon precipitation in this region of northwest Australia is muted, but ENSO-driven changes to the monsoon may have complemented changes to TC activity.Two primary components of tropical precipitation—monsoons and tropical cyclones (TCs)—are capable of producing high volumes of rainfall in short periods of time (extreme rainfall events) that lead to flooding. Because both systems respond to changes in atmospheric and sea surface conditions (1, 2), it is imperative that we understand their sensitivities to climate change. For example, over recent decades, warming of the oceans has driven increases in the mean latitude (3) and energy released by TCs (4). These storms (e.g., hurricanes, typhoons, tropical storms, and tropical depressions) can produce enormous economic and societal disruptions but also represent important components of low-latitude hydroclimate (5) and ocean heat budgets (6). Monsoon reconstructions spanning the last several millennia have been developed using a variety of proxies (710), including stalagmites (1113), but reconstructing past TC activity is generally more difficult. In most of the world’s ocean basins, accurate counts of TCs are limited to the start of the satellite era (since 1970 CE), an interval too short to capture changes occurring over multidecadal to centennial time scales. Therefore, as a complement to the historical record, sedimentological analyses of storm-sensitive sites have formed the basis of TC reconstructions, primarily in and around the North Atlantic and Caribbean basins (1419), that largely focus on near-coastal sequences, including beach ridges, overwash deposits, and shallow marine sediments. Together, these studies have revealed that North Atlantic and Caribbean TC activity varied substantially over the past several centuries to millennia, with multicentennial shifts attributed to a range of factors including atmospheric dynamics in the North Atlantic, North African rainfall, and El Niño/Southern Oscillation (ENSO).Today, ENSO represents a dominant control of interannual TC activity at a global scale through its influences on surface ocean temperature gradients and atmospheric circulation (2023). 
However, no record has clearly demonstrated the link between ENSO and prehistoric TCs in the tropical Pacific, Indian, or Australian regions, leaving unanswered questions about the sensitivity of cyclogenesis to ENSO before the modern era. This issue is of particular concern given modeling results that predict that changes in ENSO behavior may accompany anthropogenic warming of the atmosphere (24, 25). Fully assessing the sensitivity of TCs to changes in climate requires high-resolution and precisely dated paleostorm reconstructions from multiple basins spanning periods beyond those available in observational data, a goal that has largely proven elusive. Few such records unambiguously derived from TCs have been identified, particularly in the western Pacific and Indo-Pacific (20, 26–30).

4.
5.
The El Niño−Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. Here we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable. The El Niño−Southern Oscillation (ENSO) phenomenon is a coupled air−sea mode, and its irregularly occurring extreme phases El Niño and La Niña alternate on timescales of several years (1–8). The global atmospheric response to the corresponding eastern tropical Pacific sea surface temperature (SST) anomalies (SSTA) causes large disruptions in weather, ecosystems, and human society (3, 5, 9). One of the main properties of ENSO is its synchronization with the annual cycle: El Niño events tend to grow during boreal summer and fall and terminate quite rapidly in late boreal winter (9–18). The underlying dynamics of this seasonal pacemaking can be understood in terms of the El Niño/annual cycle combination mode (C-mode) concept (19), which interprets the Western Pacific wind response during the growth and termination phase of El Niño events as a seasonally modulated interannual phenomenon. This response includes a weakening of the equatorial wind anomalies, which causes the rapid termination of El Niño events after boreal winter and thus contributes to the seasonal synchronization of ENSO (17). Mathematically, the modulation corresponds to a product between the interannual ENSO phenomenon (ENSO frequency: fE) and the annual cycle (annual frequency: 1 y⁻¹), which generates near-annual frequencies at periods of ~10 mo (1 + fE) and ~15 mo (1 − fE) (19). In nature, a wide variety of nonlinear processes exist in the climate system. Atmospheric examples include convection and low-level moisture advection (19). An example of a quadratic nonlinearity is the dissipation of momentum in the planetary boundary layer, which includes a product between ENSO (E) and the annual cycle (A) due to the windspeed nonlinearity: vE ⋅ vA (17, 19). In the frequency domain, this product results in the near-annual sum (1 + fE) and difference (1 − fE) tones (19). The commonly used Niño 3.4 (N3.4) SSTA index (details in SI Appendix, SI Materials and Methods) exhibits most power at interannual frequencies (Fig. 1A). In contrast, the near-annual combination tones (1 ± fE) are the defining characteristic of the C-mode (Fig. 1B).
Fig. 1. Schematic for the ENSO (E) and combination mode (ExA) anomalous surface circulation pattern and corresponding spectral characteristics. (A) Power spectral density for the normalized N3.4 index of the Hadley Centre Sea Ice and Sea Surface Temperature data set version 1 (HadISST1) 1958–2013 SSTA using the Welch method. (B) As in A but for the theoretical quadratic combination mode (ExA).
(C) Regression coefficient of the normalized N3.4 index and the anomalous JRA-55 surface stream function for the same period (ENSO response pattern). (D) Regression coefficient of the normalized combination mode (ExA) index and the anomalous JRA-55 surface stream function (combination mode response pattern). Areas where the anomalous circulation regression coefficient is significant above the 95% confidence level are nonstippled.
Physically, the dominant near-annual combination mode comprises a meridionally antisymmetric circulation pattern (Fig. 1D). It features a strong cyclonic circulation in the South Pacific Convergence Zone, with a much weaker counterpart cyclone in the Northern Hemisphere Central Pacific. The most pronounced feature of the C-mode circulation pattern is the anomalous low-level Northwest Pacific anticyclone (NWP-AC). This important large-scale atmospheric feature links ENSO impacts to the Asian Monsoon systems (20–25) by shifting rainfall patterns (SI Appendix, Fig. S1B), and it drives sea level changes in the tropical Western Pacific that impact coastal systems (26). It has been demonstrated using spectral analysis methods and numerical model experiments that the C-mode is predominantly caused by nonlinear atmospheric interactions between ENSO and the warm pool annual cycle (19, 20). Local and remote thermodynamic air−sea coupling amplify the signal but are not the main drivers for the phase transition of the C-mode and its associated local phenomena (e.g., the NWP-AC) (20). Even though ENSO and the C-mode are not independent, their patterns and spectral characteristics are fundamentally different, which has important implications when assessing the amplitude and timing of their regional climate impacts (Fig. 1). Here we set out to study the role of nonlinear interactions between ENSO and the annual cycle (10) in the context of C-mode dynamics. Such nonlinearities can, in principle, generate a suite of higher-order combination modes, which would contribute to the high-frequency variability of the atmosphere—in a deterministic and predictable way.
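The frequency arithmetic behind the C-mode, a product of an interannual ENSO signal at frequency fE with the annual cycle that generates combination tones at 1 − fE and 1 + fE, can be verified numerically. The sketch below multiplies two synthetic sinusoids and locates the resulting spectral peaks with a Welch periodogram; it uses idealized signals (fE = 0.25 cycles per year) rather than the N3.4 or JRA-55 data analyzed in the study.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 12.0                       # monthly sampling (12 samples per year)
t = np.arange(0, 200, 1 / fs)   # 200 years of synthetic data, time in years
f_E = 0.25                      # idealized ENSO frequency: one cycle per 4 years

enso = np.cos(2 * np.pi * f_E * t)    # interannual ENSO-like signal (E)
annual = np.cos(2 * np.pi * 1.0 * t)  # annual cycle (A), 1 cycle per year
c_mode = enso * annual                # quadratic interaction E x A

# cos(2*pi*fE*t) * cos(2*pi*t) = 0.5 * [cos(2*pi*(1 - fE)*t) + cos(2*pi*(1 + fE)*t)],
# so the product should carry power only at the combination tones 1 - fE and 1 + fE.
freqs, psd = welch(c_mode, fs=fs, nperseg=1200)
peaks, _ = find_peaks(psd, height=0.2 * psd.max())
print("Combination-tone peaks (cycles per year):", np.round(freqs[peaks], 2))
```

For an ENSO period closer to 5 y (fE ≈ 0.2), the tones fall near 10 and 15 months, matching the periods quoted above.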

6.
This paper identifies rare climate challenges in the long-term history of seven areas, three in the subpolar North Atlantic Islands and four in the arid-to-semiarid deserts of the US Southwest. For each case, the vulnerability to food shortage before the climate challenge is quantified based on eight variables encompassing both environmental and social domains. These data are used to evaluate the relationship between the “weight” of vulnerability before a climate challenge and the nature of social change and food security following a challenge. The outcome of this work is directly applicable to debates about disaster management policy. Managing disasters, especially those that are climate-induced, calls for reducing vulnerabilities as an essential step in reducing impacts (1–8). Exposure to environmental risks is but one component of potential for disasters. Social, political, and economic processes play substantial roles in determining the scale and kind of impacts of hazards (1, 8–12). “Disasters triggered by natural hazards are not solely influenced by the magnitude and frequency of the hazard event (wave height, drought intensity etc.), but are also rather heavily determined by the vulnerability of the affected society and its natural environment” (ref. 1, p. 2). Thus, disaster planning and relief should address vulnerabilities, rather than returning a system to its previous condition following a disaster event (6). Using archaeologically and historically documented cultural and climate series from the North Atlantic Islands and the US Southwest, we add strength to the increasing emphasis on vulnerability reduction in disaster management. We ask whether there are ways to think about climate uncertainties that can help people build resilience to rare, extreme, and potentially devastating climate events. More specifically, we ask whether vulnerability to food shortfall before a climate challenge predicts the scale of impact of that challenge. Our goal is both to assess current understandings of disaster management and to aid in understanding how people can build the capability to increase food security and reduce their vulnerability to climate challenges. We present analyses of cases from substantially different regions and cultural traditions that show strong relationships between levels of vulnerability to food shortage before rare climate events and the impact of those events. The patterns and details of the different contexts support the view that vulnerability cannot be ignored. These cases offer a long-term view rarely included in studies of disaster management or human and cultural well-being (for exceptions, see refs. 13 and 14). This long time frame allows us to witness changes in the context of vulnerabilities and climate challenges, responding to a call for more attention to “how human security changes through time, and particularly the dynamics of vulnerability in the context of multiple processes of change” (ref. 10, p. 17).
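The quantification step described above, scoring eight environmental and social variables per case and relating the resulting vulnerability "weight" to the severity of what followed, can be sketched in a few lines. Everything below is placeholder: the variable names, the 0-2 coding, the equal weighting, and the synthetic impact scores are assumptions for illustration, not the study's coding scheme or data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Eight hypothetical vulnerability variables (environmental and social), each coded
# 0 (low) to 2 (high) before the climate challenge.
variables = ["land degradation", "settlement density", "resource diversity",
             "storage capacity", "exchange networks", "social inequality",
             "internal conflict", "mobility constraints"]
cases = {f"case_{i}": rng.integers(0, 3, size=len(variables)) for i in range(7)}

# "Weight" of vulnerability = unweighted sum across the eight variables.
load = {name: int(scores.sum()) for name, scores in cases.items()}

# Hypothetical severity of the impact that followed the challenge (0-10 scale).
impact = {name: int(rng.integers(0, 11)) for name in cases}

rho, p = spearmanr(list(load.values()), list(impact.values()))
print("Pre-challenge vulnerability load:", load)
print(f"Rank correlation between vulnerability load and impact: rho = {rho:.2f}, p = {p:.2f}")
```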

7.
Theories of human behavior suggest that individuals attend to the behavior of certain people in their community to understand what is socially normative and adjust their own behavior in response. An experiment tested these theories by randomizing an anticonflict intervention across 56 schools with 24,191 students. After comprehensively measuring every school’s social network, randomly selected seed groups of 20–32 students from randomly selected schools were assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at treatment schools were reduced by 30% over 1 year. The effect was stronger when the seed group contained more “social referent” students who, as network measures reveal, attract more student attention. Network analyses of peer-to-peer influence show that social referents spread perceptions of conflict as less socially normative.One of the most elusive and important goals in the behavioral sciences is to understand how community-wide patterns of behavior can be changed (18). In some cases, social scientists seek to reduce widespread and persistent patterns of negative behavior like corruption or conflict; in others, to promote positive behavior like healthy eating or environmental conservation. Research on changing individual behavior provides many intervention strategies targeted to the psychology of the individual, such as attitudinal persuasion, situational cues, and peer influence (912). Another body of research focuses on scaling up behavior change interventions to the community level, studying attempts to reach every individual in a population with mass education or persuasion messaging (13), or with institutional regulation or defaults (14). A third strategy has been to seed a social network with individuals who demonstrate new behaviors, and to rely on processes of social influence to spread the behavior through the channel of structural features of the network (1518).The present paper incorporates all three approaches. We implemented a social influence strategy designed to change individual behavior, and we tested whether, as a result, new behaviors and norms are transmitted through a social network and also whether they scale up to shift overall levels of behavior within a community. Specifically, we randomized the selection of students within a comprehensively measured social network to determine the relative power of certain individuals to influence the behavior of others. We randomly assigned the presence of this treatment to some community networks and not others. This approach allowed us to determine whether influence from a small group of influential people is enough to shift a community’s behavioral climate, which we define as a widespread and persistent behavioral pattern across the community.Our experimental design is motivated by theoretical debates about how social norms emerge and are transmitted within communities (1, 1923). At the community level, it is believed that social norms, or perceptions of typical or desirable behavior, emerge when they support the survival of the group (24) or because of arbitrary historical precedent (23). Once formed, these informal rules for behavior are transmitted by the survival of those who follow them, or through the punishment of deviants and the social success of followers. 
For these reasons, theory suggests that most individual community members strive to understand the social norms of a group and adjust their own behavior accordingly (21, 25). When many individuals in a community perceive a similar norm and adjust their behavior, then a community-wide behavioral pattern may emerge.Social norms may be explained directly to community members through storytelling or advice, but small-scale experiments and theory suggest that individuals often infer which behaviors are typical and desirable through observation of other community members’ behavior (1, 21, 22). A large literature attempts to identify which community members are effective at transmitting social information across a community (16, 18, 2628). Theories of norm perception predict that individuals infer community social norms by observing the behavior of community members who have many connections within the community’s social network (29). Sometimes called “social referents” (20), individuals may view these community members as important sources of normative information, in part because their many connections imply a comparatively greater knowledge of typical or desirable behavioral patterns in the community. In fact, social referents may have many connections for numerous reasons: they may have a higher status, they may be more popular, or they may have a greater capacity for socialization. Social referents may be different on many dimensions, but what they share is a comparatively greater amount of attention from their peers. Theory and evidence point to the prediction, supported by recent experimental evidence (20, 30), that social referents are particularly influential over perceptions of community norms and behavior in their network.However, despite the large theoretical and empirical literature devoted to ideas about how social norms and behavioral patterns emerge and persist, the central question of which individual level interventions can shift a community’s behavioral climate remains open. We pose this question in the context of adolescent school conflict, such as verbal and physical aggression, rumor mongering, and social exclusion. Although the term “conflict” lacks a consensus definition (31), we follow other social scientists (32, 33) who define conflict broadly, as characterized by antagonistic relations or interactions, or behavioral opposition, respectively, between two or more social entities. This broad definition includes harassment or antagonism from a high-power or high-status person aimed at a person with lower power or status (i.e., bullying), but also conflict between or among people with relatively balanced levels of social power and status.Within many middle and secondary schools in the United States, student conflict is part of the schools’ behavioral climate; that is, conflict is widespread and persistent (34, 35). In contrast to claims that conflict is driven by a minority group of student “bullies” (36), evidence suggests a majority of students contribute to conflicts at their school (37), and these conflicts persist over time because of cyclical patterns of offense and retaliation (38).Student conflict, and in particular bullying, has recently attracted research and policy attention as online social media have brought face-to-face student conflicts into adult view (34, 39). New laws and school policies have been introduced to improve school climate, along with many school programs targeting students’ character and empathy. 
However, basic research illustrates that students perceive social constraints on reporting or intervening in peer conflict (40). That is, students may perpetuate and tolerate conflict not because of their personal character or level of empathy, but because they perceive conflict behaviors to be typical or desirable: that is, normative within their school’s social network. In such a context, reporting or intervening in peer conflict could be perceived by peers as deviant.
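Operationally, the "social referents" discussed above are identified from the measured network as the students who receive the most attention from peers. A minimal sketch, assuming a directed edge list of "pays attention to" nominations and a simple top-share cutoff (both hypothetical):

```python
from collections import Counter

# Hypothetical directed nominations: (student, peer that student pays attention to).
edges = [("s1", "s3"), ("s2", "s3"), ("s4", "s3"), ("s5", "s3"),
         ("s1", "s2"), ("s4", "s5"), ("s6", "s2"), ("s7", "s3")]

attention = Counter(target for _, target in edges)   # in-degree = attention received
students = {s for edge in edges for s in edge}

# Treat the most-nominated students (top share, at least one) as candidate referents.
cutoff = max(1, len(students) // 10)
referents = [s for s, _ in attention.most_common(cutoff)]

print("Attention in-degree:", dict(attention))
print("Candidate social referents:", referents)
```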

8.
California is currently in the midst of a record-setting drought. The drought began in 2012 and now includes the lowest calendar-year and 12-mo precipitation, the highest annual temperature, and the most extreme drought indicators on record. The extremely warm and dry conditions have led to acute water shortages, groundwater overdraft, critically low streamflow, and enhanced wildfire risk. Analyzing historical climate observations from California, we find that precipitation deficits in California were more than twice as likely to yield drought years if they occurred when conditions were warm. We find that although there has not been a substantial change in the probability of either negative or moderately negative precipitation anomalies in recent decades, the occurrence of drought years has been greater in the past two decades than in the preceding century. In addition, the probability that precipitation deficits co-occur with warm conditions and the probability that precipitation deficits produce drought have both increased. Climate model experiments with and without anthropogenic forcings reveal that human activities have increased the probability that dry precipitation years are also warm. Further, a large ensemble of climate model realizations reveals that additional global warming over the next few decades is very likely to create ∼100% probability that any annual-scale dry period is also extremely warm. We therefore conclude that anthropogenic warming is increasing the probability of co-occurring warm–dry conditions like those that have created the acute human and ecosystem impacts associated with the “exceptional” 2012–2014 drought in California.The state of California is the largest contributor to the economic and agricultural activity of the United States, accounting for a greater share of population (12%) (1), gross domestic product (12%) (2), and cash farm receipts (11%) (3) than any other state. California also includes a diverse array of marine and terrestrial ecosystems that span a wide range of climatic tolerances and together encompass a global biodiversity “hotspot” (4). These human and natural systems face a complex web of competing demands for freshwater (5). The state’s agricultural sector accounts for 77% of California water use (5), and hydroelectric power provides more than 9% of the state’s electricity (6). Because the majority of California’s precipitation occurs far from its urban centers and primary agricultural zones, California maintains a vast and complex water management, storage, and distribution/conveyance infrastructure that has been the focus of nearly constant legislative, legal, and political battles (5). As a result, many riverine ecosystems depend on mandated “environmental flows” released by upstream dams, which become a point of contention during critically dry periods (5).California is currently in the midst of a multiyear drought (7). The event encompasses the lowest calendar-year and 12-mo precipitation on record (8), and almost every month between December 2011 and September 2014 exhibited multiple indicators of drought (Fig. S1). The proximal cause of the precipitation deficits was the recurring poleward deflection of the cool-season storm track by a region of persistently high atmospheric pressure, which steered Pacific storms away from California over consecutive seasons (811). 
Although the extremely persistent high pressure is at least a century-scale occurrence (8), anthropogenic global warming has very likely increased the probability of such conditions (8, 9).Despite insights into the causes and historical context of precipitation deficits (811), the influence of historical temperature changes on the probability of individual droughts has—until recently—received less attention (1214). Although precipitation deficits are a prerequisite for the moisture deficits that constitute “drought” (by any definition) (15), elevated temperatures can greatly amplify evaporative demand, thereby increasing overall drought intensity and impact (16, 17). Temperature is especially important in California, where water storage and distribution systems are critically dependent on winter/spring snowpack, and excess demand is typically met by groundwater withdrawal (1820). The impacts of runoff and soil moisture deficits associated with warm temperatures can be acute, including enhanced wildfire risk (21), land subsidence from excessive groundwater withdrawals (22), decreased hydropower production (23), and damage to habitat of vulnerable riparian species (24).Recent work suggests that the aggregate combination of extremely high temperatures and very low precipitation during the 2012–2014 event is the most severe in over a millennium (12). Given the known influence of temperature on drought, the fact that the 2012–2014 record drought severity has co-occurred with record statewide warmth (7) raises the question of whether long-term warming has altered the probability that precipitation deficits yield extreme drought in California.  相似文献   
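The core quantities in the California drought analysis are conditional frequencies: how often dry years are also warm, and how often dry years verify as drought years depending on temperature. The sketch below does that bookkeeping on synthetic data in which warm dry years are, by construction, twice as likely to become droughts; the anomaly series, thresholds, and drought rule are toy assumptions, not the observational record or the study's drought indicators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic standardized annual anomalies: precipitation, temperature with a weak
# warming trend, and a toy drought indicator in which warm dry years are twice as
# likely to verify as drought years as cool dry years.
n_years = 120
precip = rng.normal(size=n_years)
temp = rng.normal(size=n_years) + 0.01 * np.arange(n_years)

dry = precip < -0.5
warm = temp > 0.0
drought_prob = np.where(warm, 0.7, 0.35)
drought = dry & (rng.random(n_years) < drought_prob)

def frac(x):
    """Fraction of True values, or NaN if the selection is empty."""
    return float(np.mean(x)) if x.size else float("nan")

print(f"P(warm | dry year)             = {frac(warm[dry]):.2f}")
print(f"P(drought | dry and warm year) = {frac(drought[dry & warm]):.2f}")
print(f"P(drought | dry and cool year) = {frac(drought[dry & ~warm]):.2f}")
```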

9.
Value is a foundational concept in reinforcement learning and economic choice theory. In these frameworks, individuals choose by assigning values to objects and learn by updating values with experience. These theories have been instrumental for revealing influences of probability, risk, and delay on choices. However, they do not explain how values are shaped by intrinsic properties of the choice objects themselves. Here, we investigated how economic value derives from the biologically critical components of foods: their nutrients and sensory qualities. When monkeys chose nutrient-defined liquids, they consistently preferred fat and sugar to low-nutrient alternatives. Rather than maximizing energy indiscriminately, they seemed to assign subjective values to specific nutrients, flexibly trading them against offered reward amounts. Nutrient–value functions accurately modeled these preferences, predicted choices across contexts, and accounted for individual differences. The monkeys’ preferences shifted their daily nutrient balance away from dietary reference points, contrary to ecological foraging models but resembling human suboptimal eating in free-choice situations. To identify the sensory basis of nutrient values, we developed engineering tools that measured food textures on biological surfaces, mimicking oral conditions. Subjective valuations of two key texture parameters—viscosity and sliding friction—explained the monkeys’ fat preferences, suggesting a texture-sensing mechanism for nutrient values. Extended reinforcement learning and choice models identified candidate neuronal mechanisms for nutrient-sensitive decision-making. These findings indicate that nutrients and food textures constitute critical reward components that shape economic values. Our nutrient-choice paradigm represents a promising tool for studying food–reward mechanisms in primates to better understand human-like eating behavior and obesity.

The concept of “value” plays a fundamental role in behavioral theories that formalize learning and decision-making. Economic choice theory examines whether individuals behave as if they assigned subjective values to goods, which are inferred from observable choices (1, 2). In reinforcement learning, values integrate past reward experiences to guide future behavior (3, 4). Although these theories have been critical for revealing how choices depend on factors such as probability, risk, and delay (2, 4, 5), they do not explain how values and preferences are shaped by particular properties of the choice objects themselves. Why do we like chocolate, and why do some individuals like chocolate more than others? In classical economics, one famously does not argue about tastes (6). By contrast, biology conceptualizes choice objects as rewards with well-defined components that benefit survival and reproductive success and endow rewards with value (4). Here we followed this approach to investigate how the biologically critical, intrinsic properties of foods—their nutrients and sensory qualities—influence values inferred from behavioral choices and help explain individual differences in preference.The reward value of food is commonly thought to derive from its nutrients and sensory properties: sugar and fat make foods attractive because of their sweet taste and rich mouthfeel. Sensory scientists and food engineers seek to uncover rules that link food composition to palatability (710). Similarly, ecological foraging theory links animals’ food choices to nutritional quality (11). By contrast, in behavioral and neuroscience experiments, food components are often only manipulated to elicit choice variation but rarely studied in their own right. Here, we aimed to empirically ground the value concept in the constitutive properties of food rewards. We combined a focus on specific nutrients and food qualities with well-controlled repeated-choice paradigms from behavioral neurophysiology and studied the choices of rhesus monkeys (Macaca mulatta) for nutrient-defined liquid rewards.Like humans, macaques are experts in scrutinizing rewards for sophisticated, value-guided decision-making (4, 1215). This behavioral complexity, the closeness of the macaque brain’s sensory and reward systems to those of humans (16), and the suitability for single-neuron recordings make macaques an important model for studying food–reward mechanisms with relevance to human eating behavior and obesity (17).Previous studies in macaques uncovered key reward functions and their neuronal implementations, including the assignment of values to choice options (13, 1825), reinforcement learning (4, 26) and reward-dependence on satiety and thirst (7, 27, 28). Despite these advances, behavioral principles for nutrient rewards in macaques remain largely uncharacterized. The typical diet of these primates includes a broad variety of foods and nutrient compositions (29, 30). Their natural feeding conditions require adaptation to both short-term and seasonal changes in nutrient availability and ecologically diverse habitats (31, 32). Thus, the macaque reward system should be specialized for flexible, nutrient-directed food choices. Accordingly, we manipulated the fat and sugar content of liquid food rewards to study their effects on macaques’ choices. We addressed several aims.First, we tested whether macaques’ choices were sensitive to the nutrient composition of rewards, consistent with the assignment of subjective values. 
In previous studies, macaques showed subjective trade-offs between flavored liquid rewards (12, 13). We hypothesized that nutrients and nutrient-correlated sensory qualities constitute the intrinsic food properties that shape such preferences. We focused on macronutrients (carbohydrates, fats, and proteins), specifically sugar and fat, because of their relevance for human overeating and obesity, and their role in determining sensory food qualities. As nutrients are critical for survival and well-being, nonsated macaques should prefer foods high in nutrient content. In addition, like humans, they may individually prefer specific nutrients and sensory qualities (e.g., valuing isocaloric sweet taste over fat-like texture). Because nutrients are basic building blocks of foods, establishing an animal’s “nutrient–value function” could enable food choice predictions across contexts.Second, to identify a physical, sensory basis for nutrient preferences, we developed engineering tools to measure nutrient-dependent food textures on biological surfaces that mimicked oral conditions. Although sugar is directly sensed by taste receptors (33), the mechanism for oral fat-sensing remains unclear. While the existence of a “fat taste” in primates is debated (34), substantial evidence points to a somatosensory, oral–texture mechanism (7, 9). Fat-like textures reliably elicit fatty, creamy mouthfeel (8) and activate neural sensory and reward systems in macaques (35) and humans (36, 37). Two distinct texture parameters are implicated in fat-sensing: viscosity and sliding friction, reflecting a food’s thickness and lubricating properties, respectively (3840). We hypothesized that these parameters mediate the influence of fat content on choices.Third, we compared the monkeys’ choices to ecologically relevant dietary reference points. In optimal foraging theory (41), animals maximize energy as a common currency for choices (“energy maximization”). Alternatively, animals may balance the intake of different nutrients (“nutrient balancing”) (4244) or choose food based on the reward value of specific sensory and nutrient components (“nutrient reward”) (7, 45). We evaluated these strategies in a repeated-choice paradigm suited for neurophysiological recordings and derived hypotheses about the neuronal mechanisms for nutrient-sensitive decision-making (e.g., “energy-tracking neurons” versus “nutrient–value neurons”—Discussion).Finally, based on our behavioral data, we explored in computational simulations how theories of reinforcement learning and economic choice can be extended by a nutrient–value function. Together with recently proposed homeostatic reinforcement learning (46), nutrient-specific model parameters may optimize predictions when choices depend on nutrient composition and homeostatic set-points.  相似文献   
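The "nutrient-value function" idea sketched above, a subjective value assembled from reward amount and weighted nutrient contents that is then compared stochastically between two offers, can be written compactly. The linear value form, the softmax choice rule, and all parameter values below are illustrative assumptions, not parameters fitted to the monkeys' choices.

```python
import numpy as np

def nutrient_value(amount_ml, fat, sugar, w_amount=1.0, w_fat=0.8, w_sugar=0.5):
    """Subjective value of a liquid reward as a weighted sum of its amount and its
    fat and sugar contents (arbitrary units). The linear form and the weights are
    assumptions for illustration, not fitted parameters."""
    return w_amount * amount_ml + w_fat * fat + w_sugar * sugar

def p_choose_a(offer_a, offer_b, temperature=1.0):
    """Softmax (logistic) probability of choosing offer A over offer B."""
    dv = nutrient_value(**offer_a) - nutrient_value(**offer_b)
    return 1.0 / (1.0 + np.exp(-dv / temperature))

# A small high-fat offer traded against a larger low-nutrient offer.
high_fat = dict(amount_ml=2.0, fat=4.0, sugar=0.0)
low_nutrient = dict(amount_ml=4.0, fat=0.0, sugar=0.0)
print(f"P(choose the high-fat offer) = {p_choose_a(high_fat, low_nutrient):.2f}")
```

Fitting the weights per animal from observed choices would yield the individual nutrient-value function used to predict preferences across contexts and to model individual differences.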

10.
Despite its theoretical prominence and sound principles, integrated pest management (IPM) continues to suffer from anemic adoption rates in developing countries. To shed light on the reasons, we surveyed the opinions of a large and diverse pool of IPM professionals and practitioners from 96 countries by using structured concept mapping. The first phase of this method elicited 413 open-ended responses on perceived obstacles to IPM. Analysis of responses revealed 51 unique statements on obstacles, the most frequent of which was “insufficient training and technical support to farmers.” Cluster analyses, based on participant opinions, grouped these unique statements into six themes: research weaknesses, outreach weaknesses, IPM weaknesses, farmer weaknesses, pesticide industry interference, and weak adoption incentives. Subsequently, 163 participants rated the obstacles expressed in the 51 unique statements according to importance and remediation difficulty. Respondents from developing countries and high-income countries rated the obstacles differently. As a group, developing-country respondents rated “IPM requires collective action within a farming community” as their top obstacle to IPM adoption. Respondents from high-income countries prioritized instead the “shortage of well-qualified IPM experts and extensionists.” Differential prioritization was also evident among developing-country regions, and when obstacle statements were grouped into themes. Results highlighted the need to improve the participation of stakeholders from developing countries in the IPM adoption debate, and also to situate the debate within specific regional contexts.Feeding the 9,000 million people expected to inhabit Earth by 2050 will present a constant and significant challenge in terms of agricultural pest management (13). Despite a 15- to 20-fold increase in pesticide use since the 1960s, global crop losses to pests—arthropods, diseases, and weeds—have remained unsustainably high, even increasing in some cases (4). These losses tend to be highest in developing countries, averaging 40–50%, compared with 25–30% in high-income countries (5). Alarmingly, crop pest problems are projected to increase because of agricultural intensification (4, 6), trade globalization (7), and, potentially, climate change (8).Since the 1960s, integrated pest management (IPM) has become the dominant crop protection paradigm, being endorsed globally by scientists, policymakers, and international development agencies (2, 915). The definitions of IPM are numerous, but all involve the coordinated integration of multiple complementary methods to suppress pests in a safe, cost-effective, and environmentally friendly manner (9, 11). These definitions also recognize IPM as a dynamic process in terms of design, implementation, and evaluation (11). In practice, however, there is a continuum of interpretations of IPM (e.g., refs. 14, 16, 17), but bounded by those that emphasize pesticide management (i.e., “tactical IPM”) and those that emphasize agroecosystem management (i.e., “strategic IPM,” also known as “ecologically based pest management”) (16, 18, 19). 
Despite apparently solid conceptual grounding and substantial promotion by the aforementioned groups, IPM has a discouragingly poor adoption record, particularly in developing-country settings (9, 10, 1523), raising questions over its applicability as it is presently conceived (15, 16, 22, 24).The possible reasons behind the developing countries’ poor adoption of IPM have been the subject of considerable discussion since the 1980s (9, 15, 16, 22, 2531), but this debate has been notable for the limited direct involvement from developing-country stakeholders. Most of the literature exploring poor adoption of IPM in the developing world has originated in the developed world (e.g., refs. 15, 16, 22). An international workshop, entitled “IPM in Developing Countries,” was held at the Pontificia Universidad Católica del Ecuador (PUCE) from October 31 to November 3, 2011. Poor IPM adoption spontaneously became a central discussion point, creating an opportunity to address the apparent participation bias in the IPM adoption debate.It was therefore decided to explore the topic further by eliciting and mapping the opinions of a large and diverse pool of IPM professionals and practitioners from around the world, including many based in developing countries. The objective was to generate and prioritize a broad list of hypotheses to explain poor IPM adoption in developing-country agriculture. We also wanted to explore differences as influenced by respondents’ characteristics, particularly their region of practice. To achieve these objectives, we used structured concept mapping (32), an empirical survey method often used to quantify and give thematic structure to open-ended opinions (33).We know of only one other similar study that characterizes obstacles to IPM. It was based on the structured responses of 153 experts, all from high-income countries (30). Our survey was designed to progress from unstructured to structured responses, and to reach a much larger and diverse pool of participants, particularly those from the “Global South.” Considering that the vast majority of farmers live in developing countries (34), it would seem imperative that the voices from this region be heard.  相似文献   
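Structured concept mapping, as used above, typically pools participants' sorting of statements into a similarity matrix and then groups statements into themes by cluster analysis. Here is a hedged sketch of that grouping step on a random co-sorting matrix; the matrix, the average-linkage choice, and the three-cluster cut are placeholders (the study arrived at six themes from 51 statements).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# Hypothetical co-sorting matrix for 10 obstacle statements: entry [i, j] is the
# fraction of participants who sorted statements i and j into the same pile.
n = 10
sim = rng.random((n, n))
sim = (sim + sim.T) / 2.0
np.fill_diagonal(sim, 1.0)

# Convert similarity to a condensed distance vector and cluster hierarchically,
# then cut the tree into a small number of themes.
dist = 1.0 - sim
condensed = dist[np.triu_indices(n, k=1)]
tree = linkage(condensed, method="average")
themes = fcluster(tree, t=3, criterion="maxclust")
print("Theme assignment per statement:", themes)
```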

11.
In recent years, the Indian Ocean Dipole (IOD) has received much attention in light of its substantial impacts on both the climate system and humanity. Due to its complexity, however, a reliable prediction of the IOD is still a great challenge. In this study, climate network analysis was employed to investigate whether there are early warning signals prior to the start of IOD events. An enhanced seesaw tendency in sea surface temperature (SST) among a large number of grid points between the dipole regions in the tropical Indian Ocean was revealed in boreal winter, which can be used to forewarn the potential occurrence of the IOD in the coming year. We combined this insight with the indicator of the December equatorial zonal wind in the tropical Indian Ocean to propose a network-based predictor that clearly outperforms the current dynamic models. Of the 15 IOD events over the past 37 y (1984 to 2020), 11 events were correctly predicted from December of the previous year, i.e., a hit rate of higher than 70%, and the false alarm rate was around 35%. This network-based approach suggests a perspective for better understanding and predicting the IOD.

The Indian Ocean Dipole (IOD) is a zonal dipole mode of the sea surface temperature (SST) that occurs interannually in the tropical Indian Ocean (TIO) (1, 2). A positive (negative) IOD features a below (above) normal SST off the Sumatran coast and a warming (cooling) over the western equatorial Indian Ocean. Ever since the severe floods in East Africa in 1997, which were induced by an extreme positive IOD (pIOD) event (2), the IOD has attracted much attention. Many studies showed that the IOD can affect the climate not only in the Indian Ocean rim countries but also in other more distant regions (35). In the past decades, great efforts have been made to reveal the mechanisms of the IOD, but the IOD prediction is still challenging, which further limits the associated seasonal climate predictions.One reason for the difficulties in predicting the IOD is that the TIO is complex, with multiple processes interacting. Previous studies reported that there are different triggers that may initiate the occurrence of the IOD, such as the El Nin˜o–Southern Oscillation (ENSO) (6, 7), the Indonesian Throughflow (8), intraseasonal disturbances (9), the subtropical IOD (10), springtime Indonesian rainfall (11), and the interhemispheric pressure gradient over the maritime continent (12). In addition, the development of the IOD in boreal summer was found to be controlled by different feedback processes in the Indian Ocean, including the thermocline–SST, cloud–radiation–SST, and evaporation–SST–wind feedbacks (1, 9, 13). As a typical atmosphere–ocean coupled mode, the IOD is thus a complex phenomenon that is sensitive to changes in multiple associated processes. The dilemma is that it is not clear which of the above-mentioned triggers play the main role in a given IOD event. For instance, ENSO events have been well recognized as a major external force to trigger IOD events via altering the Walker Circulation (4), but there are still many cases (e.g., 1996, 2012, and 2019) where an IOD event is not accompanied by an ENSO event, and the different IOD classifications (3, 14, 15) make the ENSO–IOD interactions even more complicated. The formation of the IOD was found to be associated with both the forcing outside the Indian Ocean and internal variability within the basin (16), but there is no certain physical mechanism that combines all the associated forcings and feedback processes. Moreover, as a phenomenon with quasi-biennial frequency (1, 17), two (or three) pIOD events may occur in consecutive years (18), and sometimes there may even be a few consecutive years with no remarkable IOD events. These complex characteristics increase the difficulty of the IOD prediction (1923). In general, skillful predictions of the IOD events by climate models can only be made one season ahead (22, 23) and occasionally two or three seasons for strong events (9, 18). A rapid drop of the IOD predictive capability across the boreal winter has been well recognized as the winter predictability barrier (24), suggesting a lack of precursor signals due to the low signal-to-noise ratio. To better cope with the IOD-associated impacts, continuous efforts are thus required to improve the predictive capability for IOD events.In this study, we investigated this issue. Previous studies revealed different trigger and development mechanisms that contribute to the formation of the IOD. 
However, it is unclear whether there are any TIO states favorable for IOD onset, given the fact that the previously proposed triggering mechanisms do not work in all cases. It was recognized that a shallower thermocline depth in the eastern Indian Ocean is a precondition that favors the pIOD activity on decadal time scales (25, 26). Are there any preconditions on interannual or even biennial scales that may lead to an improved early warning of the IOD onset? To address these questions, we employed a recently proposed approach, the climate network analysis (27), to examine possible relationships among the grid points in the dipole regions. The climate network, as the name implies, is a network of the climate system with grid points (or stations) considered as nodes and the relations (i.e., correlations) between each pair of nodes as links (2729). By studying climate systems in terms of climate networks, one may obtain more detailed information, including the topological structure (30, 31) and the dynamic evolutions (32, 33). A particular advantage of the climate network approach is that by taking into account all the interior grid points in the climate system, even weak signals (that could appear seemingly negligible when considered alone) contribute substantially to the overall system dynamics, eventually leading to significant effects when exhibiting cooperative behavior (34). Based on this advantage, the climate network analysis has been successfully applied in the forecasting of Atlantic Meridional Overturning Circulation (35, 36); the predicting of extreme precipitation events (37); and in particular, an early forecast of the onset of the Indian Summer Monsoon (38). By analyzing the cooperative behaviors among the interior nodes in the tropical Pacific and north Pacific, early warning signals have been detected for the onset of El Nin˜o events (39, 40) and the phase change of the Pacific Decadal Oscillation (PDO) (34). In this study, we employed this approach to investigate IOD events, especially the interactions of the SSTs between the dipole regions, to see whether there are early signals arising from the cooperative behaviors among the grid points, or in other words, to detect possible TIO states that favor the development of the IOD.  相似文献   
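A minimal version of the climate-network construction described above treats grid points as nodes and the correlations between their SST series as links, and then watches for a "seesaw" state in which many west-pole and east-pole nodes become strongly anticorrelated. The sketch below does this on synthetic SST anomalies; the grids, record length, and the -0.4 link threshold are assumptions, not the network definition or predictor used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic December SST anomalies (30 winters) at grid points in the western and
# eastern poles of the IOD region. A shared signal with opposite sign at the two
# poles mimics a seesaw state; none of this is observational data.
n_time, n_west, n_east = 30, 20, 15
common = rng.normal(size=n_time)
west = 0.8 * common[:, None] + rng.normal(scale=0.6, size=(n_time, n_west))
east = -0.8 * common[:, None] + rng.normal(scale=0.6, size=(n_time, n_east))

def cross_links(a, b):
    """Correlation of every node in field a with every node in field b."""
    za = (a - a.mean(axis=0)) / a.std(axis=0)
    zb = (b - b.mean(axis=0)) / b.std(axis=0)
    return (za.T @ zb) / a.shape[0]          # shape (n_west, n_east)

corr = cross_links(west, east)

# Network indicator: fraction of west-east links with a strong negative ("seesaw")
# correlation. A high value in boreal winter would be read as an early-warning signal.
seesaw_fraction = float(np.mean(corr < -0.4))
print(f"Fraction of strongly negative west-east links: {seesaw_fraction:.2f}")
```

In the study, such a winter network indicator is combined with the December equatorial zonal wind over the tropical Indian Ocean to issue the actual forecast.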

12.
A prevailing view is that Weber’s law constitutes a fundamental principle of perception. This widely accepted psychophysical law states that the minimal change in a given stimulus that can be perceived increases proportionally with amplitude and has been observed across systems and species in hundreds of studies. Importantly, however, Weber’s law is actually an oversimplification. Notably, there exist violations of Weber’s law that have been consistently observed across sensory modalities. Specifically, perceptual performance is better than that predicted from Weber’s law for the higher stimulus amplitudes commonly found in natural sensory stimuli. To date, the neural mechanisms mediating such violations of Weber’s law in the form of improved perceptual performance remain unknown. Here, we recorded from vestibular thalamocortical neurons in rhesus monkeys during self-motion stimulation. Strikingly, we found that neural discrimination thresholds initially increased but saturated for higher stimulus amplitudes, thereby causing the improved neural discrimination performance required to explain perception. Theory predicts that stimulus-dependent neural variability and/or response nonlinearities will determine discrimination threshold values. Using computational methods, we thus investigated the mechanisms mediating this improved performance. We found that the structure of neural variability, which initially increased but saturated for higher amplitudes, caused improved discrimination performance rather than response nonlinearities. Taken together, our results reveal the neural basis for violations of Weber’s law and further provide insight as to how variability contributes to the adaptive encoding of natural stimuli with continually varying statistics.

Weber’s law states that the discrimination threshold or “just noticeable difference” (JND) is proportional to stimulus amplitude (1). While the prevailing view is that this law holds across multiple sensory modalities and species (1–6), more recent studies have shown that Weber’s law consistently does not hold across sensory modalities when higher amplitude, physiologically relevant stimuli are considered [e.g., auditory (7, 8), visual (9), and vestibular (10–12) systems]. Specifically, discrimination thresholds saturate for higher amplitudes and are thus not proportional to stimulus amplitude across the entire range (7, 9–13). While there is a building consensus that perceptual discrimination performance is better than that predicted from Weber’s law across sensory modalities, to date, the neural substrates underlying such violations remain unknown. It is generally thought that a decrease in neural sensitivity or gain with increasing stimulus amplitude provides the neural basis for Weber’s law (14–16). Such “Weber adaptation” is advantageous for sensory coding as it serves to broaden the dynamic range and to maintain information capacity in response to stimuli whose amplitudes vary over orders of magnitude (14, 15, 17–21). However, Weber adaptation is not sufficient to explain the violations of Weber’s law that have been observed across modalities. Thus, what mechanisms underlie perception across the entire range of stimuli encountered in the natural environment remains a fundamental and unanswered question.
Here, we took advantage of a sensory system with well-described circuitry to gain insight into how neural response properties give rise to perceptual performance. Specifically, the vestibular system generates vital reflexes that stabilize gaze and posture during movement, and makes a vital contribution to self-motion perception (22–25). Previous studies have demonstrated that vestibular perception violates Weber’s law. Specifically, the discrimination performance of human subjects is much better than expected for higher rotational stimulus amplitudes (10–12) commonly experienced during natural everyday activities (e.g., walking) (26). Head motion is initially sensed by peripheral vestibular afferents that make synaptic contact with central vestibular nuclei neurons (27). Neurons within the ventral posterior lateral (VPL) thalamus receive direct input from the vestibular nuclei (28) and project to higher cortical areas (29, 30) that mediate self-motion perception (31) (see ref. 32 for review). Afferents and vestibular nuclei neurons do not display significant nonlinearities in their responses to the low frequency stimuli that have been used in perceptual testing (33–36). In contrast, neurons at the next stage of processing within area VPL have been shown to respond nonlinearly to head motion, notably showing decreases in neural sensitivity with increasing stimulus amplitude (37–41). However, whether such nonlinearities can explain the observed violations of Weber’s law remains unknown to date.
Accordingly, we investigated the neural substrate underlying improved discrimination performance for higher amplitude vestibular stimuli. We recorded the activities of vestibular thalamocortical neurons in response to rotational self-motion stimuli with varying amplitude in rhesus monkeys.
We found that neural discrimination thresholds were lower than predicted for higher stimulus amplitudes, thereby providing a neural correlate for previous results showing improved perceptual discrimination performance (10). Theory predicts that discrimination thresholds are determined not only by neural gain but also by variability (42). Consequently, we characterized how each quantity varied as a function of stimulus amplitude. We found that the dependence of neural variability on stimulus amplitude accounted for previous perceptual results. Specifically, variability initially increased but saturated for higher stimulus amplitudes, thereby causing the improved neural discrimination performance required to explain perception. Taken together, our results reveal that amplitude-dependent changes in neural variability can account for the fact that self-motion perceptual performance is better than that predicted from Weber’s law. Our findings, furthermore, provide insight as to how variability contributes to the adaptive encoding of natural stimuli with continually varying statistics.
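As a worked illustration of the theory referenced above, the sketch below contrasts the discrimination threshold predicted by Weber’s law with one computed from a simple signal-detection rule in which the threshold scales as variability divided by gain. The saturating variability function and all parameter values are hypothetical, chosen only to show how saturating variability yields thresholds that grow more slowly than Weber’s law at high amplitudes.

```python
import numpy as np

# Stimulus amplitudes (arbitrary units)
s = np.linspace(1.0, 100.0, 200)

# Weber's law: JND proportional to amplitude (k is a hypothetical Weber fraction)
k = 0.1
jnd_weber = k * s

# Signal-detection view: threshold ~ variability / gain at one unit of d-prime.
# Hypothetical response model: constant gain, variability that rises then saturates.
gain = np.full_like(s, 2.0)                 # sensitivity (response units per stimulus unit)
sigma = 5.0 * s / (s + 20.0)                # variability increases, then saturates
jnd_model = sigma / gain                    # threshold for d' = 1

# At high amplitudes the model threshold saturates instead of growing linearly,
# i.e., discrimination is better than Weber's law predicts.
print("Weber JND at s=100:", jnd_weber[-1])
print("Model JND at s=100:", round(jnd_model[-1], 2))
```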

13.
Sorghum is a drought-tolerant crop with a vital role in the livelihoods of millions of people in marginal areas. We examined genetic structure in this diverse crop in Africa. On the continent-wide scale, we identified three major sorghum populations (Central, Southern, and Northern) that are associated with the distribution of ethnolinguistic groups on the continent. The codistribution of the Central sorghum population and the Nilo-Saharan language family supports a proposed hypothesis about a close and causal relationship between the distribution of sorghum and languages in the region between the Chari and the Nile rivers. The Southern sorghum population is associated with the Bantu languages of the Niger-Congo language family, in agreement with the farming-language codispersal hypothesis as it has been related to the Bantu expansion. The Northern sorghum population is distributed across early Niger-Congo and Afro-Asiatic language family areas with dry agroclimatic conditions. At a finer geographic scale, the genetic substructure within the Central sorghum population is associated with language-group expansions within the Nilo-Saharan language family. A case study of the seed system of the Pari people, a Western-Nilotic ethnolinguistic group, provides a window into the social and cultural factors involved in generating and maintaining the continent-wide diversity patterns. The age-grade system, a cultural institution important for the expansive success of this ethnolinguistic group in the past, plays a central role in the management of sorghum landraces and continues to underpin the resilience of their traditional seed system.
Sorghum [Sorghum bicolor (L.) Moench] is a drought-tolerant C4 crop of major importance for food security in Africa (1, 2). The grain crop has played a fundamental role in adaptation to environmental change in the Sahel since the early Holocene, when the Sahara desert was a green homeland for Nilo-Saharan groups pursuing livelihoods based on hunting or herding of cattle and wild grain collecting (3, 4). The earliest archaeological evidence of human sorghum use is dated 9100–8900 B.P., and the seeds were excavated together with cattle bones, lithic artifacts, and pottery from a site close to the current border between Egypt and Sudan (5, 6). The timing of the domestication of cattle and sorghum remains contested due to limited archaeological evidence, but, at some point, the livelihoods in this region transformed from hunting and gathering into agropastoralism. Sorghum cultivation in combination with cattle herding was a successful livelihood adaptation to the dry grassland ecology, and, eventually, as the climate changed and the Sahel moved south, the agropastoral adaptation spread over large parts of the Central African steppes (7).
Recent molecular work on sorghum diversity (8–13) stands on the shoulders of J. R. Harlan and others’ work from the 1960s–1980s. Diversity of sorghum types, varieties, and races has been related to movement of people, disruptive selection, geographic isolation, gene flow from wild to cultivated plants, and recombination of these types in different environments (2, 14, 15). On the basis of morphology, Harlan and de Wet (16) classified sorghum into five basic and 10 intermediary botanical races (16). The race “bicolor” has small elongated grains, and, because of the “primitive” morphology, it is considered the progenitor of more derived races (16, 17).
The race “guinea” has open panicles well adapted to high rainfall areas, and it is proposed that the “guinea margaritiferum” type from West Africa represents an independent domestication (10, 12). The race “kafir” is associated with the Bantu agricultural tradition, and the race “durra” is considered well-adapted to the dryland agricultural areas along the Arabic trade routes from West Africa to India (14). The fifth race, “caudatum,” is characterized by “turtle-backed” grains, and Stemler et al. (ref. 17, p. 182) proposed that “the distribution of caudatum sorghums and Chari-Nile–speaking peoples coincide so closely that a causal relationship seems probable.” This hypothesis is considered plausible on the basis of historical linguistics, but it remains to be tested by independent evidence (3). The hypothesis is a specific version of the interdisciplinary “farming-language codispersal hypothesis,” which proposes that farming and language families have moved together through population growth and migration (18, 19).
The role of cultural selection and adaptation has been documented in many studies of domestication and translocation of crops (20, 21). The literature on the role of farmers’ management in maintaining and enhancing genetic resources (22–26) is relevant to understanding how patterns of diversity visible at large spatial scales are caused by evolutionary processes operating at finer scales. On-farm management of crop varieties and cultural boundaries influencing the diffusion of seeds, practices, and knowledge are important local-scale explanatory factors behind patterns of regional and continental scale associations between ethnolinguistic groups and crop genetic structure (27–30).
Knowledge of the role of social, cultural, and environmental factors in structuring crop diversity is important to assess the resilience of rural livelihoods in the face of global environmental change. Impact studies project that anthropogenic climate change will negatively affect sorghum yields in Sub-Saharan Africa (31, 32). Such projections pose questions about the availability of appropriate genetic resources and the ability of both breeding programs and local seed systems to develop the required adaptations in a timely manner (33, 34). Insight into local seed systems can contribute to more sustainable development assistance efforts aimed at building resilience in African agriculture in the face of climate change and human insecurity (25, 35).
Here, we present a study of geographic patterns in African sorghum diversity and its associations with the distribution of ethnolinguistic groups. First, we evaluate the proposed farming-language codispersal hypothesis by genotyping sorghum accessions from a continent-wide diversity panel (36). Second, to elucidate the local level mechanisms involved in generating and maintaining this diversity, we present a case study of the sorghum seed system of a group of descendants of the first Nilo-Saharan sorghum cultivators, the Pari people in South Sudan. By comparing accessions collected in 1983 with seeds sampled from the same villages in 2010 and 2013, we assess the resilience of the traditional Pari seed system during a period of civil war and climatic stress. We draw on environmental, linguistic, and anthropological evidence to understand the role of geographic, ecological, historical, and cultural factors in shaping sorghum genetic structure.
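The continent-wide population structure described above is typically inferred from a genotype matrix. The minimal sketch below illustrates one common way such structure is summarized, a principal component analysis of allele counts followed by k-means clustering into three groups; the synthetic data, the number of clusters, and the method choice are illustrative assumptions and not the authors' specific pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy genotype matrix: 200 accessions x 500 SNPs, coded 0/1/2 (allele counts).
rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=(200, 500)).astype(float)

# Center each SNP, then summarize structure with the leading principal components.
geno_centered = genotypes - genotypes.mean(axis=0)
pcs = PCA(n_components=10).fit_transform(geno_centered)

# Group accessions into three clusters (e.g., candidate Central/Southern/Northern populations).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print("accessions per cluster:", np.bincount(labels))
```

In a real analysis, the cluster assignments would then be compared with the ethnolinguistic affiliation recorded for each accession to test the codistribution patterns discussed above.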

14.
Coastal ecosystems provide numerous important ecological services, including maintenance of biodiversity and nursery grounds for many fish species of ecological and economic importance. However, human population growth has led to increased pollution, ocean warming, hypoxia, and habitat alteration that threaten ecosystem services. In this study, we used long-term datasets of fish abundance, water quality, and climatic factors to assess the threat of hypoxia and the regulating effects of climate on fish diversity and nursery conditions in Elkhorn Slough, a highly eutrophic estuary in central California (United States), which also serves as a biodiversity hot spot and critical nursery grounds for offshore fisheries in a broader region. We found that hypoxic conditions had strong negative effects on extent of suitable fish habitat, fish species richness, and abundance of the two most common flatfish species, English sole (Parophrys vetulus) and speckled sanddab (Citharichthys stigmaeus). The estuary serves as an important nursery ground for English sole, making this species vulnerable to anthropogenic threats. We determined that estuarine hypoxia was associated with significant declines in English sole nursery habitat, with cascading effects on recruitment to the offshore adult population and fishery, indicating that human land use activities can indirectly affect offshore fisheries. Estuarine hypoxic conditions varied spatially and temporally and were alleviated by strengthening of El Niño conditions through indirect pathways, a consistent result in most estuaries across the northeast Pacific. These results demonstrate that changes to coastal land use and climate can fundamentally alter the diversity and functioning of coastal nurseries and their adjacent ocean ecosystems.
Over a third of Earth’s human population is concentrated along coastal margins (1), and much of the planet is dependent on the many functions and services provided by coastal ecosystems. Coastal ecosystems face multiple threats that include habitat loss and modification through urban development, intensification of agriculture and subsequent eutrophication, climate change, and overfishing, all of which decrease ecosystem functioning and diminish the ecological and economic value of continental shelves around the world (2–6). The effect of multiple stressors, such as climate change and hypoxia, over spatial and temporal scales relevant to the diversity and function of coastal systems is poorly understood. Furthermore, there are very few predictions on how climate change will interact with other anthropogenic threats to influence ecosystem functioning and services.
Certain critical functions and services of coastal ecosystems, such as estuaries, are potentially affected by anthropogenic threats. These services include supporting biodiversity (7) and the provision of nursery habitat for species, where estuaries can contribute disproportionately to offshore fisheries productivity (8, 9). The nursery function, in particular, could be affected by a suite of anthropogenic stressors, manifesting in declines to offshore fisheries production. Along the California Current, factors potentially influencing the coastal nursery function include climatic effects, such as El Niño and upwelling (10–12), as well as anthropogenic factors operating on multiple scales, such as ocean warming on ocean basin scales (12–14), or anthropogenic nutrient loading on local to regional scales.
The latter can drive the depletion of oxygen from the water column (hypoxia), with negative consequences for aquatic life (2, 14–17).
Using a highly altered, albeit regionally important, estuarine ecosystem, we examined how anthropogenically induced hypoxia influences vital ecosystem services, such as the maintenance of biodiversity and nursery function, and investigated whether climate indirectly drives these ecosystem services through the modulation of hypoxia. By determining the climatic drivers of hypoxia and its association with fish diversity and nursery function, we are able to show the linkages between human stressors, climate, and ecosystem services.
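Since the analysis described above relates long-term water-quality and climate records to fish diversity, a minimal sketch of that kind of relationship is shown below: a simple regression of species richness on dissolved oxygen and an ENSO index. The generated data, the variable names, and the choice of an ordinary least-squares model are assumptions for illustration, not the study's actual statistical design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy monthly records: dissolved oxygen (mg/L), an ENSO index, and fish species richness.
rng = np.random.default_rng(2)
n = 240
dissolved_oxygen = rng.uniform(2.0, 10.0, n)        # low values indicate hypoxia
enso_index = rng.normal(0.0, 1.0, n)                # positive values ~ El Niño conditions
richness = 5 + 1.2 * dissolved_oxygen + 0.8 * enso_index + rng.normal(0, 1.5, n)

df = pd.DataFrame({"richness": richness,
                   "dissolved_oxygen": dissolved_oxygen,
                   "enso_index": enso_index})

# Ordinary least-squares fit: does richness decline under hypoxia, and does ENSO modulate it?
fit = smf.ols("richness ~ dissolved_oxygen + enso_index", data=df).fit()
print(fit.params)
```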

15.
Changes in mean climatic conditions will affect natural and societal systems profoundly under continued anthropogenic global warming. Changes in the high-frequency variability of temperature exert additional pressures, yet the effect of greenhouse forcing thereon has not been fully assessed or identified in observational data. Here, we show that the intramonthly variability of daily surface temperature changes with distinct global patterns as greenhouse gas concentrations rise. In both reanalyses of historical observations and state-of-the-art projections, variability increases at low to mid latitudes and decreases at northern mid to high latitudes with enhanced greenhouse forcing. These latitudinally polarized daily variability changes are identified from internal climate variability using a recently developed signal-to-noise-maximizing pattern-filtering technique. Analysis of a multimodel ensemble from the Coupled Model Intercomparison Project Phase 6 shows that these changes are attributable to enhanced greenhouse forcing. By the end of the century under a business-as-usual emissions scenario, daily temperature variability would continue to increase by up to a further 100% at low latitudes and decrease by 40% at northern high latitudes. Alternative scenarios demonstrate that these changes would be limited by mitigation of greenhouse gases. Moreover, global changes in daily variability exhibit strong covariation with warming across climate models, suggesting that the equilibrium climate sensitivity will also play a role in determining the extent of future variability changes. This global response of the high-frequency climate system to enhanced greenhouse forcing is likely to have strong and unequal effects on societies, economies, and ecosystems if mitigation and protection measures are not taken.

The effect of anthropogenic greenhouse gas emissions on mean climatic conditions is well understood. Theory, observational, and modeling work all demonstrate that average temperatures increase as a result of elevated greenhouse gas concentrations (1). However, it is also of considerable importance to natural and human systems whether changes in the temporal variability of climatic conditions have accompanied historical global warming and whether they will do so in the future (2–5). A more variable climate implies greater uncertainty and greater frequency of extremes, both of which constitute more damaging conditions.
The variability of climate from one year to the next has received considerable attention. Large-scale climatic oscillations, such as the El Niño Southern Oscillation and the Indian Ocean Dipole, are dominant determinants of interannual variability (6–8) and have been shown to exhibit more frequent extremes under enhanced greenhouse forcing within comprehensive climate models (9–11), results that are supported by paleoclimatic evidence (12). Identifying a response in interannual temperature variability has been less conclusive. Some studies have attributed recent summer temperature extremes to greater interannual variability, both regionally (13) and globally (14), but there is still debate as to the extent of the role of interannual variability (15–17). Some regional trends in interannual temperature variability have been identified (17–21), but there is no consensus between observations and climate models (22).
Here, we focus on variability of temperature at a higher frequency (daily), which a growing body of econometric literature has identified as an important determinant of societal outcomes, including human health (23–27), agriculture (28–30), and economic growth (31). The effect of enhanced greenhouse gas concentrations on the daily variability of temperature is therefore of wide societal importance and a critical component of the impact of anthropogenic climate change.
Decreases in daily temperature variability at northern mid to high latitudes have been detected in observations (32–34) and agree well with predictions from comprehensive climate models (34–36) and physical reasoning (34, 35). Previous generations of climate models have also suggested that daily variability may increase during European summer (37) and across the tropics (36, 38), but these predictions have not yet been detected in observations or confirmed in state-of-the-art climate models. This paper unifies these works by presenting a global analysis of changes in subseasonal, daily temperature variability under enhanced greenhouse forcing in both reanalyses of historical observations (National Oceanographic and Atmospheric Administration [NOAA] 20th Century Reanalysis Version 3 and the European Centre for Medium-Range Weather Forecasts Reanalysis 5 [ERA-5]) and the latest generation of comprehensive climate models (Coupled Model Intercomparison Project phase 6 [CMIP-6]). From here on, daily temperature variability refers to the intramonthly SD of daily surface temperature. We consider changes in daily variability in boreal winter (“DJF”), boreal summer (“JJA”), and across the year (“annual”) to both assess the season-specific mechanisms identified in previous work and to provide an aggregated overview of variability changes.
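Because the quantity analyzed above is defined as the intramonthly standard deviation of daily surface temperature, the sketch below shows one way to compute it from a daily temperature series; the synthetic data and the use of pandas resampling are assumptions for illustration, not the paper's processing chain.

```python
import numpy as np
import pandas as pd

# Synthetic daily surface temperature for ten years: seasonal cycle plus weather noise.
dates = pd.date_range("2000-01-01", "2009-12-31", freq="D")
doy = dates.dayofyear.to_numpy()
rng = np.random.default_rng(3)
temp = 10 + 12 * np.sin(2 * np.pi * (doy - 80) / 365.25) + rng.normal(0, 3, len(dates))
series = pd.Series(temp, index=dates)

# Intramonthly variability: standard deviation of daily values within each calendar month.
intramonthly_sd = series.resample("MS").std()

# Seasonal aggregates, e.g., mean DJF and JJA variability across the record.
monthly = intramonthly_sd.to_frame("sd")
monthly["month"] = monthly.index.month
djf = monthly.loc[monthly["month"].isin([12, 1, 2]), "sd"].mean()
jja = monthly.loc[monthly["month"].isin([6, 7, 8]), "sd"].mean()
print(f"mean DJF variability: {djf:.2f} K, mean JJA variability: {jja:.2f} K")
```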

16.
17.
The COVID-19 pandemic led to lockdowns in countries across the world, changing the lives of billions of people. The United Kingdom’s first national lockdown, for example, restricted people’s ability to socialize and work. The current study examined how changes to socializing and working during this lockdown impacted ongoing thought patterns in daily life. We compared the prevalence of thought patterns between two independent real-world, experience-sampling cohorts, collected before and during lockdown. In both samples, young (18 to 35 y) and older (55+ y) participants completed experience-sampling measures five times daily for 7 d. Dimension reduction was applied to these data to identify common “patterns of thought.” Linear mixed modeling compared the prevalence of each thought pattern 1) before and during lockdown, 2) in different age groups, and 3) across different social and activity contexts. During lockdown, when people were alone, social thinking was reduced, but on the rare occasions when social interactions were possible, we observed a greater increase in social thinking than prelockdown. Furthermore, lockdown was associated with a reduction in future-directed problem solving, but this thought pattern was reinstated when individuals engaged in work. Therefore, our study suggests that the lockdown led to significant changes in ongoing thought patterns in daily life and that these changes were associated with changes to our daily routine that occurred during lockdown.

On March 23, 2020, the United Kingdom entered a nationwide lockdown to curb the spread of COVID-19. This first national lockdown required people to stay at home and not meet with anyone outside their household. Social gatherings were banned, and “nonessential” industries were closed, reducing opportunities for work (1). There were also large economic changes (2), and death rates increased substantially (3). Studies show the lockdown had widespread psychological and behavioral consequences including elevated anxiety and depression levels (4), overall deterioration of mental health (5), changes to diet and physical activity (6–8), high levels of loneliness (9), and increasing suicidal ideation (10). Our study used experience sampling to measure patterns of ongoing thoughts before and during lockdown in the United Kingdom, with the aim of understanding how specific features of the stay-at-home order impacted people’s thinking in daily life, and to use this data to inform contemporary theoretical views on ongoing thought.
Our investigation served three broad goals. First, the lockdown led to changes in opportunities for socializing, and contemporary theories of ongoing thought suggest that social processing is an important influence on our day-to-day thinking (11, 12). For example, previous research indicates that individuals spend a lot of time thinking about other people in daily life (13, 14) or when performing tasks dependent on social cognition in the laboratory (15). Importantly, spontaneous social thoughts decline following periods of solitude and increase following periods of social interaction in the laboratory (11). They can also facilitate socio-emotional adjustment during important life transitions, such as starting university (16). Furthermore, ongoing thought patterns with social features are associated with increased neural responses to social stimuli (in this case, faces) (17). Such evidence suggests that the social environment can shape ongoing thought, leading to the possibility that changes in opportunities for socialization following the stay-at-home order could have changed the expression of social thinking in daily life.
Second, lockdowns also disrupted individuals’ normal working practices, forcing people to reassess their goals. Prior work highlights that ongoing thought content is linked to an individual’s current concerns and self-related goals (18–21) and that experimentally manipulating an individual’s goals can prime ongoing thought to focus on these issues (21–23). In particular, a substantial proportion of ongoing thoughts are future directed (14, 18, 21, 24–26), and this “prospective bias” is thought to support the formation and refinement of personal goals for future behavior (18, 21, 27, 28). Notably, this type of thought is also important in maintaining mental health through associations with improved subsequent mood (24) and reduced suicidal ideation (29, 30). Changes to opportunities for working during the lockdown, therefore, provide a chance to understand whether prospective features of ongoing thought are altered when important external commitments change.
Third, previous work indicates that the contents of thought vary across the life span. For example, during periods of low cognitive demand, younger adults report significantly more future-directed thoughts, while older adults report significantly more past-related thoughts (31). At rest, older adults report more “novel” and present-oriented thoughts compared to younger adults (32).
In daily life, older adults tend to report fewer “off-task” thoughts than younger adults, and their thoughts are rated as more “pleasant,” “interesting,” and “clear” (33). Finally, aging is associated with a decline in daydreaming, particularly a reduction in topics such as the future, fear of failure, or guilt (34). However, the degree to which these age-related changes are explained by lifestyle differences between young and older individuals is unclear. The lockdown may have altered key contextual factors that, under normal circumstances, differ systematically between younger and older adults. For example, increasing age is associated with more interactions with family members and fewer with “peripheral partners” (e.g., coworkers, acquaintances, and strangers) (35), a pattern that may be common in younger people during lockdown. With all this in mind, the lockdown provided an opportunity to examine whether changes to daily life during the lockdown differentially impacted ongoing thought patterns in younger and older individuals.
Our study used an experience-sampling methodology in which people are signaled at random times in their daily lives to obtain multiple reports describing features of their ongoing thoughts and the context in which they occur (e.g., social environment, activity, and location) (36). To examine the contents of people’s thoughts, we used multidimensional experience sampling (MDES) (37). In this method, participants describe their in-the-moment thoughts by rating them on several dimensions (e.g., temporal focus or relationship to self and others) (38). Dimension reduction techniques can then be applied to use covariation in the responses to different questions to identify “patterns of thought” (37, 39). Previous studies have used MDES to identify common patterns of ongoing thought, varying in both form and content, often with distinct neural correlates (27, 37, 39–43). For example, a pattern of episodic social cognition is associated with increased activity within regions of the ventromedial prefrontal cortex associated with memory and social cognition (41), while a pattern of external task focus is associated with increased activity in the intraparietal sulcus (42). In addition, at rest, visual imagery is associated with stronger interactions between the precuneus and lateral frontotemporal network (44), while detailed task focus is high during working memory tasks (15) and other complex tasks (45) and linked to activity in the default mode network during working memory maintenance (46).
In summary, our study set out to examine whether ongoing thought patterns experienced during lockdown differed from those normally reported in daily life, focusing on the consequences of changes in opportunities for socialization and work. The prelockdown sample was an existing dataset used to provide a baseline for ongoing thought patterns in daily life before lockdown restrictions. In both samples, young (18 to 35 y) and older (55+ y) participants completed surveys five times daily over 7 d. Each sampling point obtained in-the-moment measures of key dimensions of ongoing thought using MDES (37). Participants also provided information regarding the social environment in which the experience occurred. Dimension reduction was applied to both samples’ thought data to identify common patterns of thought. We then used linear mixed modeling (LMM) to explore the prevalence of each thought pattern 1) before and during lockdown, 2) in different age groups, and 3) across social contexts.
In the lockdown sample, participants provided additional information regarding their current activity (e.g., working or leisure activities) and virtual social environment, which we used to explore how specific features of daily life during lockdown corresponded with patterns of thought.
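To make the analysis pipeline described above concrete, the sketch below reduces toy MDES ratings to "thought patterns" with a principal component analysis and then fits a linear mixed model of one pattern's scores with a cohort term and random intercepts per participant. The synthetic ratings, the column names, and the specific model formula are illustrative assumptions rather than the study's exact analysis.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

# Toy MDES data: 100 participants x 35 probes, each rating 8 thought dimensions (0-10).
rng = np.random.default_rng(4)
n_participants, n_probes, n_dims = 100, 35, 8
ratings = rng.uniform(0, 10, size=(n_participants * n_probes, n_dims))

df = pd.DataFrame(ratings, columns=[f"dim{i}" for i in range(n_dims)])
df["participant"] = np.repeat(np.arange(n_participants), n_probes)
df["cohort"] = np.where(df["participant"] < 50, "prelockdown", "lockdown")

# Dimension reduction: covariation across the 8 questions -> a few "patterns of thought".
pca = PCA(n_components=3)
scores = pca.fit_transform(ratings - ratings.mean(axis=0))
df[["pattern1", "pattern2", "pattern3"]] = scores

# Linear mixed model: does pattern 1 prevalence differ by cohort,
# with random intercepts for participants (repeated measures)?
model = smf.mixedlm("pattern1 ~ cohort", data=df, groups=df["participant"])
print(model.fit().summary())
```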

18.
Nations in the 21st century face a complex mix of environmental and social challenges, as highlighted by the on-going Sustainable Development Goals process. The “planetary boundaries” concept [Rockström J, et al. (2009) Nature 461(7263):472–475], and its extension through the addition of social well-being indicators to create a framework for “safe and just” inclusive sustainable development [Raworth K (2012) Nature Climate Change 2(4):225–226], have received considerable attention in science and policy circles. As the chief aim of this framework is to influence public policy, and this happens largely at the national level, we assess whether it can be used at the national scale, using South Africa as a test case. We developed a decision-based methodology for downscaling the framework and created a national “barometer” for South Africa, combining 20 indicators and boundaries for environmental stress and social deprivation. We find that it is possible to maintain the original design and concept of the framework while making it meaningful in the national context, raising new questions and identifying priority areas for action. Our results show that South Africa has exceeded its environmental boundaries for biodiversity loss, marine harvesting, freshwater use, and climate change, and social deprivation is most severe in the areas of safety, income, and employment. Trends since 1994 show improvement in nearly all social indicators, but progression toward or over boundaries for most environmental indicators. The barometer shows that achieving inclusive sustainable development in South Africa requires national and global action on multiple fronts, and careful consideration of the interplay between different environmental domains and development strategies.
Human impact on the Earth’s biophysical processes and resources is a global concern. It is seen by many as a new geological era, the Anthropocene (1), with natural resource consumption accelerating in the past 50 y—food, freshwater, and fossil fuel use have more than tripled (2)—and these trends are likely to continue as global population grows to 9.6 billion by 2050 (3). This concern has led to international treaties that seek to address global environmental challenges through negotiation and agreement among the nations of the world, such as the United Nations (UN) Convention for the Protection of the Ozone Layer, the UN Convention on Biological Diversity (UNCBD), and the UN Framework Convention on Climate Change (UNFCCC). This impact has also led to the proliferation of sustainable development indicators (SDIs). The outcome of the 1992 UN Conference on Environment and Development, Agenda 21, calls for SDIs to “provide solid bases for decision-making at all levels and to contribute to a self-regulating sustainability of integrated environment and development system” (4). Over 900 SDI initiatives have been undertaken to date (5), in recognition of the fact that indicators provide a quantitative and rational basis for decision making (6), simplify a complex reality to a manageable level (7), create a body of knowledge and comparable data for policy applications, measure progress (8), and allow the public to evaluate society and its leaders (9).
Individual indices, such as the Human Development Index and the Ecological Footprint, have been used to compare countries, and sustainability frameworks, such as Ostrom’s framework for social-ecological systems (10) and the “ecosystems approach” adopted by the UNCBD (11), have been developed to better understand the relationships between social and ecological systems.
In 2009 a new conceptual framework, “planetary boundaries,” was proposed by Rockström et al. (12, 13) as “a bid to reform environmental governance at multiple scales” (14). The planetary boundaries are an estimated “safe distance” from thresholds associated with nine global environmental change processes that, when crossed, will take humanity into uncharted environmental territory (13). The nine processes (or dimensions) are: climate change, ocean acidification, freshwater use, land-use change, biodiversity loss, nutrient cycles (nitrogen and phosphorus), ozone depletion, atmospheric aerosol loading, and chemical pollution. Three of these global boundaries (climate change, biodiversity loss, and nitrogen fixation) have been transgressed and several others are in danger of being exceeded. Rockström et al. proposed there should be a global goal to stay within the “safe operating space for humanity” defined by these boundaries.
Despite a mixed reaction from the academic community, who have raised concerns about the existence of global tipping points for some of the dimensions (15–17) and the specific metrics used (18–23), the planetary boundaries concept has been used in proposals for defining the UN Sustainable Development Goals (SDGs) (24–26). The SDGs will guide the international sustainable development agenda after 2015 and they represent an opportunity for science to inform policy making (27–29), for the UN to implement the lessons from the Millennium Development Goals (MDGs) and to expand them to include all countries, and for greater integration of environmental and social metrics in decision-making. In this context, the planetary boundaries concept was extended by Raworth (30, 31) to include a set of 11 social dimensions, defining “a social foundation” below which exists unacceptable human deprivation. This approach highlighted the notion that access to the benefits of natural resources is also of global concern, and Raworth (30) argued that ending current global deprivation could be achieved with a minimal impact on the planetary boundaries. Raworth reframed Rockström et al.’s (12, 13) planetary boundaries concept as a “safe and just space for humanity”; this new framework brought together the dual objectives of poverty eradication and environmental sustainability as socio-economic priorities (30).
Raworth’s safe and just space (SJS) framework has gained interest from the UN General Assembly (32), policy think tanks (e.g., ref. 33), and development agencies (e.g., ref. 34) because it provides a platform for integrated analysis and debate about global goals. The framework appears in the Worldwatch Institute’s latest State of the World report (35) and Griggs et al. (25) have since developed a similar framework to reframe the UN paradigm of three pillars of sustainable development as a nested concept.
However, social and environmental concerns are intrinsically scale-dependent and need to take local circumstances into account if they are to be acted upon by national governments, which are ultimately responsible for taking action.
The downscaling of the SJS to subglobal spatial scales, with heterogeneity of biophysical and social conditions and the instruments of governance, is not straightforward. The particular challenges for the biophysical dimensions are highlighted by Nykvist et al. (36), who assessed national “environmental performance” on four planetary boundaries (climate change, water, land, and nitrogen) for 60 countries. Because the chief aim of the SJS is to influence public policy, and this happens largely at the national level, our objective in this report is to assess whether the SJS concept can be used at the national scale, using South Africa as a test case.
In this report we first review the SJS concept and explore how it might be applied at the national scale. We then present a decision-based methodology and results for our case study on South Africa. Finally, we discuss the applicability of the tool in South Africa, the local-regional-global links and the SDGs, and the data limitations, scientific challenges, and further research needs.
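A national "barometer" of the kind described above combines indicators with their boundary values. The sketch below shows one simple way to express each indicator as a fraction of its environmental boundary or social foundation, so that values above 1 flag an exceeded boundary and values below 1 flag remaining deprivation; the indicator names, values, and normalization rule are hypothetical and not the study's actual data or method.

```python
# Hypothetical national indicators paired with boundary values (units are illustrative).
environmental = {
    # indicator: (current value, national boundary); ratio > 1 means boundary exceeded
    "freshwater_use_km3": (15.0, 12.0),
    "co2_emissions_Mt": (480.0, 350.0),
    "cropland_fraction": (0.11, 0.14),
}
social = {
    # indicator: (current value, social foundation); ratio < 1 means deprivation remains
    "employment_rate": (0.57, 0.80),
    "access_to_electricity": (0.85, 1.00),
}

def boundary_ratio(value, boundary):
    """Express an indicator as a fraction of its boundary or foundation."""
    return value / boundary

print("Environmental pressure (ratio > 1 = boundary exceeded):")
for name, (value, boundary) in environmental.items():
    print(f"  {name}: {boundary_ratio(value, boundary):.2f}")

print("Social foundation (ratio < 1 = deprivation):")
for name, (value, boundary) in social.items():
    print(f"  {name}: {boundary_ratio(value, boundary):.2f}")
```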

19.
The peopling of Remote Oceanic islands by Austronesian speakers is a fascinating and yet contentious part of human prehistory. Linguistic, archaeological, and genetic studies have shown the complex nature of the process in which different components that helped to shape Lapita culture in Near Oceania each have their own unique history. Important evidence points to Taiwan as an Austronesian ancestral homeland with a more distant origin in South China, whereas alternative models favor South China to North Vietnam or a Southeast Asian origin. We test these propositions by studying phylogeography of paper mulberry, a common East Asian tree species introduced and clonally propagated since prehistoric times across the Pacific for making barkcloth, a practical and symbolic component of Austronesian cultures. Using the hypervariable chloroplast ndhF-rpl32 sequences of 604 samples collected from East Asia, Southeast Asia, and Oceanic islands (including 19 historical herbarium specimens from Near and Remote Oceania), 48 haplotypes are detected and haplotype cp-17 is predominant in both Near and Remote Oceania. Because cp-17 has an unambiguous Taiwanese origin and cp-17–carrying Oceanic paper mulberries are clonally propagated, our data concur with expectations of Taiwan as the Austronesian homeland, providing circumstantial support for the “out of Taiwan” hypothesis. Our data also provide insights into the dispersal of paper mulberry from South China “into North Taiwan,” the “out of South China–Indochina” expansion to New Guinea, and the geographic origins of post-European introductions of paper mulberry into Oceania.
The peopling of Remote Oceania by Austronesian speakers (hereafter Austronesians) concludes the last stage of Neolithic human expansion (1–3). Understanding from where, when, and how ancestral Austronesians bridged the unprecedentedly broad water gaps of the Pacific is a fascinating and yet contentious subject in anthropology (1–8). Linguistic, archaeological, and genetic studies have demonstrated the complex nature of the process, where different components that helped to shape Lapita culture in Near Oceania each have their own unique history (1–3). Important evidence points to Taiwan as an Austronesian ancestral homeland with a more distant origin in South China (S China) (3, 4, 9–12), whereas alternative models suggest S China to North Vietnam (N Vietnam) (7) or a Southeast Asian (SE Asian) origin based mainly on human genetic data (5). The complexity of the subject is further manifested by models theorizing how different spheres of interaction with Near Oceanic indigenous populations during Austronesian migrations have contributed to the origin of Lapita culture (1–3), ranging from the “Express Train” model, assuming fast migrations from S China/Taiwan to Polynesia with limited interaction (4), to the models of “Slow Boat” (5) or “Voyaging Corridor Triple I,” in which “Intrusion” of slower Austronesian migrations plus the “Integration” with indigenous Near Oceanic cultures had resulted in the “Innovation” of the Lapita cultural complex (2, 13).
Human migration entails complex skills of organization and cultural adaptations of migrants or colonizing groups (1, 3). Successful colonization of resource-poor islands in Remote Oceania involved conscious transport of a number of plant and animal species critical for both the physical survival of the settlers and their cultural transmission (14).
In the process of Austronesian expansion into Oceania, a number of animals (e.g., chicken, pigs, rats, and dogs) and plant species (e.g., bananas, breadfruit, taro, yam, paper mulberry, etc.), either domesticated or managed, were introduced over time from different source regions (3, 8, 15). Although each of these species has been shown to have a different history (8), all these “commensal” species were totally dependent upon humans for dispersal across major water gaps (6, 8, 16). The continued presence of these species as living populations far outside their native ranges represents legacies of the highly skilled seafaring and navigational abilities of the Austronesian voyagers.
Given their close association to and dependence on humans for their dispersal, phylogeographic analyses of these commensal species provide unique insights into the complexities of Austronesian expansion and migrations (6, 8, 17). This “commensal approach,” first used to investigate the transport of the Pacific rat Rattus exulans (6), has also been applied to other intentionally transported animals such as pigs, chickens, and the tree snail Partula hyalina, as well as to organisms transported accidentally, such as the moth skink Lipinia noctua and the bacterial pathogen Helicobacter pylori (see refs. 2, 8 for recent reviews).
Ancestors of Polynesian settlers transported and introduced a suite of ∼70 useful plant species into the Pacific, but not all of these reached the most isolated islands (15). Most of the commensal plants, however, appear to have geographic origins on the Sahul Plate rather than being introduced from the Sunda Plate or East Asia (16). For example, Polynesian breadfruit (Artocarpus altilis) appears to have arisen over generations of vegetative propagation and selection from Artocarpus camansi that is found wild in New Guinea (18). Kava (Piper methysticum), cultivated for its sedative and anesthetic properties, is distributed entirely within Oceania, from New Guinea to Hawaii (16). On the other hand, ti (Cordyline fruticosa), also a multifunctional plant in Oceania, has no apparent “native” distribution of its own, although its high morphological diversity in New Guinea suggests its origin there (19). Other plants have a different history, such as sweet potato, which is of South American origin and was first introduced into Oceania in pre-Columbian times and secondarily transported across the Pacific by Portuguese and Spanish voyagers via historically documented routes from the Caribbean and Mexico (17).
Of all commensal species introduced to Remote Oceania as part of the “transported landscapes” (1), paper mulberry (Broussonetia papyrifera; also called Wauke in Hawaii) is the only species that has a temperate to subtropical East Asian origin (15, 20, 21). As a wind-pollinated, dioecious tree species with globose syncarps of orange–red juicy drupes dispersed by birds and small mammals, paper mulberry is common in China, Taiwan, and Indochina, growing and often thriving in disturbed habitats (15, 20, 21). Because of its long fiber and ease of preparation, paper mulberry contributed to the invention of papermaking in China in A.D. 105 and continues as a prime source for high-quality paper (20, 21). In A.D. 610, this hardy tree species was introduced to Japan for papermaking (21). Subsequently it was also introduced to Europe and the United States (21). Paper mulberry was introduced to the Philippines for reforestation and fiber production in A.D. 1935 (22).
In these introduced ranges, paper mulberry often becomes naturalized and invasive (20–22). In Oceania, linguistic evidence suggests strongly an ancient introduction of paper mulberry (15, 20); its propagation and importance across Remote Oceanic islands were well documented in Captain James Cook’s first voyage as the main material for making barkcloth (15, 20).
Barkcloth, generally known as tapa (or kapa in Hawaii), is a nonwoven fabric used by prehistoric Austronesians (15, 21). Since the 19th century, daily uses of barkcloth have declined and were replaced by introduced woven textiles; however, tapa remains culturally important for ritual and ceremony in several Pacific islands such as Tonga, Fiji, Samoa, and the SE Asian island of Sulawesi (23). The symbolic status of barkcloth is also seen in recent revivals of traditional tapa making in several Austronesian cultures such as Taiwan (24) and Hawaii (25). To make tapa, the inner bark is peeled off and the bark pieces are expanded by pounding (20, 21, 23). Many pieces of the bark are assembled and felted together via additional poundings to create large textiles (23). The earliest stone beaters, a distinctive tool used for pounding bark fiber, were excavated in S China from a Late Paleolithic site at Guangxi dating back to ∼8,000 y B.P. (26) and from coastal Neolithic sites in the Pearl River Delta dating back to 7,000 y B.P. (27), providing the earliest known archaeological evidence for barkcloth making. Stone beaters dated to slightly later periods have also been excavated in Taiwan (24), Indochina, and SE Asia, suggesting the diffusion of barkcloth culture to these regions (24, 27). These archaeological findings suggest that barkcloth making was invented by Neolithic Austric-speaking peoples in S China long before Han-Chinese influences, which eventually replaced proto-Austronesian language as well as culture (27).
In some regions (e.g., Philippines and Solomon Islands), tapa is made of other species of the mulberry family (Moraceae) such as breadfruit and/or wild fig (Ficus spp.); however, paper mulberry remains the primary source of raw material to produce the softest and finest cloth (20, 23). Before its eradication and extinction from many Pacific islands due to the decline of tapa culture, paper mulberry was widely grown across Pacific islands inhabited by Austronesians (15, 20). Both the literature (15, 20) and our own observations (28–30) indicate that extant paper mulberry populations in Oceania are only found in cultivation or as feral populations in abandoned gardens as on Rapa Nui (Easter Island), with naturalization only known from the Solomon Islands (20). For tapa making, its stems are cut and harvested before flowering, and, like the majority of Polynesian-introduced crops (16), paper mulberry is propagated clonally by cuttings or root shoots (15, 20), reducing the possibility of fruiting and dispersal via seeds. The clonal nature of the Oceanic paper mulberry has been shown by the lack of genetic variability in nuclear internal transcribed spacer (ITS) DNA sequences (31). With a few exceptions (30), some authors suggest that only male trees of paper mulberry were introduced to Remote Oceania in prehistoric times (15, 20).
Furthermore, because paper mulberry has no close relative in Near and Remote Oceania (20), the absence of sexual reproduction precludes the possibility of introgression and makes paper mulberry an ideal commensal species to track Austronesian migrations (6, 30).
To increase our understanding of the prehistoric Austronesian expansion and migrations, we tracked geographic origins of Oceanic paper mulberry, the only Polynesian commensal plant likely originating in East Asia, using DNA sequence variation of the maternally inherited (32) and hypervariable (SI Text) chloroplast ndhF-rpl32 intergenic spacer (33). We sampled broadly in East Asia (Taiwan, S China, and Japan) and SE Asia (Indochina, the Philippines, and Sulawesi) as well as Oceanic islands where traditional tapa making is still practiced. Historical herbarium collections (A.D. 1899–1964) of Oceania were also sampled to strengthen inferences regarding geographic origins of Oceanic paper mulberry. The employment of ndhF-rpl32 sequences and expanded sampling greatly increased phylogeographic resolution not attainable in a recent study (31) using nuclear ITS sequences (also see SI Text and Fig. S1) and intersimple sequence repeat (ISSR) markers with much smaller sampling.
Fig. S1. ITS haplotype network (n = 17, A–Q) and haplotype distribution and frequency. The haplotype network was reconstructed using TCS (34), with alignment gaps treated as missing data. The sizes of the circles and pie charts are proportional to the frequency of the haplotype (shown in parentheses). Squares denote unique haplotypes (haplotype found only in one individual).
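As an illustration of the haplotype-based reasoning above, the sketch below collapses a set of aligned chloroplast sequences into haplotypes and tallies their frequencies per region, the basic step behind identifying a predominant haplotype such as cp-17. The sequences and region labels are made up for the example; real analyses would work from aligned ndhF-rpl32 sequences and dedicated phylogeography software.

```python
from collections import Counter, defaultdict

# Toy aligned chloroplast sequences (same length) with a sampling region for each.
samples = [
    ("Taiwan",         "ATGCTACGGA"),
    ("Taiwan",         "ATGCTACGGA"),
    ("S_China",        "ATGCTTCGGA"),
    ("Near_Oceania",   "ATGCTACGGA"),
    ("Remote_Oceania", "ATGCTACGGA"),
    ("Remote_Oceania", "ATGCTACGGA"),
    ("Indochina",      "ATGCGACGGA"),
]

# Assign a haplotype ID to each distinct sequence, in order of first appearance.
haplotype_ids = {}
regions_by_haplotype = defaultdict(Counter)
for region, seq in samples:
    hap = haplotype_ids.setdefault(seq, f"cp-{len(haplotype_ids) + 1}")
    regions_by_haplotype[hap][region] += 1

# Report haplotype frequencies and where each haplotype was sampled.
for hap, region_counts in regions_by_haplotype.items():
    total = sum(region_counts.values())
    print(f"{hap}: n={total}, regions={dict(region_counts)}")
```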

20.
Since the late 1970s, satellite-based instruments have monitored global changes in atmospheric temperature. These measurements reveal multidecadal tropospheric warming and stratospheric cooling, punctuated by short-term volcanic signals of reverse sign. Similar long- and short-term temperature signals occur in model simulations driven by human-caused changes in atmospheric composition and natural variations in volcanic aerosols. Most previous comparisons of modeled and observed atmospheric temperature changes have used results from individual models and individual observational records. In contrast, we rely on a large multimodel archive and multiple observational datasets. We show that a human-caused latitude/altitude pattern of atmospheric temperature change can be identified with high statistical confidence in satellite data. Results are robust to current uncertainties in models and observations. Virtually all previous research in this area has attempted to discriminate an anthropogenic signal from internal variability. Here, we present evidence that a human-caused signal can also be identified relative to the larger “total” natural variability arising from sources internal to the climate system, solar irradiance changes, and volcanic forcing. Consistent signal identification occurs because both internal and total natural variability (as simulated by state-of-the-art models) cannot produce sustained global-scale tropospheric warming and stratospheric cooling. Our results provide clear evidence for a discernible human influence on the thermal structure of the atmosphere.
Global changes in the physical climate system are driven by both internal variability and external influences (1, 2). Internal variability is generated by complex interactions of the coupled atmosphere–ocean system, such as the well-known El Niño/Southern Oscillation. External influences include human-caused changes in well-mixed greenhouse gases, stratospheric ozone, and other radiative forcing agents, as well as natural fluctuations in solar irradiance and volcanic aerosols. Each of these external influences has a unique “fingerprint” in the detailed latitude/altitude pattern of atmospheric temperature change (3–8). The use of such fingerprint information has proved particularly useful in separating human, solar, and volcanic influences on climate, and in discriminating between externally forced signals and internal variability (3–7).
We have two main scientific objectives. The first is to consider whether a human-caused fingerprint can be identified against the “total” natural variability arising from the combined effects of internal oscillatory behavior, solar irradiance changes, and fluctuations in atmospheric loadings of volcanic aerosols. To date, only one signal detection study (involving hemispheric-scale surface temperature changes) has relied on estimates of total natural variability (9). All other pattern-based fingerprint studies have tested against internal variability alone (2, 4–7, 10, 11). When fingerprint investigations use information from simulations with natural external forcing, it is typically for the purpose of ascertaining whether model-predicted solar and volcanic signals are detectable in observational climate records, and whether the amplitude of the model signals is consistent with observed estimates of signal strength (7, 12, 13).
We are addressing a different statistical question here.
We seek to determine whether observed changes in the large-scale thermal structure of the atmosphere are truly unusual relative to the best current estimates of the total natural variability of the climate system. The significance testing framework applied here is highly conservative. Our estimates incorporate variability information from 850 AD to 2005, and sample substantially larger naturally forced changes in volcanic aerosol loadings and solar irradiance than have been observed over the satellite era.
Our second objective is to examine the sensitivity of fingerprint results to current uncertainties in models and observations. With one exception (11), previous fingerprint studies of changes in the vertical structure of atmospheric temperature have used information from individual models. An additional concern is that observational uncertainty is rarely considered in such work (3–7). These limitations have raised questions regarding the reliability of fingerprint-based findings of a discernible human influence on climate (14).
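To illustrate the kind of pattern-based fingerprint test discussed above, the sketch below projects observed and control-run temperature-change patterns onto a model-derived fingerprint and compares the observed projection with the spread of the natural-variability projections. The synthetic patterns, the plain dot-product projection, and the simple exceedance test are simplifying assumptions, not the study's detection method.

```python
import numpy as np

rng = np.random.default_rng(5)
n_lat, n_lev = 36, 17                        # toy latitude/altitude grid
fingerprint = rng.standard_normal((n_lat, n_lev))
fingerprint /= np.linalg.norm(fingerprint)   # unit-norm forced-response pattern

def project(pattern, fp):
    """Signal strength: projection of a temperature-change pattern onto the fingerprint."""
    return float(np.sum(pattern * fp))

# "Observed" change: fingerprint plus noise; "natural" samples: noise only (control runs).
observed = 3.0 * fingerprint + rng.standard_normal((n_lat, n_lev))
natural_samples = rng.standard_normal((500, n_lat, n_lev))

obs_signal = project(observed, fingerprint)
nat_signals = np.array([project(p, fingerprint) for p in natural_samples])

# Signal-to-noise ratio and a simple exceedance test against natural variability.
snr = obs_signal / nat_signals.std()
p_exceed = np.mean(nat_signals >= obs_signal)
print(f"S/N = {snr:.1f}, fraction of natural samples exceeding observation = {p_exceed:.3f}")
```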
