Similar articles
3.
Catastrophic decline of Indigenous populations in the Americas following European contact is one of the most severe demographic events in the history of humanity, but uncertainty persists about the timing and scale of the collapse, which has implications for not only Indigenous history but also the understanding of historical ecology. A long-standing hypothesis that a continent-wide pandemic broke out immediately upon the arrival of Spanish seafarers has been challenged in recent years by a model of regional epidemics erupting asynchronously, causing different rates of population decline in different areas. Some researchers have suggested that, in California, significant depopulation occurred during the first two centuries of the post-Columbus era, which led to a “rebound” in native flora and fauna by the time of sustained European contact after 1769. Here, we combine a comprehensive prehistoric osteological dataset (n = 10,256 individuals) with historic mission mortuary records (n = 23,459 individuals) that together span from 3050 cal BC to AD 1870 to systematically evaluate changes in mortality over time by constructing life tables and conducting survival analysis of age-at-death records. Results show that a dramatic shift in the shape of mortality risk consistent with a plague-like population structure began only after sustained contact with European invaders, when permanent Spanish settlements and missions were established ca. AD 1770. These declines reflect the syndemic effects of newly introduced diseases and the severe cultural disruption of Indigenous lifeways by the Spanish colonial system.

Catastrophic decline of Indigenous populations in the Americas following the arrival of Europeans is arguably one of the most severe demographic collapses in the history of humanity (1–8). While it is generally accepted that diseases from Eurasia and Africa played a significant role in depopulation, scholars have long debated many aspects of the Indigenous population decline, including its pace, timing, and the exact causes of mortality. A long-standing theory suggests that the high mortality rate was influenced most profoundly by a lack of immunity among Native Americans to newly introduced Afro-Eurasian diseases (the so-called "virgin soils theory"; 9–12), but it is increasingly recognized that germs alone do not provide a full explanation for the precipitous die-offs (13, 14). Perhaps more culpable was the cultural chaos that spread through the Americas following European contact, which would have dramatically exacerbated the vulnerability of Indigenous populations (13, 14). Extreme social disruption (14), altered food regimes (15, 16), famine and food insecurity (17), escalating violence (18, 19), forced relocation, land expropriation, enslavement, and captive-taking (20) certainly amplified the deadly potential of new diseases while also increasing mortality independently.

Because introduced pathogens by themselves do not provide a full explanation for Indigenous depopulation, the timing, pace, and magnitude of the decline have long histories of debate. In the mid-twentieth century, scholars suggested that coast-to-coast disease dispersion began almost immediately following the arrival of Columbus and that the demographic collapse was the result of a pandemic or series of pandemics (2, 4, 7), which ultimately contributed to an underestimation of the true precontact Indigenous population of the Americas. Diseases were argued to have reached some regions, such as what is now known as California, before the arrival of Europeans themselves. The size of the precontact human population and the timing of its decline also affect reconstructions of associated ecology used as conservation targets (21), as some suggest there was a rebound in endemic game animals coincident with the rapid decline of Indigenous populations after 1492 (22–24), as well as changes in fire regimes (25, 26), reforestation (27), and altered patterns of carbon sequestration (28, 29).

Recent research has questioned the evidentiary basis for early and extensive post-Columbian population decline because much of it consists of anecdotal and/or circumstantial historic accounts (30–32). A particularly influential bioarchaeological study (33) noted that Native populations were not living in a disease-free environment prior to contact, that the arrival of Europeans did not initiate a sudden pandemic, and that epidemic diseases probably struck different populations at different times. Similar findings concerning health and resiliency were ascribed to Canadian Indigenous populations, where people appeared to suffer severe epidemiological impacts only following sustained contact with Europeans (34). More recent regional studies in the east (e.g., refs. 35 and 36), southeast (37), and southwest (38) also report evidence for severe Native population decline mostly after the establishment of an enduring European presence. In northwestern North America, historic studies on the Columbia River (39) report disease-induced population decline only around the late eighteenth century, after Spanish missions were established in California.
Subsequently, a continent-wide spatial meta-analysis of archaeological and historic evidence for the timing of disease spread and population decline found that while most populations experienced significant losses from disease only after sustained contact with Europeans, there was evidence in some regions for impacts prior to sustained contact, and that disease dispersal in North America is probably best characterized as a series of regional epidemics rather than a continent-wide pandemic (40). The meta-analysis had lighter coverage for western North America and did not include systematic records of mortality, which are ultimately required to fully understand the impact of the European invasion.

For California, a largely circumstantial case for severe disease-induced population decline beginning in the sixteenth century was advanced decades ago (41–44). Alternatively, scholars have suggested that connections to Mexico via the Puebloan Southwest or contacts from European seafarers were sufficient to spread disease (42) and effect depopulation. Spanish explorer Juan Rodríguez Cabrillo made first contact with Native southern Californians by sea in 1542. After Cabrillo, there were four known European sea voyages before 1769 that included Native contacts (Fig. 1): Francis Drake in 1579, de Unamuno in 1587, Cermeño in 1595, and Vizcaíno in 1602 and 1603, although there may have been additional unrecorded, occasional contacts with Manila galleons that sailed along the California coast ca. 1566 to 1821. The Spanish also began establishing missions in southern Baja California in 1697 and were working their way northward, but the effort ended in 1767 with the establishment of Mission Santa Maria de Los Angeles, 340 km south of Alta California (Fig. 1). Sustained contact began in what is today California with the Portolá overland expedition in 1769, which made its way first to San Diego and eventually to San Francisco Bay. The expedition led to the establishment of Mission San Diego in 1769 and the first mission in central California (Mission San Carlos de Borromeo) in 1770.

Fig. 1. Pre-1769 routes of colonial contact and California missions. (Inset) Northern California archaeological sites and Spanish missions in the current study.

While it is possible that some infectious diseases were introduced into California as a result of pre-1769 contacts, the likelihood that they precipitated a radical reduction in Indigenous populations has proven difficult to test with bioarchaeological or other empirical evidence, largely because the diseases thought most responsible for mortality do not leave an enduring skeletal signature (15, 45). A comprehensive archaeological study of the Yosemite Valley in central California (Fig. 1) that examined a variety of proxy data found evidence for disease-induced population decline prior to direct interaction between Indigenous and nonnative people, but dating to 1790 to 1800, decades after Spanish missions were established in the Coast Ranges 200 km to the west (46). Such a late date suggests that much of California may well have been relatively isolated from disease outbreaks during the sixteenth through eighteenth centuries. Here, we attempt to systematically evaluate the impacts of pre-1769 diseases in order to determine whether such a late date for initial depopulation applies to the whole of central California.
We analyzed the single largest systematic dataset of mortality records yet compiled across North America, one that couples archaeological and historic data, in order to contribute to an overarching portrait of regional epidemics driving decline among Indigenous populations after the establishment of enduring European settlements. We systematically evaluated the impact of pre-1769 diseases by conducting survival analysis (refs. 47–49; SI Appendix) of age-at-death records of 33,715 Native people who lived on tribal lands within the area now referred to as central California between 5000 and 150 cal B.P. (AD 1870). These records come from 10,256 human burials from 252 archaeological sites (Datasets S1 and S2) and 23,459 historic records (Dataset S3) kept by Spanish missionaries at 10 central California missions (Fig. 1) dating between AD 1770 and 1825. We then compared the resulting survival curves, estimated mean ages at death, and hazard ratios with historic plague populations (50) and simulated (51) records of stable and plague populations.
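To make the method concrete, the sketch below builds an abridged life table from pooled age-at-death records, the basic computation behind the survival curves and mean ages at death described above. It is a generic illustration only: the 5-y age classes, helper names, and synthetic gamma-distributed ages are our assumptions, not the authors' code or data (their records are in Datasets S1–S3).

```python
# Abridged life-table sketch from pooled age-at-death records (generic
# illustration; the age-class width and synthetic ages are assumptions).
import numpy as np

def life_table(ages_at_death, width=5, max_age=80):
    """Survivorship l(x) and age-specific mortality q(x) per age class."""
    edges = np.arange(0, max_age + width, width)
    dx, _ = np.histogram(ages_at_death, bins=edges)          # deaths per class
    total = dx.sum()
    entering = total - np.concatenate(([0], np.cumsum(dx)[:-1]))
    lx = entering / total                                    # survivorship
    qx = np.divide(dx, entering, out=np.zeros(dx.size), where=entering > 0)
    return edges[:-1], lx, qx

# Hypothetical ages, only to exercise the function.
rng = np.random.default_rng(0)
ages = rng.gamma(shape=2.0, scale=15.0, size=1000).clip(0, 79)
x, lx, qx = life_table(ages)
print(f"mean age at death: {ages.mean():.1f} y")
for a, l, q in zip(x, lx, qx):
    print(f"{a:2d}-{a + 4:<2d}  l(x)={l:.3f}  q(x)={q:.3f}")
```

A plague-like mortality structure shows up in such tables as elevated q(x) across young-adult age classes relative to a stable population, which is the comparison the survival analysis formalizes.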

5.
Existing radiocarbon (14C) dates on American mastodon (Mammut americanum) fossils from eastern Beringia (Alaska and Yukon) have been interpreted as evidence they inhabited the Arctic and Subarctic during Pleistocene full-glacial times (∼18,000 14C years B.P.). However, this chronology is inconsistent with inferred habitat preferences of mastodons and correlative paleoecological evidence. To establish a last appearance date (LAD) for M. americanum regionally, we obtained 53 new 14C dates on 36 fossils, including specimens with previously published dates. Using collagen ultrafiltration and single amino acid (hydroxyproline) methods, these specimens consistently date to beyond or near the ∼50,000 y B.P. limit of 14C dating. Some erroneously "young" 14C dates are due to contamination by exogenous carbon from natural sources and conservation treatments used in museums. We suggest mastodons inhabited the high latitudes only during warm intervals, particularly the Last Interglacial [Marine Isotope Stage (MIS) 5], when boreal forests existed regionally. Our 14C dataset suggests that mastodons were extirpated from eastern Beringia during the MIS 4 glacial interval (∼75,000 y ago), following the ecological shift from boreal forest to steppe tundra. Mastodons thereafter became restricted to areas south of the continental ice sheets, where they suffered complete extinction ∼10,000 14C years B.P. Mastodons were already absent from eastern Beringia several tens of millennia before the first humans crossed the Bering Isthmus or the onset of climate changes during the terminal Pleistocene. Local extirpations of mastodons and other megafaunal populations in eastern Beringia were asynchronous and independent of their final extinction south of the continental ice sheets.

Last appearance dates (LADs) are crucial for evaluating hypotheses regarding the timing and causes of species disappearance in the fossil record (1). In principle, species extinction and local extirpation chronologies can be rigorously established by determining LADs using a variety of radiometric dating methods. In practice, however, problematic and incomplete chronological data inevitably affect the precision and accuracy of LADs. This may in turn affect how potential extinction mechanisms are evaluated. A case in point is the radiocarbon (14C) record of Pleistocene American mastodon (Mammut americanum) fossils from the unglaciated regions of Alaska and Yukon in northwest North America, collectively known as eastern Beringia (Fig. 1).

Fig. 1. Known fossil localities of M. americanum across North America [Table S1, with late Pleistocene glacial limits (white) and glacial/pluvial lakes (light blue) following ref. 22]. CIS, Cordilleran Ice Sheet; IIS, Innuitian Ice Sheet; LIS, Laurentide Ice Sheet. Dashed line and question mark denote uncertainty over northwest LIS limits (23). Localities with fossils analyzed in this study in Alaska and Yukon (northwest North America) are designated by red circles. Locality data for American mastodons across the continent, designated as green circles, are from refs. 5–8, 11–17, 19, 20, and 24 (see Table S1) in addition to collections data from the United States National Park Service, Royal Ontario Museum, and Royal British Columbia Museum.

The American mastodon was one of roughly 70 species of mammals in North America that died out during the late Quaternary extinctions (2, 3).
American mastodon appears in the North American fossil record roughly 3.5 million years ago and was the terminal member of a lineage that arose from its presumed ancestor Miomastodon merriami, which had crossed the Bering Isthmus from Eurasia during the middle Miocene (4). Over the course of the late Pleistocene (∼125,000–10,000 y ago), M. americanum became widespread, occupying many parts of continental North America, as well as peripheral areas as mutually remote as the tropics of Honduras and the Arctic coast of Alaska (5–8) (Fig. 1). Despite their Old World roots, and unlike American populations of their distant relative the woolly mammoth (Mammuthus primigenius) (9), there is no evidence that American mastodons managed to cross the Bering Isthmus westward into Eurasia (4).

The rich and well-dated record of American mastodons living in the midlatitudes, particularly near the Great Lakes and Atlantic coast regions, demonstrates they were among the last members of the megafauna to disappear in North America near the end of the Pleistocene (4, 10–13). The current LAD for mastodons south of the former Laurentide and Cordilleran ice sheets is ∼10,000 14C years B.P. (B.P. = years before A.D. 1950), based on enamel and ultrafiltered bone collagen from the Overmyer specimen from northern Indiana (11). Paleoenvironmental data from this region are consistent with the view that American mastodons preferentially inhabited coniferous or mixed forests or lowland swampy habitats in what can plausibly be regarded as the most persistent portion of their late Pleistocene geographic range (14–16) (Fig. 1 and Table S1). Mastodon remains at archeological sites south of the former continental ice sheets underscore the prominent role this mammal species has played in ongoing discussions of the late Quaternary extinctions (17).

American mastodon and woolly mammoth differed in both habitat and dietary preference. As large grazers that relied on grasses and forbs, woolly mammoths were well adapted to semiarid, generally treeless steppe-tundra habitats that were widespread in eastern Beringia during Pleistocene glacial intervals (18). Conversely, mastodons were browsing specialists, relying on woody plants and preferentially inhabiting coniferous or mixed woodlands with lowland swamps (4). The large, bunodont teeth of mastodons were effective at stripping and crushing twigs, leaves, and stems from shrubs and trees (4). Plant remains from purported coprolites and stomach contents found in association with several mastodon skeletons south of the former continental ice sheets include masticated or partially digested stick fragments, twigs, deciduous leaves, conifer needles, and conifer cones (4, 19). In places where American mastodons and woolly mammoths coexisted during the late Pleistocene, stable isotope and other paleoecological data establish that these two proboscideans occupied and exploited distinct environmental niches and did not compete for the same resources (16, 20).

In light of their preferred diet and habitat, the Arctic and Subarctic during the late Pleistocene would seem to be unlikely places for mastodon populations to live. Indeed, their fossils are quite rare; in the course of more than a century of collecting, American mastodon accounts for <5% of all proboscidean fossils recovered in Alaska and Yukon (7). Nevertheless, American mastodons certainly lived at high latitudes, either in small numbers or, more probably, for limited intervals.
The likeliest proposed scenario is that American mastodons occupied the Arctic and Subarctic region only intermittently, during warm Pleistocene interglacial periods, when widespread boreal forests and muskeg wetlands were established (21). During the Wisconsinan glaciation, when much of high-latitude North America was ice covered (22, 23), mastodons were probably absent from the cold, dry, unglaciated refugium of eastern Beringia. Indeed, this hypothesis was put forth decades before the advent of radiocarbon dating (24).

It is thus surprising that the meager published 14C record has complicated, rather than corroborated, the long-standing hypothesis (24) that mastodons were Pleistocene interglacial residents of the Arctic and Subarctic and were absent during glacial periods (7, 25) (Table S2). In contrast, published dates on fossils from the Ikpikpuk River, Alaska (26), and Herschel Island, Yukon (27), are nonfinite (i.e., greater than ∼50,000 y B.P., the effective limit of 14C dating methods) and were interpreted to mean that mastodons lived along the Arctic coast only during the Last Interglacial (MIS 5: ∼125,000–75,000 y ago). If American mastodons did survive in eastern Beringia during the MIS 2 full-glacial period, and possibly even as late as the terminal Pleistocene as suggested by the two finite ages on Yukon molars (7, 25), their local disappearance could then be attributed to the same mechanisms invoked for the continental late Quaternary extinctions, such as human impacts (2, 17), abrupt climate change (28, 29), or extraterrestrial impact (30, 31).

Table 1.

Radiocarbon (14C) dates on American mastodon fossils from Alaska and Yukon
Specimen no. | 14C years B.P. | Lab no. | Details
Previously published radiocarbon dates
 Yukon
  CMN 33897 | 24,980 ± 1300 | Beta 16163 | ref. 7
  YG 33.2 | >45,130 | Beta 189291 | ref. 27
  YG 43.2 | 18,460 ± 350 | TO 7745 | ref. 25
 Alaska
  UAMES 2414 | >50,000 | CAMS 91805 | ref. 26
New radiocarbon dates on previously published Yukon specimens
  CMN 33897 | >51,700 | UCIAMS 78694 | UF
  YG 43.2 | >49,200 | UCIAMS 75320 | UF
New specimens with single radiocarbon dates
 Yukon
  CMN 8707 | >51,700 | UCIAMS 78700 | UF
  CMN 11697 | >51,700 | UCIAMS 78698 | UF
  CMN 15352 | 45,700 ± 2500 | UCIAMS 78695 | UF
  CMN 31898 | 50,300 ± 3500 | UCIAMS 78696 | UF
  CMN 33066 | >49,900 | UCIAMS 78697 | UF
  CMN 42551 | >51,700 | UCIAMS 78699 | UF
  CMN 42552 | >41,100 | UCIAMS 78703 | UF
  F:AM: 104842 | 42,100 ± 1300 | UCIAMS 88773 | UF
  YG 50.1 | >41,100 | AA 84994 | STD
  YG 139.5 | >41,100 | AA 84985 | STD
  YG 357.1 | >41,100 | AA 84995 | STD
  YG 361.9 | >49,900 | UCIAMS 72419 | UF
 Alaska
  F:AM: 103281 | >50,800 | UCIAMS 88775 | UF
  F:AM: 103292 | 46,100 ± 2100 | UCIAMS 88771 | UF
  F:AM: 103295 | >46,900 | UCIAMS 88774 | UF
  UAMES 7666 | >50,800 | UCIAMS 88767 | UF
  UAMES 7667 | 51,300 ± 4000 | UCIAMS 88766 | UF
  UAMES 30197 | >51,700 | UCIAMS 117242 | UF
  UAMES 30198 | >51,200 | UCIAMS 117243 | UF
  UAMES 30199 | >46,400 | UCIAMS 117235 | UF
  UAMES 30200 | 47,000 ± 2300 | UCIAMS 117241 | UF
  UAMES 30201 | >47,500 | UCIAMS 117232 | UF
  UAMES 34126 | >46,100 | UCIAMS 117237 | UF
New specimens with multiple radiocarbon dates
 Yukon
  CMN 333 | 40,600 ± 1000 | UCIAMS 78701 | UF
  CMN 333 | >47,800 | UCIAMS 83803 | UF
  CMN 333 | >49,500 | UCIAMS 83804 | UF
  YG 26.1 | 39,200 ± 3200 | AA 84981 | STD
  YG 26.1 | >50,300 | UCIAMS 78705 | UF
  YG 26.1 | >51,700 | UCIAMS 78704 | UF
 Alaska
  F:AM: 103277 | 29,610 ± 340 | OxA-25402 | UF
  F:AM: 103277 | 33,810 ± 460 | UCIAMS 88772 | UF
  F:AM: 103277 | 47,100 ± 2500 | OxA-X-2490-48 | SAA
  F:AM: 103291 | 35,240 ± 610 | UCIAMS 88776 | UF
  F:AM: 103291 | 42,800 ± 2400 | OxA-X-2515-35 | SAA
  F:AM: 103291 | 44,900 ± 2600 | OxA-X-2515-34 | SAA
  UAMES 2414 | >50,000 | CAMS 91805 | STD
  UAMES 2414 | >48,100 | UCIAMS 117234 | UF
  UAMES 7663 | 20,440 ± 130 | OxA-25401 | UF
  UAMES 7663 | 33,090 ± 470 | UCIAMS 88768 | UF
  UAMES 7663 | 43,000 ± 2200 | OxA-X-2457-7 | SAA
  UAMES 7663 | 48,200 ± 2600 | OxA-X-2492-15 | SAA
  UAMES 9705 | 38,800 ± 1100 | CAMS 53904 | STD
  UAMES 9705 | >51,700 | UCIAMS 117239 | UF
  UAMES 11095 | >51,200 | UCIAMS 117233 | UF
  UAMES 11095 | >54,000 | CAMS 91808 | STD
  UAMES 12047 | 51,700 ± 3200 | CAMS 92090 | STD
  UAMES 12047 | >48,800 | UCIAMS 117240 | UF
  UAMES 12060 | 36,370 ± 790 | AA-48275 | STD
  UAMES 12060 | 49,800 ± 3300 | UCIAMS 117236 | UF
  UAMES 34125 | 31,780 ± 360 | UCIAMS 117238 | UF
  UAMES 34125 | >50,100 | OxA-29838 | UF
All nominally finite radiocarbon dates are reported to 1σ. Specimen collection repositories: CMN, Canadian Museum of Nature; F:AM, Frick Collection of the American Museum of Natural History; UAMES, University of Alaska Museum Earth Sciences Collection; YG, Yukon Government Paleontology Program. Radiocarbon laboratories: AA, Arizona Accelerator Mass Spectrometry Laboratory; Beta, Beta Analytic Radiocarbon Laboratory; CAMS, Lawrence Livermore National Laboratory Center for Accelerator Mass Spectrometry; OxA, Oxford Radiocarbon Accelerator Unit; TO, IsoTrace Laboratory, the Canadian Centre for Accelerator Mass Spectrometry; UCIAMS, University of California, Irvine Keck Carbon Cycle Accelerator Mass Spectrometry Laboratory. Fraction dated: SAA, single amino acid hydroxyproline (35); STD, standard collagen gelatin pretreatment without ultrafiltration (49); UF, ultrafiltered collagen (32, 34).
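The table mixes finite ages with nonfinite (">X") determinations, which must be treated as censored when reasoning about a last appearance date. A minimal parsing sketch follows; the row format is taken from Table 1, but the `parse_c14` helper and the example rows chosen are our own, for illustration only.

```python
# Sketch: split Table 1-style dates into finite vs. nonfinite (">X") values.
# Nonfinite dates are censored at the 14C limit: they bound the record
# from below rather than date it.
import re

def parse_c14(s):
    """Return (age_bp, error, is_finite) from a string like '45,700 ± 2500'."""
    s = s.strip()
    if s.startswith(">"):
        return int(s[1:].replace(",", "")), None, False
    m = re.match(r"([\d,]+)\s*±\s*([\d,]+)", s)
    return int(m.group(1).replace(",", "")), int(m.group(2).replace(",", "")), True

rows = [("CMN 33897", ">51,700"),
        ("CMN 15352", "45,700 ± 2500"),
        ("YG 43.2", "18,460 ± 350")]   # the anomalously "young" published date

for specimen, date in rows:
    age, err, finite = parse_c14(date)
    status = "finite" if finite else "nonfinite (beyond the 14C limit)"
    print(f"{specimen}: {age:,} 14C y B.P. [{status}]")
```

In a conservative LAD workflow, only replicated, contamination-screened finite ages would enter the estimate, which is why the redated specimens above shift the regional record beyond the 14C limit.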

6.
The impact of rapid climate change on contemporary human populations is of global concern. To contextualize our understanding of human responses to rapid climate change it is necessary to examine the archeological record during past climate transitions. One episode of abrupt climate change has been correlated with societal collapse at the end of the northwestern European Bronze Age. We apply new methods to interrogate archeological and paleoclimate data for this transition in Ireland at a higher level of precision than has previously been possible. We analyze archeological 14C dates to demonstrate dramatic population collapse and present high-precision proxy climate data, analyzed through Bayesian methods, to provide evidence for a rapid climatic transition at ca. 750 calibrated years B.C. Our results demonstrate that this climatic downturn did not initiate population collapse and highlight the nondeterministic nature of human responses to past climate change.

Past population collapse in many parts of the world has been attributed to the direct effects of rapid climate change. Key case studies on the collapse of the Anasazi (1) and Mayan (2) civilizations have attracted considerable public interest due to concerns over the threat of climate change to contemporary populations. Recent paleoenvironmental studies have identified a major climate shift across much of northwestern Europe toward the end of the Bronze Age (3, 4). This has been associated with socioeconomic collapse in Ireland (5), northern Britain (6), and central and western Europe (7), and with the expansion of Scythian culture into Europe and eastern Asia (8).

In northwestern Europe, the eighth century calibrated years (cal.) B.C. sees the transition from the Late Bronze Age to the Early Iron Age. Whereas evidence for Late Bronze Age settlement and craft production is widespread, it is notoriously elusive for Early Iron Age communities in many parts of northwestern Europe (5, 9–11), suggesting a reduction in population levels. At the same time, the international exchange networks required to support bronze-based economies appear to break down. To what extent might these changes be linked to the environmental downturn implied by the paleoclimate data?
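The "14C dates as population proxy" approach referenced above is usually implemented as a summed probability distribution (SPD). The sketch below conveys the idea only: real analyses first calibrate each date against an IntCal curve (e.g., in OxCal), whereas here, purely for illustration, we sum uncalibrated Gaussian densities over hypothetical dates.

```python
# Crude summed-probability sketch of the "dates as data" population proxy.
# Real analyses calibrate each 14C date before summing; the dates below are
# hypothetical and uncalibrated, to show the mechanics only.
import numpy as np

dates = [(2450, 30), (2480, 25), (2700, 40), (2720, 35), (2900, 30)]  # 14C BP
grid = np.arange(2200.0, 3200.0)

spd = np.zeros_like(grid)
for mu, sigma in dates:
    spd += np.exp(-0.5 * ((grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
spd /= len(dates)  # normalize to a single probability density

print(f"summed probability peaks near {grid[spd.argmax()]:.0f} 14C y B.P.")
```

A sustained trough in such a curve, after correcting for taphonomic loss and calibration artifacts, is what underlies claims of "dramatic population collapse" from date inventories.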

7.
Policy responses to the COVID-19 outbreak must strike a balance between maintaining essential supply chains and limiting the spread of the virus. Our results indicate a strong positive relationship between livestock-processing plants and local community transmission of COVID-19, suggesting that these plants may act as transmission vectors into the surrounding population and accelerate the spread of the virus beyond what would be predicted solely by population risk characteristics. We estimate the total excess COVID-19 cases and deaths associated with proximity to livestock plants to be 236,000 to 310,000 (6 to 8% of all US cases) and 4,300 to 5,200 (3 to 4% of all US deaths), respectively, as of July 21, 2020, with the vast majority likely related to community spread outside these plants. The association is found primarily among large processing facilities and large meatpacking companies. In addition, we find evidence that plant closures attenuated county-wide cases and that plants that received permission from the US Department of Agriculture to increase their production-line speeds saw more county-wide cases. Ensuring both public health and robust essential supply chains may require an increase in meatpacking oversight and potentially a shift toward more decentralized, smaller-scale meat production.

Among the many challenges posed by the COVID-19 outbreak, maintaining essential supply chains while mitigating community spread of the virus is vital to society. Using county-level data as of July 21, 2020, we test the relationship between one such type of essential activity, livestock processing, and the local incidence of COVID-19 cases. We find that the presence of a slaughtering plant in a county is associated with four to six additional COVID-19 cases per thousand, or a 51 to 75% increase from the baseline rate. We also find an increase in the death rate of 0.07 to 0.1 deaths per thousand people, or 37 to 50% over the baseline rate. Our estimates imply that excess COVID-19 infections and deaths related to livestock plants are 236,000 to 310,000 (6 to 8% of all US cases) and 4,300 to 5,200 (3 to 4% of all US deaths), respectively, with the vast majority occurring among people not working at livestock plants.

We further find the temporary closure of high-risk plants to be followed by lower rates of COVID-19 case growth. We also find that smaller, decentralized facilities do not appear to contribute to transmission and that plants that received permission from the US Department of Agriculture (USDA) to increase their production-line speeds saw more county-wide cases. Our associations hold after controlling for population risk factors and other potential confounders, such as testing rates. Although lacking a natural experiment to cement causality, we employ a combination of empirical tools—including an event study, instrumental variables (IVs), and matching—to support our findings.

The centrality of livestock processing to local economies and national food supplies implies that mitigating disease spread through this channel may take an economic toll. Understanding the public health risk posed by livestock processing is essential for assessing potential impacts of policy action. However, generating case data attributable to livestock plants is challenging: contact tracing in the United States is decentralized and sporadic, and there may be incentives for companies and government bodies to obscure case reporting (15). Our study represents an attempt to address this gap in knowledge.
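A regression of the kind described, county-level case rates on plant presence plus controls, might be set up as sketched below. The variable names, synthetic data, and exact specification are our assumptions; the paper's actual design additionally uses an event study, IVs, and matching.

```python
# Sketch of a county-level association (not the authors' exact specification):
# regress cases per thousand on plant presence plus controls, robust SEs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3000  # roughly the number of US counties
df = pd.DataFrame({
    "plant": rng.integers(0, 2, n),         # livestock plant present?
    "density": rng.lognormal(4.0, 1.0, n),  # population density control
    "tests_per_k": rng.normal(100, 20, n),  # testing-rate control
})
df["cases_per_k"] = (8 + 5 * df["plant"] + 0.5 * np.log(df["density"])
                     + 0.02 * df["tests_per_k"] + rng.normal(0, 3, n))

res = smf.ols("cases_per_k ~ plant + np.log(density) + tests_per_k",
              data=df).fit(cov_type="HC1")
print(res.params["plant"], res.bse["plant"])  # ~5 extra cases per thousand
```

The coefficient on `plant` corresponds to the "four to six additional cases per thousand" quantity reported above, conditional on the included controls.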

8.
It is difficult to overstate the cultural and biological impacts that the domestication of plants and animals has had on our species. Fundamental questions regarding where, when, and how many times domestication took place have been of primary interest within a wide range of academic disciplines. Within the last two decades, the advent of new archaeological and genetic techniques has revolutionized our understanding of the pattern and process of domestication and agricultural origins that led to our modern way of life. In the spring of 2011, 25 scholars with a central interest in domestication representing the fields of genetics, archaeobotany, zooarchaeology, geoarchaeology, and archaeology met at the National Evolutionary Synthesis Center to discuss recent domestication research progress and identify challenges for the future. In this introduction to the resulting Special Feature, we present the state of the art in the field by discussing what is known about the spatial and temporal patterns of domestication, and controversies surrounding the speed, intentionality, and evolutionary aspects of the domestication process. We then highlight three key challenges for future research. We conclude by arguing that although recent progress has been impressive, the next decade will yield even more substantial insights not only into how domestication took place, but also when and where it did, and where and why it did not.

9.
Detection and attribution of past changes in cyclone activity are hampered by biased cyclone records due to changes in observational capabilities. Here we construct an independent record of Atlantic tropical cyclone activity on the basis of storm surge statistics from tide gauges. We demonstrate that the major events in our surge index record can be attributed to landfalling tropical cyclones; these events also correspond with the most economically damaging Atlantic cyclones. We find that warm years in general were more active in all cyclone size ranges than cold years. The largest cyclones are most affected by warmer conditions and we detect a statistically significant trend in the frequency of large surge events (roughly corresponding to tropical storm size) since 1923. In particular, we estimate that Katrina-magnitude events have been twice as frequent in warm years compared with cold years (P < 0.02).
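One simple way to realize a surge index of this kind is to screen tide-gauge residuals for exceedances of a high quantile and then test for a trend in event frequency. The sketch below uses synthetic daily data and our own threshold choice; it conveys the general idea, not the authors' procedure.

```python
# Toy surge-index sketch: count days per year exceeding a high quantile of
# the weather-driven sea-level residual, then fit a linear trend in frequency.
# Data and the 99th-percentile threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
years, days = 90, 365
residual = (rng.normal(0, 1, (years, days))
            + np.linspace(0, 0.5, years)[:, None])   # small secular increase

threshold = np.quantile(residual, 0.99)              # "large surge" cutoff
events_per_year = (residual > threshold).sum(axis=1)

slope = np.polyfit(np.arange(years), events_per_year, 1)[0]
print(f"trend in event frequency: {slope:+.3f} events/y per year")
```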

10.
A numerical algorithm is applied to the Greenland Ice Sheet Project 2 (GISP2) dust record from Greenland to remove the abrupt changes in dust flux associated with the Dansgaard-Oeschger (D-O) oscillations of the last glacial period. The procedure is based on the assumption that the rapid changes in dust are associated with large-scale changes in atmospheric transport and implies that D-O oscillations (in terms of their atmospheric imprint) are more symmetric in form than can be inferred from Greenland temperature records. After removal of the abrupt shifts the residual, dejumped dust record is found to match Antarctic climate variability with a temporal lag of several hundred years. It is argued that such variability may reflect changes in the source region of Greenland dust (thought to be the deserts of eastern Asia). Other records from this region and more globally also reveal Antarctic-style variability and suggest that this signal is globally pervasive. This provides the potential basis for suggesting a more important role for gradual changes in triggering more abrupt transitions in the climate system.
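A "dejumping" procedure of the sort described can be sketched as follows: flag abrupt shifts as outliers in the first difference of the series and subtract the accumulated step offsets. The MAD-based threshold below is our assumption, not the paper's algorithm.

```python
# Dejumping sketch: detect abrupt shifts as outliers in the first difference,
# then remove the accumulated step offsets to leave the gradual signal.
import numpy as np

def dejump(x, k=6.0):
    d = np.diff(x)
    mad = np.median(np.abs(d - np.median(d)))        # robust scale of diffs
    steps = np.where(np.abs(d) > k * mad, d, 0.0)    # keep only abrupt jumps
    return x - np.concatenate(([0.0], np.cumsum(steps)))

# synthetic record: slow oscillation plus two abrupt D-O-like shifts
t = np.linspace(0, 10, 500)
x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
x[200:] += 3.0
x[350:] -= 3.0
print(f"std before: {x.std():.2f}, after dejumping: {dejump(x).std():.2f}")
```

The residual series after this operation is the "dejumped" record that the paper compares against Antarctic climate variability.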

11.
Forage fish support the largest fisheries in the world but also play key roles in marine food webs by transferring energy from plankton to upper trophic-level predators, such as large fish, seabirds, and marine mammals. Fishing can, thereby, have far-reaching consequences on marine food webs unless safeguards are in place to avoid depleting forage fish to dangerously low levels, where dependent predators are most vulnerable. However, disentangling the contributions of fishing vs. natural processes on population dynamics has been difficult because of the sensitivity of these stocks to environmental conditions. Here, we overcome this difficulty by collating population time series for forage fish populations that account for nearly two-thirds of global catch of forage fish to identify the fingerprint of fisheries on their population dynamics. Forage fish population collapses shared a set of common and unique characteristics: high fishing pressure for several years before collapse, a sharp drop in natural population productivity, and a lagged response to reduce fishing pressure. A lagged response to natural productivity declines can sharply amplify the magnitude of naturally occurring population fluctuations. Finally, we show that the magnitude and frequency of collapses are greater than expected from natural productivity characteristics and are therefore likely attributable to fishing. The durations of collapses, however, were not different from those expected based on natural productivity shifts. A risk-based management scheme that reduces fishing when populations become scarce would protect forage fish and their predators from collapse with little effect on long-term average catches.

Forage fish are small pelagic fish, such as herrings, anchovies, and sardines, that provide multiple benefits to people and marine food webs. These species support the largest fisheries in the world, accounting for 30% of global fisheries landings by weight and benefiting aquaculture and livestock industries through the production of fish meal and fish oil (1). At the same time, these species are important for marine food webs, because they provide a key linkage from lower trophic-level planktonic species to upper trophic-level predators, such as large fish, seabirds, and marine mammals (2–4). These predators also have economic value through fisheries (2), tourism (5), or nonmarket existence values (6). Collapses of forage fish populations, which have been frequent (7, 8), can therefore generate widespread ecological effects (9–11). Because of these concerns, there is a growing movement to develop and apply robust management approaches to forage fisheries to avoid the risk of fisheries-induced stock collapses and attendant ecological consequences (11, 12).

One of the principal challenges in assessing the ecological consequences of forage fish fisheries is that these stocks undergo large cyclical fluctuations in abundance (13, 14) (Fig. 1). Fishing can potentially exacerbate naturally caused collapses, because shifts in populations' spatial distributions coupled with fish schooling behavior allow fisheries to remain economically viable even when abundance is low (7, 15). Because of these fluctuations, standard static reference points used to judge stock status [e.g., unfished biomass or the biomass that maximizes long-term sustainable yield] have little meaning for the management of forage fish stocks.
Most reference points are based on a presumed relation between population production and population biomass, but such a relationship rarely exists among these populations (Fig. S1). Moreover, these fluctuations greatly reduce our ability to ascertain effects of fishing on stock dynamics (16) and, by extension, effects of fishing on dependent predators. Some have concluded that fishing acts primarily to accelerate population collapses that were destined to occur because of natural processes (7). To date, it has not been possible to determine whether fishing also makes collapses more frequent, more severe, or more prolonged.

Fig. 1. Examples of forage fish biomass trends showing magnitudes and characteristics of population fluctuations. Dotted lines denote the long-term mean biomass for each stock, and horizontal and vertical bars show time and biomass scale (expressed as a ratio of annual biomass to mean biomass), respectively. Time series are not aligned according to actual start and end date; β is the Fourier spectral scaling exponent, where variance scales with frequency as f^−β. Five stocks show the range of population fluctuations from extreme long-term (Tsushima Strait Pilchard) to short-term (Atlantic Menhaden) variability. Across all 40 stocks for which there were sufficiently long biomass time series to estimate β, the average coefficient of variation (CV) and β were 0.5 and 1.9, respectively. For comparison, a common decadal-scale environmental index, the Pacific Decadal Oscillation (33), has β near 1.0.

Here, we contribute to understanding the ecological consequences of forage fish fisheries by asking how fishing has affected population characteristics that are most relevant for dependent predators. Predators are most sensitive to changes in forage fish abundance when forage abundance is low (9), and we therefore focus on the effects of fishing with respect to the magnitude (scale of fluctuation), frequency (proportion of stocks at low abundance), and duration (number of years until recovery) of stock collapse. We compiled time series of population biomass and fisheries catches for stocks around the globe from stock assessments, restricting our analysis to 55 stocks with a time series that spanned at least 25 y (Table S1). Forage fish stocks used in this analysis included anchovies, capelin, herrings, mackerels, menhaden, sand eels, and sardines, which since 2000 supported average annual catches of 17 million tons y⁻¹ and comprised 65% of global forage fish catches (17).
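The spectral exponent β used in the figure caption can be estimated from the log-log slope of a periodogram, as in the sketch below; the synthetic random-walk series (expected β ≈ 2) is for illustration only.

```python
# Estimate the spectral scaling exponent beta (variance ~ f^-beta) from the
# log-log slope of a raw periodogram.
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=512))                 # reddened test series

freqs = np.fft.rfftfreq(x.size, d=1.0)[1:]          # drop the zero frequency
power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2  # raw periodogram

beta = -np.polyfit(np.log(freqs), np.log(power), 1)[0]
print(f"estimated beta ~ {beta:.2f} (random-walk expectation: 2)")
```

Higher β means variance is concentrated at low frequencies, i.e., long, slow swings in biomass of the kind shown for the Tsushima Strait Pilchard.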

12.
Direct, accurate, and precise dating of archaeological pottery vessels is now achievable using a recently developed approach based on the radiocarbon dating of purified molecular components of food residues preserved in the walls of pottery vessels. The method targets fatty acids from animal fat residues, making it uniquely suited for directly dating the inception of new food commodities in prehistoric populations. Here, we report a large-scale application of the method by directly dating the introduction of dairying into Central Europe by the Linearbandkeramik (LBK) cultural group based on dairy fat residues. The radiocarbon dates (n = 27), clustering in the 54th century BC across the western and eastern expansions of the LBK, suggest that dairy exploitation arrived with the first settlers in the respective regions rather than being adopted gradually later. This is particularly significant, as contemporaneous LBK sites showed an uneven distribution of dairy exploitation. Significantly, our findings demonstrate the power of directly dating the introduction of new food commodities, hence removing the taphonomic uncertainties that arise when this is assessed indirectly from associated cultural materials or other remains.

The introduction of new food commodities into the human diet at the very beginnings of plant and animal domestication is one of the most critical questions in the Neolithization process, having far-reaching consequences for human evolution and environmental change. Of major importance is milk exploitation, as it relates to animal domestication but also to the ability of adult humans to digest lactose (1, 2). Clearly, identifying the beginnings of the exploitation of domesticated animals for their secondary products (i.e., those obtained during the life of animals, such as milk, wool, or blood) as opposed to primary products (i.e., those obtained by the death of the animal, such as meat, skin, teeth, or horn) makes it extremely important to establish when and how dairying began (3, 4). Directly dating the introduction of a new food commodity is nonetheless challenging.

Evidence for dairy exploitation in prehistory can be interpreted from iconography, diagnostic ceramics, or domesticated animal slaughter patterns based on sex and age (3, 4). Additionally, direct evidence for dairy exploitation can be derived from lipid analyses of food residues preserved in pottery vessels. By determining the stable carbon isotope values of the two fatty acids (FAs) (C16:0 and C18:0) characteristic of degraded animal fats, dairy products can be distinguished from carcass products (5). Recent combined lipid residue analyses of pottery vessels and animal management assessments based on faunal remains (stable isotopes, butchery practices, kill-off patterns, and calving patterns) have provided invaluable knowledge of early dairying practices at archaeological sites. Currently, the earliest evidence for milk use from lipid residues and faunal assemblages recovered during the Neolithic was found in Anatolia during the 7th millennium BC (6), from several regions in the Balkans, eastern Europe, and the Mediterranean during the 6th millennium BC (7–12), in Saharan Africa (Libya and Algeria) during the 5th millennium BC (13–15), from the beginning of the Neolithic in Britain, Ireland, and Scandinavia during the 4th millennium BC (5, 16–19), and in the Baltic countries during the 3rd millennium BC (16). The dates of the introduction of dairying in these regions have been established largely indirectly, based on associated materials (e.g., animal bone collagen, charcoal, charred seeds, etc.) recovered from the same archaeological contexts as the pottery yielding milk fat residues. However, uncertainties exist with indirect dating due to possible intrusion or residuality of datable materials, resulting from the disturbance of archaeological layers and the requirement for the datable materials to be short-lived and truly contemporaneous in date with the pottery vessels containing the dairy residues.

Thus, the application of recently developed methods for the direct dating of lipids from pottery food residues offers a unique approach to obtain accurate and precise dates for the introduction of new food commodities. The direct 14C dating of dairy fat residues avoids all the aforementioned uncertainties, offering an unprecedented opportunity to accurately date the start of dairying practices. At the University of Bristol, United Kingdom, we recently reported a method for radiocarbon dating pottery vessels from their absorbed food residues. Our compound-specific radiocarbon analysis (CSRA) approach is based on the isolation of the C16:0 and C18:0 FAs from the clay matrix and freeing them from exogenous organic contaminants (20, 21).
We have successfully applied this approach to a small number of dairy residues from the Libyan Sahara and Central Europe, with one of the oldest dated dairy residues coming from the 6th millennium BC in the Balkans (11, 22). Hence, this dating method offers the opportunity to directly date residues identified as dairy fats based on the compound-specific δ13C values of the C16:0 and C18:0 FAs, avoiding taphonomic uncertainties arising from dating associated materials.

In this paper, we focus on the Linearbandkeramik (LBK) culture, the first farming society in Central Europe, which emerged and expanded over much of northern Europe in the middle of the 6th millennium BC (23). This culture has been divided into five main phases: Earliest (I), Early (II), Middle (III), Late (IV), and Final (V) LBK, known as the Meier-Arendt chronology, whose timing and evolution differed among the different regions of the LBK (24). Hence, the ceramic phases discussed in the remainder of this paper use the regional and site classifications for the chronology of earliest, early, middle, and Late LBK, which are not necessarily contemporaneous. For example, phase I in Poland and phase I in Cuiry-lès-Chaudardes refer to the Earliest and Late LBK phases, respectively, in the Meier-Arendt chronology.

Dairy residues were identified in varying quantities at LBK sites across Central Europe. Some sites show only a weak dairy signal (1 to 2 potsherds only), while others display much higher recovery, with over 20% of the residues displaying dairy fat molecular and carbon isotope characteristics. These results emphasize the spatial disparity in the exploitation of cattle and caprines for their milk in this period. We do not exclude the possibility that the use of organic containers other than clay vessels for dairy products at some sites may affect the overall dairy lipid recovery observed. Diachronic studies in certain regions also revealed dairy practices evolving from being nonexistent or at very low levels at LBK sites to becoming much more abundant in the following Middle Neolithic cultures [e.g., the Rössen culture in Lower Alsace, France (22), or the Funnel Beaker culture at the site of Kopydłowo, Poland (25)]. Dating of dairy residues recovered from the earliest phases of the sites would provide calendar ages for the emergence of dairying between LBK regions based directly on the commodity itself rather than on associated materials. Critically, some sites cannot be dated by conventional materials due to their poor preservation, while at other sites where dairy evidence is scarce, the possibility exists for a false-positive signal arising from stratigraphic perturbations. In reporting here the application of our recently developed CSRA method to a wide range of potsherds, we begin to resolve the timing of the appearance of dairying practices by LBK farmers during the Neolithic in the diverse regions of the settlements.
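The dairy/carcass discrimination mentioned above rests on the difference Δ13C = δ13C(C18:0) − δ13C(C16:0) of the two fatty acids. The sketch below encodes that logic; the −3.1‰ cutoff is a commonly cited convention in the residue literature, not a value taken from this paper, and the sherd measurements are hypothetical.

```python
# Fat-source classification logic used in lipid residue studies:
# Delta13C = d13C(C18:0) - d13C(C16:0); strongly negative values mean dairy.
# The -3.1 permil cutoff is an assumed convention; sherd data are hypothetical.
DAIRY_CUTOFF = -3.1  # permil

def classify(d13c_c16, d13c_c18):
    delta = d13c_c18 - d13c_c16
    return delta, ("ruminant dairy fat" if delta < DAIRY_CUTOFF
                   else "carcass (adipose) fat")

for sherd, c16, c18 in [("LBK-001", -29.0, -33.5), ("LBK-002", -27.5, -28.0)]:
    delta, kind = classify(c16, c18)
    print(f"{sherd}: Delta13C = {delta:+.1f} permil -> {kind}")
```

Only residues classified as dairy by this kind of criterion are candidates for the compound-specific 14C dating the paper reports.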

13.
Agricultural expansion into subtropical and tropical forests causes major environmental damage, but its wider social impacts often remain hidden. Forest-dependent smallholders are particularly strongly impacted, as they crucially rely on forest resources, are typically poor, and often lack institutional support. Our goal was to assess forest-smallholder dynamics in relation to expanding commodity agriculture. Using high-resolution satellite images across the entire South American Gran Chaco, a global deforestation hotspot, we digitize individual forest-smallholder homesteads (n = 23,954) and track their dynamics between 1985 and 2015. Using a Bayesian model, we estimate 28,125 homesteads in 1985 and show that forest smallholders occupy much larger forest areas (>45% of all Chaco forests) than commonly appreciated and increasingly come into conflict with expanding commodity agriculture (18% of homesteads disappeared; n = 5,053). Importantly, we demonstrate an increasing ecological marginalization of forest smallholders, including a substantial forest resource base loss in all Chaco countries and an increasing confinement to drier regions (Argentina and Bolivia) and less accessible regions (Bolivia). Our transferable and scalable methodology puts forest smallholders on the map and can help to uncover the land-use conflicts at play in many deforestation frontiers across the globe. Such knowledge is essential to inform policies aimed at sustainable land use and supply chains.

Smallholders produce about one-third of all crops globally, manage one-quarter of the global agricultural area, and are key to food security in low-income countries around the world (1, 2). Despite their importance, however, smallholders remain widely overlooked in policy making (3). This is particularly so for forest-dependent people (hereafter: forest smallholders), who live inside the forest matrix and depend on forests as their resource base for fuelwood, timber, nonwood forests products, or livestock herding (4). Forest smallholders are widespread, particularly in the tropics and subtropics (5). Yet despite recent advances in estimating their number and spatial distribution (4), we lack reliable information on how deforestation and agricultural expansion affects them across the world's major deforestation frontiers.

Putting forest-dependent people on the map is furthermore urgently needed in order to guide sustainable development programs to support them (4). Forest smallholders are particularly vulnerable, as they are typically poor and often lack formal land titles as well as institutional support (6). Today, agricultural expansion into tropical forests is often driven by large-scale farmers, producing commodities for global markets (7, 8). Such expanding commodity frontiers can trigger substantial and sometimes violent conflicts between forest smallholders and large-scale farmers (9), causing outmigration of forest smallholders to urban areas (10). Where forest smallholders persist, their resource base often vanishes or they are displaced to environmentally more marginal lands (11, 12), two processes referred to as ecological marginalization (13). While ecological marginalization has often been hypothesized, it has rarely been assessed empirically, and no study has quantified the ecological marginalization of forest-dependent people across any tropical deforestation frontier.

Despite the major challenges forest smallholders face where commodity agriculture expands (14), the geography of competition between forest smallholders and large-scale producers remains largely elusive. For instance, whereas major efforts have gone into mapping Indigenous communities (15), we lack similar datasets for forest smallholders more broadly. As a consequence, assessments of land available for further agricultural expansion often do not fully account for the fact that many areas highlighted as available might in fact be inhabited by forest smallholders (16). Furthermore, it remains largely unclear to what extent commodity frontiers affect forest smallholders not just directly by displacing them but also by reducing forest cover and thus their resource base around their communities. These knowledge gaps hinder targeted actions toward avoiding or mitigating negative livelihood outcomes for forest smallholders.

Commodity frontiers have expanded particularly rapidly in South America in recent years, mostly driven by cattle and soy production (17). The expansion of commodity agriculture has been particularly rapid in the Gran Chaco (hereafter: Chaco), the world's largest tropical dry forest, extending across Argentina, Bolivia, and Paraguay. This region harbors major carbon stocks (18), unique biodiversity (19), and is home to many Indigenous and non-Indigenous smallholder communities (20). The Chaco has recently become a global deforestation hotspot, which brings with it serious environmental impacts such as globally significant carbon emissions (18) and major biodiversity loss (21).
Although there is increasing evidence that conflicts over land have become widespread (11, 22), information about the social costs of this expansion is scarce (7, 20, 23).

Our overarching goal was to assess forest-smallholder dynamics in relation to expanding commodity agriculture in the Chaco for the period 1985 to 2015, during which commodity frontiers expanded dramatically in the region. Specifically, we ask the following: 1) how did the expansion of commodity agriculture in the Chaco shape the numbers and geographic patterns of forest smallholders? and 2) did the expansion of commodity agriculture result in increasing ecological marginalization of forest smallholders? We addressed these questions by digitizing forest-smallholder homesteads using high-resolution satellite images across the entire 1.1 million-km2 Chaco (Fig. 1). We then reconstructed dynamics of forest-smallholder homesteads back to 1985 and quantified trends in ecological marginalization by assessing resource base loss and environmental marginality (proxied by agroclimatic conditions and accessibility) around homesteads.

Fig. 1. Study region and key characteristics of forest-smallholder homesteads for digitization. (A) Chaco region in South America and spatial patterns of agricultural expansion since 1985 (purple), agricultural expansion before 1985 (orange), and remaining forest (green, year 2015). "Other" represents natural grasslands, savannahs, wetlands, water bodies, and settlements. We used three key characteristics of forest-smallholder homesteads for digitization: 1) distinctive landscape patterns of Chacoan forest-smallholder homesteads, i.e., degradation of natural vegetation and soils, gradually decreasing with increasing distance from the center of the homestead (B); 2) presence of at least one house (B and C); and 3) presence of a stable, corral, and/or water hole or well confirming livestock presence and thus a relatively permanent occupation (C and D). Photos: authors. Administrative units (provinces, departments, and states): APG, Alto Paraguay; BO, Boquerón; CA, Catamarca; CC, Concepción; CD, Cordillera; CE, Central; CH, Chaco; CO, Córdoba; CQ, Chuquisaca; CR, Corrientes; CZ, Caazapá; FO, Formosa; IT, Itapúa; JJ, Jujuy; LR, La Rioja; MI, Misiones; NE, Ñeembucú; PA, Paraguarí; PH, Presidente Hayes; SA, Salta; SC, Santa Cruz; SE, Santiago del Estero; SF, Santa Fe; SJ, San Juan; SL, San Luis; SP, San Pedro; TJ, Tarija; TU, Tucumán.

14.
Europe has experienced a stagnation of some crop yields since the early 1990s, as well as statistically significant warming during the growing season. Although it has been argued that these two are causally connected, no previous studies have formally attributed long-term yield trends to a changing climate. Here, we present two statistical tests based on the distinctive spatial pattern of climate change impacts and adaptation, and explore their power under a range of parameter values. We show that statistical power for the identification of climate change impacts is high in many settings, but that power for identifying adaptation is almost always low. Applying these tests to European agriculture, we find evidence that long-term temperature and precipitation trends since 1989 have reduced continent-wide wheat and barley yields by 2.5% and 3.8%, respectively, and have slightly increased maize and sugar beet yields. These averages disguise large heterogeneity across the continent, with regions around the Mediterranean experiencing significant adverse impacts on most crops. This result means that climate trends can account for ∼10% of the stagnation in European wheat and barley yields, with likely explanations for the remainder including changes in agriculture and environmental policies.

Europe has experienced a stagnation of yields for some crops, particularly wheat and barley, with a plateau since the early to mid-1990s (Fig. 1A) (1, 2). Explanations for this stagnation have focused on changing agricultural policy and, to a lesser extent, on shifting climate patterns (3, 4). Much of Europe saw the introduction of more stringent environmental policies during the 1990s, as well as the decoupling of subsidy payments from farm production in the European Union, both of which would be expected to lower the intensity of cereal production (5, 6). In addition, warming trends in the region have been large relative to natural variability and could be expected to negatively affect yields, particularly in southern Europe (SI Appendix, Fig. S1) (7, 8).

Fig. 1. Patterns and time evolution of crop yields in Europe and the predicted impacts of climate trends. (A) Area-weighted yields of the four crops examined in this paper for the countries included in the study, 1960–2010 (SI Appendix, Table S1) (17). (B–E) Maps of the observed linear trend in yield in 1989–2009 for wheat (B), maize (C), barley (D), and sugar beet (E) (25). (F–I) Maps of the expected change in yield based on growing-season temperature and precipitation trends in 1989–2009 (27, 28) and the yield response functions described by Moore and Lobell (8) (SI Appendix, Figs. S3 and S4) for wheat (F), maize (G), barley (H), and sugar beet (I). White shows areas not included in the study due to insufficient data.

Existing empirical evidence for either explanation is typically weak, taking the form of coincidence in the direction and timing of expected climate or policy effects with yield trends, so the relative importance of these mechanisms has not been rigorously demonstrated (3, 9). More persuasive detection and attribution studies instead identify impacts using a distinctive spatial pattern of trends associated with climate change forcing (10, 11). Because the spatial distribution of long-term trends is less likely to be correlated with other variables, these studies are better able to make a case for climate's causal effect on the outcome of interest. Fig. 1 B–E shows the observed trends in yields of wheat, maize, barley, and sugar beet in Europe between 1989 and 2009. Fig. 1 F–I shows the trends in yield that would have been expected given observed changes in growing-season temperature and precipitation and the sensitivity of crops to those changes (8). Observed trends are both more positive and more spatially heterogeneous than the predicted trends, which would be expected given that the latter do not include the effects of technological improvements or of changing economic or policy conditions. Nevertheless, formal statistical tests can reveal whether or not the distinctive pattern, or fingerprint, of climate trend impacts is embedded within the observed pattern of long-term yield trends.

Formal detection of a climate change signal and the attribution of that signal to anthropogenic greenhouse gas emissions have been successful in many physical and some biological systems (10–12). However, few studies have attributed changing yield patterns to climate trends. This analysis is complicated by two factors. First, the expected response of agriculture to a given temperature or precipitation forcing is determined by an imperfectly known response function. This response uncertainty must be accounted for in determining whether or not climate change has had a statistically discernible impact. Second, farmers may or may not be adapting to the climate change they have experienced, creating additional uncertainty in the expected response of agriculture to climate forcing (13). This potential for adaptation means there are two response functions relevant to the detection (and prediction) of climate change impacts: the short-run response function that includes limited or no adaptation, and the long-run response function that includes full adaptation (8).

In this paper, we first develop two general statistical tests that can be applied to the detection of impacts and adaptation in any managed system affected by climate change and report the power of these tests under a range of parameter values. We then apply these tests to Europe to determine whether climate trends have affected yields and, if so, to what extent these impacts can explain the stagnation of European cereal yields.
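A pattern-based detection test of the kind described can be sketched as a regression of observed regional yield trends on climate-predicted trends: a fingerprint coefficient distinguishable from zero detects the climate signal within the noisier observed pattern. This is our simplified rendering, with synthetic data, not the paper's exact procedure.

```python
# Simplified pattern-based detection test: regress observed regional yield
# trends on climate-predicted trends; a nonzero coefficient "detects" the
# climate fingerprint despite technology and policy trends.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_regions = 200
predicted = rng.normal(0, 0.5, n_regions)               # climate-driven trend, %/y
observed = predicted + rng.normal(0.8, 1.0, n_regions)  # + technology, policy

res = sm.OLS(observed, sm.add_constant(predicted)).fit()
print(f"fingerprint coefficient = {res.params[1]:.2f}, p = {res.pvalues[1]:.1e}")
```

The positive intercept in the synthetic data mimics the technology-driven component that makes observed trends "more positive" than the climate-only predictions.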

15.
As efforts to mitigate climate change increase, there is a need to identify cost-effective ways to avoid emissions of greenhouse gases (GHGs). Agriculture is rightly recognized as a source of considerable emissions, with concomitant opportunities for mitigation. Although future agricultural productivity is critical, as it will shape emissions from conversion of native landscapes to food and biofuel crops, investment in agricultural research is rarely mentioned as a mitigation strategy. Here we estimate the net effect on GHG emissions of historical agricultural intensification between 1961 and 2005. We find that, while emissions from factors such as fertilizer production and application have increased, the net effect of higher yields has avoided emissions of up to 161 gigatons of carbon (GtC) (590 GtCO2e) since 1961. We estimate that each dollar invested in agricultural yields has resulted in 68 fewer kgC (249 kgCO2e) of emissions relative to 1961 technology ($14.74/tC, or ∼$4/tCO2e), avoiding 3.6 GtC (13.1 GtCO2e) per year. Our analysis indicates that investment in yield improvements compares favorably with other commonly proposed mitigation strategies. Further yield improvements should therefore be prominent among efforts to reduce future GHG emissions.
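The unit conversions behind these paired figures follow from the 44/12 mass ratio between CO2 and carbon. A quick check in Python, using only values quoted in the abstract (the small difference from the quoted 13.1 GtCO2e per year reflects rounding):

# Carbon-to-CO2e conversions behind the abstract's figures (mass ratio 44/12).
C_TO_CO2 = 44.0 / 12.0

print(f"{161 * C_TO_CO2:.0f} GtCO2e cumulative")        # 161 GtC   -> ~590 GtCO2e
print(f"{68 * C_TO_CO2:.0f} kgCO2e per dollar")         # 68 kgC    -> ~249 kgCO2e
print(f"${14.74 / C_TO_CO2:.2f} per tCO2e")             # $14.74/tC -> ~$4/tCO2e
print(f"{3.6 * C_TO_CO2:.1f} GtCO2e avoided per year")  # 3.6 GtC   -> ~13.2 GtCO2e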

16.
The periodic makeup of carbon nanotubes suggests that their formation should obey the principles established for crystals. Nevertheless, this important connection remained elusive for decades, and no theoretical regularities in growth rates or product-type distributions had been found. Here we contend that any nanotube can be viewed as having a screw dislocation along its axis. Consequently, its growth rate is shown to be proportional to the Burgers vector of that dislocation, and therefore to the chiral angle of the tube. This is corroborated by ab initio energy calculations and agrees surprisingly well with diverse experimental measurements, showing that the proposed kinetic mechanism and its predictions are remarkably robust across a broad base of experimental data.
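The stated proportionality is simple enough to compute directly. In the minimal sketch below, the chiral-angle formula for an (n, m) tube is standard nanotube geometry; the normalization against an armchair tube is an illustrative assumption, since the true rate prefactor depends on growth conditions.

import math

def chiral_angle(n, m):
    # Chiral angle of an (n, m) nanotube: 0 rad for zigzag (m = 0),
    # pi/6 rad (30 degrees) for armchair (n = m).
    return math.atan2(math.sqrt(3) * m, 2 * n + m)

def relative_growth_rate(n, m):
    # Rate taken as proportional to chiral angle (the abstract's claim),
    # normalized here to an armchair tube for illustration.
    return chiral_angle(n, m) / (math.pi / 6)

for n, m in [(10, 0), (12, 6), (10, 9), (10, 10)]:
    theta = math.degrees(chiral_angle(n, m))
    print(f"({n:2d},{m:2d}): chiral angle {theta:4.1f} deg, "
          f"relative rate {relative_growth_rate(n, m):.2f}")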

17.
Phanerozoic levels of atmospheric oxygen relate to the burial histories of organic carbon and pyrite sulfur. The sulfur cycle remains poorly constrained, however, leading to concomitant uncertainties in O2 budgets. Here we present experiments linking the magnitude of fractionation of the multiple sulfur isotopes to the rate of microbial sulfate reduction. The data demonstrate that these fractionations are controlled by the availability of the electron donor (organic matter) rather than by the concentration of the electron acceptor (sulfate), an environmental constraint that varies among sedimentary burial environments. By coupling these results with a sediment biogeochemical model of pyrite burial, we find a strong relationship between observed sulfur isotope fractionations over the last 200 Ma and the areal extent of shallow seafloor environments. We interpret this as a global dependency of the rate of microbial sulfate reduction on the availability of organic-rich seafloor settings. However, fractionation during the early/mid-Paleozoic fails to correlate with shelf area. We suggest that this decoupling reflects a shallower paleoredox boundary, primarily confined to the water column in the early Phanerozoic. The transition between these two states begins during the Carboniferous and concludes around the Triassic–Jurassic boundary, indicating a prolonged response to a Carboniferous rise in O2. Together, these results lay the foundation for decoupling changes in sulfate reduction rates from the global average record of pyrite burial, highlighting how the local nature of sedimentary processes affects global records. This distinction greatly refines our understanding of the S cycle and its relationship to the history of atmospheric oxygen.

18.
19.
Fossils and molecular data are two independent sources of information that should in principle provide consistent inferences of when evolutionary lineages diverged. Here we use an alternative approach to the genetic inference of species split times in recent human and ape evolution, one that is independent of the fossil record. We first use genetic parentage information on a large number of wild chimpanzees and mountain gorillas to directly infer their average generation times. We then compare these generation-time estimates with those of humans and apply recent estimates of the human mutation rate per generation to derive estimates of the split times of great apes and humans that are independent of fossil calibration. We date the human–chimpanzee split to at least 7–8 million years and the population split between Neanderthals and modern humans to 400,000–800,000 y ago. This suggests that molecular divergence dates may not be in conflict with the attribution of 6- to 7-million-y-old fossils to the human lineage and of 400,000-y-old fossils to the Neanderthal lineage.
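The clock arithmetic implied here is compact. A minimal sketch under a simple molecular clock, with all inputs hypothetical round numbers rather than the study's fitted values; the subtraction of ancestral diversity is a standard population-genetics refinement, not a detail stated in the abstract.

def split_time_years(d, pi_anc, mu_per_gen, gen_time_years):
    # Population split time under a simple molecular clock:
    # t (generations) = (d - pi_anc) / (2 * mu), then scaled by generation time.
    # pi_anc is subtracted because lineages already differ within the
    # ancestral population before the split.
    generations = (d - pi_anc) / (2.0 * mu_per_gen)
    return generations * gen_time_years

# Illustrative, hypothetical inputs: ~1.2% human-chimpanzee divergence,
# ~0.4% ancestral diversity, 1.25e-8 mutations per bp per generation,
# 25-y average generations.
t = split_time_years(d=0.012, pi_anc=0.004, mu_per_gen=1.25e-8, gen_time_years=25)
print(f"{t / 1e6:.1f} million years")  # -> 8.0, in the spirit of "at least 7-8 My"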

20.
Human learning is supported by multiple neural mechanisms that mature at different rates and interact in mostly cooperative but sometimes competitive ways. We tested the hypothesis that mature cognitive mechanisms constrain the implicit statistical learning mechanisms that contribute to early language acquisition. Specifically, we tested the prediction that depleting cognitive control mechanisms in adults enhances their implicit, auditory word-segmentation abilities. Young adults were exposed to continuous streams of syllables containing hidden repeating novel words while watching a silent film. Afterward, learning was measured in a forced-choice test that contrasted the hidden words with nonwords. Participants also indicated whether or not they explicitly recalled each word, in order to dissociate explicit from implicit knowledge. We additionally recorded electroencephalography during exposure to measure neural entrainment to the repeating words. Engagement of the cognitive mechanisms was manipulated in two ways. In experiment 1 (n = 36), inhibitory theta-burst stimulation (TBS) was applied to the left dorsolateral prefrontal cortex or to a control region. In experiment 2 (n = 60), participants performed a dual working-memory task that induced high or low levels of cognitive fatigue. In both experiments, cognitive depletion enhanced word recognition, especially when participants reported low confidence in remembering the words (i.e., when their knowledge was implicit). TBS additionally modulated neural entrainment to the words and syllables. These findings suggest that cognitive depletion improves the acquisition of linguistic knowledge in adults by unlocking implicit statistical learning mechanisms, and they support the hypothesis that adult language learning is antagonized by higher cognitive mechanisms.

Human learning is thought to be supported by interactions between two basic memory systems of the brain, namely declarative and nondeclarative memory (1). Declarative memory is characterized by voluntary, explicit, attention-based processes, such as recall and recognition of facts and events, and is mediated by medial-temporal lobe and prefrontal cortex structures (2). Nondeclarative memory, also referred to as procedural memory, is part of implicit memory and includes the acquisition of a heterogeneous set of skills, habits, and procedures. It is mediated by basal ganglia, cerebellar, and neocortical structures, as well as parts of the prefrontal cortex [e.g., Broca's area (3–5)].

Accumulating evidence supports a competitive relationship between these two memory systems during human skill learning. Suppression of the declarative memory system by interventions such as repetitive transcranial magnetic stimulation (TMS), distraction tasks, alcohol consumption, hypnosis, intake of benzodiazepines, or cognitive fatigue can actually enhance performance in implicit, perceptual-motor learning tasks such as the serial-reaction time task (6–11) or in intuitive reasoning tasks (12). These findings suggest that higher-level cognitive functions associated with declarative memory and supported by the prefrontal cortex can interfere with behavior that is naturally driven by implicit learning processes (13). However, it remains unresolved whether competing memory systems also affect the implicit statistical learning abilities that are critical for the early, rapid acquisition of language in infants (14). This is an important question, as it could explain why infants and children pick up languages with less effort than adults (cf. "What don't we know?") (15).

Language acquisition involves many different memory and learning processes that depend on both procedural and declarative memory (2, 16). The first step for infants acquiring language is to gain knowledge about the phonological structure of the ambient spoken language, the probabilistic constraints on how speech sounds combine (i.e., phonotactic learning), and the segments of continuous speech (i.e., word forms) (17). Word form learning begins within the first 12 months of life and is an important precursor to vocabulary acquisition (i.e., mapping form to meaning) and to more complex language acquisition (e.g., grammar) later in development (18).

In the present study, we focus on the statistical learning mechanisms that contribute to word segmentation, and thus to novel word form learning, in the early stages of language acquisition. Statistical learning is generally known as the ability to pick up on patterns in the environment through extraction of frequent regularities and distributional properties. The term was first introduced into cognitive psychology by the work of Saffran, Aslin, and Newport (1996) (19), who demonstrated that infants only 8 mo old can extract word boundaries and segment novel word forms from a continuous stream of speech sounds with no other cue than the transitional probabilities between syllables.
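The segmentation cue at the heart of this paradigm is easy to make concrete. A minimal sketch in Python, assuming the classic design: an unbroken stream of trisyllabic pseudowords (the vocabulary and the 0.5 boundary threshold are hypothetical choices). Transitional probabilities are high within words and dip at word boundaries, so thresholding them recovers the hidden vocabulary.

import random
from collections import Counter

def transitional_probabilities(stream):
    # Forward transitional probability P(next syllable | current syllable).
    bigrams = Counter(zip(stream, stream[1:]))
    unigrams = Counter(stream[:-1])
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def segment(stream, tps, threshold=0.5):
    # Place a word boundary wherever the transitional probability dips
    # below the (hypothetical) threshold.
    out, word = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            out.append("".join(word))
            word = []
        word.append(b)
    out.append("".join(word))
    return out

# Toy familiarization stream: four hidden trisyllabic pseudowords,
# concatenated in random order with no pauses between them.
random.seed(0)
words = ["tupiro", "golabu", "bidaku", "padoti"]
stream = []
for _ in range(50):
    for w in random.sample(words, len(words)):
        stream.extend(w[i:i + 2] for i in range(0, 6, 2))

tps = transitional_probabilities(stream)
print(sorted(set(segment(stream, tps))))  # typically recovers the four words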
Later, this learning was also demonstrated in older children and adults (20, 21) and across different domains (e.g., music and grammar) and modalities (e.g., auditory, visual, and motor) (22, 23), indicating that statistical learning is a largely continuous and domain-general learning mechanism for skill acquisition across the human life span.

In a typical statistical learning experiment, participants are repeatedly exposed to patterned stimuli such as consonant strings from an artificial grammar or recurrent syllable triplets. Learning is then typically assessed postexposure using a two-alternative forced-choice recognition task in which triplets from the exposure stream are pitted against foils. Participants have to indicate which of the two triplets sounded more familiar, and above-chance accuracy is taken as an indication of learning. Since statistical learning occurs without any instruction or intention to learn, it is often assumed to result in implicit memory representations (24). This view is also supported by evidence that statistical learning occurs in infants and even in sleeping neonates (25). However, in recent work, Batterink and colleagues demonstrated that even without an intention to learn, adults acquire mainly explicit knowledge of the novel word forms during statistical learning (26–29). This can be derived from the observation that participants' performance was above chance when they were confident in remembering a triplet but at chance when they were unconfident. Knowledge is implicit when participants lack awareness of what they have learned: if participants perform above chance even when they are unconfident, the knowledge is inferred to be implicit (30); in contrast, if they perform at chance when confidence is low, no implicit knowledge has been gained. Although statistical learning may produce additional implicit knowledge that cannot be assessed by the recognition and memory judgement tasks (e.g., ref. 28), Batterink's earlier findings show that adults store the acquired word knowledge mainly in the explicit memory system.

We and others have proposed that cognitive development and maturation of the prefrontal areas negatively affect language acquisition, such as word form or grammar learning (31–35). For instance, we showed that children outperform adults on the Hebb repetition learning paradigm (32, 33), a memory paradigm in which participants are asked to immediately recall syllable sequences that contain hidden repeated word forms. Interestingly, in a follow-up study, we found that cognitive depletion by TMS to the left dorsolateral prefrontal cortex (DLPFC), an area closely related to declarative memory and cognitive control, enhanced Hebb performance in adult participants (34). This suggests that late-developing prefrontal cognitive mechanisms can alter how efficiently sequential language information is acquired from the environment, a finding largely in line with previously reported evidence in skill learning (13). Recently, we corroborated this idea further by showing enhanced phonotactic constraint learning in adults under cognitive fatigue (35). Based on these findings, we hypothesize that the higher cognitive control system could reduce access to implicit memory processes in adults, thereby making them less efficient in language acquisition relative to infants and children.
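The confidence-based dissociation described above lends itself to a simple analysis. A minimal sketch, assuming hypothetical trial counts and the standard 0.5 chance level for a two-alternative forced choice; the confidence split and the binomial test are a generic formulation, not the authors' exact analysis pipeline.

from scipy.stats import binomtest

def knowledge_type(correct_low, n_low, correct_high, n_high, alpha=0.05):
    # Two-alternative forced choice: chance = 0.5. Above-chance accuracy on
    # low-confidence trials indicates implicit knowledge; above-chance accuracy
    # on high-confidence trials indicates explicit knowledge.
    implicit = binomtest(correct_low, n_low, 0.5, alternative="greater").pvalue < alpha
    explicit = binomtest(correct_high, n_high, 0.5, alternative="greater").pvalue < alpha
    return {"implicit": implicit, "explicit": explicit}

# Hypothetical counts: 40/60 correct when unconfident, 55/60 when confident.
print(knowledge_type(40, 60, 55, 60))  # -> {'implicit': True, 'explicit': True}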
This idea is in line with the well-known less-is-more hypothesis, which attributes developmental changes in language acquisition, such as phonology and grammar, to maturational changes in attention and memory capacities (36–38). In our previous work, participants were explicitly asked to memorize (34) or produce (35) syllable sequences, so exposure to the novel language was not passive, or "infant like." Moreover, we did not separate implicit and explicit memory representations. Thus, it remains unresolved how higher-order cognitive functions affect the acquisition of implicit linguistic knowledge during passive listening to continuous speech using the statistical learning mechanisms that support infant language acquisition (23, 39).

The aim of the current study was to address this question directly using the auditory statistical learning paradigm. In particular, we aimed to determine whether a temporary depletion of the higher cognitive control system, using two different interventions, can unlock adults' implicit statistical learning processes that serve infant word segmentation. To investigate this, we exposed young adults to continuous streams of syllables with, unknown to them, repeating three-syllable pseudowords, while they watched a silent film. In the first experiment, inhibitory continuous theta-burst stimulation was used to induce a long-lasting disruption of the left DLPFC or a control site prior to exposure, similar to the method used in Smalle et al., 2017 (34). In the second experiment, participants first performed an effortful dual working-memory task under high- or low-cognitive-load (HCL and LCL, respectively) conditions, which induces cognitive fatigue that hampers subsequent cognitive performance (7, 35, 40), or did not perform a cognitive load task prior to the language exposure (control or no-load condition). Our primary measure of statistical learning was offline recognition of the hidden words, assessed 15 min after exposure. This was combined with a memory judgement procedure, which measured how confident participants were that they remembered the hidden words and thereby dissociates explicit from implicit memory representations (e.g., refs. 27–29 and 41). In both experiments, electroencephalography (EEG) was also measured during the 20-min language exposure in order to investigate an online perceptual component as a second, independent measure of statistical learning. Research has shown that, while listening to continuous sound streams built from repeating three-syllable structures, the steady-state response of the brain decreases at the frequency of individual syllables and increases at the rhythm of the three-syllable words. This shift in neural entrainment indexes online statistical learning of novel words as a function of auditory exposure (29). Overall, we predicted that TMS-induced disruption of the DLPFC (in experiment 1) and cognitive fatigue (in experiment 2) would enhance statistical language learning and especially strengthen implicit memory representations for the hidden novel words.
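The entrainment measure can be illustrated with a short sketch. The syllable rate of ~3.3 Hz and word rate of ~1.1 Hz below are hypothetical values chosen for illustration (a stream of roughly 3.3 syllables/s grouped into trisyllabic words), not parameters taken from the paper; the index simply compares spectral power at the word rate with power at the syllable rate.

import numpy as np

def entrainment_index(eeg, fs, word_hz=1.1, syllable_hz=3.3):
    # Ratio of spectral power at the word rate to power at the syllable rate;
    # a rising ratio over exposure indexes word-level entrainment.
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    def peak(f):  # power in the frequency bin closest to the target rate
        return power[np.argmin(np.abs(freqs - f))]
    return peak(word_hz) / peak(syllable_hz)

# Simulated 20-min "recording" at 250 Hz in which a word-rate rhythm dominates.
fs = 250
t = np.arange(0, 20 * 60, 1.0 / fs)
eeg = (1.5 * np.sin(2 * np.pi * 1.1 * t)
       + np.sin(2 * np.pi * 3.3 * t)
       + np.random.randn(t.size))
print(f"word/syllable power ratio: {entrainment_index(eeg, fs):.2f}")  # ~2.25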
