3.
We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.

Statistical science has changed a great deal in the past 10–20 years, and is continuing to change, in response to technological advances in science and industry. The world is awash with big and complicated data, and researchers are trying to make sense out of it. Leading examples include data from “omic” assays in the biomedical sciences, financial forecasting from economic and business indicators, and the analysis of user click patterns to optimize ad placement on websites. This has led to an explosion of interest in the fields of statistics and machine learning and spawned a new field some call “data science.”

In the words of Yoav Benjamini, statistical methods have become “industrialized” in response to these changes. Whereas traditionally scientists fit a few statistical models by hand, now they use sophisticated computational tools to search through a large number of models, looking for meaningful patterns. Having done this search, the challenge is then to judge the strength of the apparent associations that have been found. For example, a correlation of 0.9 between two measurements A and B is probably noteworthy.
However, suppose that I had arrived at A and B as follows: I actually started with 1,000 measurements and I searched among all pairs of measurements for the most correlated pair; these turn out to be A and B, with correlation 0.9. With this backstory, the finding is not nearly as impressive and could well have happened by chance, even if all 1,000 measurements were uncorrelated. Now, if I just reported to you that these two measures A and B have correlation 0.9, and did not tell you which of these two routes I used to obtain them, you would not have enough information to judge the strength of the apparent relationship. This statistical problem has become known as “selective inference”: the assessment of significance and effect sizes from a dataset after mining the same data to find these associations.

As another example, suppose that we have a quantitative value y, a measurement of the survival time of a patient after receiving either a standard treatment or a new experimental treatment. I give the old drug (1) or new drug (2) at random to a set of patients and compute the standardized mean difference in the outcome z = (ȳ₂ − ȳ₁)/s, where s is an estimate of the SD of this difference. Then I could approximate the distribution of z by a standard normal distribution, and hence if I reported to you a value of, say, z = 2.5, you would be impressed, because a value that large is unlikely to occur by chance if the new treatment had the same effectiveness as the old one (the P value is about 1%). However, what if instead I tried out many new treatments and reported to you only the ones for which |z| > 2? Then a value of 2.5 is not nearly as surprising. Indeed, if the two treatments were equivalent, the conditional probability that |z| exceeds 2.5, given that it is larger than 2, is about 27%.
Armed with knowledge of the process that led to the value z = 2.5, the correct selective inference would assign a P value of 0.27 to the finding, rather than 0.01.

If not taken into account, the effects of selection can greatly exaggerate the apparent strengths of relationships. We feel that this is one of the causes of the current crisis in reproducibility in science (e.g., ref. 1). With increased competitiveness and pressure to publish, it is natural for researchers to exaggerate their claims, intentionally or otherwise. Journals are much more likely to publish studies with low P values, and we (the readers) never hear about the great number of studies that showed no effect and were filed away (the “file-drawer effect”). This makes it difficult to assess the strength of a reported P value of, say, 0.04.

The challenge of correcting for the effects of selection is a complex one, because the selective decisions can occur at many different stages in the analysis process. However, some exciting progress has recently been made in more limited problems, such as that of adaptive regression techniques for supervised learning. Here the selections are made in a well-defined way, so that we can exactly measure their effects on subsequent inferences. We describe these new techniques here, as applied to two widely used statistical methods: classic supervised learning, via forward stepwise regression, and modern sparse learning, via the “lasso.” Later, we indicate the broader scope of their potential applications, including principal components analysis.
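The 27% figure in the treatment example above can be checked numerically. The sketch below uses only the standard library (the function name is mine); it computes the naive two-sided P value at z = 2.5 and the selective P value obtained by conditioning on the selection event |z| > 2:

```python
from math import erfc, sqrt

def two_sided_tail(t):
    """P(|Z| > t) for a standard normal Z, via the complementary error function."""
    return erfc(t / sqrt(2.0))

p_naive = two_sided_tail(2.5)             # ~0.0124: the naive "1%" P value
p_select = p_naive / two_sided_tail(2.0)  # P(|Z| > 2.5 given |Z| > 2) ~ 0.27
print(round(p_naive, 4), round(p_select, 2))
```

Dividing by the probability of the selection event is the simplest instance of selective inference: the reference distribution is truncated to the outcomes that could have been reported.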

4.
Insect societies such as those of ants, bees, and wasps consist of 1 or a small number of fertile queens and a large number of sterile or nearly sterile workers. While the queens engage in laying eggs, workers perform all other tasks such as nest building, acquisition and processing of food, and brood care. How do such societies function in a coordinated and efficient manner? What are the rules that individuals follow? How are these rules made and enforced? These questions are of obvious interest to us as fellow social animals, but how do we interrogate an insect society and seek answers to them? In this article I describe my research, which was designed to seek such answers from an insect society. I have chosen the Indian paper wasp Ropalidia marginata for this purpose, a species that is abundantly distributed in peninsular India and serves as an excellent model system. An important feature of this species is that queens and workers are morphologically identical and physiologically nearly so. How then does an individual become a queen? How does the queen suppress worker reproduction? How does the queen regulate the nonreproductive activities of the workers? What is the function of aggression shown by different individuals? How and when is the queen's heir decided? I will show how such questions can indeed be investigated and will emphasize the need for a whole range of different techniques of observation and experimentation.

5.
We have asked here how the remarkable variation in maize haplotype structure affects recombination. We compared recombination across a genetic interval of 9S in 2 highly dissimilar heterozygotes that shared 1 parent. The genetic interval in the common haplotype is ≈100 kb long and contains 6 genes interspersed with gene-fragment-bearing Helitrons and retrotransposons that, together, comprise 70% of its length. In one heterozygote, most intergenic insertions are homozygous, although polymorphic, enabling us to determine whether any recombination junctions fall within them. In the other, most intergenic insertions are hemizygous and, thus, incapable of homologous recombination. Our analysis of the frequency and distribution of recombination in the interval revealed that: (i) Most junctions were circumscribed to the gene space, where they showed a highly nonuniform distribution. In both heterozygotes, more than half of the junctions fell in the stc1 gene, making it a clear recombination hotspot in the region. However, the genetic size of stc1 was 2-fold lower when flanked by a hemizygous 25-kb retrotransposon cluster. (ii) No junctions fell in the hypro1 gene in either heterozygote, making it a genic recombination coldspot. (iii) No recombination occurred within the gene fragments borne on Helitrons or within retrotransposons, so neither insertion class contributes to the interval's genetic length. (iv) Unexpectedly, several junctions fell in an intergenic region not shared by all 3 haplotypes. (v) In general, the ability of a sequence to recombine correlated inversely with its methylation status. Our results show that haplotypic structural variability strongly affects the frequency and distribution of recombination events in maize.

7.
Bacteria serve as the central arena for understanding how gene networks and proteins process information and control cellular behaviors. Recently, much effort has been devoted to the investigation of specific bacterial gene circuits as functioning modules. The next challenge is the integrative modeling of complex cellular networks composed of many such modules. A tractable integrative model of the sophisticated decision-making signal transduction system that decides between sporulation and competence is presented. This model provides an understanding of how information is sensed and processed to reach an “informative” decision in the context of cell state and signals from other cells. The competence module (ComK dynamics) is modeled as a stochastic switch whose transition rate is controlled by a quorum-sensing unit. The sporulation module (Spo0A dynamics) is modeled as a timer whose clock rate is adjusted by a stress-sensing unit. The interplay between these modules is mediated via the Rap assessment system, which gates the sensing units, and the AbrB–Rok decision module, which creates an opportunity for competence within a specific window of the sporulation timer. The timer is regulated via a special repressilator-like inhibition of Spo0A* by Spo0E, which is itself inhibited by AbrB. For some stress and input signals, this repressilator can generate a frustration state with large variations (fluctuations or oscillations) in Spo0A* and AbrB concentrations, which might serve an important role in generating cell variability. This integrative framework is a starting point that can be extended to include transition into cannibalism and the role of colony organization.
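The notion of "a stochastic switch whose transition rate is controlled by a quorum-sensing unit" can be caricatured as a two-state telegraph process whose ON rate rises with the signal. This is my generic sketch, not the authors' model; the rates and time horizon are arbitrary:

```python
import random

def simulate_switch(k_on, k_off, t_end, seed=0):
    """Gillespie-style simulation of a two-state (OFF/ON) switch.

    k_on stands in for a quorum-signal-dependent activation rate; the
    returned ON-time fraction is a crude proxy for competence probability.
    """
    rng = random.Random(seed)
    t, state, t_on = 0.0, 0, 0.0
    while t < t_end:
        rate = k_on if state == 0 else k_off
        dt = min(rng.expovariate(rate), t_end - t)
        if state == 1:
            t_on += dt
        t += dt
        state = 1 - state  # flip OFF<->ON at each event
    return t_on / t_end

# A stronger quorum signal (larger k_on) keeps the switch ON more often.
low = simulate_switch(k_on=0.1, k_off=1.0, t_end=10000)
high = simulate_switch(k_on=1.0, k_off=1.0, t_end=10000)
print(low < high)
```

At stationarity the ON fraction approaches k_on/(k_on + k_off), so modulating a single transition rate is enough to tune the decision bias.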

8.
DNA sequencing has revealed high levels of variability within most species. Statistical methods based on population genetics theory have been applied to the resulting data and suggest that most mutations affecting functionally important sequences are deleterious but subject to very weak selection. Quantitative genetic studies have provided information on the extent of genetic variation within populations in traits related to fitness and the rate at which variability in these traits arises by mutation. This paper attempts to combine the available information from applications of the two approaches to populations of the fruitfly Drosophila in order to estimate some important parameters of genetic variation, using a simple population genetics model of mutational effects on fitness components. Analyses based on this model suggest the existence of a class of mutations with much larger fitness effects than those inferred from sequence variability and that contribute most of the standing variation in fitness within a population caused by the input of mildly deleterious mutations. However, deleterious mutations explain only part of this standing variation, and other processes such as balancing selection appear to make a large contribution to genetic variation in fitness components in Drosophila.

Advances in DNA sequencing methods have enabled geneticists to measure the amount of genetic variability in natural populations at the most basic level: the frequencies of variants in nucleotide sequences. This achievement has ended one component of a debate on the extent and causes of genetic variability that was initiated in the 1950s by Hermann Muller and Theodosius Dobzhansky (1, 2); we now know that DNA sequences are highly variable within the populations of most species (3).
It has, however, been much harder to provide a definitive answer to the other component of this debate, which concerns the nature and intensity of the evolutionary forces that influence the frequencies of genetic variants within populations (1, 2, 4, 5). Are these variants mostly selectively neutral (6), with the fates of new mutations determined by random fluctuations in their frequencies (genetic drift)? Is selection on variants that affect fitness mostly purifying, so that mutations with harmful effects are rapidly removed from the population (1)? Or do many loci have variants maintained by balancing selection (2)? What fraction of newly arisen variants cause higher fitness and are in the process of spreading through the population and replacing their alternatives? How strong is the selection acting on nonneutral variants, and how much variation in fitness among individuals within populations is contributed by such variants? Does the existence of wide variation in fitness among individuals imply a genetic load that threatens the survival of the species (1)?

These questions are very broad, and this paper deals only with one aspect of them. It focuses on the question of how recent inferences concerning the strength of purifying selection, derived from genome-wide surveys of DNA sequence variability, can be connected with the results of statistical studies of genetic variation in components of Darwinian fitness such as viability and fertility. I will refer to these two approaches as population genomics and quantitative genetics, respectively. The first approach sheds light on the general nature of the fitness effects of the DNA sequence variants found in natural populations, but says little about how these fitness effects are caused.
The second tells us how much genetic variability exists for fitness traits, the rate at which it arises by mutation, and something about the type of selection involved, but is silent about the nature of the underlying sequence variants.

Surprisingly little attention has been paid to integrating these two lines of inquiry, except for ref. 7. I largely confine myself to results from studies of the fruitfly Drosophila, because this has been the most useful model organism for investigating these problems, especially by quantitative genetics methods. Current information derived from population genomics studies will first be reviewed, followed by an analysis of the results of quantitative genetics experiments on both mutational and standing variation. I show that the quantitative genetics results can only be explained if there is a significant input of new mutations with much larger effects on fitness than those inferred from population genomics. There also appears to be too much genetic variation in fitness components in natural populations to be explained purely by mutation-selection balance, so that additional processes such as balancing selection must make an important contribution.
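Why effect size matters for standing variation can be made concrete with the textbook mutation-selection-balance approximation (a standard back-of-envelope result, not the author's model; the numbers below are hypothetical):

```python
def msb_fitness_variance(U, hs):
    """Approximate standing variance in fitness contributed by deleterious
    mutations at mutation-selection balance.

    U  : genomic deleterious mutation rate per generation
    hs : heterozygous selection coefficient against a mutation
    Per site, q ~ u/(hs) and V ~ 2 q (hs)^2 ~ 2 u hs; summed over the
    genome this gives V ~ 2 U hs.
    """
    return 2.0 * U * hs

# Illustrative only: the same mutation rate contributes 100x more standing
# fitness variance if the typical heterozygous effect is 100x larger.
weak = msb_fitness_variance(U=0.5, hs=1e-4)
strong = msb_fitness_variance(U=0.5, hs=1e-2)
print(weak, strong)
```

Because the contributed variance scales linearly with the selection coefficient, very weakly selected variants of the kind inferred from sequence data generate little standing variance in fitness, which is the gap the paper's larger-effect mutation class is invoked to fill.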

9.
A major challenge in cell biology is to understand how nanometer-sized molecules can organize micrometer-sized cells in space and time. One solution in many animal cells is a radial array of microtubules called an aster, which is nucleated by a central organizing center and spans the entire cytoplasm. Frog (here Xenopus laevis) embryos are more than 1 mm in diameter and divide with a defined geometry every 30 min. Like smaller cells, they are organized by asters, which grow, interact, and move to precisely position the cleavage planes. It has been unclear whether asters grow to fill the enormous egg by the same mechanism used in smaller somatic cells, or whether special mechanisms are required. We addressed this question by imaging growing asters in a cell-free system derived from eggs, where asters grew to hundreds of microns in diameter. By tracking marks on the lattice, we found that microtubules could slide outward, but this was not essential for rapid aster growth. Polymer treadmilling did not occur. By measuring the number and positions of microtubule ends over time, we found that most microtubules were nucleated away from the centrosome and that interphase egg cytoplasm supported spontaneous nucleation after a time lag. We propose that aster growth is initiated by centrosomes but that asters grow by propagating a wave of microtubule nucleation stimulated by the presence of preexisting microtubules.

The large cells in early vertebrate embryos are organized by radial arrays of microtubules called asters. This general organization was described by early cytologists (1) but is clearly illustrated by modern fixed immunofluorescence or live imaging. At the end of mitosis, a pair of asters is observed at the spindle poles but remains small in radius, presumably because cyclin-dependent kinase 1 (Cdk1) inhibits aster growth (2).
Once the cell enters interphase, the asters grow at rates of 30 µm/min in Xenopus zygotes and 15 µm/min in zebrafish, while maintaining a high density of microtubules at their periphery (2–4). Paired asters interact at the cell’s midplane to form a specialized zone of microtubule overlaps, which in turn recruit cytokinesis factors to the cell cortex (5, 6). Cell-spanning dimensions are presumably required so that the microtubules can touch the cortex to accurately position the cleavage furrow according to cell geometry (3, 7, 8).

In the standard model of aster growth, microtubules are nucleated with their minus-ends anchored at the centrosome (9) and polymerize outward with plus-ends undergoing dynamic instability (10). However, there are several issues in applying this model to a very large cytoplasm (11). Because of the radial geometry, the standard model implies a decrease in microtubule density with increasing radius. In contrast, microtubule density seems to be constant or even increase toward the aster periphery in frog and fish zygotes (3). Furthermore, this radial elongation model predicts that a subset of microtubules spans the entire aster radius, but it is unknown whether such long microtubules exist. We wondered whether additional mechanisms promoted aster growth in large cells, such as microtubule sliding, treadmilling, or nucleation remote from centrosomes.

Previously we developed a cell-free system to reconstitute cleavage furrow signaling where growing asters interacted (5, 12). Here, we combine cell-free reconstitution and quantitative imaging to identify microtubule nucleation away from the centrosome as the key biophysical mechanism underlying aster growth. We propose that aster growth in large cells should be understood as a spatial propagation of microtubule-stimulated microtubule nucleation.
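The density argument against the radial elongation model is purely geometric and can be sketched in a few lines (my illustration with arbitrary numbers, treating the aster as a 2D disk):

```python
import math

def density_radial(n_mt, r):
    """Standard model: n_mt microtubules elongate from the centrosome, so a
    fixed number cross any circle of radius r and density falls as 1/r."""
    return n_mt / (2 * math.pi * r)

def density_branching(rho0, r):
    """Collective-growth picture: nucleation stimulated by existing
    microtubules keeps front density roughly constant, independent of r."""
    return rho0

for r in (10, 100, 500):  # radius in microns
    print(r, round(density_radial(250, r), 3), density_branching(0.5, r))
```

Since observed density at the periphery does not fall off with radius, the 1/r prediction of the centrosome-anchored model is what motivates nucleation away from the centrosome.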

10.
Both plants and animals require the activity of proteins containing nucleotide binding (NB) domain and leucine-rich repeat (LRR) domains for proper immune system function. NB-LRR proteins in plants (NLR proteins in animals) also require conserved regulation via the proteins SGT1 and cytosolic HSP90. RAR1, a protein specifically required for plant innate immunity, interacts with SGT1 and HSP90 to maintain proper NB-LRR protein steady-state levels. Here, we present the identification and characterization of specific mutations in Arabidopsis HSP90.2 that suppress all known phenotypes of rar1. These mutations are unique with respect to the many mutant alleles of HSP90 identified in all systems in that they can bypass the requirement for a cochaperone and result in the recovery of client protein accumulation and function. Additionally, these mutations separate HSP90 ATP hydrolysis from HSP90 function in client protein folding and/or accumulation. By recapitulating the activity of RAR1, these novel hsp90 alleles allow us to propose that RAR1 regulates the physical open–close cycling of a known “lid structure” that is used as a dynamic regulatory HSP90 mechanism. Thus, in rar1, lid cycling is locked into a conformation favoring NB-LRR client degradation, likely via SGT1 and the proteasome.

11.
The Calvin-Benson-Bassham cycle (Calvin cycle) catalyzes virtually all primary productivity on Earth and is the major sink for atmospheric CO2. A less appreciated function of CO2 fixation is as an electron-accepting process. It is known that anoxygenic phototrophic bacteria require the Calvin cycle to accept electrons when growing with light as their sole energy source and organic substrates as their sole carbon source. However, it was unclear why and to what extent CO2 fixation is required when the organic substrates are more oxidized than biomass. To address these questions we measured metabolic fluxes in the photosynthetic bacterium Rhodopseudomonas palustris grown with 13C-labeled acetate. R. palustris metabolized 22% of acetate provided to CO2 and then fixed 68% of this CO2 into cell material using the Calvin cycle. This Calvin cycle flux enabled R. palustris to reoxidize nearly half of the reduced cofactors generated during conversion of acetate to biomass, revealing that CO2 fixation plays a major role in cofactor recycling. When H2 production via nitrogenase was used as an alternative cofactor recycling mechanism, a similar amount of CO2 was released from acetate, but only 12% of it was reassimilated by the Calvin cycle. These results underscore that N2 fixation and CO2 fixation have electron-accepting roles separate from their better-known roles in ammonia production and biomass generation. Some nonphotosynthetic heterotrophic bacteria have Calvin cycle genes, and their potential to use CO2 fixation to recycle reduced cofactors deserves closer scrutiny.
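The headline flux numbers combine into a net recycling fraction by simple multiplication (my arithmetic from the figures above, assuming the 68% and 12% are fractions of the CO2 released from acetate):

```python
def refixed_fraction(f_oxidized, f_refixed):
    """Fraction of substrate carbon released as CO2 and then re-assimilated."""
    return f_oxidized * f_refixed

# With the Calvin cycle as the cofactor sink: 22% of acetate carbon goes to
# CO2, 68% of that CO2 is refixed.
calvin = refixed_fraction(0.22, 0.68)
# With H2 production recycling cofactors instead, only 12% is refixed.
h2_mode = refixed_fraction(0.22, 0.12)
print(round(calvin, 3), round(h2_mode, 3))
```

So roughly 15% of all acetate carbon cycles through CO2 back into biomass in the Calvin-cycle mode, versus under 3% when nitrogenase serves as the electron sink.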

12.
Physically distinguishable microdomains associated with various functional membrane proteins are one of the major current topics in cell biology. Glycosphingolipids present in such microdomains have been used as "markers"; however, the functional role of glycosyl epitopes in microdomains has received little attention. In this review, I have tried to summarize the evidence that glycosyl epitopes in microdomains mediate cell adhesion and signal transduction events that affect cellular phenotypes. Molecular assemblies that perform such functions are hereby termed "glycosynapse" in analogy to "immunological synapse," the membrane assembly of immunocyte adhesion and signaling. Three types of glycosynapses are so far distinguishable: (i) Glycosphingolipids organized with cytoplasmic signal transducers and proteolipid tetraspanin with or without growth factor receptors; (ii) transmembrane mucin-type glycoproteins with clustered O-linked glycoepitopes for cell adhesion and associated signal transducers at lipid domain; and (iii) N-glycosylated transmembrane adhesion receptors complexed with tetraspanin and gangliosides, as typically seen with the integrin-tetraspanin-ganglioside complex. The possibility is discussed that glycosynapses give rise to a high degree of diversity and complexity of phenotypes.

13.
Molecular imaging enables visualization of specific molecules in vivo and without substantial perturbation to the target molecule's environment. Glycans are appealing targets for molecular imaging but are inaccessible with conventional approaches. Classic methods for monitoring glycans rely on molecular recognition with probe-bearing lectins or antibodies, but these techniques are not well suited to in vivo imaging. In an emerging strategy, glycans are imaged by metabolic labeling with chemical reporters and subsequent ligation to fluorescent probes. This technique has enabled visualization of glycans in living cells and in live organisms such as zebrafish. Molecular imaging with chemical reporters offers a new avenue for probing changes in the glycome that accompany development and disease.

14.
Most high-profile disasters are followed by demands for an investigation into what went wrong. Even before they start, calls for finding the missed warning signs and an explanation for why people did not “connect the dots” will be common. Unfortunately, however, the same combination of political pressures and the failure to adopt good social science methods that contributed to the initial failure usually lead to postmortems that are badly flawed. The high stakes mean that powerful actors will have strong incentives to see that certain conclusions are—and are not—drawn. Most postmortems also are marred by strong psychological biases, especially the assumption that incorrect inferences must have been the product of wrong ways of thinking, premature cognitive closure, the naive use of hindsight, and the neglect of the comparative method. Given this experience, I predict that the forthcoming inquiries into the January 6, 2021, storming of the US Capitol and the abrupt end to the Afghan government will stumble in many ways.

In the wake of high-profile disasters like the terrorist attacks on September 11, 2001, the destruction of the Challenger space shuttle, or the discovery that Iraq did not have active programs to produce weapons of mass destruction (WMD) in the years before the 2003 invasion, there are demands for an investigation into what went wrong with intelligence and policy making. Even before they start, calls for finding the missed warning signs and an explanation for why people did not “connect the dots” will be common, along with the expectation that a good inquiry will lead to changes that will make us much safer. Unfortunately, however, the same combination of political pressures and the failure to adopt good social science methods that contributed to the initial failure usually produces postmortems that are badly flawed, even if they produce some good information. This leads me to predict that the inquiries into the January 6, 2021, storming of the US Capitol and the abrupt end to the Afghan government in August 2021 will stumble in many ways. [Exceptions to this otherwise dreary pattern—studies that are better done—are generally produced by researchers who have had more social science education, are highly skilled, or who do the task well after the events, creating more room for perspective (1–3). There is also some evidence that organizations conducting routine postmortems, such as in the investigation of transportation accidents and medical mishaps, do it better (4–8).]

I will look most closely at the American postmortems conducted over the past decade that examined major foreign policy failures. Because they were salient to the polity and generously funded, it is reasonable to expect they would have been done as well as possible.
I have omitted only the congressional reports on the attack on the American diplomatic outpost at Benghazi on September 11 to 12, 2012, and the Mueller report on whether the Trump campaign conspired with Russia during the 2016 election and subsequently sought to obstruct the investigation. The former were so driven by the politics of attacking or defending Hillary Clinton, who was simultaneously the Secretary of State at the time and the Democratic candidate for president in 2016, that to take them as serious attempts at unraveling what happened would be a strain. The Mueller report was largely a fact-finding and legal document and so did not have the same purpose of understanding the events and the causal relationships at play. I believe that I have avoided the trap, discussed below, of only looking at cases that are likely to support my argument.

There is no simple recipe for a successful postmortem, but there are roadmaps if not checklists that can help us judge the ones that have been done. To start with, humility in a double sense is in order. Not only is it likely that the case under consideration will be a difficult one, which means that the correct judgments were not likely to have been obvious at the time, but even later conclusions are likely to be disputable. A good postmortem then recognizes the ambiguities of the case, many of which may remain even after the best retrospective analysis.

In this, it is important to separate judgments about why incorrect conclusions were reached from evaluations of the thinking and procedures that were involved.
People can be right for the wrong reasons, and, even more troubling, can be wrong for the right reasons.

In the analysis of why the contemporary judgments were reached and the causes of the errors that are believed to have been involved, standard social science methodology points to the value of the comparative method in trying to see whether the factors believed to have marred the effort were also present when the outcome was better. Relatedly, it is not enough for postmortems to locate bits of evidence that are consistent with the explanations that they are providing; they must also try to see whether this evidence is inconsistent with alternative views. Common sense embodies cognitive shortcuts that need to be disciplined to avoid jumping to conclusions and seeing the evidence as pleasingly clear and consistent with favored views.

It is of course easy—or at least easier—to be wise after the fact, something a good postmortem has to recognize and internalize. It is not only unfair to the contemporary actors but an impediment to good retrospective understanding to seize on evidence that now seems to be crucial or interpretations that we now think to be correct without taking the next step of analyzing whether and why they should have been seen as privileged at the time.

The very fact that a disastrous failure occurred despite the existence of individuals and organizations designed to provide warning and take appropriate action indicates that the meaning of the course of events is not obvious, and this widens the space for the political and psychological biases that I will discuss below. Sometimes there may be bits of vital information that were not gathered, were overlooked, or blatantly misinterpreted, or there may have been outright incompetence, but most organizations are better than that. This means that in many instances, the case under examination will be an exception to many of our generalizations about how the world works (9, 10).
To take a case I studied for the CIA, the Iranian revolution of 1979 is an exception to what political scientists and policy makers believe, which is that leaders who enjoy the support of their security forces will not be overthrown by mass movements. Indeed, the universal assumption behind postmortems, that analysts and decision makers should have gotten it right, may not be correct. As I will discuss later, we live in a probabilistic universe and sometimes what happens is very unlikely, in which case it is far from clear that the decision makers should have acted differently. Such a conclusion is almost always psychologically and politically unacceptable if not unimaginable, however, and is almost never considered, let alone asserted as correct. The fact that the explanation for the events is not obvious also means that postmortems that are conducted without more attention to good methodology than was true of the analysis that produced the policy and the underpinning beliefs are likely to fall into traps similar to those that played a role in the failure itself.

The area of my own expertise, international politics, presents three additional reasons why it is difficult both for contemporary actors to judge their environments and for postmortems to do better. Although these issues are not unique to international politics, they arise with great frequency there. First, it is often hard to understand why others are behaving as they are, especially when we are dealing with actors (individuals, organizations, or governments) who live in very different cultural and perceptual worlds from us. One of the other postmortems that I did for the US intelligence community (IC) was the failure to recognize that Saddam Hussein did not have active WMD programs in 2002 (11). The belief that he did rested in part on the fact that he had expelled the United Nations weapons inspectors at great cost to his regime.
We now know that, contrary to what was believed not only by the United States but by almost all countries, the reason behind Saddam's decision to do so was not that he was hiding his programs. In retrospect, and with access to interviews and an extensive documentary record, it is generally believed that Saddam felt he had to pretend to have WMD in order to deter Iran (12, 13). Even this explanation has been disputed (14, 15), however, which underscores the point that contemporary observers face very difficult problems when it comes to understanding the behavior of others, problems that may not be readily resolved even later with much more and better evidence. Iraq may be a particularly difficult case, but it is telling that more than 100 y later, and with access to all existing records (some have been destroyed) (16), historians still debate the key issues related to the origins of World War I, especially the motives and intentions of Germany and Russia. In fact, these debates largely mirror the ones that occurred among policy makers in 1914. It is very hard to get inside others' heads.

The second problem is that many situations that lead to postmortems involve competition and conflict. In the classical account (17, 18), the actors are in strategic interaction in that each tries to anticipate how others will act, knowing that others are doing likewise. This poses great intellectual difficulties for the actors and is one reason that policies can fail so badly that postmortems follow. These interactions also pose difficulties for the postmortems themselves. In some cases, leaders behave contrary to the theory and act as though they were playing a game against nature rather than against a protagonist in strategic interaction (19). When this is not the case, tracing how the actor expected others to behave is often difficult because decision makers rarely gratify historians by spelling out their thinking.
Additionally, antagonists in conflict often have good reason to engage in concealment and deception (20–23). Of course actors understand this and try to penetrate these screens. But success is not guaranteed, and so errors are common and can result in disastrous policy failures. Furthermore, the knowledge that concealment and deception are possible can lead an actor to discount accurate information. This was the case in the American judgment that Iraq had WMD programs at the start of the 21st century. Intelligence analysts knew that Iraq had been trained by the Soviet Union in elaborate concealment and deception techniques, and they plausibly—but incorrectly—believed that this explained why they were not seeing more signs of these programs (24). In retrospect these puzzles are easier to unravel, but that does not mean they are always easy to solve, and deception can pose challenges to postmortems as well.

A third problem, both for contemporary decision makers and for those conducting postmortems, is that when mass behavior plays an important role, events can be subject to rapid feedback that is difficult to predict. Revolutions, for example, can be unthinkable until they become inevitable, to borrow the subtitle of a perceptive book about the overthrow of the Shah of Iran (25). That is, in a situation in which a large portion of the population opposes a dictatorial regime backed by security forces, many people will join mass protests only when they come to believe that these protests will be so large that the chance of success is high and the risk of being killed for participating is low. And the bigger the protests one day, the greater the chances of even larger ones the next day, because of the information that has been revealed. Related dynamics were at work in the disintegration of the Afghan security forces in August 2021.
But the tipping points (26–28) involved are hard to foresee at the time and only somewhat less difficult to tease out in retrospect.

The difficulties of the task of conducting adequate postmortems make it easier for biases to play a large role. In prominent cases the political needs and preferences of powerful groups and individuals come in, sometimes blatantly. When President Lyndon Johnson established the Warren Commission to analyze the assassination of his predecessor, he made it clear to Chief Justice Earl Warren and the other members that any hint that the USSR or Cuba was involved would increase the chance of nuclear war. In parallel, he did not object when Allen Dulles, former Director of the CIA and a member of the commission, withheld information on the plots to assassinate Fidel Castro and on Lee Harvey Oswald's contacts with Cuba, since knowledge of them would point to the possibility that the Cuban leader was involved (29). When the space shuttle Challenger exploded shortly after liftoff, President Ronald Reagan similarly appointed a national commission chaired by former Secretary of State William Rogers to get to the bottom of what happened. But Rogers understood that the program was essential to American prestige and the heated competition with the Soviet Union, and so while the commission looked at the technical problems that caused the disaster, it did not probe deeply into the organizational and cultural characteristics of NASA that predisposed it to overlook potentially deadly problems. It also shied away from acknowledging that the complex advanced technology incorporated into the program made it essentially experimental and that even with reforms another accident was likely (30).
It took a superb study by organizational sociologists to elucidate these issues (30, 31), and, in an example of the impact of organizational politics, NASA ignored this situation until it suffered another disaster with the shuttle Columbia.

Politics can also limit the scope of the inquiry. When the bipartisan 9/11 Commission decided that its report would be unanimous, this had the effect, if not the purpose, of preventing a close examination of the policies of the George W. Bush administration. The public record was quite clear: President Bush and his colleagues believed that the main threat to the United States came from other powerful states, most obviously China and Russia; terrorism was a secondary concern, and one they thought could be significant only if it was supported by a strong state. It is then not surprising that al Qaeda received little high-level attention in the first 9 mo of the administration. Although this approach turned out to be incorrect, I do not believe it was unreasonable, a product of blind ideology, or the result of the rejection of everything the previous administration had done. But it was an important part of the story, and one that could not be recounted if the report was to be endorsed by all its members.

Sometimes the political bias is more subtle, as in the Senate report on the program of Rendition, Detention, and Interrogation (RDI) involving secret prisons and torture that the Bush administration adopted in the wake of the terrorist attacks of 9/11. According to the executive summary (the only part of the report that is declassified), the Democratic majority of the Senate Select Committee on Intelligence (SSCI) concluded that congressional leaders (including the chair of the SSCI) were kept in the dark about the program and that it did not produce information that was necessary to further the counterterrorism effort, including tracking down Osama bin Laden's hiding place (32).
These conclusions are very convenient: the SSCI and Congress do not deserve any blame, and because torture is ineffective the United States can abjure it without paying any price. If the report had found torture to be effective, even on some occasions, it would have made the committee, the government, and the American public face a trade-off between safety and morality, and obviously this would have been politically and psychologically painful. It was much more comfortable to say that no such choice was necessary.

As a political scientist, not only am I not shocked by this behavior, but I also believe that it is somewhat justifiable. Keeping tensions with the Soviet Union and Cuba under control in the wake of Kennedy's assassination was an admirable goal, and Reagan's desire to protect the space program could also be seen as in the national interest. The SSCI was more narrowly political, but it is not completely illegitimate for political parties to seek advantage. What is crucial in this context, however, is that the politicization of the postmortems limits their ability to explain the events under consideration. This is not to say that many of their conclusions are necessarily incorrect. Although Johnson suspected otherwise, Oswald probably did act alone (33); cold weather did cause the shuttle's O-rings to become rigid and unable to provide a protective seal. But it is unlikely that the relevant congressional leaders were not informed about the RDI program. More importantly, the claim that, even had torture not been used, the evidence could and should have yielded the correct conclusions, while impossible to disprove, was not supported by valid analytical methods, as I will discuss below.

Both politics and psychology erect barriers to the consideration of the argument that a conclusion that in retrospect is revealed to have been disastrously wrong may have been appropriate at the time.
Given the ambiguity and incompleteness of the information that is likely to be available, the most plausible inferences may turn out to be incorrect (34). Being wrong does not necessarily mean that a person or organization did something wrong, but any postmortem that reached this conclusion would surely be scorned as a whitewash. This may be the best interpretation of the intelligence findings on Iraq leading up to the Second Gulf War, however. Although the IC's judgment that Iraq had active WMD programs in 2002 was marred by excessive confidence and several methodological errors, including the analysts' lack of awareness that their inferences were guided at least as much by Saddam's otherwise inexplicable expulsion of international inspectors as by the bits of secret information that they cited as reasons for their conclusions (35), it was more plausible than the alternative explanations that now appear to be correct. It is telling that the official postmortems and the journalistic accounts do not reject this argument; they do not even consider it. The idea that a disastrously wrong conclusion was merited is deeply disturbing. It not only conflicts with our desire to hold people and organizations accountable, but clashes with the "just world" intuition that, even though evil and error are common, at bottom people tend to get what they deserve (36–38), and with our sense that well-designed organizations with excellent staffs should be able to understand the world.

To put this another way, people tend to be guided by the outcomes when judging the appropriateness of the procedures and thinking behind analyses and policies. If the conclusions are later revealed to be correct or if the policies succeed, there will be a strong presumption that everything was done right. While intuitively this makes sense, and it would be surprising if there were no correlation between process and outcome, the correlation is not likely to be 1.0.
Toward the end of the deliberations on whether the mysterious person that the United States had located in a hideout in Abbottabad really was Osama bin Laden, Michael Morell, Deputy Director of the CIA, said that the case for this was much weaker than that for the claim that Saddam had active WMD programs in 2002 (39). Although this assessment is disputable and by necessity subjective, Morell was very experienced in these matters and his argument is at minimum plausible.

The quality of postmortems is impeded by the almost universal tendency to ignore the fact that people may be right for the wrong reasons and wrong for the right ones. For example, the SSCI praised those elements of the IC who dissented from the majority opinion that Iraq had active WMD programs without investigating the quality of their evidence or analysis (40). It was assumed rather than demonstrated that the skeptics looked more carefully, reasoned more clearly, or used better methods than did those who reached less accurate conclusions. To turn this around, this and many other postmortems failed to use standard social science comparative methods to probe causation, and instead criticized those who were later shown to be wrong for being sloppy or using a faulty approach without looking at whether those who were later shown to be right followed the same procedures (41).

Both the analysis being examined and the subsequent postmortems are prone to neglect the comparative method in another way as well. They usually look at whether a bit of evidence is consistent with the favored explanation without taking the next step of asking whether it is also consistent with alternative explanations.
Just as CIA analysts seized on the fact that trucks of a type previously associated with chemical weapons were being used at suspicious sites and did not ask themselves whether an Iraq without active chemical production would find other uses for the trucks (42), so the SSCI report did not consider that the errors it attributed to bad tradecraft could also be explained by political pressures (which is not to say that the latter explanation is in fact correct). And those who argued for the importance of these pressures did not look at the other areas of intelligence, especially on the links between Saddam and al Qaeda, to see whether the same pressures had the expected effect (in fact, they did not). In both cases, factors were judged to be causal without looking at other cases in which the factors were present to see if the outcomes were the same.

A related and central set of failings includes the hindsight fallacy, cherry-picking of evidence, confirmation bias, and ignoring the costs of false positives. These are epitomized by what I noted at the start: in the aftermath of a disaster, people ask how the decision makers and analysts could have been so wrong, why the dots remained unconnected, and why warning signs were missed or discounted. Hindsight bias is very strong (43, 44); when we know how an episode turned out, we see it as predictable (and indeed often believe that we did predict it) if not inevitable. Just as we assimilate new information to our preexisting beliefs (45, 46), once we know the outcome, when we go back over the information or look at old information for the first time (as is often the case for those conducting postmortems), there is a very strong propensity to see that it unequivocally points to the outcome that occurred.
In a form of confirmation bias and the premature cognitive closure that comes with our expectations about what we are likely to see, knowledge of how events played out skews our perceptions of how informative the data were at the time. The problematic way of thinking compounds because, once we think we know the answer, we search for information that fits with it and interpret ambiguous evidence as supportive.

This is not to suggest that good postmortems should ignore the light shed by the outcome and what in retrospect appears to be the correct view. Without this, after all, later analysts would have no inherent advantages over contemporary ones. But authors of postmortems must struggle against the natural tendency to weigh the evidence by what we now know—or believe—to be correct. Given all the information that flows into an organization, it is usually easy to pick out reports, anecdotes, and data that pointed in the right direction. But to do this is to engage in cherry-picking unless we can explain why at the time these indicators should have been highlighted and interpreted as we now interpret them, and why the evidence relied on at the time should have been ignored or seen differently. As Roberta Wohlstetter pointed out in her path-breaking study of why the United States was taken by surprise by the Japanese attack on Pearl Harbor (47), most of the information is noise (if not, in competitive situations, deception), and the problem is to identify the signals. To use the language that became popular after 9/11, to say that analysts failed to connect the dots is almost always misleading because there are innumerable dots that could be connected in many ways. For example, official and journalistic analyses of 9/11 stress the mishandling of information about the suspicious behavior of some people attending flight instruction schools.
True, but if the attack had been by explosive-laden trucks, I suspect that we would be pointing to suspicious behavior at trucking schools.

Fighting against the strong attraction of knowing how the dots were in fact connected is a difficult task, and one that must be recognized at the start of a postmortem lest it make debilitating errors. A prime example is the SSCI majority report on the RDI program mentioned above. The grounds for the conclusion that torture was ineffective were that the correct conclusions could have been drawn from the mass of information obtained from other sources, especially interrogations that used more benign techniques. The problem is not that this claim is necessarily incorrect, but that it commits the hindsight fallacy and cherry-picks. Knowing the right answer, one can find strong clues to it in the enormous store of data that was available to the analysts. But this information could have yielded multiple inferences and pictures; to have established its conclusion, the postmortem would have had to show why what we now believe to be correct was more plausible and better supported than the many alternatives (48, 49). Such an argument will always be difficult and subject to dispute, but like most retrospective analyses, the SSCI report did not even recognize that this was necessary.

The frequently asked question of what warning signs were missed points to the related problem of the failure to recognize the potential importance of false positives. This comes up with special urgency after mass shootings and often yields a familiar and plausible list of indicators: mental health problems, withdrawal from social interactions, expressed hostility, telling others that something dramatic will happen soon.
But even if we find that these signs universally preceded mass shootings, this would not be enough to tell us that bystanders and authorities who saw them should have stepped in: looking only at cases of mass shootings (what is known as searching on the dependent variable) makes it impossible to determine the extent to which these supposed indicators differentiate cases in which people go on shooting sprees from those in which they do not. Acting on these indicators could then lead to large numbers of false positives. Economists have a saying that the stock market has predicted seven of the last three recessions; those who do postmortems should take heed.

Put differently, the "warning signs" may be necessary but not sufficient conditions for the behavior. Their significance depends on how common they are in the general population, and this we can tell only by looking at the behavior of people who do not commit these terrible crimes. The same point applies to looking for predictors of other noxious behavior, such as terrorism, violent radicalization, and domestic abuse, to mention just a few that receive a great deal of attention (or of unusually good behavior, like bravery or great altruism).

Examining people who do not behave in these ways, or instances that do not lead to the undesired outcomes, would be labor intensive and can raise issues of civil liberties. But this is not always the case, and it was not in one well-known instance (although this was not a postmortem but a predecisional analysis). Before the doomed launch of the Challenger, because the engineers knew that the O-rings that were supposed to keep the flames from escaping through the boosters' joints might be a problem in cold weather, they provided the higher authorities with a slide showing the correlation between the air temperature and the extent of the damage to the rings in previous launches. Because this showed some but not an overwhelming negative correlation, it was not seen as decisive.
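The statistical pitfall here, searching on the dependent variable, can be sketched with a toy calculation. All numbers below are invented for illustration and are not the actual flight record: the point is only that restricting attention to launches that had damage can wash out a temperature relationship that the full record, including the no-damage launches, shows clearly.

```python
# Toy illustration of "searching on the dependent variable."
# All data are invented for illustration; this is NOT the actual
# Challenger flight record.

def pearson(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (temperature in °F, number of damage incidents) for launches WITH damage
damaged = [(45, 1), (47, 2), (50, 3), (52, 1)]
# launches with NO damage, all at warmer temperatures
no_damage = [(t, 0) for t in (55, 58, 60, 63, 66, 68, 70, 72, 75, 78)]

r_damaged_only = pearson(damaged)        # the restricted view on the slide
r_all = pearson(damaged + no_damage)     # the full record

print(f"damaged launches only: r = {r_damaged_only:+.2f}")  # near zero
print(f"all launches:          r = {r_all:+.2f}")           # clearly negative
```

The restricted sample answers a different question ("among damaged launches, does temperature predict severity?") than the one that mattered ("does temperature predict whether damage occurs?"), which is exactly the cost of conditioning on the outcome.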
The slide omitted data for launches in which there was no damage to the rings at all, however. The full record showed that partial burn-throughs occurred only when the temperature fell below 53 °F, and had these negative cases been displayed, the role of cold would have been more apparent.

With these past cases as guides, we can expect that, absent self-conscious efforts to counter the biases discussed above, the attempt to understand the failure of the authorities to anticipate and control the events of January 6, 2021, will be suboptimal. That is not to say that these postmortems will be without value. There are things to be learned about why communication channels were not clearer, what the barriers to cooperation among the diverse law enforcement organizations were, why there was so little contingency planning, and why the decision to deploy the National Guard was delayed. The analysis of the lack of forewarning, however, is likely to fit the pattern of hindsight bias, cherry-picking, and the neglect of comparisons. The media have reported that an alarming report from an FBI field station was not passed on and that insufficient attention was paid to the "chatter" on social media (50–53). It is almost certain that a more thorough investigation will turn up additional bits of information that in retrospect pointed to the violence that ensued. If the analysis stops here, however, the obvious conclusion that these indicators should have been heeded will be incorrect. Better methodology points to the next steps of looking at some of the multiple cases in which large-scale violence did not occur or was easily contained, in order to ascertain whether the reports preceding January 6 were markedly different. We should also look for indications and reports that the demonstration would be peaceful.

It is not surprising that the fall of Kabul has led to widespread calls for an inquiry into what went wrong. In all probability, however, these postmortems are also likely to be deficient.
They will be highly politicized and subject to the hindsight fallacy and related methodological shortcomings. Because the stakes are so high and involve so many different entities and actors, political pressures will be generated not only by the Democrats and Republicans, but also by different parts of the government, especially the military and the civilian intelligence community.

Politics will be involved in how the postmortem is framed. Democrats will want to focus on the intelligence, while Republicans will seek a broader scope to include, if not concentrate on, the decisions that were made. Disputes about the time period to be examined are also likely. Republicans will want to limit the study to the consequences of President Biden's April 14, 2021, announcement that all troops would be withdrawn by September 11; Democrats will want to start the clock earlier, with the Trump administration's February 2020 agreement with the Taliban to withdraw by May 1, 2021. More specifically, Democrats will want to look for evidence that the Trump agreement led to secret arrangements between the Taliban and local authorities that laid the foundations for the latter's defections in the summer.

Because the issues are so salient and emotion-laden, there will be pressure to have the investigations be entirely disinterested, which means that the people conducting them should not have had deep experience with any of the organizations involved. This impulse is reasonable, but it comes at a price, because reading and interpreting intelligence reports requires a familiarity with their forms and norms (which differ from one organization to another), and outsiders are at a disadvantage here. The distinction between strategic and tactical warning is easily missed by those new to the subject, as is the need to determine whether and how assessments are contingent (i.e., dependent on certain events occurring or policies being adopted).
An understanding of how policy makers (known as "consumers") are likely to interpret intelligence is also important and needs to be factored in. In the case of Afghanistan, as in other military engagements, military intelligence is prone to paint a relatively optimistic picture of the progress that is being made. Experienced consumers understand this and can apply an appropriate discount factor. A further complication is that as the pressure for a full American withdrawal increased, the military's incentives changed, and pessimism about the prospects for the Afghan army in the absence of American support came to the fore. If a postmortem takes these estimates at face value and, however legitimately, faults them for their organizational bias, it will also be important to probe how the estimates were interpreted.

Here, as in other cases, the overarching threats to a valuable postmortem are hindsight bias and cherry-picking, which will be especially strong because we now see more clearly the weaknesses of the government and the positive feedback that led Taliban victories to multiply. If the study tries to uncover when various consumers, intelligence analysts, and organizations concluded that a quick Taliban victory was likely, it will have to confront not only the obvious problem that people have incentives to exaggerate their prescience, but also the fact that memories about exactly when certain conclusions were arrived at are especially unreliable when events are coming thick and fast, as was true of the analysts charting the unrest that led to the fall of the Shah.* In retrospect, the more accurate assessments will stand out, and there will be strong impulses to claim that at the time they had more support than the alternatives and so should have been believed.

None of this is to say that a well-designed postmortem of either Afghanistan or the events of January 6 would conclude that few errors were made or that the information was analyzed appropriately.
But it would provide a more solid grounding for any conclusions, attributions of blame, and proposals for change if blame is the ending rather than the starting point, hindsight and confirmation biases are held in check, and the ambiguity of the evidence and the frequent need for painful value trade-offs are recognized. Postmortems are hard and fallible, but the country can ill afford to continue mounting ones that have unnecessary flaws.

17.
The brain mechanisms of fear have been studied extensively using Pavlovian fear conditioning, a procedure that allows exploration of how the brain learns about and later detects and responds to threats. However, the mechanisms that detect and respond to threats are not the same as those that give rise to conscious fear. This is an important distinction because symptoms based on conscious and nonconscious processes may be vulnerable to different predisposing factors and may be treatable with different approaches in people who suffer from uncontrolled fear or anxiety. A conception of so-called fear conditioning in terms of circuits that operate nonconsciously, but that indirectly contribute to conscious fear, is proposed as a way forward.
Hunger, like anger, fear, and so forth, is a phenomenon that can be known only by introspection. When applied to another…species, it is merely a guess about the possible nature of the animal's subjective state.

Nico Tinbergen (1)

Neuroscientists use "fear" to explain the empirical relation between two events: for example, rats freeze when they see a light previously associated with electric shock. Psychiatrists, psychologists, and most citizens, on the other hand, use…"fear" to name a conscious experience of those who dislike driving over high bridges or encountering large spiders. These two uses suggest…several fear states, each with its own genetics, incentives, physiological patterns, and behavioral profiles.

Jerome Kagan (2)
My research focuses on how the brain detects and responds to threats, and I have long argued that these mechanisms are distinct from those that make possible the conscious feeling of fear that can occur when one is in danger (3–6). However, I and others have called the brain system that detects and responds to threats the fear system. This was a mistake that has led to much confusion. Most people who are not in the field naturally assume that the job of a fear system is to make conscious feelings of fear, because the common meaning of fear is the feeling of being afraid. Although research on the brain mechanisms that detect and respond to threats in animals has important implications for understanding how the human brain feels fear, it is not because the threat-detection and defense-response mechanisms are fear mechanisms. It is instead because these nonconscious mechanisms initiate responses in the brain and body that indirectly contribute to conscious fear.

In this article, I focus on Pavlovian fear conditioning, a procedure that has been used extensively to study the so-called fear system. I will propose and defend a different way of talking about this research, one that focuses on the actual subject matter and data (threat detection and defense responses) and that is less likely to compel the interpretation that conscious states of fear underlie defense responses elicited by conditioned threats. It will not be easy to give up the term fear conditioning, but I think we should.

18.
Previous studies showed that baby monkeys separated from their mothers develop strong and lasting attachments to inanimate surrogate mothers, but only if the surrogate has a soft texture; soft texture is more important for the infant’s attachment than is the provision of milk. Here I report that postpartum female monkeys also form strong and persistent attachments to inanimate surrogate infants, that the template for triggering maternal attachment is also tactile, and that even a brief period of attachment formation can dominate visual and auditory cues indicating a more appropriate target.

Before the 1900s, affection and religion were thought to be the most important factors in child rearing, but starting around 1910, and lasting for at least 30 y, there was a major shift toward cleanliness, order, and scientific principles of conditioning and training, motivated by the new behaviorist theory in psychology (1). A survey (2) of the advice found in popular women's magazines during that era summarized the Zeitgeist as follows:
Mothers were admonished to insist upon obedience at all times, and if temper tantrums resulted, they should be ignored. Together with a more severe attitude toward the child went a new taboo on physical handling. Love, and particularly the physical manifestation of it, was discouraged in most of the articles on infant discipline. It was believed that stimulation of any sort would lead to precocity in the older child and dullness in the man. Furthermore, baby's strength was needed for rapid growing, and picking the baby up deprived him of his strength. Still another reason for discouraging physical contact with baby was the belief that postnatal conditions for the infant should closely approximate prenatal conditions, and since the infant was not handled in the uterus, he should not be handled after birth.
Watson wrote (3): "There are serious rocks ahead for the overkissed child." I distinctly remember in the early 1950s my grandfather admonishing my mother not to "coddle" me; I remember because I thought that was something you did to eggs.

This hands-off view led to sterile, contactless nurseries across many countries. Holding and touching of premature infants, or even visiting hospitalized children, was generally forbidden on the grounds that it spread diseases. Orphanages and other children's institutions varied but often adhered to this trend, some even keeping infants in isolated cubicles to prevent the spread of infections. However, in the 1940s evidence began amassing that such institutionalization, or prolonged hospitalization, could hinder children's growth and lead to severe psychological problems (4–6). Both my collaborator, David Hubel, and my husband recollect being hospitalized as young children and not being allowed visitors; their mothers could only wave to them through a window in the door. Both found it traumatizing.

Thus, the importance of infant/caregiver attachment began to be recognized, but its biological basis was disputed. The prevailing behaviorist view among psychologists and sociologists was that this attachment derived from a learned association between the mother and hunger satiation (7). A less popular theory was that humans and animals have fundamental innate drives beyond the physiological drives of discomfort and hunger, and that these drives include attachment to a mother figure (Fig. 4).

Fig. 4. Monkey B2 still carrying her toy 2 mo after parturition. She is the same monkey as the infant in Fig. 1.
On the evening of the day of parturition she retained this toy in preference to her own live infant.

Innate visual and auditory triggers underlie imprinting in hatchling birds, which leads to persistent following responses (8), but almost nothing was known about the innate mechanisms responsible for the strong and lasting ties mammalian infants form with their mothers. Gertrude Van Wagenen (9) observed that infant macaques, separated from their mothers and fed from tiny nursing bottles, would not feed properly unless they could cling to a soft towel (see also ref. 10). Harry Harlow adopted the towel technique for raising infant macaques and found that they developed strong attachments to the towels and were distressed when the cloths were removed for cleaning (11). These laboratory-reared monkeys were larger and healthier and had a higher survival rate than infants left to the care of their monkey mothers, as long as they had a soft cloth; without the cloth, survival was lower.

To try to disentangle the relative importance of vision, touch, warmth, and nourishment for establishing infant bonding, Harlow and Zimmermann (12) constructed various mother surrogates for infant macaques. Some surrogates were soft and others rigid and unyielding, with or without faces, heated or not, with or without an attached milk bottle. Infants provided with two different surrogates overwhelmingly preferred the cloth surrogate over the rigid one, irrespective of which provided milk or heat or had a face. Infants displayed strong, sustained attachment to and derived security from cloth surrogates for more than a year and a half (the duration of the experiment). Infants’ attachment to and behavior toward the cloth surrogates was indistinguishable from mother-reared infants’ attachment to their monkey mothers (11).
The parallels between the abnormal behaviors of macaque infants reared without any cloths or soft surrogates (13) and the psychological problems often found in children who as infants had been reared in sterile cubicles (4) were instrumental in changing child-rearing practices and hospital visiting rules (14).

Beyond its social importance, Harlow’s work shows that infant monkey attachment is based on surprisingly few sensory features, i.e., primarily tactile ones, and may be established within only a limited time during development, as in imprinting. The complementary attachment, of mothers to their offspring, though widely acknowledged in literature and art, has not been much studied. In rodents, olfactory and auditory cues emitted by pups trigger maternal behavior in hormonally primed females (15). Mother ewes form selective attachments to their own lambs via olfactory imprinting (16). Almost nothing is known about the sensory cues involved in generating the strong and lasting attachment of primate mothers to their infants (17). Macaque mothers hold and protect their infants continuously after birth and carry them around, groom them, and treat them protectively for many months (Fig. 1). Given this species’ complex social organization (18), its massive visual system with specialized domains, comparable to those in humans, for face and body recognition (19), and its auditory domains specific for monkey vocalizations (20), one might expect that multiple, complex sensory cues would be involved in mother macaques’ bonding with their infants.

Fig. 1. Typical maternal behavior of a rhesus macaque. Female is holding (Left), nursing (Center), and protecting her infant from the perceived threat of the author coming near her enclosure (Right).
Most monkeys in our colony respond to familiar humans by indicating a desire for a scratch or expectation of a treat; mothers with infants are extra-defensive (10, 18) and will initially show aggression for a few seconds, then calm down and accept treats. The infant in these photos is monkey B2, whose behavior as an adult is described below.

Here I report some observations indicating that primate maternal attachment to the infant may not depend on complex multisensory cues; rather, a single tactile cue can suffice to trigger maternal behavior in a hormonally primed female. My first observation was of an 8-y-old primiparous female rhesus macaque, monkey Ve. She delivered a stillborn infant. She was holding the lifeless infant to her chest when I first observed her in the morning.* Appropriate veterinary care required that the dead infant be removed and examined, and to accomplish this, she was lightly anesthetized. When she recovered a few minutes later, she exhibited significant signs of distress: She vocalized loudly and constantly and seemed to be searching agitatedly around her enclosure. Other monkeys housed in the same room also began to vocalize and became agitated. To try to reduce the level of stress in the room, I placed a stuffed animal in her enclosure. It was a 15-cm-tall, soft, furry toy, a faceless stuffed mouse, chosen because of its availability and its lack of potential choking hazards, such as sewn-on eyes. Monkey Ve immediately picked up the stuffed toy and held it to her chest. She stopped screeching and became calm while holding it, and the whole room quieted down. She held the toy to her chest continuously for more than a week, without any signs of distress.
During this time, she behaved in a manner indistinguishable from other mothers in our colony with live infants, in that she continuously held the toy to her chest, and she exhibited aggressive behavior toward cohoused monkeys and even toward familiar humans when they approached her (Fig. 2). This enhanced level of defensive behavior is characteristic of females with infants (10, 18). About 10 d after parturition she discarded the stuffed toy and showed no further distress. A year later this female delivered and successfully nurtured a second infant.

Fig. 2. Female monkey Ve showing maternal behavior toward a stuffed toy mouse 2 and 3 d postpartum. She continuously holds the toy (Left and Center) and protects it from the perceived threat of the author approaching her home enclosure.

I have offered stuffed toys to five different female monkeys immediately after eight births among them, after removal of the infant. Three of the females (monkeys Ve, Sv, and B2), after each of five births among them, picked up and carried the soft toy around for a week to several months (sometimes until the toy fell apart) (Fig. 3). The other two females who were offered toys (monkeys Ug and Sa), right after three births between them, showed no interest in any toy nor any distress after waking up from anesthesia. After one of the toy-adopting births, I had left both a soft toy and a similar-sized rigid pink baby doll in the monkey’s enclosure, and after another a Kong toy along with the soft toy; in both cases the soft toy was picked up and carried around, not the rigid baby doll or the hard Kong. In one case a brown Beanie Baby (a “monkey”) and a reddish one (an “orangutan”) were offered simultaneously, and the redder one was chosen and carried around for months (Fig. 3, Bottom).
These stuffed toys matched a normal infant only in size, color, texture, and crude shape but did not possess any other infant characteristics such as odor, vocalization, movement, grasping, or suckling.

Fig. 3. Monkey Sv with adopted soft toys after her first parturition (Top) and her second (Bottom). The top row shows her still carrying around a toy 3 wk postparturition. The leftmost panel shows a red Kong that was not chosen, and the rightmost panel shows her carrying the toy on her hips, a typical maternal behavior, but, as with live infants, the mother usually quickly grabs the infant back to her chest whenever anyone approaches, so it was difficult to get a picture of this. The bottom row shows the same monkey 3 wk after her second parturition; she chose this reddish toy over a brown one on the morning after birth.

On one of the toy-adopting occasions described above, the mother, monkey B2, was initially anesthetized at 7 AM to remove the infant, and she “adopted” the soft toy as soon as she woke up (Fig. 4). She was anesthetized again at 11 AM because of a retained placenta, so the veterinarians could administer oxytocin and perform manual massage. Because the mother did not expel the placenta that day, the veterinary staff suggested returning the live infant to the mother overnight so the infant’s suckling could help expel the placenta. So, around 5 PM I brought the infant to the mother’s enclosure and placed it on a shelf just above where the mother was sitting, holding her stuffed toy. The mother looked back and forth between the toy she was holding and the wiggling, squeaking infant, and eventually moved to the back of her enclosure with the toy, leaving the lively infant on the shelf.§ Thus, the mother’s attachment to the stuffed toy that she had been holding for 6 h must have been stronger than her attraction to the real infant that she would have been holding for the few hours before lights-on at 7 AM.
We were surprised that the auditory and visual cues emitted by the live infant did not convince the mother that she should trade the toy for the infant. It may be that she had already imprinted on the stuffed toy during the day and was subsequently unreceptive to any substitute. Alternatively, possession may play a role in sustaining attachment; we could have tested this by removing the stuffed toy and presenting the toy and the real infant simultaneously, but did not, in order to avoid any potential aggressive behavior toward the infant.

Three postpartum female macaques displayed strong, sustained attachment to a small soft toy on five separate postpartum episodes. Two females did not. Thus, maternal attachment to an inanimate toy is not a rare occurrence. This attachment cannot be attributed to differences in rearing, because two of the toy-adopting monkeys were reared by their monkey mothers (Figs. 1 and 4) and the third toy-adopting monkey was hand-reared by laboratory staff. Of the two females who did not adopt a toy and did not display any distress when they woke up after infant removal, one was hand-reared and the other reared by her monkey mother. After each of the seven live births, the infant was found clinging to the mother at lights-on in the morning. It is unknown how long before lights-on the births occurred, and their timing could have influenced any bonding, though the lack of distress shown by the mothers who did not adopt a toy suggests a weak bond. Mothers who have nurtured infants for more than a few days show distress when the infants are removed for testing or procedures, and they aggressively grab the infants when they are returned.
Furthermore, a stuffed toy was not an acceptable substitute for two mothers of week-old infants when those infants were briefly removed from the mother for procedures; thus, presumably, the mothers had by then formed attachments to their own living infants.

Carrying of a dead infant’s corpse by its mother has been observed in more than a dozen nonhuman primate species, including macaques (21–23). Carrying behavior occurs in ∼20% of macaque stillbirths or infant deaths and usually lasts for only a few days (22), though it can persist longer. Anthropomorphizing this behavior as “grief” may be misguided; possibly these animals were as satisfied with the corpses as our macaques were with their soft toys.

The sparseness of the template for triggering maternal behavior is surprising, but a broadly specified target, coupled with learning, would be a mechanism for ensuring flexible nurturing behavior: Once the nurturing target is fixed on, experience can adjust, refine, and maintain the template for recognizing the target of maternal attachment. Harlow and Zimmermann (12) found that soft texture is critical for the attachment of infant monkeys to inanimate surrogate mothers, and that soft texture is even more important for fostering an infant’s attachment than is providing milk. Their work helped lead to a transformation in child-rearing philosophy, so that nowadays parents are encouraged to hold and cuddle their children; to do otherwise would now be considered cruel. My observations indicate that the postpartum maternal attachment drive can also be satisfied by holding a soft inanimate object.
The calming effect of the toy on monkey Ve was dramatic, and using such surrogates may be a useful technique for relieving the stress associated with infant death or removal in captive primates (24).

Although there is no way of knowing the extent to which these observations bear on human maternal bonding, or on other kinds of bonding, they do suggest that soft touch may be calming, therapeutic, and perhaps even psychologically necessary throughout the lifetime, not just in infancy. These results also suggest, at least to me, that attachment bonds, even those that seem to be based on complex, unique, or sophisticated qualities, may actually be based on, or at least triggered by, simple sensory cues.

19.
Macrocycles, formally defined as compounds that contain a ring of 12 or more atoms, continue to attract great interest due to their important applications in the physical, pharmacological, and environmental sciences. In syntheses of macrocyclic compounds, promoting intramolecular over intermolecular reactions in the ring-closing step is often a key challenge. Syntheses of macrocycles with stereogenic elements pose an additional challenge, even as access to such macrocycles is of great interest. Herein, we report the remarkable effect peptide-based catalysts can have in promoting efficient macrocyclization reactions. We show that the chirality of the catalyst is essential for promoting favorable, matched transition-state relationships that favor macrocyclization of substrates with preexisting stereogenic elements; curiously, the chirality of the catalyst is essential for successful reactions even though no new static (i.e., not “dynamic”) stereogenic elements are created. Control experiments involving either achiral variants of the catalyst or the enantiomeric form of the catalyst fail to deliver the macrocycles in significant quantity in head-to-head comparisons. The generality of the phenomenon, demonstrated here with a number of substrates, stimulates analogies to enzymatic catalysts that produce naturally occurring macrocycles, presumably through related, catalyst-defined peripheral interactions with their acyclic substrates.

Macrocyclic compounds are known to perform a myriad of functions in the physical and biological sciences. From cyclodextrins that mediate analyte separations (1) to porphyrin cofactors that sit in enzyme active sites (2, 3) and to potent, biologically active macrocyclic natural products (4) and synthetic variants (5–7), these structures underpin a wide variety of molecular functions (Fig. 1A). In drug development, such compounds are highly coveted, as their conformationally restricted structures can lead to higher affinity for the desired target and often confer additional metabolic stability (8–13). Accordingly, there exists an entire synthetic chemistry enterprise focused on efficient formation and functionalization of macrocycles (14–18).

Fig. 1. (A) Examples of macrocyclic compounds with important applications. HCV, hepatitis C virus. (B) Use of chiral ligands in metal-catalyzed or -mediated stereoselective macrocyclization reactions. (C) Remote desymmetrization using guanidinylated ligands via Ullmann coupling. (D) This work: use of copper/peptidyl complexes for macrocyclization and exploration of matched and mismatched effects.

In syntheses of macrocyclic compounds, the ring-closing step is often considered the most challenging step, as competing di- and oligomerization pathways must be overcome to favor the intramolecular reaction (14). High-dilution conditions are commonly employed to favor macrocyclization of linear precursors (19). Substrate preorganization can also play a key role in overcoming the otherwise high entropic barriers associated with multiple conformational states that are not suited for ring formation. Such preorganization is most often achieved in synthetic chemistry through substrate design (14, 20–22). Catalyst or reagent controls that impose conformational benefits favoring ring formation are less well known.
Yet, critical precedents include templating through metal–substrate complexation (23, 24) and catalysis by foldamers (25), by enzymes (26–29), or, in rare instances, by small molecules (discussed below). Characterization of biosynthetic macrocyclization also points to related mechanistic issues and attributes for efficient macrocyclizations (30–34). Coupling macrocyclization reactions to the creation of stereogenic elements is also rare (35). Metal-mediated reactions have been applied toward stereoselective macrocyclizations wherein chiral ligands transmit stereochemical information to the products (Fig. 1B). For example, atroposelective ring closure via Heck coupling has been applied in the asymmetric total synthesis of isoplagiochin D by Speicher and coworkers (36–40). Similarly, atroposelective syntheses of (+)-galeon and other diarylether heptanoid natural products were achieved via Ullmann coupling using N-methyl proline by Salih and Beaudry (41). Finally, Reddy and Corey reported the enantioselective syntheses of cyclic terpenes by In-catalyzed allylation utilizing a chiral prolinol-based ligand (42). While these examples collectively illustrate the utility of chiral ligands in stereoselective macrocyclizations, such examples remain limited.

We envisioned a different role for chiral catalysts when addressing intrinsically disfavored macrocyclization reactions. When unfavorable macrocyclization reactions are confronted, we hypothesized, a catalyst–substrate interaction might provide transient conformational restriction that could promote macrocyclization. To address this question, we chose to explore whether a chiral catalyst-controlled macrocyclization might be possible with peptidyl copper complexes. In the context of the medicinally ubiquitous diarylmethane scaffold, we had previously demonstrated the capacity for remote asymmetric induction in a series of bimolecular desymmetrizations using bifunctional, tetramethylguanidinylated peptide ligands.
For example, we showed that peptidyl copper complexes were able to differentiate between the two aryl bromides during C–C, C–O, and C–N cross-coupling reactions (Fig. 1C) (43–45). Moreover, in these intermolecular desymmetrizations, a correlation between enantioselectivity and conversion was observed, revealing the catalyst’s ability to perform not only enantiotopic group discrimination but also kinetic resolution on the monocoupled product as the reaction proceeds (44). This latter observation stimulated our speculation that if an internal nucleophile were present to undergo intramolecular cross-coupling to form a macrocycle, stereochemically sensitive interactions (so-called matched and mismatched effects) (46) could be observed (Fig. 1D). Ideally, we anticipated that transition state–stabilizing interactions might prove decisive in matched cases, and that the absence of catalyst–substrate stabilizing interactions might account for the absence of macrocyclization in these otherwise intrinsically unfavorable reactions. Herein, we disclose the explicit observation of these effects in chiral catalyst-controlled macrocyclization reactions.

20.