Similar documents
1.
We examine in Drosophila a group of ∼35 ionotropic receptors (IRs), the IR20a clade, about which remarkably little is known. Of 28 genes analyzed, GAL4 drivers representing 11 showed expression in the larva. Eight drivers labeled neurons of the pharynx, a taste organ, and three labeled neurons of the body wall that may be chemosensory. Expression was not observed in neurons of one taste organ, the terminal organ, although these neurons express many drivers of the Gr (Gustatory receptor) family. For most drivers of the IR20a clade, we observed expression in a single pair of cells in the animal, with limited coexpression, and only a fraction of pharyngeal neurons are labeled. The organization of IR20a clade expression thus appears different from the organization of the Gr family or the Odor receptor (Or) family in the larva. A remarkable feature of the larval pharynx is that some of its organs are incorporated into the adult pharynx, and several drivers of this clade are expressed in the pharynx of both larvae and adults. Different IR drivers show different developmental dynamics across the larval stages, either increasing or decreasing. Among neurons expressing drivers in the pharynx, two projection patterns can be distinguished in the CNS. Neurons exhibiting these two kinds of projection patterns may activate different circuits, possibly signaling the presence of cues with different valence. Taken together, the simplest interpretation of our results is that the IR20a clade encodes a class of larval taste receptors.

Olfaction and taste are mediated by receptors of widely diverse families (1, 2). Studies of receptor expression have been critical to our understanding of chemosensory perception. Historically, the identification of several classes of receptors has been based largely on their expression patterns, with functional validation not becoming available until years later. Studies of receptor expression have informed our understanding of the principles of chemosensory coding. In some cases, analysis of receptor expression has suggested, and subsequently revealed, complex and elegant mechanisms of receptor gene regulation. Finally, in many cases, elucidation of receptor expression patterns has allowed chemosensory stimuli of particular ecological, evolutionary, or behavioral significance to be assigned to individual receptors.

The Drosophila larva offers major advantages as an organism in which to study the molecular and cellular basis of taste. The larval taste system is relatively simple and can be investigated with incisive molecular and genetic approaches. Understanding the molecular and cellular mechanisms by which Drosophila larvae evaluate potential food sources may suggest means of manipulating the feeding of other insect larvae, some of which consume agricultural crops and collectively cause immense damage to the world's agricultural output (3).

The head of the Drosophila larva contains three external chemosensory organs (4) (Fig. 1). The dorsal organ (DO) is innervated by the dendrites of 21 olfactory neurons and nine gustatory neurons. The terminal organ (TO) and ventral organ contain the dendrites of ∼21 and approximately seven gustatory neurons, respectively.

Fig. 1. Chemosensory organs in the larval head and pharynx. VO, ventral organ. We have depicted the VPS as anterior to the DPS, but they are close and their apparent relative positions depend on the viewing angle. The DPO is more difficult to identify than the other organs, and its position relative to the DPS and PPS may depend on the larval stage; we have not depicted neural processes for it.

There are also internal chemosensory organs lining the pharynx, each existing as a bilaterally symmetrical pair: the dorsal, ventral, and posterior pharyngeal sensilla (DPS, VPS, and PPS, respectively) (5–7) (Fig. 1). Each organ contains ∼17, 16, and 6 neurons, respectively, most of which are likely to be gustatory (5). Another organ, the dorsal pharyngeal organ (DPO), contains five neurons (3, 5, 8). A variety of other neurons in the body wall of the thorax and abdomen, and at the posterior tip of the larva, are also likely to be chemosensory (9–11).

The Gustatory receptor (Gr) family comprises 60 genes (12, 13). Expression analysis of the Gr genes using the GAL4-UAS system has shown that 39 of the predicted proteins are likely to be expressed in the TO, DPS, VPS, or PPS of the larva (11, 14, 15). However, a receptor-to-neuron map of the TO neurons suggested that many TO neurons did not express any Gr genes, consistent with the notion that some larval taste neurons may express other kinds of taste receptors (15).

The Ionotropic receptor (IR) family comprises 60 genes, of which members of one clade encode odor receptors (16). Another clade of 35 IR genes, called the IR20a clade, was recently shown to be expressed in gustatory neurons of Drosophila adults (17). Analysis of GAL4 drivers of 28 genes of the clade revealed expression of 16 drivers in adult taste neurons, collectively representing all taste organs of the fly. Virtually nothing is known of their expression in larvae.

Here, we carry out a systematic expression analysis of the IR20a clade in the larval gustatory system. We find that 11 of the GAL4 drivers show expression in larval gustatory organs. Seven drivers are expressed in the DPS, with different drivers expressing in different DPS neurons, and one of these drivers is also expressed in the VPS. Another driver is expressed in the DPO; another is expressed in nonneuronal cells of the TO; and three are expressed in the body wall, where they are associated with sensory hairs, sensory cones, and trachea. The neurons that express the drivers show different projection patterns in the larval CNS. Some drivers show dynamic expression patterns over the course of development. The simplest interpretation of the results is that the IR20a clade encodes a class of larval taste receptors.

2.
Physically distinguishable microdomains associated with various functional membrane proteins are one of the major current topics in cell biology. Glycosphingolipids present in such microdomains have been used as "markers;" however, the functional role of glycosyl epitopes in microdomains has received little attention. In this review, I have tried to summarize the evidence that glycosyl epitopes in microdomains mediate cell adhesion and signal transduction events that affect cellular phenotypes. Molecular assemblies that perform such functions are hereby termed "glycosynapse" in analogy to "immunological synapse," the membrane assembly of immunocyte adhesion and signaling. Three types of glycosynapses are so far distinguishable: (i) Glycosphingolipids organized with cytoplasmic signal transducers and proteolipid tetraspanin with or without growth factor receptors; (ii) transmembrane mucin-type glycoproteins with clustered O-linked glycoepitopes for cell adhesion and associated signal transducers at lipid domain; and (iii) N-glycosylated transmembrane adhesion receptors complexed with tetraspanin and gangliosides, as typically seen with the integrin-tetraspanin-ganglioside complex. The possibility is discussed that glycosynapses give rise to a high degree of diversity and complexity of phenotypes.

3.
Molecular imaging enables visualization of specific molecules in vivo and without substantial perturbation to the target molecule's environment. Glycans are appealing targets for molecular imaging but are inaccessible with conventional approaches. Classic methods for monitoring glycans rely on molecular recognition with probe-bearing lectins or antibodies, but these techniques are not well suited to in vivo imaging. In an emerging strategy, glycans are imaged by metabolic labeling with chemical reporters and subsequent ligation to fluorescent probes. This technique has enabled visualization of glycans in living cells and in live organisms such as zebrafish. Molecular imaging with chemical reporters offers a new avenue for probing changes in the glycome that accompany development and disease.

4.
Most high-profile disasters are followed by demands for an investigation into what went wrong. Even before they start, calls for finding the missed warning signs and an explanation for why people did not “connect the dots” will be common. Unfortunately, however, the same combination of political pressures and the failure to adopt good social science methods that contributed to the initial failure usually lead to postmortems that are badly flawed. The high stakes mean that powerful actors will have strong incentives to see that certain conclusions are—and are not—drawn. Most postmortems also are marred by strong psychological biases, especially the assumption that incorrect inferences must have been the product of wrong ways of thinking, premature cognitive closure, the naive use of hindsight, and the neglect of the comparative method. Given this experience, I predict that the forthcoming inquiries into the January 6, 2021, storming of the US Capitol and the abrupt end to the Afghan government will stumble in many ways.

In the wake of high-profile disasters like the terrorist attacks on September 11, 2001, the destruction of the Challenger space shuttle, or the discovery that Iraq did not have active programs to produce weapons of mass destruction (WMD) in the years before the 2003 invasion, there are demands for an investigation into what went wrong with intelligence and policy making. Even before they start, calls for finding the missed warning signs and an explanation for why people did not "connect the dots" will be common, along with the expectation that a good inquiry will lead to changes that will make us much safer. Unfortunately, however, the same combination of political pressures and the failure to adopt good social science methods that contributed to the initial failure usually produces postmortems that are badly flawed, even if they produce some good information. This leads me to predict that the inquiries into the January 6, 2021, storming of the US Capitol and the abrupt end to the Afghan government in August 2021 will stumble in many ways. [Exceptions to this otherwise dreary pattern—studies that are better done—are generally produced by researchers who have had more social science education, are highly skilled, or who do the task well after the events, creating more room for perspective (1–3). There is also some evidence that organizations conducting routine postmortems, such as in the investigation of transportation accidents and medical mishaps, do it better (4–8).]

I will look most closely at the American postmortems conducted over the past decade that examined major foreign policy failures. Because they were salient to the polity and generously funded, it is reasonable to expect they would have been done as well as possible. I have omitted only the congressional reports on the attack on the American diplomatic outpost at Benghazi on September 11 to 12, 2012, and the Mueller report on whether the Trump campaign conspired with Russia during the 2016 election and subsequently sought to obstruct the investigation. The former were so driven by the politics of attacking or defending Hillary Clinton, who was Secretary of State at the time and later the Democratic candidate for president in 2016, that to take them as serious attempts at unraveling what happened would be a strain. The Mueller report was largely a fact-finding and legal document and so did not have the same purpose of understanding the events and the causal relationships at play. I believe that I have avoided the trap, discussed below, of only looking at cases that are likely to support my argument.

There is no simple recipe for a successful postmortem, but there are roadmaps if not checklists that can help us judge the ones that have been done. To start with, humility in a double sense is in order. Not only is it likely that the case under consideration will be a difficult one, which means that the correct judgments were not likely to have been obvious at the time, but even later conclusions are likely to be disputable. A good postmortem then recognizes the ambiguities of the case, many of which may remain even after the best retrospective analysis.

In this, it is important to separate judgments about why incorrect conclusions were reached from evaluations of the thinking and procedures that were involved.
People can be right for the wrong reasons, and, even more troubling, can be wrong for the right reasons.

In the analysis of why the contemporary judgments were reached and the causes of the errors that are believed to have been involved, standard social science methodology points to the value of the comparative method in trying to see whether the factors believed to have marred the effort were also present when the outcome was better. Related, it is not enough for postmortems to locate bits of evidence that are consistent with the explanations that they are providing; they must also try to see whether this evidence is inconsistent with alternative views. Common sense embodies cognitive shortcuts that need to be disciplined to avoid jumping to conclusions and seeing the evidence as pleasingly clear and consistent with favored views.

It is of course easy—or at least easier—to be wise after the fact, something a good postmortem has to recognize and internalize. It is not only unfair to the contemporary actors but an impediment to good retrospective understanding to seize on evidence that now seems to be crucial or interpretations that we now think to be correct without taking the next step of analyzing whether and why they should have been seen as privileged at the time.

The very fact that a disastrous failure occurred despite the existence of individuals and organizations designed to provide warning and take appropriate action indicates that the meaning of the course of events is not obvious, and this widens the space for the political and psychological biases that I will discuss below. Sometimes there may be bits of vital information that were not gathered, were overlooked, or blatantly misinterpreted, or there may have been outright incompetence, but most organizations are better than that. This means that in many instances, the case under examination will be an exception to many of our generalizations about how the world works (9, 10). To take a case I studied for the CIA, the Iranian revolution of 1979 is an exception to what political scientists and policy makers believe, which is that leaders who enjoy the support of their security forces will not be overthrown by mass movements. Indeed, the universal assumption behind postmortems that analysts and decision makers should have gotten it right may not be correct. As I will discuss later, we live in a probabilistic universe and sometimes what happens is very unlikely, in which case it is far from clear that the decision makers should have acted differently. Such a conclusion is almost always psychologically and politically unacceptable if not unimaginable, however, and is almost never considered, let alone asserted as correct. The fact that the explanation for the events is not obvious also means that postmortems that are conducted without more attention to good methodology than was true of the analysis that produced the policy and the underpinning beliefs are likely to fall into traps similar to those that played a role in the failure itself.

The area of my own expertise, international politics, presents three additional reasons why it is difficult both for contemporary actors to judge their environments and for postmortems to do better. Although these issues are not unique to international politics, they arise with great frequency there.
First, it is often hard to understand why others are behaving as they are, especially when we are dealing with actors (individuals, organizations, or governments) who live in very different cultural and perceptual worlds from us. One of the other postmortems that I did for the US intelligence community (IC) was the failure to recognize that Saddam Hussein did not have active WMD programs in 2002 (11). The belief that he did rested in part on the fact that he had expelled the United Nations weapons inspectors at great cost to his regime. We now know that, contrary to what was believed not only by the United States, but by almost all countries, the reason behind Saddam's decision to do so was not that he was hiding his programs. In retrospect and with access to interviews and an extensive documentary record, it is generally believed that Saddam felt he had to pretend to have WMD in order to deter Iran (12, 13). Even this explanation has been disputed (14, 15), however, which underscores the point that contemporary observers face very difficult problems when it comes to understanding the behavior of others, problems that may not be readily resolved even later with much more and better evidence. Iraq may be a particularly difficult case, but it is telling that more than 100 y later and with access to all existing records (some have been destroyed) (16), historians still debate the key issues related to the origins of World War I, especially the motives and intentions of Germany and Russia. In fact these debates largely mirror the ones that occurred among policy makers in 1914. It is very hard to get inside others' heads.

The second problem is that many situations that lead to postmortems involve competition and conflict. In the classical account (17, 18), the actors are in strategic interaction in that each tries to anticipate how others will act, knowing that others are doing likewise. This poses great intellectual difficulties for the actors and is one reason that policies can fail so badly that postmortems follow. These interactions also pose difficulties for the postmortems themselves. In some cases, leaders behave contrary to the theory and act as though they were playing a game against nature rather than against a protagonist in strategic interaction (19). When this is not the case, tracing how the actor expected others to behave is often difficult because decision makers rarely gratify historians by spelling out their thinking. Additionally, antagonists in conflict often have good reason to engage in concealment and deception (20–23). Of course actors understand this and try to penetrate these screens. But success is not guaranteed and so errors are common and can result in disastrous policy failures. Furthermore, the knowledge that concealment and deception are possible can lead the actor to discount accurate information. This was the case in the American judgment that Iraq had WMD programs at the start of the 21st century. Intelligence analysts knew that Iraq had been trained by the Soviet Union in elaborate concealment and deception techniques and they plausibly—but incorrectly—believed that this explained why they were not seeing more signs of these programs (24).
In retrospect these puzzles are easier to unravel, but that does not mean they are always easy to solve and deception can pose challenges to postmortems.

A third problem both for contemporary decision makers and for those conducting postmortems is that when mass behavior plays an important role, events can be subject to rapid feedback that is difficult to predict. Revolutions, for example, can be unthinkable until they become inevitable, to borrow the subtitle of a perceptive book about the overthrow of the Shah of Iran (25). That is, in a situation in which a large portion of the population opposes a dictatorial regime backed by security forces, many people will join mass protests only when they come to believe that these protests will be so large that the chance of success is high and that of being killed by participating is low. And the bigger the protests one day, the greater the chances of even larger ones the next day because of the information that has been revealed. Related dynamics were at work with the disintegration of the Afghan security forces in August 2021. But the tipping points (26–28) involved are hard to foresee at the time and only somewhat less difficult to tease out in retrospect.

The difficulties of the task of conducting adequate postmortems make it easier for biases to play a large role. In prominent cases the political needs and preferences of powerful groups and individuals come in, sometimes blatantly. When President Lyndon Johnson established the Warren Commission to analyze the assassination of his predecessor, he made it clear to Chief Justice Earl Warren and other members that any hint that the USSR or Cuba were involved would increase the chance of nuclear war. In parallel, he did not object when Allen Dulles, former Director of the CIA and member of the commission, withheld information on the plots to assassinate Fidel Castro and on Lee Harvey Oswald's contacts with Cuba since knowledge of them would point to the possibility that the Cuban leader was involved (29). When the space shuttle Challenger exploded a little more than a minute after liftoff, President Ronald Reagan similarly appointed a national commission chaired by former Secretary of State William Rogers to get to the bottom of what happened. But Rogers understood that the program was essential to American prestige and the heated competition with the Soviet Union and so while the commission looked at the technical problems that caused the disaster, it did not probe deeply into the organizational and cultural characteristics of NASA that predisposed it to overlook potentially deadly problems. It also shied away from acknowledging that the complex advanced technology incorporated into the program made it essentially experimental and that even with reforms another accident was likely (30). It took a superb study by organizational sociologists to elucidate these issues (30, 31), and in an example of the impact of organizational politics, NASA ignored this situation until it suffered another disaster with the shuttle Columbia.

Politics can also limit the scope of the inquiry. When the bipartisan 9/11 Commission decided that its report would be unanimous, this had the effect if not the purpose of preventing a close examination of the policies of the George W. Bush administration.
The public record was quite clear: President Bush and his colleagues believed that the main threat to the United States came from other powerful states, most obviously China and Russia, and terrorism was not only a secondary concern, but could only be significant if it was supported by a strong state. It is then not surprising that al Qaeda received little high-level attention in the first 9 mo of the administration. While incorrect, I do not believe this approach was unreasonable, a product of blind ideology, or the result of the rejection of everything the previous administration had done. But it was an important part of the story, and one that could not be recounted if the report was to be endorsed by all its members.

Sometimes the political bias is more subtle, as in the Senate report on the program of Rendition, Detention, and Interrogation (RDI) involving secret prisons and torture that the Bush administration adopted in the wake of the terrorist attacks of 9/11. According to the executive summary (the only part of the report that is declassified), the Democratic majority of the Senate Select Committee on Intelligence (SSCI) concluded that congressional leaders (including the chair of the SSCI) were kept in the dark about the program and that it did not produce information that was necessary to further the counterterrorism effort, including tracking down Osama bin Laden's hiding place (32). These conclusions are very convenient: the SSCI and Congress do not deserve any blame and because torture is ineffective the United States can abjure it without paying any price. If the report had found torture to be effective, even on some occasions, it would have made the committee, the government, and the American public face a trade-off between safety and morality, and obviously this would have been politically and psychologically painful. It was much more comfortable to say that no such choice was necessary.

As a political scientist, not only am I not shocked by this behavior, but I also believe that it is somewhat justifiable. Keeping tensions with the Soviet Union and Cuba under control in the wake of Kennedy's assassination was an admirable goal, and Reagan's desire to protect the space program could also be seen as in the national interest. The SSCI was more narrowly political, but it is not completely illegitimate for political parties to seek advantage. What is crucial in this context, however, is that the politicization of the postmortems limits their ability to explain the events under consideration. This is not to say that many of their conclusions are necessarily incorrect. Although Johnson suspected otherwise, Oswald probably did act alone (33); cold weather did cause the shuttle's O rings to become rigid and unable to provide a protective seal. But it is unlikely that the relevant congressional leaders were not informed about the RDI program. More importantly, the claim that even had torture not been used the evidence could and should have yielded the correct conclusions, while impossible to disprove, was not supported by valid analytical methods, as I will discuss below.

Both politics and psychology erect barriers to the consideration of the argument that a conclusion that in retrospect is revealed to have been disastrously wrong may have been appropriate at the time. Given the ambiguity and incompleteness of the information that is likely to be available, the most plausible inferences may turn out to be incorrect (34).
Being wrong does not necessarily mean that a person or organization did something wrong, but any postmortem that reached this conclusion would surely be scorned as a whitewash. This may be the best interpretation of the intelligence findings on Iraq leading up to the Second Gulf War, however. Although the IC's judgment that Iraq had active WMD programs in 2002 was marred by excessive confidence and several methodological errors, including the analysts' lack of awareness that their inferences were guided at least as much by Saddam's otherwise inexplicable expulsion of international inspectors as by the bits of secret information that they cited as reasons for their conclusions (35), it was more plausible than the alternative explanations that now appear to be correct. It is telling that the official postmortems and the journalistic accounts do not reject this argument; they do not even consider it. The idea that a disastrously wrong conclusion was merited is deeply disturbing and not only conflicts with our desire to hold people and organizations accountable, but clashes with the "just world" intuition that even though evil and error are common, at bottom people tend to get what they deserve (36–38) and our sense that well-designed organizations with excellent staffs should be able to understand the world.

To put this another way, people tend to be guided by the outcomes when judging the appropriateness of the procedures and thinking behind analysis and policies. If the conclusions are later revealed to be correct or if the policies succeed, there will be a strong presumption that everything was done right. While intuitively this makes sense, and it would be surprising if there were no correlation between process and outcome, the correlation is not likely to be 1.0. Toward the end of the deliberations on whether the mysterious person that the United States located in a hideout in Abbottabad really was Osama bin Laden, Michael Morell, Deputy Director of the CIA, said that the case for this was much weaker than that for the claim that Saddam had active WMD programs in 2002 (39). Although this assessment is disputable and by necessity subjective, Morell was very experienced in these matters and his argument is at minimum plausible.

The quality of postmortems is impeded by the almost universal tendency to ignore the fact that people may be right for the wrong reasons and wrong for the right ones. For example, the SSCI praised those elements of the IC who dissented from the majority opinion that Iraq had active WMD programs without investigating the quality of their evidence or analysis (40). It was assumed rather than demonstrated that the skeptics looked more carefully, reasoned more clearly, or used better methods than did those who reached less accurate conclusions. To turn this around, this and many other postmortems failed to use standard social science comparative methods to probe causation but instead criticized those who were later shown to be wrong for being sloppy or using a faulty approach without looking at whether those who were later shown to be right followed the same procedures (41).

Both the analysis being examined and the subsequent postmortems are prone to neglect the comparative method in another way as well. They usually look at whether a bit of evidence is consistent with the favored explanation without taking the next step of asking whether it is also consistent with alternative explanations.
Just as CIA analysts seized on the fact that trucks of a type previously associated with chemical weapons were being used at suspicious sites and did not ask themselves whether an Iraq without active chemical production would find other uses for the trucks (42), so the SSCI report did not consider that the errors they attributed to bad tradecraft could also be explained by political pressures (which is not to say that the latter explanation is in fact correct), and those who argued for the importance of these pressures did not look at the other areas of intelligence, especially on the links between Saddam and al Qaeda, to see whether the same pressures had the expected effect (in fact, they did not). Here factors were judged to be causal without looking at other cases in which the factors were present to see if the outcomes were the same.

A related set of failings, and ones that are central, include the hindsight fallacy, cherry picking of evidence, confirmation bias, and ignoring the costs of false positives. These are epitomized by what I noted at the start—in the aftermath of a disaster, people ask how the decision makers and analysts could have been so wrong, why the dots remained unconnected, and why warning signs were missed or discounted. Hindsight bias is very strong (43, 44); when we know how an episode turned out we see it as predictable (and indeed often believe that we did predict it) if not inevitable. Just as we assimilate new information to our preexisting beliefs (45, 46), once we know the outcome, when we go back over the information or look at old information for the first time (as is often the case for those conducting postmortems), there is a very strong propensity to see that it unequivocally points to the outcome that occurred. In a form of confirmation bias and premature cognitive closure that comes with our expectations about what we are likely to see, knowledge of how events played out skews our perceptions of how informative the data were at the time. The problematic way of thinking compounds because once we think we know the answer, we search for information that fits with it and interpret ambiguous evidence as supportive.

This is not to suggest that good postmortems should ignore the light shed by the outcome and what in retrospect appears to be the correct view. Without this, after all, later analysts would have no inherent advantages over contemporary ones. But authors of postmortems must struggle against the natural tendency to weigh the evidence by what we now know—or believe—to be correct. Given all the information that flows into the organization, it is usually easy to pick out reports, anecdotes, and data that pointed in the right direction. But to do this is to engage in cherry picking unless we can explain why at the time these indicators should have been highlighted and interpreted as we now do and why the evidence relied on at the time should have been ignored or seen differently. As Roberta Wohlstetter pointed out in her path-breaking study of why the United States was taken by surprise by the Japanese attack on Pearl Harbor (47), most of the information is noise (if not, in competitive situations, deception), and the problem is to identify the signals. To use the language that became popular after 9/11, to say that analysts failed to connect the dots is almost always misleading because there are innumerable dots that could be connected in many ways.
For example, official and journalistic analyses of 9/11 stress the mishandling of information about the suspicious behavior of some people attending flight instruction schools. True, but if the attack had been by explosive-laden trucks I suspect that we would be pointing to suspicious behavior at trucking schools.

Fighting against the strong attraction of knowing how the dots were in fact connected is a difficult task, and one that must be recognized at the start of a postmortem lest it make debilitating errors. A prime example is the SSCI majority report on the RDI program mentioned above. The grounds for the conclusion that torture was ineffective were that the correct conclusions could have been drawn from the mass of information obtained from other sources, especially interrogations that used more benign techniques. The problem is not that this claim is necessarily incorrect, but that it commits the hindsight fallacy and cherry picks. Knowing the right answer, one can find strong clues to it in the enormous store of data that was available to the analysts. But this information could have yielded multiple inferences and pictures; to have established its conclusion the postmortem would have to show why what we now believe to be correct was more plausible and better supported than the many alternatives (48, 49). Such an argument will always be difficult and subject to dispute, but like most retrospective analyses, the SSCI report did not even recognize that this was necessary.

The frequently asked question of what warning signs were missed points to the related problem of the failure to recognize the potential importance of false positives. This comes up with special urgency after mass shootings and often yields a familiar and plausible list of indicators: mental health problems, withdrawal from social interactions, expressed hostility, telling others that something dramatic will happen soon. But even if we find that these signs universally preceded mass shootings, this would not be enough to tell us that bystanders and authorities who saw them should have stepped in: looking only at cases of mass shootings (what is known as searching on the dependent variable) makes it impossible to determine the extent to which these supposed indicators differentiate cases in which people go on shooting sprees from those in which they do not. Acting on these indicators could then lead to large numbers of false positives. Economists have the saying that the stock market predicted seven of the last three recessions; those who do postmortems should take heed.

Put differently, the "warning signs" may be necessary but not sufficient conditions for the behavior. Their significance depends on how common they are in the general population, and this we can tell only by looking at the behavior of people who do not commit these terrible crimes. The same point applies to looking for predictors of other noxious behavior, such as terrorism, violent radicalization, and domestic abuse, to mention just several that receive a great deal of attention (or of unusually good behavior like bravery or great altruism).

Examining people who do not behave in these ways or instances that do not lead to the undesired outcomes would be labor intensive and can raise issues of civil liberties. But this is not always the case, and was not in one well-known instance (although this was not a postmortem, but predecisional analysis).
Before the doomed launch of the Challenger, because the engineers knew that the O rings that were supposed to keep the flames from escaping through the boosters' joints might be a problem in cold weather, they provided the higher authorities with a slide showing the correlation between the air temperature and the extent of the damage to the rings in previous launches. Because this showed some but not an overwhelming negative correlation it was not seen as decisive. The slide omitted data for launches in which there was no damage to the rings at all, however. This showed that partial burn-throughs occurred only when the temperature fell below 53 °F, and had these negative cases been displayed the role of cold would have been more apparent.

With these past cases as guides, we can expect that absent self-conscious efforts to counter the biases discussed above, the attempt to understand the failure of the authorities to anticipate and control events of January 6, 2021, will be suboptimal. That is not to say that these postmortems will be without value. There are things to be learned about why communication channels were not clearer, what the barriers to cooperation among the diverse law enforcement organizations were, why there was so little contingency planning, and why the decision to deploy the National Guard was delayed. The analysis of the lack of forewarning, however, is likely to fit the pattern of hindsight bias, cherry picking, and the neglect of comparisons. The media has said that an alarming report from an FBI field station was not passed on and that insufficient attention was paid to the "chatter" on social media (50–53). It is almost certain that a more thorough investigation will turn up additional bits of information that in retrospect pointed to the violence that ensued. If the analysis stops here, however, the obvious conclusion that these indicators should have been heeded will be incorrect. Better methodology points to the next steps of looking at some of the multiple cases in which large-scale violence did not occur or was easily contained in order to ascertain whether the reports preceding January 6 were markedly different. We should also look for indications and reports that the demonstration would be peaceful.

It is not surprising that the fall of Kabul has led to widespread calls for an inquiry into what went wrong. In all probability, however, these postmortems are also likely to be deficient. They will be highly politicized and subject to the hindsight fallacy and related methodological shortcomings. Because the stakes are so high and involve so many different entities and actors, political pressures will be generated not only by the Democrats and Republicans, but also by different parts of the government, especially the military and the civilian intelligence community.

Politics will be involved in how the postmortem is framed. Democrats will want to focus on the intelligence while Republicans will seek a broader scope to include if not concentrate on the decisions that were made. Disputes about the time period to be examined are also likely. Republicans will want to limit the study to the consequences of President Biden's April 14, 2021, announcement that all troops would be withdrawn by September 11; Democrats will want to start the clock earlier, with the Trump administration's February 2020 agreement with the Taliban to withdraw by May 1, 2021.
More specifically, Democrats will want to look for evidence that the Trump agreement led to secret arrangements between the Taliban and local authorities that laid the foundations for the latter's defections in the summer.

Because the issues are so salient and emotion-laden, there will be pressure to have the investigations be entirely disinterested, which means that the people conducting them should not have had deep experience with any of the organizations involved. This impulse is reasonable, but comes at a price because reading and interpreting intelligence reports requires a familiarity with their forms and norms (which differ from one organization to another), and outsiders are at a disadvantage here. The distinction between strategic and tactical warning is easily missed by those new to the subject, as is the need to determine whether and how assessments are contingent (i.e., dependent on certain events occurring or policies being adopted). An understanding of how policy makers (known as "consumers") are likely to interpret intelligence is also important and needs to be factored in. In the case of Afghanistan, as in other military engagements, military intelligence is prone to paint a relatively optimistic picture of the progress that is being made. Experienced consumers understand this and can apply an appropriate discount factor. A further complication is that as the pressure for a full American withdrawal increased, the military's incentives changed, and pessimism about the prospects for the Afghan army in the absence of American support came to the fore. If a postmortem takes these estimates at face value and legitimately faults them for their organizational bias, it would be important to also probe how the estimates were interpreted.

Here as in other cases, the overarching threats to a valuable postmortem are hindsight bias and cherry picking, which will be especially strong because we now more clearly see the weaknesses of the government and the positive feedback that led Taliban victories to multiply. If the study tries to uncover when various consumers and intelligence analysts and organizations concluded that a quick Taliban victory was likely, it will have to confront not only the obvious problem that people have incentives to exaggerate their prescience, but that memories about exactly when certain conclusions were arrived at are especially unreliable when events are coming thick and fast, as was true of the analysts charting the unrest that led to the fall of the Shah.* In retrospect, the more accurate assessments will stand out and there will be strong impulses to claim that at the time they had more support than the alternatives and so should have been believed.

None of this is to say that a well-designed postmortem of either Afghanistan or the events of January 6 would conclude that few errors were made or that the information was analyzed appropriately. But it would provide a more solid grounding for any conclusions, attributions of blame, and proposals for change if blame is the ending rather than the starting point, hindsight and confirmation biases are held in check, and the ambiguity of the evidence and frequent need for painful value trade-offs are recognized. Postmortems are hard and fallible, but the country can ill afford to continue mounting ones that have unnecessary flaws.
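As a footnote to the discussion of false positives and searching on the dependent variable above, the base-rate logic can be made concrete with a small calculation. The sketch below is not drawn from the article: the prevalence, sensitivity, and false-alarm figures, and the function name positive_predictive_value, are hypothetical, chosen only to show how a "warning sign" that precedes nearly every bad outcome can still be swamped by false positives when the outcome itself is rare.

```python
# Illustrative Bayes'-rule calculation: how informative is a "warning sign"
# that almost always precedes a rare event? All numbers are hypothetical.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              false_positive_rate: float) -> float:
    """P(event | warning sign) via Bayes' rule."""
    p_sign = (sensitivity * prevalence
              + false_positive_rate * (1.0 - prevalence))
    return sensitivity * prevalence / p_sign


if __name__ == "__main__":
    prevalence = 1e-5           # the harmful outcome is rare in the population
    sensitivity = 0.95          # the sign precedes almost every bad outcome
    false_positive_rate = 0.01  # but 1% of everyone else also shows the sign

    ppv = positive_predictive_value(prevalence, sensitivity, false_positive_rate)
    # Roughly 0.1%: almost everyone flagged by the sign is a false positive.
    print(f"P(event | sign) = {ppv:.4%}")
```

Looking only at cases where the event occurred would report the 95% figure and hide the false-positive term that dominates the answer, which is the methodological trap the essay describes.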

5.
Autism is a neurodevelopmental disorder that manifests as a heterogeneous set of social, cognitive, motor, and perceptual symptoms. This system-wide pervasiveness suggests that, rather than narrowly impacting individual systems such as affection or vision, autism may broadly alter neural computation. Here, we propose that alterations in nonlinear, canonical computations occurring throughout the brain may underlie the behavioral characteristics of autism. One such computation, called divisive normalization, balances a neuron's net excitation with inhibition reflecting the overall activity of the neuronal population. Through neural network simulations, we investigate how alterations in divisive normalization may give rise to autism symptomatology. Our findings show that a reduction in the amount of inhibition that occurs through divisive normalization can account for perceptual consequences of autism, consistent with the hypothesis of an increased ratio of neural excitation to inhibition (E/I) in the disorder. These results thus establish a bridge between an E/I imbalance and behavioral data on autism that is currently absent. Interestingly, our findings implicate the context-dependent, neuronal milieu as a key factor in autism symptomatology, with autism reflecting a less "social" neuronal population. Through a broader discussion of perceptual data, we further examine how altered divisive normalization may contribute to a wide array of the disorder's behavioral consequences. These analyses show how a computational framework can provide insights into the neural basis of autism and facilitate the generation of falsifiable hypotheses. A computational perspective on autism may help resolve debates within the field and aid in identifying physiological pathways to target in the treatment of the disorder.

Autism is a neurodevelopmental disorder that is dramatically increasing in prevalence (Fig. 1). Recent data place the number of children being diagnosed with autism in the United States at 1 in 68, more than doubling in the last decade (1–4). The disorder is highly pervasive, affecting individuals at cognitive, motor, and perceptual levels. It is furthermore a "spectrum disorder," with symptoms that manifest in varying degrees across individuals. This heterogeneity presents significant challenges to establishing a comprehensive characterization of the disorder.

Fig. 1. Increasing prevalence of autism and research on the disorder. The incidence of autism (black curve) is compiled from studies by Wing and Gould (1), Newschaffer et al. (2), and the Centers for Disease Control and Prevention (3, 4). Paralleling this rapid rise in prevalence is increased research on the disorder. The number of publications in which "autism" appears in any PubMed field (blue curve) is shown for every year from 1946 to 2014.

Research investigating the genetic and molecular basis of autism implicates over 100 genes (5), many of which are involved in synaptic development and function (6–8). As such, one prominent hypothesis is that autism arises from a neurophysiological excitation-to-inhibition (E/I) imbalance (9, 10). However, the connection between an E/I imbalance and the behavioral characteristics of the disorder remains unclear.
Considering the pervasive nature of autism, and the covariance of loosely related symptoms (11–14), one possibility is that an E/I imbalance widely affects neural computation, in turn giving rise to the broad behavioral symptoms recognized as autism.

Here, we propose that autism symptomatology arises from alterations in nonlinear, canonical computations occurring throughout the brain; in particular, divisive normalization, a computation that divides the activity of individual neurons by the combined activity of the neuronal population in which they are embedded. Divisive normalization inherently reflects the E/I balance, and is implicated in a wide range of processes ranging from sensory encoding to decision making (15–17). Using neural network simulations, we show that a reduction in the amount of inhibition that occurs through divisive normalization can account for perceptual consequences reported in the disorder, providing a bridge between an E/I imbalance and the behavioral characteristics of autism. The simulations further establish a link between divisive normalization and high-level theories about how autism may alter the influence of past experience on the interpretation of current sensory information (18–20). A key result of the simulations is the implication of the neuronal milieu (the contextual environment of neuronal population activity in which neurons are embedded) in autism. Specifically, autism-like symptomatology arises in the model when the influence of the population on the activity of individual neurons is reduced, in essence making the neurons less "social." A broader discussion of behavioral data further suggests that alterations in divisive normalization may contribute to the phenotypic diversity of autism.
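For readers who want to see the shape of this computation, here is a minimal numerical sketch of divisive normalization and of a reduced-normalization variant. It is not the authors' network model: the function normalized_responses, the exponent n, the semisaturation constant sigma, the normalization weight w, and the input values are all illustrative assumptions, meant only to show how weakening the pooled population (inhibitory) term makes a neuron's response larger and less dependent on surrounding activity.

```python
# Minimal sketch of divisive normalization and a reduced-inhibition variant.
# Not the authors' simulation; all parameters below are illustrative.
import numpy as np


def normalized_responses(drive: np.ndarray,
                         n: float = 2.0,
                         sigma: float = 1.0,
                         w: float = 1.0) -> np.ndarray:
    """R_i = drive_i**n / (sigma**n + w * sum_j drive_j**n).

    w scales the pooled normalization signal; w < 1 models a weaker
    contribution of population activity, i.e., a shift toward excitation.
    """
    excitation = drive ** n
    return excitation / (sigma ** n + w * excitation.sum())


if __name__ == "__main__":
    # Response of one target neuron to a fixed input, presented alone or
    # embedded in background population activity ("context").
    target_drive = 1.0
    context = np.full(8, 0.8)  # drive to eight surrounding neurons

    for w, label in [(1.0, "typical normalization"), (0.5, "reduced normalization")]:
        alone = normalized_responses(np.array([target_drive]), w=w)[0]
        embedded = normalized_responses(np.concatenate(([target_drive], context)), w=w)[0]
        suppression = 1.0 - embedded / alone
        print(f"{label}: alone = {alone:.3f}, in context = {embedded:.3f}, "
              f"suppression by context = {suppression:.1%}")
```

In the article's terms, lowering w makes the model neuron less "social": its response is governed more by its own feedforward drive and less by the activity of the population in which it is embedded.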

6.
Insect societies such as those of ants, bees, and wasps consist of one or a small number of fertile queens and a large number of sterile or nearly sterile workers. While the queens engage in laying eggs, workers perform all other tasks such as nest building, acquisition and processing of food, and brood care. How do such societies function in a coordinated and efficient manner? What are the rules that individuals follow? How are these rules made and enforced? These questions are of obvious interest to us as fellow social animals but how do we interrogate an insect society and seek answers to these questions? In this article I will describe my research that was designed to seek answers from an insect society to a series of questions of obvious interest to us. I have chosen the Indian paper wasp Ropalidia marginata for this purpose, a species that is abundantly distributed in peninsular India and serves as an excellent model system. An important feature of this species is that queens and workers are morphologically identical and physiologically nearly so. How then does an individual become a queen? How does the queen suppress worker reproduction? How does the queen regulate the nonreproductive activities of the workers? What is the function of aggression shown by different individuals? How and when is the queen's heir decided? I will show how such questions can indeed be investigated and will emphasize the need for a whole range of different techniques of observation and experimentation.

7.
Previous studies showed that baby monkeys separated from their mothers develop strong and lasting attachments to inanimate surrogate mothers, but only if the surrogate has a soft texture; soft texture is more important for the infant’s attachment than is the provision of milk. Here I report that postpartum female monkeys also form strong and persistent attachments to inanimate surrogate infants, that the template for triggering maternal attachment is also tactile, and that even a brief period of attachment formation can dominate visual and auditory cues indicating a more appropriate target.

Before the 1900s, affection and religion were thought to be the most important factors in child rearing, but starting around 1910 and lasting for at least 30 y there was a major shift toward cleanliness, order, and scientific principles of conditioning and training, motivated by the new behaviorist theory in psychology (1). A survey (2) of the advice found in popular women’s magazines during that era summarized the Zeitgeist as the following:
Mothers were admonished to insist upon obedience at all times, and if temper tantrums resulted, they should be ignored. Together with a more severe attitude toward the child went a new taboo on physical handling. Love, and particularly the physical manifestation of it, was discouraged in most of the articles on infant discipline. It was believed that stimulation of any sort would lead to precocity in the older child and dullness in the man. Furthermore, baby's strength was needed for rapid growing, and picking the baby up deprived him of his strength. Still another reason for discouraging physical contact with baby was the belief that postnatal conditions for the infant should closely approximate prenatal conditions, and since the infant was not handled in the uterus, he should not be handled after birth.
Watson wrote (3): "There are serious rocks ahead for the overkissed child." I distinctly remember in the early 1950s my grandfather admonishing my mother not to "coddle" me; I remember because I thought that was something you did to eggs.

This hands-off view led to sterile, contactless nurseries across many countries. Holding and touching of premature infants or even visiting hospitalized children was generally forbidden, on the grounds that it spread diseases. Orphanages and other children's institutions varied but often adhered to this trend, some even keeping infants in isolated cubicles to prevent the spread of infections. However, in the 1940s evidence began amassing that such institutionalization, or prolonged hospitalization, could hinder children's growth and lead to severe psychological problems (4–6). Both my collaborator, David Hubel, and my husband recollect being hospitalized as young children and not allowed visitors; their mothers could only wave to them through a window in the door. Both found it traumatizing.

Thus, the importance of infant/caregiver attachment began to be recognized, but its biological basis was disputed. The prevailing, behaviorist, view among psychologists and sociologists was that this attachment derived from a learned association between the mother and hunger satiation (7). A less popular theory was that humans and animals have fundamental innate drives beyond the physiological drives of discomfort and hunger, and these drives include attachment to a mother figure (Fig. 4).

Fig. 4. Monkey B2 still carrying her toy 2 mo after parturition. She is the same monkey as the infant in Fig. 1. On the evening of the day of parturition she retained this toy in preference to her own live infant.

Innate visual and auditory triggers underlie imprinting in hatchling birds, which leads to persistent following responses (8), but almost nothing was known about innate mechanisms responsible for the strong and lasting ties mammalian infants form with their mothers. Gertrude Van Wagenen (9) observed that infant macaques, separated from their mothers and fed from tiny nursing bottles, would not feed properly unless they could cling to a soft towel (see also ref. 10). Harry Harlow adopted the towel technique for raising infant macaques and found that they developed strong attachments to the towels and were distressed when the cloths were removed for cleaning (11). These laboratory-reared monkeys were larger and healthier and had a higher survival rate than infants left to the care of their monkey mothers, as long as they had a soft cloth; without the cloth, survival was lower.

To try to disentangle the relative importance of vision, touch, warmth, and nourishment for establishing infant bonding, Harlow and Zimmermann (12) constructed various mother surrogates for infant macaques. Some surrogates were soft and others rigid and unyielding, with or without faces, heated or not, with or without an attached milk bottle. Infants provided with two different surrogates overwhelmingly preferred the cloth surrogate over the rigid surrogate, irrespective of which provided milk or heat or had a face. Infants displayed strong sustained attachment to and derived security from cloth surrogates for more than a year and a half (the duration of the experiment). Infants' attachment to and behavior toward the cloth surrogates was indistinguishable from mother-reared infants' attachment to their monkey mothers (11).
The parallels between the abnormal behaviors of macaque infants reared without any cloths or soft surrogates (13) and the psychological problems often found in children who as infants had been reared in sterile cubicles (4) were instrumental in changing child-rearing practices and hospital visiting rules (14).

Beyond its social importance, Harlow's work shows that infant monkey attachment is based on surprisingly few sensory features, i.e., primarily tactile, and may be established within only a limited time during development, as in imprinting. The complementary attachment, of mothers to their offspring, though widely acknowledged in literature and art, is not much studied. In rodents, olfactory and auditory cues emitted by pups trigger maternal behavior in hormonally primed females (15). Mother ewes form selective attachments to their own lambs via olfactory imprinting (16). Almost nothing is known about the sensory cues involved in generating the strong and lasting attachment of primate mothers to their infants (17). Macaque mothers hold and protect their infants continuously after birth and carry them around, groom them, and treat them protectively for many months (Fig. 1). Given the complex social organization of this species (18), massive visual system with specialized visual domains, comparable to those in humans, for face and body recognition (19), and auditory domains specific for monkey vocalizations (20), one might expect that multiple, complex sensory cues would be involved in mother macaques bonding with their infants.

Fig. 1. Typical maternal behavior of a rhesus macaque. Female is holding (Left), nursing (Center), and protecting her infant from the perceived threat of the author coming near her enclosure (Right). Most monkeys in our colony respond to familiar humans by indicating a desire for a scratch or expectation of a treat; mothers with infants are extra-defensive (10, 18) and will initially show aggression for a few seconds then calm down and accept treats. The infant in these photos is monkey B2, whose behavior as an adult is described below.

Here I report some observations that indicate that primate maternal attachment to the infant may not depend on complex multisensory cues but rather that a single sufficient trigger for maternal behavior (in a hormonally primed female) is tactile. My first observation was of an 8-y-old primiparous female rhesus macaque, monkey Ve. She delivered a stillborn infant. She was holding the lifeless infant to her chest when I first observed her in the morning.* Appropriate veterinary care required that the dead infant be removed and examined, and to accomplish this, she was lightly anesthetized. When she recovered a few minutes later, she exhibited significant signs of distress: She vocalized loudly and constantly and seemed to be searching agitatedly around her enclosure. Other monkeys housed in the same room also began to vocalize and became agitated. To try to reduce the level of stress in the room I placed a stuffed animal in her enclosure. It was a 15-cm-tall, soft, furry toy, a faceless stuffed mouse, chosen because of its availability and its lack of potential choking hazards, such as sewn-on eyes. Monkey Ve immediately picked up the stuffed toy and held it to her chest. She stopped screeching and became calm while holding it, and the whole room quieted down. She held the toy to her chest continuously for more than a week, without any signs of distress.
During this time, she behaved in a manner indistinguishable from other mothers in our colony with live infants, in that she continuously held the toy to her chest, and she exhibited aggressive behavior toward cohoused monkeys and even toward familiar humans when they approached her (Fig. 2). This enhanced level of defensive behavior is characteristic of females with infants (10, 18). About 10 d after parturition she discarded the stuffed toy and showed no further distress. A year later this female delivered and successfully nurtured a second infant.
Fig. 2. Female monkey Ve showing maternal behavior toward a stuffed toy mouse 2 and 3 d postpartum. She continuously holds the toy (Left and Center) and protects it from the perceived threat of the author approaching her home enclosure.
I have offered stuffed toys to five different female monkeys immediately after eight births among them, after removal of the infant. Three of the females (monkeys Ve, Sv, and B2), after each of five births among them, picked up and carried the soft toy around for a week to several months (sometimes until the toy fell apart) (Fig. 3). The other two females who were offered toys (monkeys Ug and Sa), right after three births between them, showed no interest in any toy nor any distress after waking up from anesthesia. After one of the toy-adopting births, I had left both a soft toy and a similar-sized rigid pink baby doll in the monkey’s enclosure, and after another a Kong toy along with the soft toy; in both cases the soft toy was picked up and carried around, not the rigid baby doll or the hard Kong. In one case a brown Beanie Baby (a “monkey”) and a reddish one (an “orangutan”) were offered simultaneously, and the redder one was chosen and carried around for months (Fig. 3, Bottom). These stuffed toys matched a normal infant only in size, color, texture, and crude shape; they did not possess any other infant characteristics such as odor, vocalization, movement, grasping, or suckling.
Fig. 3. Monkey Sv with adopted soft toys after her first parturition (Top) and her second (Bottom). The top row shows her still carrying around a toy 3 wk postparturition. The leftmost panel shows a red Kong that was not chosen, and the rightmost panel shows her carrying the toy on her hips, a typical maternal behavior; but, as with live infants, the mother usually quickly grabs the infant back to her chest whenever anyone approaches, so it was difficult to get a picture of this. The bottom row shows the same monkey 3 wk after her second parturition; she chose this reddish toy over a brown one on the morning after birth.
On one of the toy-adopting occasions described above, the mother, monkey B2, was initially anesthetized at 7 AM to remove the infant, and she “adopted” the soft toy as soon as she woke up (Fig. 4). She was anesthetized again at 11 AM because of a retained placenta, so the veterinarians could administer oxytocin and perform manual massage. Because the mother did not expel the placenta that day, the veterinary staff suggested returning the live infant to the mother overnight so the infant’s suckling could help expel the placenta. So, around 5 PM I brought the infant to the mother’s enclosure and placed it on a shelf just above where the mother was sitting, holding her stuffed toy.
The mother looked back and forth between the toy she was holding and the wiggling, squeaking infant, and eventually moved to the back of her enclosure with the toy, leaving the lively infant on the shelf.§ Thus, the mother’s attachment to the stuffed toy, which she had been holding for 6 h, must have been stronger than her attraction to the real infant, which she would have been holding for the few hours before lights-on at 7 AM. We were surprised that the auditory and visual cues emitted by the live infant did not convince the mother to trade the toy for the infant. It may be that she had already imprinted on the stuffed toy during the day and was subsequently unreceptive to any substitute. Alternatively, possession may play a role in sustaining attachment; we could have tested this by removing the stuffed toy and presenting the toy and the real infant simultaneously, but did not, in order to avoid any potential aggressive behavior toward the infant.
Three postpartum female macaques displayed strong, sustained attachment to a small soft toy on five separate postpartum episodes. Two females did not. Thus, maternal attachment to an inanimate toy is not a rare occurrence. This attachment cannot be attributed to differences in rearing, because two of the toy-adopting monkeys were reared by their monkey mothers (Figs. 1 and 4) and the third toy-adopting monkey was hand-reared by laboratory staff. Of the two females who did not adopt a toy and did not display any distress when they woke up after infant removal, one was hand-reared and the other reared by her monkey mother. In each of the seven live births, the infant was found clinging to the mother at lights-on in the morning. It is unknown how long before lights-on the births occurred, and their timing could have influenced any bonding, though the lack of distress in the mothers who did not adopt a toy suggests a weak bond. Mothers who have nurtured infants for more than a few days show distress when the infants are removed for testing or procedures, and they aggressively grab the infants when they are returned. Furthermore, a stuffed toy was not an acceptable substitute for two mothers of week-old infants when those infants were briefly removed for procedures; thus, presumably, the mothers had by then formed attachments to their own living infants.
Carrying of a dead infant by its mother has been observed in more than a dozen nonhuman primate species, including macaques (21–23). Carrying behavior occurs in ∼20% of macaque stillbirths or infant deaths and usually lasts for only a few days (22), though it can persist longer. Anthropomorphizing this behavior as “grief” may be misguided; possibly these animals were as satisfied with the corpses as our macaques were with their soft toys.
The sparseness of the template for triggering maternal behavior is surprising, but a broadly specified target, coupled with learning, would be a mechanism for ensuring flexible nurturing behavior: Once the nurturing target is fixed on, experience can adjust, refine, and maintain the template for recognizing the target for maternal attachment. Harlow and Zimmermann (12) found that soft texture is critical for the attachment of infant monkeys to inanimate surrogate mothers, and that soft texture is even more important for fostering an infant’s attachment than is providing milk.
Their work helped lead to a transformation in child-rearing philosophy, so that nowadays parents are encouraged to hold and cuddle their children—to do otherwise would now be considered cruel. My observations indicate that the postpartum maternal attachment drive can also be satisfied by holding a soft inanimate object. The calming effect of the toy on monkey Ve was dramatic, and using such surrogates may be a useful technique for relieving the stress associated with infant death or removal in captive primates (24). Although there is no way of knowing the extent to which these observations bear on human maternal bonding, or on other kinds of bonding, they do suggest that soft touch may be calming, therapeutic, perhaps even psychologically necessary, throughout the lifetime, not just in infants. These results also suggest, at least to me, that attachment bonds, even those that seem to be based on complex, unique, or sophisticated qualities, may actually be based on, or at least triggered by, simple sensory cues.  相似文献

8.
Macrocycles, formally defined as compounds that contain a ring with 12 or more atoms, continue to attract great interest due to their important applications in the physical, pharmacological, and environmental sciences. In syntheses of macrocyclic compounds, promoting intramolecular over intermolecular reactions in the ring-closing step is often a key challenge. Furthermore, syntheses of macrocycles with stereogenic elements pose an additional challenge, even as access to such macrocycles is of great interest. Herein, we report the remarkable effect peptide-based catalysts can have in promoting efficient macrocyclization reactions. We show that the chirality of the catalyst is essential for promoting favorable, matched transition-state relationships that favor macrocyclization of substrates with preexisting stereogenic elements; curiously, the chirality of the catalyst is essential for successful reactions even though no new static (i.e., not “dynamic”) stereogenic elements are created. Control experiments involving either achiral variants of the catalyst or the enantiomeric form of the catalyst fail to deliver the macrocycles in significant quantity in head-to-head comparisons. The generality of the phenomenon, demonstrated here with a number of substrates, invites analogies to the enzymatic catalysts that produce naturally occurring macrocycles, presumably through related, catalyst-defined peripheral interactions with their acyclic substrates.

Macrocyclic compounds are known to perform a myriad of functions in the physical and biological sciences. From cyclodextrins that mediate analyte separations (1) to porphyrin cofactors that sit in enzyme active sites (2, 3) to potent, biologically active macrocyclic natural products (4) and synthetic variants (5–7), these structures underpin a wide variety of molecular functions (Fig. 1A). In drug development, such compounds are highly coveted, as their conformationally restricted structures can lead to higher affinity for the desired target and often confer additional metabolic stability (8–13). Accordingly, there exists an entire synthetic chemistry enterprise focused on efficient formation and functionalization of macrocycles (14–18).
Fig. 1. (A) Examples of macrocyclic compounds with important applications. HCV, hepatitis C virus. (B) Use of chiral ligands in metal-catalyzed or -mediated stereoselective macrocyclization reactions. (C) Remote desymmetrization using guanidinylated ligands via Ullmann coupling. (D) This work: use of copper/peptidyl complexes for macrocyclization and exploration of matched and mismatched effects.
In syntheses of macrocyclic compounds, the ring-closing step is often considered the most challenging step, as competing di- and oligomerization pathways must be overcome to favor the intramolecular reaction (14). High-dilution conditions are commonly employed to favor macrocyclization of linear precursors (19). Substrate preorganization can also play a key role in overcoming the otherwise high entropic barriers associated with the many conformational states that are not suited for ring formation. Such preorganization is most often achieved in synthetic chemistry through substrate design (14, 20–22). Catalyst or reagent controls that impose conformational benefits favoring ring formation are less well known. Yet critical precedents include templating through metal–substrate complexation (23, 24) and catalysis by foldamers (25), by enzymes (26–29), or, in rare instances, by small molecules (discussed below). Characterization of biosynthetic macrocyclization also points to related mechanistic issues and attributes for efficient macrocyclizations (30–34). Coupling macrocyclization reactions to the creation of stereogenic elements is also rare (35). Metal-mediated reactions have been applied toward stereoselective macrocyclizations wherein chiral ligands transmit stereochemical information to the products (Fig. 1B). For example, atroposelective ring closure via Heck coupling has been applied in the asymmetric total synthesis of isoplagiochin D by Speicher and coworkers (36–40). Similarly, atroposelective syntheses of (+)-galeon and other diarylether heptanoid natural products were achieved via Ullmann coupling using N-methyl proline by Salih and Beaudry (41). Finally, Reddy and Corey reported enantioselective syntheses of cyclic terpenes by In-catalyzed allylation utilizing a chiral prolinol-based ligand (42). While these examples collectively illustrate the utility of chiral ligands in stereoselective macrocyclizations, such examples remain limited.
We envisioned a different role for chiral catalysts when addressing intrinsically disfavored macrocyclization reactions: we hypothesized that a catalyst–substrate interaction might provide transient conformational restriction that could promote macrocyclization.
To address this question, we chose to explore whether a chiral catalyst-controlled macrocyclization might be possible with peptidyl copper complexes. In the context of the medicinally ubiquitous diarylmethane scaffold, we had previously demonstrated the capacity for remote asymmetric induction in a series of bimolecular desymmetrizations using bifunctional, tetramethylguanidinylated peptide ligands. For example, we showed that peptidyl copper complexes were able to differentiate between the two aryl bromides during C–C, C–O, and C–N cross-coupling reactions (Fig. 1C) (43–45). Moreover, in these intermolecular desymmetrizations, a correlation between enantioselectivity and conversion was observed, revealing the catalyst’s ability to perform not only enantiotopic group discrimination but also kinetic resolution on the monocoupled product as the reaction proceeds (44). This latter observation stimulated our speculation that if an internal nucleophile were present to undergo intramolecular cross-coupling to form a macrocycle, stereochemically sensitive interactions (so-called matched and mismatched effects) (46) could be observed (Fig. 1D). Ideally, we anticipated that transition state–stabilizing interactions might prove decisive in matched cases, and that the absence of catalyst–substrate stabilizing interactions might account for the absence of macrocyclization in these otherwise intrinsically unfavorable reactions. Herein, we disclose the explicit observation of these effects in chiral catalyst-controlled macrocyclization reactions.  相似文献

9.
We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large, and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
Statistical science has changed a great deal in the past 10–20 years, and is continuing to change, in response to technological advances in science and industry. The world is awash with big and complicated data, and researchers are trying to make sense of it. Leading examples include data from “omic” assays in the biomedical sciences, financial forecasting from economic and business indicators, and the analysis of user click patterns to optimize ad placement on websites. This has led to an explosion of interest in the fields of statistics and machine learning and spawned a new field some call “data science.”
In the words of Yoav Benjamini, statistical methods have become “industrialized” in response to these changes. Whereas traditionally scientists fit a few statistical models by hand, now they use sophisticated computational tools to search through a large number of models, looking for meaningful patterns. Having done this search, the challenge is then to judge the strength of the apparent associations that have been found. For example, a correlation of 0.9 between two measurements A and B is probably noteworthy. However, suppose that I had arrived at A and B as follows: I actually started with 1,000 measurements and searched among all pairs of measurements for the most correlated pair; these turned out to be A and B, with correlation 0.9. With this backstory, the finding is not nearly as impressive and could well have happened by chance, even if all 1,000 measurements were uncorrelated. Now, if I just reported to you that these two measures A and B have correlation 0.9, and did not tell you which of these two routes I used to obtain them, you would not have enough information to judge the strength of the apparent relationship. This statistical problem has become known as “selective inference”: the assessment of significance and effect sizes from a dataset after mining the same data to find these associations.
As another example, suppose that we have a quantitative value y, a measurement of the survival time of a patient after receiving either a standard treatment or a new experimental treatment. I give the old drug (1) or the new drug (2) at random to a set of patients and compute the standardized mean difference in outcome, z = (ȳ₂ − ȳ₁)/s, where s is an estimate of the SD of the raw difference. Then I can approximate the distribution of z by a standard normal distribution, and hence if I reported to you a value of, say, z = 2.5 you would be impressed, because a value that large is unlikely to occur by chance if the new treatment had the same effectiveness as the old one (the P value is about 1%). However, what if instead I tried out many new treatments and reported to you only the ones for which |z| > 2?
Then a value of 2.5 is not nearly as surprising. Indeed, if the two treatments were equivalent, the conditional probability that |z| exceeds 2.5, given that it is larger than 2, is about 27%. Armed with knowledge of the process that led to the value z = 2.5, the correct selective inference would assign a P value of 0.27 to the finding, rather than 0.01.
If not taken into account, the effects of selection can greatly exaggerate the apparent strengths of relationships. We feel that this is one of the causes of the current crisis in reproducibility in science (e.g., ref. 1). With increased competitiveness and pressure to publish, it is natural for researchers to exaggerate their claims, intentionally or otherwise. Journals are much more likely to publish studies with low P values, and we (the readers) never hear about the great number of studies that showed no effect and were filed away (the “file-drawer effect”). This makes it difficult to assess the strength of a reported P value of, say, 0.04.
The challenge of correcting for the effects of selection is a complex one, because the selective decisions can occur at many different stages in the analysis process. However, some exciting progress has recently been made in more limited problems, such as that of adaptive regression techniques for supervised learning. Here the selections are made in a well-defined way, so that we can exactly measure their effects on subsequent inferences. We describe these new techniques here, as applied to two widely used statistical methods: classic supervised learning, via forward stepwise regression, and modern sparse learning, via the “lasso.” Later, we indicate the broader scope of their potential applications, including principal components analysis.  相似文献
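A minimal numerical check of the figures quoted above, written as a hedged sketch in Python with SciPy (the threshold and observed statistic follow the worked example in the text; the conditioning calculation is simply the standard normal tail ratio for this one-threshold selection rule, not code from the paper itself):

from scipy.stats import norm

z_threshold = 2.0  # selection rule: only treatments with |z| > 2 are reported
z_observed = 2.5   # the reported test statistic

# Naive two-sided P value, ignoring the selection step
naive_p = 2 * norm.sf(z_observed)

# Selective P value under the null (z standard normal): the probability that
# |z| exceeds the observed value, conditional on |z| having cleared the threshold
selective_p = norm.sf(z_observed) / norm.sf(z_threshold)

print(f"naive P value:     {naive_p:.3f}")      # ~0.012, "about 1%"
print(f"selective P value: {selective_p:.3f}")  # ~0.273, "about 27%"

The ratio of tail probabilities is exactly the quantity the text calls the correct selective inference for this simple, threshold-based reporting rule.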

10.
To move on land, in water, or in the air, even at constant speed and at the same level, always requires an expenditure of energy. The resistance to motion that has to be overcome is of many different kinds depending on size, speed, and the characteristics of the medium, and is a fascinating subject in itself. Even more interesting are nature’s stratagems and solutions toward minimizing the effort involved in the locomotion of different types of living creatures, and humans’ imitations and inventions in an attempt to do at least as well.  相似文献   

11.
When corresponding areas of the two eyes view dissimilar images, stable perception gives way to visual competition wherein perceptual awareness alternates between those images. Moreover, a given image can remain visually dominant for several seconds at a time even when the competing images are swapped between the eyes multiple times each second. This perceptual stability across eye swaps has led to the widespread belief that this unique form of visual competition, dubbed stimulus rivalry, is governed by eye-independent neural processes at a purely binocular stage of cortical processing. We tested this idea by investigating the influence of stimulus rivalry on the buildup of the threshold elevation aftereffect, a form of contrast adaptation thought to transpire at early cortical stages that include eye-specific neural activity. Weaker threshold elevation aftereffects were observed when the adapting image was engaged in stimulus rivalry than when it was not, indicating diminished buildup of adaptation during stimulus-rivalry suppression. We then confirmed that this reduction occurred, in part, at eye-specific neural stages by showing that suppression of an image at a given moment specifically diminished adaptation associated with the eye viewing the image at that moment. Considered together, these results imply that eye-specific neural events at early cortical processing stages contribute to stimulus rivalry. We have developed a computational model of stimulus rivalry that successfully implements this idea.  相似文献   

12.
The domestication of plants and animals is a key transition in human history, and its profound and continuing impacts are the focus of a broad range of transdisciplinary research spanning the physical, biological, and social sciences. Three central aspects of domestication that cut across and unify this diverse array of research perspectives are addressed here. Domestication is defined as a distinctive coevolutionary, mutualistic relationship between domesticator and domesticate and distinguished from related but ultimately different processes of resource management and agriculture. The relative utility of genetic, phenotypic, plastic, and contextual markers of evolving domesticatory relationships is discussed. Causal factors are considered, and two leading explanatory frameworks for initial domestication of plants and animals, one grounded in optimal foraging theory and the other in niche-construction theory, are compared.
The domestication of plants and animals marks a major evolutionary transition in human history—one with profound and lasting global impacts. The origins of domestication—when and where, how, and why our ancestors targeted plant and animal species for domestication—are an enduring and increasingly active area of scientific inquiry for researchers from many different disciplines. Enhancing present-day productivity of long-standing and recently domesticated species and exploring social and biological issues surrounding their role in feeding rapidly expanding global populations are topics of pressing concern. The volume and breadth of domestication research are underscored by a keyword search on the term “domestication” for the year 2013, which yielded a total of 811 papers in more than 350 different journals (Table S1), including 42 articles published in PNAS (Table S2).
Given the large and growing number of studies on domestication across a wide array of disciplines, it is worthwhile to address three central questions. (i) Is there a definition of domestication applicable to both plants and animals from the distant past to present day that distinguishes domestication from related processes of resource management and agriculture? (ii) How does domestication change both the domesticate and domesticator, and how can we track these changes through time? (iii) Why did humans domesticate plants and animals, and are there common causal factors that underlie the process of domestication wherever it takes place?  相似文献

13.
The brain mechanisms of fear have been studied extensively using Pavlovian fear conditioning, a procedure that allows exploration of how the brain learns about and later detects and responds to threats. However, mechanisms that detect and respond to threats are not the same as those that give rise to conscious fear. This is an important distinction because symptoms based on conscious and nonconscious processes may be vulnerable to different predisposing factors and may also be treatable with different approaches in people who suffer from uncontrolled fear or anxiety. A conception of so-called fear conditioning in terms of circuits that operate nonconsciously, but that indirectly contribute to conscious fear, is proposed as a way forward.
Hunger, like anger, fear, and so forth, is a phenomenon that can be known only by introspection. When applied to another…species, it is merely a guess about the possible nature of the animal’s subjective state.
Nico Tinbergen (1)
Neuroscientists use “fear” to explain the empirical relation between two events: for example, rats freeze when they see a light previously associated with electric shock. Psychiatrists, psychologists, and most citizens, on the other hand, use…“fear” to name a conscious experience of those who dislike driving over high bridges or encountering large spiders. These two uses suggest…several fear states, each with its own genetics, incentives, physiological patterns, and behavioral profiles.
Jerome Kagan (2)
My research focuses on how the brain detects and responds to threats, and I have long argued that these mechanisms are distinct from those that make possible the conscious feeling of fear that can occur when one is in danger (3–6). However, I, and others, have called the brain system that detects and responds to threats the fear system. This was a mistake that has led to much confusion. Most people who are not in the field naturally assume that the job of a fear system is to make conscious feelings of fear, because the common meaning of fear is the feeling of being afraid. Although research on the brain mechanisms that detect and respond to threats in animals has important implications for understanding how the human brain feels fear, it is not because the threat detection and defense response mechanisms are fear mechanisms. It is instead because these nonconscious mechanisms initiate responses in the brain and body that indirectly contribute to conscious fear.
In this article, I focus on Pavlovian fear conditioning, a procedure that has been used extensively to study the so-called fear system. I will propose and defend a different way of talking about this research, one that focuses on the actual subject matter and data (threat detection and defense responses) and that is less likely to compel the interpretation that conscious states of fear underlie defense responses elicited by conditioned threats. It will not be easy to give up the term fear conditioning, but I think we should.  相似文献

14.
Some aspects of real-world road networks seem to have an approximate scale invariance property, motivating study of mathematical models of random networks whose distributions are exactly invariant under Euclidean scaling. This requires working in the continuum plane, so making a precise definition is not trivial. We introduce an axiomatization of a class of processes we call scale-invariant random spatial networks, whose primitives are routes between each pair of points in the plane. One concrete model, based on minimum-time routes in a binary hierarchy of roads with different speed limits, has been shown to satisfy the axioms, and two other constructions (based on Poisson line processes and on dynamic proximity graphs) are expected also to do so. We initiate study of structure theory and summary statistics for general processes in the class. Many questions arise in this setting via analogies with diverse existing topics, from geodesics in first-passage percolation to transit node-based route-finding algorithms.
We introduce and study a mathematical structure inspired by road networks. Although it is not intended to be literally realistic, we believe it raises and illustrates several interesting conceptual points and potential connections with other fields, summarized in the final section. Details will appear in a long technical paper (1). Here, we seek to explain in words rather than mathematical symbols.
Consider two differences between traditional paper maps and modern online maps for roads, which will motivate two conceptual features of our model. On paper, one needs different maps for different scales—for the intercity network and for the street network in one town. The usual simplified mathematical models involve different mathematical objects at the two scales, for instance representing cities as points for an intercity network model (2). Online maps allow you to “zoom in” so that the window you see covers less real-world area but shows more detail, specifically (for our purpose) showing comparatively minor roads that are not shown when you “zoom out” again. As a first conceptual feature, we seek a mathematical model that represents roads consistently over all scales. Next, a paper map shows roads, and the user then chooses a “route” between start and destination. In contrast, a typical use of an online map is to enter the start and destination address and receive a suggested route. As a second conceptual feature, our model will treat routes as the basic objects. That is, somewhat paradoxically, in our model routes determine roads.
Returning to the image of zooming in and out, the key assumption in our models is that statistical features of what we see in the map inside a window do not depend on the real-world width of the region being shown—on whether it is 5 miles or 500 miles. We call this property “scale invariance,” in accord with the usual meaning of that phrase within physics. Of course, our phrase “what we see” is very vague; we mean quantifiable aspects of the road network, and this is best understood via examples of quantifiable aspects described in the following section, and then the mathematical definition in the subsequent section. Note that scale invariance is not “scale-free network,” a phrase that has become attached (3) to the quite different notion of a (usually nonspatial) network in which the proportion of vertices with i edges scales for large i as i^−γ for some γ.
Our title reads “true scale invariance” to emphasize the distinction.
More precisely, the property is “statistical” scale invariance, and two analogies with classical subjects may be helpful. Modeling English text as “random” seems ridiculous at first sight—authors are not monkeys on typewriters. However, the Shannon theory of “information” (4) (better described as “data compression”) does assume randomness in a certain sense, called “stationarity” or “translation invariance.” Roughly, the assumption is that the frequency of any particular word such as “the” does not vary in different parts of a text. Such an assumption is intuitively plausible and is very different from any sort of explicit dice-throwing model of pure randomness. Analogously, roads are designed rather than arising from some explicit random mechanism, but this does not contradict the possibility that statistical properties of road networks are similar in different locations and on different scales. So, just as information theory imagines the actual text of Pride and Prejudice as if it were a realization from some translation-invariant random process, we will imagine the actual road network of the United States as if it were a realization from some random process with certain invariance properties.
A second analogy is with the “Wiener process,” a mathematical model in topics as diverse as physical Brownian motion, stock prices, and heavily loaded queues. The mathematical model is exactly scale invariant (as explained and illustrated in a dynamic simulation in ref. 5) even though the real-world entities it models cannot be scale invariant at very small scales. Analogously, the exact scale invariance of our models is unrealistic at very small scales—we do not really have an arbitrarily dense collection of arbitrarily minor roads—but this is not an obstacle to interpreting the models over realistic distances.  相似文献
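To make the two contrasts above concrete, the exact scale invariance of the Wiener process, and its difference from a “scale-free” degree distribution, can be written out explicitly (a sketch of standard textbook definitions; the notation below is generic and is not taken from the paper itself):

% Statistical scale invariance of the Wiener process W:
% rescaling time by c and space by sqrt(c) leaves the law unchanged.
\left( W_{ct} \right)_{t \ge 0} \;\overset{d}{=}\; \left( \sqrt{c}\, W_t \right)_{t \ge 0}
\qquad \text{for every } c > 0.

% By analogy, a scale-invariant random spatial network is one whose ensemble of
% routes is invariant in distribution under the scaling maps $x \mapsto cx$ of the plane.

% This is distinct from a "scale-free network," which refers only to a power-law
% degree distribution:
P(\deg v = i) \;\propto\; i^{-\gamma} \qquad \text{for large } i.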

15.
There have been many remarkable developments in our understanding of superstring theory in the past few years, a period that has been described as “the second superstring revolution.” In particular, what once appeared to be five distinct theories are now recognized to be different manifestations of a single (unique) underlying theory. Some of the evidence for this, based on dualities and the appearance of an eleventh dimension, is presented. Also, a specific proposal for the underlying theory, called “Matrix Theory,” is described. The presentation is intended primarily for the benefit of nonexperts.  相似文献   

16.
Cells in a developing embryo have no direct way of “measuring” their physical position. Through a variety of processes, however, the expression levels of multiple genes come to be correlated with position, and these expression levels thus form a code for “positional information.” We show how to measure this information, in bits, using the gap genes in the Drosophila embryo as an example. Individual genes carry nearly two bits of information, twice as much as would be expected if the expression patterns consisted only of on/off domains separated by sharp boundaries. Taken together, four gap genes carry enough information to define a cell’s location with an error bar of ∼1% along the anterior/posterior axis of the embryo. This precision is nearly enough for each cell to have a unique identity, which is the maximum information the system can use, and is nearly constant along the length of the embryo. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network.
Building a complex, differentiated body requires that individual cells in the embryo make decisions, and ultimately adopt fates, that are appropriate to their position. There are wildly diverging models for how cells acquire this “positional information” (1), but there is general consensus that they encode positional information in the expression levels of various key genes. A classic example is provided by anterior/posterior patterning in the fruit fly, Drosophila melanogaster, where a small set of gap genes and then a larger set of pair-rule and segment polarity genes are involved in the specification of the body plan (2). These genes have expression levels that vary systematically along the body axis, forming a blueprint for the segmented body of the developed larva that we can “read” within hours after the start of development (3).
Although there is consensus that particular genes carry positional information, less is known quantitatively about how much information is being represented by the expression levels in individual cells. Do the broad, smooth expression profiles of the gap genes, for example, provide enough information to specify the exact pattern of development, cell by cell, along the anterior/posterior axis? How much information does the whole embryo use in making this pattern? Answering these questions is important, in part, because we know that crucial molecules involved in the regulation of gene expression are present at low concentrations and even low absolute copy numbers, so that expression is noisy (4–10), and this noise must limit the transmission of information (11–14). Is it possible, as suggested theoretically (15–18), that the information transmitted through these regulatory networks is close to the physical limits set by the irreducible randomness of counting individual molecular events? To answer this and other questions, we need to measure positional information quantitatively, in bits. We do this here using the gap genes in Drosophila as an example.
There are many ways in which positional information could be represented during the process of development. Cells could make decisions based on the integration of signals over time or by comparing their internal states with those of their neighbors. Eventually, the internal state of each individual cell must carry enough information to specify that cell’s fate, but it is not clear at what point in development this happens.
Thus, when we look at the gap genes during the 14th nuclear cycle after fertilization, there is no guarantee that their expression levels will carry all the information that cells eventually will acquire, either from maternal inputs or via communication with their neighbors. Because our experimental methods give us access to snapshots of gene expression levels, however, we will start by asking how much positional information is carried by local measurements in individual cells at a moment in time. These expression levels themselves reflect an integration of many inputs over space and time (9, 19), but these molecular mechanisms do not influence the definition or measurement of the information that the expression levels carry.  相似文献   
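As a sketch of how such positional information can be quantified (this is the standard mutual-information decomposition; the symbols below are generic rather than the paper’s exact notation): for a cell at position x with expression level g, the information g carries about x is the mutual information between the two, and a strictly on/off expression pattern is capped at one bit, which is why measured values of nearly two bits per gene are notable.

% Positional information carried by expression level g about position x:
I(g; x) \;=\; S\!\left[ P(g) \right] \;-\; \Big\langle S\!\left[ P(g \mid x) \right] \Big\rangle_{x},
\qquad
S[P] \;=\; -\sum_{g} P(g) \log_{2} P(g).

% For a purely binary (on/off) expression pattern, g takes only two values, so
I(g; x) \;\le\; S\!\left[ P(g) \right] \;\le\; 1 \text{ bit},

% whereas individual gap genes are measured to carry close to 2 bits each.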

17.
A recent poll showed that most people think of science as technology and engineering—life-saving drugs, computers, space exploration, and so on. This was, in fact, the promise of the founders of modern science in the 17th century. It is less commonly understood that social and behavioral sciences have also produced technologies and engineering that dominate our everyday lives. These include polling, marketing, management, insurance, and public health programs.  相似文献   

18.
Mechanical tension along the length of axons, dendrites, and glial processes has been proposed as a major contributor to morphogenesis throughout the nervous system [D. C. Van Essen, Nature 385, 313–318 (1997)]. Tension-based morphogenesis (TBM) is a conceptually simple and general hypothesis based on physical forces that help shape all living things. Moreover, if each axon and dendrite strives to shorten while preserving connectivity, aggregate wiring length would remain low. TBM can explain key aspects of how the cerebral and cerebellar cortices remain thin, expand in surface area, and acquire their distinctive folds. This article reviews progress since 1997 relevant to TBM and other candidate morphogenetic mechanisms. At a cellular level, studies of diverse cell types in vitro and in vivo demonstrate that tension plays a major role in many developmental events. At a tissue level, I propose a differential expansion sandwich plus (DES+) revision to the original TBM model for cerebral cortical expansion and folding. It invokes tangential tension and “sulcal zipping” forces along the outer cortical margin as well as tension in the white matter core, together competing against radially biased tension in the cortical gray matter. Evidence for and against the DES+ model is discussed, and experiments are proposed to address key tenets of the DES+ model. For cerebellar cortex, a cerebellar multilayer sandwich (CMS) model is proposed that can account for many distinctive features, including its unique, accordion-like folding in the adult, and experiments are proposed to address its specific tenets.

Morphogenesis, the process whereby bodies and brains acquire their distinctive shapes, has fascinated scientists for centuries. Morphogenesis of the nervous system is particularly intriguing, given the sheer number of neural subdivisions, their intricate shapes, and the staggering complexity of local and long-distance wiring. The mammalian cerebral cortex has attracted special attention, as it is physically dominant and mediates a wide range of functions. The convolutions of human cerebral cortex are notable for their complexity, individual variability, and susceptibility to abnormal folding in many brain disorders. Cerebellar cortex has received less attention but is equally intriguing as to how it acquires its accordion-like parallel folds.
In 1997, I proposed that mechanical tension along the length of axons, dendrites, and glial processes contributes to many aspects of morphogenesis throughout the nervous system (1). Cerebral cortex was the premier exemplar for this general tension-based morphogenesis (TBM) hypothesis. As originally formulated, radially biased tension along dendrites and axons within cortical gray matter (CGM) can explain why the cortex is a thin sheet that expands preferentially in the tangential domain. With increasing brain size, cerebral cortex increases disproportionately relative to subcortical domains (2). Consequently, beyond a critical surface area, tangential cortical expansion exceeds what is needed to envelop the underlying subcortical structures, and cortical folding ensues (3, 4). Axonal tension in the underlying core that includes white matter (WM) can explain why the cortex folds to bring strongly connected regions closer together and make WM compact. The TBM hypothesis is based on physical forces (tension and pressure) that must shape all living things (5). TBM would naturally reduce overall wiring length if all neuronal processes strive to reduce their length while preserving their connectivity, thereby benefitting processing speed and energy efficiency. Initial support for TBM included pioneering in vitro studies showing that neurites can generate tension, elongate when towed, and retract on release of tension (6–8).
Here, I review a burgeoning literature since 1997 that provides extensive evidence and arguments bearing on the TBM hypothesis. Studies at molecular and cellular levels have deepened our understanding of how mechanical tension works against osmotic pressure and focally directed pressure to maintain and modify cell shape. The workhorse “toolkit” for tissue morphogenesis is the intracellular cytoskeleton, an intricate network dominated by elongated macromolecular filaments whose length, bundling, and anchoring to the plasma membrane and to various organelles are regulated by a plethora of adjunct molecules.
At the tissue level, evidence both for and against the TBM cortical folding hypothesis has been reported. Here, I propose a “differential expansion sandwich plus” (DES+) model that preserves key aspects of the original TBM model but incorporates additional features that enhance its explanatory power. In brief, the DES+ model includes the following tenets: 1) Tangential cortical expansion is promoted by radially biased tension in CGM, supplemented by cerebrospinal fluid (CSF) pressure at early ages.
2) Differential tangential expansion along the cortex/core boundary promotes folding via two complementary mechanisms: 2A) Pathway-specific tension promotes gyral folds at specific locations and 2B) tethering tension promotes buckling along the cortex/core boundary. 3) Tangential tension in an outer cortical layer (the third layer of the DES+ “sandwich”) combines with transsulcal adhesion of the leptomeninges (pia and arachnoid layers) to promote buckling, sulcal invagination, and “sulcal zippering.” 4) Patterns of proliferation and migration impact early three-dimensional (3D) brain geometry, indirectly influence the location of cortical folds and the axis of folding, and constitute the “+” of the DES+ model. 5) Tension throughout the central nervous system (CNS) reduces wiring length and interstitial space, subject to the topological constraints imposed by axonal interdigitation.
A variety of approaches are proposed to test the first three tenets. With regard to cerebellar cortex, a different type of multilayer sandwich model is proposed that can account for many distinctive aspects of cerebellar morphogenesis and adult architecture. Before detailed consideration of the DES+ model, it is useful to 1) introduce a biomechanical perspective and framework, 2) summarize key evidence regarding how tension operates at a cellular level, and 3) review key developmental events during forebrain morphogenesis.  相似文献

19.
We explore charge migration in DNA, advancing two distinct mechanisms of charge separation in a donor (d)–bridge ({Bj})–acceptor (a) system, where {Bj} = B1,B2,…,BN are the N specific adjacent bases of B-DNA: (i) two-center unistep superexchange-induced charge transfer, d*{Bj}a → d∓{Bj}a±, and (ii) multistep charge transport, which involves charge injection from d* (or d+) to {Bj}, charge hopping within {Bj}, and charge trapping by a. For off-resonance coupling, mechanism i prevails, with the charge separation rate and yield exhibiting an exponential dependence exp(−βR) on the d–a distance R. Resonance coupling results in mechanism ii, with the charge separation lifetime τ ∝ N^η and yield Y ∝ (1 + N^η)^−1 exhibiting a weak (algebraic) N and distance dependence. The power parameter η is determined by the charge-hopping random walk. Energetic control of the charge migration mechanism is exerted by the energetics of the ion-pair state d∓B1±B2…BNa relative to the electronically excited donor doorway state d*B1B2…BNa. The realization of charge separation via superexchange or hopping is determined by the base sequence within the bridge. Our energetic–dynamic relations, in conjunction with the energetic data for d*/d and for B/B+, determine the realization of the two distinct mechanisms in different hole donor systems, establishing the conditions for “chemistry at a distance” after charge transport in DNA. The energetic control of the charge migration mechanisms attained by the sequence specificity of the bridge is universal for large molecular-scale systems, for proteins, and for DNA.  相似文献
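The contrast between the two distance dependences described above, exponential for superexchange versus algebraic for hopping, can be illustrated with a short Python sketch (all numerical values below are assumptions chosen purely for illustration, not parameters from the paper: β is taken in the range typically quoted for DNA superexchange, 3.4 Å is the standard B-DNA rise per base pair, and η = 2 is an assumed random-walk-like hopping exponent):

import numpy as np

beta = 0.7        # assumed superexchange falloff parameter, in 1/Angstrom
r_per_base = 3.4  # rise per base pair of B-DNA, in Angstrom
eta = 2.0         # assumed hopping exponent for an unbiased random walk

for n in (1, 2, 4, 8, 16):
    R = n * r_per_base
    superexchange = np.exp(-beta * R)      # mechanism i: rate/yield ~ exp(-beta * R)
    hopping_yield = 1.0 / (1.0 + n**eta)   # mechanism ii: Y ~ (1 + N^eta)^-1
    print(f"N = {n:2d}  R = {R:5.1f} A   superexchange ~ {superexchange:.2e}   hopping yield ~ {hopping_yield:.3f}")

Even with these illustrative numbers, the superexchange contribution collapses by many orders of magnitude over a few base pairs, while the hopping yield decays only gradually, which is the qualitative signature used to distinguish the two mechanisms.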

20.
We report a molecular switching ensemble whose states may be regulated in synergistic fashion by both protonation and photoirradiation. This allows hierarchical control in both a kinetic and thermodynamic sense. These pseudorotaxane-based molecular devices exploit the so-called Texas-sized molecular box (cyclo[2]-(2,6-di(1H-imidazol-1-yl)pyridine)[2](1,4-dimethylenebenzene); 1⁴⁺, studied as its tetrakis-PF6 salt) as the wheel component. Anions of azobenzene-4,4′-dicarboxylic acid (2H+•2) or 4,4′-stilbenedicarboxylic acid (2H+•3) serve as the threading rod elements. The various forms of 2 and 3 (neutral, monoprotonated, and diprotonated) interact differently with 1⁴⁺, as do the photoinduced cis or trans forms of these classic photoactive guests. The net result is a multimodal molecular switch that can be regulated in synergistic fashion through protonation/deprotonation and photoirradiation. The degree of guest protonation is the dominating control factor, with light acting as a secondary regulatory stimulus. The present dual input strategy provides a complement to more traditional orthogonal stimulus-based approaches to molecular switching and allows for the creation of nonbinary stimulus-responsive functional materials.

Multifactor regulation of biomolecular machines is essential to their ability to carry out various biological functions (1–11). Construction of artificial molecular devices with multifactor regulation features may allow us to understand and simulate biological systems more effectively (12–31). However, creating and controlling such synthetic constructs remains challenging (16, 32–37). Most known systems involving multifactor regulation, including most so-called molecular switches and logic devices (38–43), have been predicated on an orthogonal strategy wherein the different control factors that determine the distribution of accessible states do not affect one another (20, 44–56). However, in principle, a greater level of control can be achieved by using two separate regulatory inputs that operate in synergistic fashion. Ideally, this could lead to hierarchical control where different states are specifically accessed by means of appropriately selected nonorthogonal inputs. However, to our knowledge, only a limited number of reports detailing controlled hierarchical systems have appeared (57). Furthermore, the balance between specific effects (e.g., kinetics vs. thermodynamics) under conditions of stimulus regulation is still far from fully understood (54). There is thus a need for new systems that can provide further insights into the underlying design determinants. Here we report a set of pseudorotaxane molecular shuttles that act as multimodal chemical switches subject to hierarchical control.  相似文献

