Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
Unlike in crystalline atomic and ionic solids, texture development due to crystallographically preferred growth in colloidal crystals has received little study. Here we investigate the underlying mechanisms of texture evolution in an evaporation-induced colloidal assembly process through experiments, modeling, and theoretical analysis. In this widely used approach to obtaining large-area colloidal crystals, the colloidal particles are driven to the meniscus via the evaporation of a solvent or matrix precursor solution, where they close-pack to form a face-centered cubic colloidal assembly. Via two-dimensional large-area crystallographic mapping, we show that the initial crystal orientation is dominated by the interaction of particles with the meniscus, resulting in the expected coalignment of the close-packed direction with the local meniscus geometry. By combining this with crystal structure analysis at the single-particle level, we further reveal that, at the later stage of self-assembly, the colloidal crystal undergoes a gradual rotation facilitated by geometrically necessary dislocations (GNDs) and achieves a large-area uniform crystallographic orientation with the close-packed direction perpendicular to the meniscus and parallel to the growth direction. Classical slip analysis, finite-element-based mechanical simulation, computational colloidal assembly modeling, and continuum theory unequivocally show that these GNDs result from the tensile stress field along the meniscus direction due to the constrained shrinkage of the colloidal crystal during drying. The generation of GNDs with specific slip systems within individual grains leads to crystallographic rotation to accommodate the mechanical stress. The mechanistic understanding reported here can be utilized to control crystallographic features of colloidal assemblies, and may provide further insights into crystallographically preferred growth in synthetic, biological, and geological crystals.

As an analogy to atomic crystals, colloidal crystals are highly ordered structures formed by colloidal particles with sizes ranging from 100 nm to several micrometers (1–6). In addition to engineering applications such as photonics, sensing, and catalysis (4, 5, 7, 8), colloidal crystals have also been used as model systems to study fundamental processes in statistical mechanics and the mechanical behavior of crystalline solids (9–14). Depending on the nature of the interparticle interactions, many equilibrium and nonequilibrium colloidal self-assembly processes have been explored and developed (1, 4). Among them, evaporation-induced colloidal self-assembly offers a number of advantages, such as large-area fabrication, versatility, and cost and time efficiency (3–5, 15–18). In a typical synthesis, where a substrate is immersed vertically or at an angle into a colloidal suspension, the colloidal particles are driven to the meniscus by the evaporation-induced fluid flow and subsequently self-assemble to form a colloidal crystal with the face-centered cubic (fcc) lattice structure and the close-packed {111} plane parallel to the substrate (2, 3, 19–23) (see Fig. 1A for a schematic diagram of the synthetic setup).

Fig. 1. Evaporation-induced coassembly of colloidal crystals. (A) Schematic diagram of the evaporation-induced colloidal coassembly process. "G," "M," and "N" refer to the "growth," "meniscus," and "normal" directions, respectively. The reaction solution contains a silica matrix precursor (tetraethyl orthosilicate, TEOS) in addition to colloids. (B) Schematic diagram of the crystallographic system and orientations used in this work. (C and D) Optical image (Top Left) and scanning electron micrograph (SEM) (Bottom Left) of a typical large-area colloidal crystal film before (C) and after (D) calcination. (Right) SEM images of select areas (yellow rectangles) at different magnifications.
The corresponding fast-Fourier transform (see Inset in Middle in C) shows the single-crystalline nature of the assembled structure. (E) 3D reconstruction of the colloidal crystal (Left) based on FIB tomography data and (Right) after particle detection. (F) Top-view SEM image of the colloidal crystal with crystallographic orientations indicated.

While previous research has focused on utilizing the assembled colloidal structures for different applications (4, 5, 7, 8), considerably less effort has been directed toward understanding the self-assembly mechanism itself (17, 24). In particular, despite the use of the term "colloidal crystals" to highlight the microstructures' long-range order, in analogy to atomic crystals, little is known about the crystallographic evolution of colloidal crystals in relation to the self-assembly process (3, 22, 25). The underlying mechanisms for the puzzling yet commonly observed phenomenon of preferred growth along the close-packed <110> direction in evaporation-induced colloidal crystals are currently not understood (3, 25–29). The <110> growth direction has been observed in a number of processes with a variety of particle chemistries, evaporation rates, and matrix materials (3, 25–28, 30), hinting at a universal underlying mechanism. This behavior is particularly intriguing because the colloidal particles are expected to close-pack parallel to the meniscus, which should lead to growth along the <112> direction, perpendicular to the <110> direction (16, 26, 31)*.

Preferred growth along specific crystallographic orientations, also known as texture development, is commonly observed in crystalline atomic solids in synthetic systems, biominerals, and geological crystals.
While current knowledge recognizes mechanisms such as oriented nucleation, which defines the future crystallographic orientation of the growing crystals, and competitive growth in atomic crystals (32–34), the underlying principles of texture development in colloidal crystals remain elusive. Previous hypotheses based on orientation-dependent growth speed and solvent flow resistance do not provide a universal explanation for different evaporation-induced colloidal self-assembly processes (3, 25–29). A better understanding of crystallographically preferred growth in colloidal self-assembly may shed new light on crystal growth in atomic, ionic, and molecular systems (35–37). Moreover, a mechanistic understanding of the self-assembly process will allow more precise control of lattice types, crystallography, and defects to improve the performance and functionality of colloidal assembly structures (38–40).
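The geometric relationship at the heart of this puzzle can be checked with a few lines of vector algebra. The snippet below is an illustrative aside, not part of the original study; it verifies that, within the fcc close-packed (111) plane, a <110> close-packed direction and the corresponding <112> direction are mutually perpendicular, as stated above.

```python
def dot(u, v):
    """Dot product of two lattice direction vectors [u v w] (cubic axes)."""
    return sum(a * b for a, b in zip(u, v))

close_packed = (1, -1, 0)   # a <110> close-packed direction
growth = (1, 1, -2)         # a <112> direction
normal_111 = (1, 1, 1)      # normal to the close-packed (111) plane

# Both directions lie in the (111) plane (zero dot product with the
# plane normal), and they are perpendicular to each other.
assert dot(close_packed, normal_111) == 0
assert dot(growth, normal_111) == 0
assert dot(close_packed, growth) == 0
```

Growth along <112> with close-packing along <110> is therefore a self-consistent in-plane geometry, which is why the observed <110> growth direction is surprising.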

2.
3.
Numerical cognition is ubiquitous in the animal kingdom. Domestic chicks are a widely used developmental model for studying numerical cognition. Soon after hatching, chicks can perform sophisticated numerical tasks. Nevertheless, the neural basis of their numerical abilities has remained unknown. Here, we describe number neurons in the caudal nidopallium (functionally equivalent to the mammalian prefrontal cortex) of young domestic chicks. Number neurons that we found in young chicks showed remarkable similarities to those in the prefrontal cortex and caudal nidopallium of adult animals. Thus, our results suggest that numerosity perception based on number neurons might be an inborn feature of the vertebrate brain.

Be it the number of conspecifics in a group (1), the number of food items (2), or the number of motifs in a song (3), correct estimation of quantities is of vital importance for animals. Several behavioral studies have confirmed that numerical competence is not a prerogative of human beings but is a widespread phenomenon in the animal kingdom (reviewed in refs. 4 and 5). Mammals (6–8), birds (3, 9, 10), reptiles (11), amphibians (12), fishes (13), and invertebrates (14), although evolutionarily distant, can all spontaneously assess quantities using an approximate number system (15).

In the approximate number system, which is based on Weber's law (16), the perception of cardinal numbers resembles the perception of continuous physical stimuli, and the just-noticeable difference is proportional to the quantity being estimated. As a consequence, discrimination of quantities is imprecise and depends on the numerical distance between stimuli. In other words, it is easier to tell apart 5 and 10 than 9 and 10. Moreover, discrimination of quantities becomes increasingly difficult with increasing numerical size: for a given numerical distance (e.g., one), it is easier to discriminate between numbers with low magnitudes (1 vs. 2) than with high magnitudes (9 vs. 10).

Recent research has uncovered that the approximate number system relies on the activity of a specific neuronal population. Neurons that respond to abstract numerosity irrespective of objects' physical appearance (shape, color, size) have been found in the forebrain of human and nonhuman primates (17, 18) and in crows (19). In mammals, numerical responses were recorded in the parietal and prefrontal cortices (PFCs) (17). In birds, similar neurons have been described in the caudolateral nidopallium (NCL) (19).
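The distance and size effects described above follow directly from Weber-law scaling. As a minimal sketch (with a hypothetical Weber fraction, not a value taken from the cited studies), discriminability in a standard logarithmically scaled model of the approximate number system can be written as:

```python
import math

def discriminability(n1, n2, weber_fraction=0.25):
    """Discriminability of two numerosities under a log-scaled
    approximate number system: the farther apart the numbers lie on a
    logarithmic axis, relative to the Weber fraction, the easier they
    are to tell apart. The Weber fraction used here is illustrative."""
    return abs(math.log(n2) - math.log(n1)) / weber_fraction

# Numerical distance effect: 5 vs. 10 is easier than 9 vs. 10.
assert discriminability(5, 10) > discriminability(9, 10)

# Numerical size effect: at a fixed distance of one, 1 vs. 2 is
# easier than 9 vs. 10.
assert discriminability(1, 2) > discriminability(9, 10)
```

Because only the ratio of the two numerosities matters on a log axis, both effects fall out of the same expression.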
The NCL is believed to be an analog of the PFC in the avian brain (20) and is involved in a variety of cognitive processes, including memory formation (21, 22), abstract rule learning (23), and action planning (24).

Both monkeys and crows are among the most evolutionarily advanced species of their phylogenetic groups. They independently developed sophisticated intellectual capacities (25), and both possess enlarged forebrains (26). The neural representations of numerosities described in these species also share remarkable similarities (19, 27–29). In both species, number neurons show the strongest response to a preferred numerosity, and this response gradually decreases with numerical distance (the numerical distance effect; but see ref. 30). Their tuning curves are skewed toward larger numerosities and become progressively broader (less selective) with increasing numerosity (the numerical size effect). However, it is unclear whether the presence of similar number neurons in these two species emerges as a consequence of their elaborate cognitive skills and enlarged forebrains. To understand the evolution of the number sense, we need to explore its neural correlates in distant bird species with more ancestral traits.

Moreover, until now, number neurons have been described only in adult animals (e.g., refs. 19, 27–29, and 31). At the same time, behavioral data from human infants (32) and young domestic chicks (10, 33) indicate that some core numerical abilities might be an inborn or spontaneously emerging (34, 35) property of the vertebrate brain. Testing for the presence of number neurons in young and untrained organisms is crucial to verify this hypothesis.

In our study, we aimed to describe the neural correlates of the number sense in domestic chicks (Gallus gallus), which belong to a sister group of modern Neoaves (36). The domestic chick is a well-established developmental model for studying numerical cognition.
Soon after hatching, these birds are already capable of discriminating quantities (33, 37) and even performing basic arithmetic operations (10). It has also been shown that young chicks represent numbers along a mental number line (38), a cognitive ability previously attributed only to humans.

We hypothesized that neural processing of numerical information in young untrained chicks might be similar to that in crows, despite their lineages having evolved independently over the last ∼70 million years (36). In the domestic chicken, the NCL is morphologically different from that of corvids (39), but it is unclear whether this reflects any functional difference. We therefore decided to search for neural responses to numerical stimuli in the NCL of domestic chicks. For this purpose, we habituated young chicks to a computer monitor on which numerical stimuli were presented (Fig. 1A). We explored neural responses to numerosities from one to five. To control for nonnumerical parameters, we presented three different categories of stimuli: "radius-fixed," "area-fixed," and "perimeter-fixed" (Fig. 1B).

Fig. 1. Experimental design. (A) Schematic drawing of the experimental setup. Young chicks were placed in a small wooden box in front of the screen, where numerical stimuli appeared. They were trained to pay attention to the stimuli without any further discrimination between different numerosities. (B) Examples of the different types of numerosity stimuli presented in every neural recording: "radius-fixed," "area-fixed," and "perimeter-fixed."

4.
Materials containing heterogeneous nanostructures hold great promise for achieving superior mechanical properties. However, the strengthening effect due to plastically inhomogeneous deformation in heterogeneous nanostructures has not been clearly understood. Here, we investigate a prototypical heterogeneous nanostructured material of gradient nanotwinned (GNT) Cu to unravel the origin of its extra strength arising from gradient nanotwin structures relative to uniform nanotwin counterparts. We measure the back and effective stresses of GNT Cu with different nanotwin thickness gradients and compare them with those of homogeneous nanotwinned Cu with different uniform nanotwin thicknesses. We find that the extra strength of GNT Cu is caused predominantly by the extra back stress resulting from nanotwin thickness gradient, while the effective stress is almost independent of the gradient structures. The combined experiment and strain gradient plasticity modeling show that an increasing structural gradient in GNT Cu produces an increasing plastic strain gradient, thereby raising the extra back stress. The plastic strain gradient is accommodated by the accumulation of geometrically necessary dislocations inside an unusual type of heterogeneous dislocation structure in the form of bundles of concentrated dislocations. Such a heterogeneous dislocation structure produces microscale internal stresses leading to the extra back stress in GNT Cu. Altogether, this work establishes a fundamental connection between the gradient structure and extra strength in GNT Cu through the mechanistic linkages of plastic strain gradient, heterogeneous dislocation structure, microscale internal stress, and extra back stress. Broadly, this work exemplifies a general approach to unraveling the strengthening mechanisms in heterogeneous nanostructured materials.

Heterogeneous nanostructured metals exhibit excellent mechanical properties such as ultrahigh strength, ductility, toughness, and combinations thereof (1–7). The strengthening effects arising from various types of heterogeneous nanostructures have recently been studied from different perspectives, including back and forward stresses (8–13), the Bauschinger effect (14, 15), plastic strain gradients (16–19), and geometrically necessary dislocations (GNDs) (20–23), among others (5, 6, 24, 25). However, there is a critical lack of a general framework, and of associated exemplary studies, that unifies these different perspectives. Such unification is essential for accelerating efforts to understand the origin of strengthening caused by heterogeneous nanostructures and thereby enabling more advanced development of heterogeneous nanostructured metals.

Recently, gradient nanotwinned (GNT) Cu has been fabricated by stacking four homogeneous nanotwinned (HNT) components with increasing twin thickness (4, 17). Through tuning of the processing conditions, GNT Cu exhibits a periodic variation of nanotwin thickness through the sample thickness. As a result, its overall yield strength surpasses the rule-of-mixture average of the yield strengths of the four HNT components, giving GNT Cu a substantial extra strength. An increase of the nanotwin thickness gradient (hereafter referred to as the structural gradient) can result in a marked increase of the extra strength. Given the excellent control of the structural gradient and the resultant tunability of the extra strength, GNT Cu can serve as a prototypical heterogeneous nanostructured material for unraveling the origin of extra strengthening in heterogeneous nanostructures.

Fig. 1 presents a general framework for understanding the mechanics of heterogeneous nanostructures, with GNT Cu as an example.
Here it is important to take into account the size of the selected representative volume element (RVE) relative to the characteristic length scales of GNT Cu, namely the wavelength of the periodically varying twin thickness (on the order of hundreds of micrometers) and the nanotwin thickness (on the order of tens of nanometers). As shown in the red panel of Fig. 1, when the entire sample of GNT Cu is taken as a "large" RVE, the strengthening effect of the structural gradient inside the RVE can be characterized by partitioning the overall stress into its back and effective stress components based on the local plasticity theory of kinematic hardening (26–28). The back stress reflects the directional, long-range internal stresses arising from plastically inhomogeneous deformation in gradient structures, while the effective stress represents the nondirectional, short-range resistance to gliding dislocations from lattice friction and local pinning obstacles (12). Hence, quantification of the back and effective stresses can provide critical mechanistic information on the origin of strengthening in heterogeneous nanostructures. In contrast, the blue panel of Fig. 1 shows an alternative approach of choosing a "small" RVE that contains twin lamellae of uniform thickness. Suppose one "small" RVE represents a "soft" region containing uniformly thick twin lamellae, while an adjacent "small" RVE represents a "hard" region containing uniformly thin twin lamellae. A structural gradient across the two RVEs results in a spatial gradient of plastic strain, whose strengthening effect can be characterized by the nonlocal theory of strain gradient plasticity (SGP) (17). Note that these "small" RVEs with uniform twin thickness also contain structural heterogeneity due to the presence of twin boundaries (TBs) and twin lamellae with different orientations.
The strengthening effect of this type of structural heterogeneity can be characterized by the back and effective stresses that prevail locally within each "small" RVE (29). Therefore, the strengthening effects arising from the nanotwin gradients and from the uniform nanotwins are separated in the "small-RVE" approach, while the two are combined in the "large-RVE" approach.

Fig. 1. A general framework for studying the mechanics of heterogeneous nanostructures (with GNT Cu as an example) in terms of large and small RVEs at different length scales.

In this work, we first measure the back stress and effective stress for four types of freestanding HNT Cu samples with different average twin thicknesses, treating each type of HNT Cu as a "small" RVE. We then measure the sample-level back stress and effective stress for four types of GNT Cu samples with different structural gradients, considering each type of GNT Cu as a "large" RVE. These results enable us to establish a direct connection between the structural gradient and the extra back stress. Moreover, the small-RVE approach is applied to study GNT Cu via SGP modeling, in order to quantitatively evaluate the effect of plastic strain gradients across small RVEs on the generation of extra back stress as a function of the structural gradient. The combined experimental results from the two RVE approaches, in conjunction with SGP modeling, allow us to quantitatively understand the origin of the extra back stress and the resultant extra strength in GNT Cu, thereby enabling an in-depth mechanistic understanding of the strengthening mechanisms in heterogeneous nanostructures.
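The two quantities central to this section can be summarized numerically. The sketch below uses made-up stress values, not the paper's data, and assumes the common decomposition of loading-unloading test data, in which the back stress is the midpoint of the forward flow stress and the (signed) reverse yield stress, and the effective stress is half their difference.

```python
def partition_stress(sigma_forward, sigma_reverse):
    """Partition the flow stress into back and effective components.

    sigma_forward: flow stress at load reversal (MPa)
    sigma_reverse: yield stress upon reverse loading (MPa, signed)

    back stress      = (sigma_forward + sigma_reverse) / 2
    effective stress = (sigma_forward - sigma_reverse) / 2
    """
    back = (sigma_forward + sigma_reverse) / 2.0
    effective = (sigma_forward - sigma_reverse) / 2.0
    return back, effective

def rule_of_mixture(yield_strengths, volume_fractions):
    """Volume-weighted average of component yield strengths (MPa)."""
    return sum(s * f for s, f in zip(yield_strengths, volume_fractions))

# Illustrative numbers only: four HNT components of equal volume fraction.
baseline = rule_of_mixture([300, 350, 420, 500], [0.25] * 4)  # 392.5 MPa
measured_gnt_strength = 450.0   # hypothetical sample-level yield strength
extra_strength = measured_gnt_strength - baseline
```

The "extra strength" discussed in the text is exactly this excess of the measured sample-level strength over the rule-of-mixture baseline.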

5.
The brain supports adaptive behavior by generating predictions, learning from errors, and updating memories to incorporate new information. Prediction error, or surprise, triggers learning when reality contradicts expectations. Prior studies have shown that the hippocampus signals prediction errors, but the hypothesized link to memory updating has not been demonstrated. In a human functional MRI study, we elicited mnemonic prediction errors by interrupting familiar narrative videos immediately before the expected endings. We found that prediction errors reversed the relationship between univariate hippocampal activation and memory: greater hippocampal activation predicted memory preservation after expected endings, but memory updating after surprising endings. In contrast to previous studies, we show that univariate activation was insufficient for understanding hippocampal prediction error signals. We explain this surprising finding by tracking both the evolution of hippocampal activation patterns and the connectivity between the hippocampus and neuromodulatory regions. We found that hippocampal activation patterns stabilized as each narrative episode unfolded, suggesting sustained episodic representations. Prediction errors disrupted these sustained representations and the degree of disruption predicted memory updating. The relationship between hippocampal activation and subsequent memory depended on concurrent basal forebrain activation, supporting the idea that cholinergic modulation regulates attention and memory. We conclude that prediction errors create conditions that favor memory updating, prompting the hippocampus to abandon ongoing predictions and make memories malleable.

In daily life, we continuously draw on past experiences to predict the future. Expectation and surprise shape learning in many situations, such as when we discover misinformation in the news, receive feedback on an examination, or make decisions based on past outcomes. When our predictions are incorrect, we must update our mnemonic models of the world to support adaptive behavior. Prediction error is a measure of the discrepancy between expectation and reality; this surprise signal is both evident in brain activity and related to learning (1–6). The brain dynamically reconstructs memories during recall, recreating and revising past experiences based on current information (7). The intuitive idea that surprise governs learning has long shaped our understanding of memory, reward learning, perception, action, and social behavior (2, 8–14). Yet the neural mechanisms that allow prediction error to update memories remain unknown.

Past research has implicated the hippocampus in each of the mnemonic functions required for learning from prediction errors: retrieving memories to make predictions, identifying discrepancies between past and present, and encoding new information (2, 15–20). Functional MRI (fMRI) studies have shown that hippocampal activation increases after predictions are violated; this surprise response has been termed "mismatch detection" (18, 19, 21–23) or "mnemonic prediction error" (20). These studies have shown that the hippocampus detects mnemonic prediction errors. Several theoretical frameworks have hypothesized that this hippocampal prediction error signal could update memories (17, 20, 24–27), but this crucial link for understanding how we learn from error has not yet been demonstrated.

What mechanisms could link hippocampal prediction errors to memory updating? A leading hypothesis is that prediction errors shift the focus of attention and adjust cognitive processing (20, 28–32).
After episodes that align with expectations, we should continue generating predictions and shift attention internally, sustaining and reinforcing existing memories. After mnemonic prediction errors, however, we should reset our expectations and shift attention externally, preparing to encode new information and update memories. Consistent with this idea, mnemonic prediction errors have been shown to enhance the hippocampal input pathway that supports encoding but suppress the output pathway that supports retrieval (20). We propose that surprising events may also change intrinsic hippocampal processing, altering the effect of hippocampal activation on memory outcomes.

Neuromodulation may be a critical factor that regulates hippocampal processing and enables memory updating. Currently, there is mixed evidence for two hypotheses: acetylcholine or dopamine could act on the hippocampus to regulate processing after surprising events (24–27, 29, 31, 33, 34). Several models have proposed that acetylcholine from the medial septum (within the basal forebrain) regulates the balance between input and output pathways in the hippocampus (27–29, 35–38), thus allowing stored memories to be compared with perceptual input (31, 38, 39). After prediction errors, acetylcholine release could change hippocampal processing and enhance encoding or memory updating (26, 29, 33, 37, 39). On the other hand, dopamine released from the ventral tegmental area (VTA), if transmitted to the hippocampus, could also modulate hippocampal plasticity after prediction errors. Past studies have shown that the hippocampus and VTA are coactivated after surprising events (40, 41). Other work has shown that coactivation of the hippocampus and VTA predicts memory encoding and integration (42–45).
Overall, basal forebrain and VTA neuromodulation are both candidate mechanisms for regulating hippocampal processing and memory updating.

In the present study, we used an fMRI task with human participants to examine trial-wise hippocampal responses to prediction errors during narrative videos. During the "encoding phase," participants viewed 70 full-length videos that featured narrative episodes with salient endings (e.g., a baseball batter hitting a home run) (Fig. 1A). During the "reactivation phase" the following day, participants watched the videos again (Fig. 1B). We elicited mnemonic prediction errors by interrupting half of the videos immediately before the expected narrative ending (e.g., the video ends while the baseball batter is midswing). These surprising interruptions were comparable to the prediction errors employed in prior studies of memory updating (1). Half of the videos were presented in full-length form (Full, as previously seen during the encoding phase) and half were presented in interrupted form (Interrupted, eliciting prediction error).

Fig. 1. Overview of the experimental paradigm. (A) During the encoding phase, all videos were presented in full-length form. Here we show example frames depicting a stimulus video. (B) During the reactivation phase, participants viewed the 70 videos again, but half (35 videos) were interrupted to elicit mnemonic prediction error. Participants were cued with the video name, watched the video (Full or Interrupted), and then viewed a fixation screen. The "baseball" video was interrupted when the batter was midswing. fMRI analyses focused on the postvideo fixation periods (red highlighted boxes). Thus, visual and auditory stimulation were matched across Full and Interrupted conditions, allowing us to compare postvideo neural activation while controlling for perceptual input.
(C) During the test phase, participants answered structured interview questions about all 70 videos and were instructed to answer based on their memory of the Full video originally shown during the encoding phase. Here we show example text illustrating the memory test format and the scoring of correct details (our measure of memory preservation) and false memories (our measure of memory updating, because false memories indicate that the memory has been modified). A void response ("I don't remember") is not counted as a false memory. (D) Overview of the experiment. All participants completed the encoding, reactivation, and test phases of the study. The Delayed group (fMRI participants) completed the test phase 24 h after reactivation, because prior studies have shown that memory updating becomes evident only after a delay (e.g., to permit protein synthesis). The Immediate group completed the test phase immediately after reactivation and was not scanned. The purpose of the Immediate group was to test the behavioral prediction that memory updating requires a delay.

During the "test phase," participants completed a memory test in the form of a structured interview (Fig. 1C). On each trial, participants were cued with the name of a video and recalled the narrative. The experimenter then probed for further details with predetermined questions (e.g., "Can you describe the baseball batter's ethnicity, age range, or clothing?"). Our critical measure of memory updating was "false memories," because the presence of a false memory indicates that the original memory was changed in some way. Although it can be adaptive to update real-world memories by incorporating relevant new information, we expected that our laboratory paradigm would induce false memories because participants would integrate interfering details across similar episodes (1, 7).
Because we were interested in false memories as a measure of memory updating, we instructed participants not to guess and permitted them to skip details they could not recall.

Prior research in humans and animals has shown that some memory-updating effects emerge only after delays that allow protein synthesis to occur during consolidation and reconsolidation (1, 46–48). Therefore, to test our primary question about the neural correlates of memory updating, fMRI participants completed the encoding, reactivation, and test phases over 3 d, with 24 h between sessions (Delayed group, n = 24). In addition, we tested the behavioral prediction that memory updating would require a delay (i.e., because transforming a memory trace requires protein synthesis) by recruiting a separate group of participants who completed the test phase immediately after the reactivation phase on day 2 (Immediate group, n = 24) (Fig. 1D). Delayed group participants completed the reactivation phase while undergoing an fMRI scan, whereas Immediate group participants were not scanned. Our primary fMRI analyses examined the fixation period immediately following the offset of Full and Interrupted videos (the postvideo period) (Fig. 1 B, Right) during the reactivation phase in the Delayed group. Importantly, this design compares neural responses to surprising and expected video endings while controlling for visual and auditory input.

Our approach allowed us to test several questions raised by the prior literature. First, we used naturalistic video stimuli to examine the effect of mnemonic prediction error on hippocampal activation and episodic memories. Second, to investigate hippocampal processing, we used multivariate analyses to track how episodic representations were sustained or disrupted over time. Third, to test hypotheses about neuromodulatory mechanisms, we related hippocampal activation and memory updating to activation in the basal forebrain and VTA.
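The notion of prediction error as the gap between expectation and outcome has a standard formal expression in learning theory. The following sketch is generic (a delta-rule update with a hypothetical learning rate), not the model used in this study:

```python
def delta_rule_update(expectation, outcome, learning_rate=0.1):
    """Generic delta rule: move the expectation toward the observed
    outcome in proportion to the prediction error. The learning rate
    is a hypothetical free parameter, not an empirical estimate."""
    prediction_error = outcome - expectation
    return expectation + learning_rate * prediction_error

# An expected ending (outcome matches expectation) yields zero error
# and leaves the expectation unchanged; a surprising ending yields a
# large error and a correspondingly large update.
assert delta_rule_update(1.0, 1.0) == 1.0
assert delta_rule_update(1.0, 0.0) == 0.9
```

In this framing, the study's "Interrupted" condition corresponds to a large prediction error and hence a condition favoring updating, while the "Full" condition corresponds to zero error.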

6.
To address the hotly debated question of motor system involvement in language comprehension, we recorded neuromagnetic responses elicited in the human brain by unattended action-related spoken verbs and nouns and scrutinized their time course and neuroanatomical substrates. We found that very early on, from ∼80 ms after the disambiguation point at which the words could be identified from the available acoustic information, both verbs and nouns produced characteristic somatotopic activations in the motor strip, with words related to different body parts activating the corresponding body representations. Strikingly, along with this category-specific activation, we observed suppression of motor-cortex activation by competitor words with incompatible semantics, documenting the operation of the neurophysiological principles of lateral/surround inhibition in neural word processing. The extremely early onset of these activations and deactivations, their emergence in the absence of attention, and their similar presence for words of different lexical classes strongly suggest automatic involvement of motor-specific circuits in the perception of action-related language.

The old debate on the localization of cognitive functions in the brain was recently reinvigorated by the advent of the concept of mirror neurons and the closely related framework of grounded cognition (1–8). The mirror neuron theory stemmed from the seminal discovery of neurons that activate equally when a specific action is performed by the tested individual and when the same action is observed being performed by others, giving strong neurophysiological support to the concept of comprehension and learning through simulation (for a review, see ref. 1). This is enabled by the presence of perception-action circuits in the brain that can provide motor areas with multimodal sensory information (2).
An array of findings in mirror neuron and related research strongly suggests that the motor system is not merely a “slave” or an “output” of central processing, but that it also takes an active role in the perception and comprehension of external events. In cognitive science, which had suggested the emergence of concepts from individual experiences long before these neurophysiological discoveries (3–5), a similar strand of research led to a more general framework of “grounding” (or “embodiment”) of cognitive functions and representations in bodily sensations and actions, which has been supported by a range of behavioral and neurophysiological experiments (6–8).

Nowhere did these approaches resonate more than in the neuroscience of language. Following the breakthrough neurological studies of the 19th century (9, 10), the human language function was for many decades confined to a small set of cortical areas in the left hemisphere. More recent research, however, challenged these views in favor of linguistic representations distributed over a range of brain areas, which span beyond the core language cortices of Broca and Wernicke and form circuits whose configuration depends on the exact sensory and motor reference of a specific representation (11). According to the neurobiological principle of associative learning, coactive neurons become linked in a distributed neuronal circuit that is formed in the process of language acquisition and that may, for example, bind a word’s sensory perception (temporal cortex) with its articulatory program (inferior-frontal cortex) and a sensory reference (e.g., visual cortex for imageable concrete objects) and/or a motor one (e.g., motor cortex for words describing actions). The cortical systems for language and action are reciprocally connected, allowing language- and action-related information to interact in such distributed neuronal assemblies.
It has been shown, for example, that words referring to actions of different body parts (e.g., kick, pick, lick) lead to differential activation in the motor strip (12, 13), organized in a somatotopic fashion similar to the somatotopy of body representations (14). Further, even the perception of individual speech sounds (e.g., labial “p” vs. dental “t”) leads to differential motor-strip activity in lip and tongue areas, respectively (15), in line with predictions of the motor theory of speech developed long before the advent of neuroimaging or the discovery of mirror neurons (16).

These views are, however, hotly contested in the literature, the most common argument being that the motor activation accompanying language or action observation is epiphenomenal and does not constitute a part of the comprehension process per se (17, 18). This argument is especially easy to make with respect to hemodynamic neuroimaging data (such as functional magnetic resonance imaging, fMRI), which have very poor time resolution, so that delayed covert action simulation or imagery cannot be excluded. It is, however, more difficult to argue against a small but growing body of time-resolved electro- and magneto-encephalographic (EEG, MEG) results that show a rapid (∼140–200 ms) activation of motor areas in response to action words (13, 19–21). With different theories of language, diverse as they may be, placing lexically and semantically specific processing at 150–500 ms and most often at 350–400 ms (22), it is hard to argue that word-specific activations before 200 ms reflect a late postcomprehension process.

However, recent investigations have suggested that the earliest brain reflections of lexical access can be seen much earlier, already at 50–80 ms (23). This earliness, in turn, may indicate that the speed of language processing in the brain is faster than previously believed and that even the 150- to 200-ms activations may therefore be late and possibly even secondary.
Importantly, previous research focused on the main peaks of event-related responses, possibly failing to locate specific activations outside these peak intervals. Further, the earlier focus of research on action-related verbs may have confounded the results, because verbs have been suggested to be preferentially represented in more frontal cortices (24–26). In a typical experiment using English, it is not possible to fully differentiate a verb from a noun (e.g., “pick” could be interpreted both as the action of picking and as the object of choice); although some recent work tried to address this confound (e.g., refs. 27 and 28), it did not have the temporal resolution to answer the neural timing question. Finally, although fMRI studies controlled the localization of motor cortices through a motor localizer task (12), previous EEG/MEG studies suggesting rapid motor-system involvement in comprehension mainly used crude localization of brain activity, relying on blurry source solutions built on template brain surfaces or even spherical models, usually in the absence of a localizer task.

Thus, to more fully elucidate the role of motor circuits in language perception, it appears essential to (i) scrutinize the entire time course of action word processing in the brain rather than concentrate on response maxima, (ii) use an experimental language in which verbs and nouns are unambiguously distinguished, to investigate the perception of action words that are or are not verbs, (iii) remove the stimulus-related experimental task and even attention to the stimuli, to minimize the risk of imagery or simulation, and (iv) use time-resolved electrophysiological imaging techniques in combination with a motor localizer task and individual brain surfaces for precise source localization. These challenges were tackled in the current study.
We used high-density magnetoencephalography to record auditory mismatch field responses, a neurophysiological index of linguistic memory-circuit activation (29), elicited by a set of tightly controlled Russian action-related verbs and nouns that were related to different body parts (kick, throw, swallow) and that the subjects were instructed to ignore while concentrating on a primary visual task. We then scrutinized motor-cortex activity in response to these items by calculating focal cortical current sources based on individual magnetic resonance (MR) images, comparing them between different semantic subcategories, and benchmarking them against movement-related cortical activity. We found somatotopically specific, ultrarapid activation of cortical motor structures in response to passively presented spoken words, providing strong evidence for the automatic involvement of motor-specific circuits in spoken language comprehension. Furthermore, these results show that the word comprehension process in the brain is subject to the operation of the neurophysiological mechanism of surround inhibition, whereby activation in competing motor representations can be suppressed by semantically incoherent verbal input.

7.
The peopling of the Remote Oceanic islands by Austronesian speakers is a fascinating yet contentious part of human prehistory. Linguistic, archaeological, and genetic studies have shown the complex nature of the process, in which the different components that helped to shape Lapita culture in Near Oceania each have their own unique history. Important evidence points to Taiwan as an Austronesian ancestral homeland with a more distant origin in South China, whereas alternative models favor South China to North Vietnam or a Southeast Asian origin. We test these propositions by studying the phylogeography of paper mulberry, a common East Asian tree species introduced and clonally propagated since prehistoric times across the Pacific for making barkcloth, a practical and symbolic component of Austronesian cultures. Using the hypervariable chloroplast ndhF-rpl32 sequences of 604 samples collected from East Asia, Southeast Asia, and Oceanic islands (including 19 historical herbarium specimens from Near and Remote Oceania), 48 haplotypes are detected, and haplotype cp-17 is predominant in both Near and Remote Oceania. Because cp-17 has an unambiguous Taiwanese origin and cp-17–carrying Oceanic paper mulberries are clonally propagated, our data concur with expectations of Taiwan as the Austronesian homeland, providing circumstantial support for the “out of Taiwan” hypothesis. Our data also provide insights into the dispersal of paper mulberry from South China “into North Taiwan,” the “out of South China–Indochina” expansion to New Guinea, and the geographic origins of post-European introductions of paper mulberry into Oceania.

The peopling of Remote Oceania by Austronesian speakers (hereafter Austronesians) concludes the last stage of Neolithic human expansion (1–3). Understanding from where, when, and how ancestral Austronesians bridged the unprecedentedly broad water gaps of the Pacific is a fascinating yet contentious subject in anthropology (1–8).
Linguistic, archaeological, and genetic studies have demonstrated the complex nature of the process, in which the different components that helped to shape Lapita culture in Near Oceania each have their own unique history (1–3). Important evidence points to Taiwan as an Austronesian ancestral homeland with a more distant origin in South China (S China) (3, 4, 9–12), whereas alternative models suggest S China to North Vietnam (N Vietnam) (7) or a Southeast Asian (SE Asian) origin, based mainly on human genetic data (5). The complexity of the subject is further manifested by models theorizing how different spheres of interaction with Near Oceanic indigenous populations during the Austronesian migrations contributed to the origin of Lapita culture (1–3), ranging from the “Express Train” model, assuming fast migrations from S China/Taiwan to Polynesia with limited interaction (4), to the “Slow Boat” (5) and “Voyaging Corridor Triple I” models, in which the “Intrusion” of slower Austronesian migrations plus the “Integration” with indigenous Near Oceanic cultures resulted in the “Innovation” of the Lapita cultural complex (2, 13).

Human migration entails complex organizational skills and cultural adaptations of migrants or colonizing groups (1, 3). Successful colonization of resource-poor islands in Remote Oceania involved the conscious transport of a number of plant and animal species critical for both the physical survival of the settlers and their cultural transmission (14). In the process of the Austronesian expansion into Oceania, a number of animal (e.g., chickens, pigs, rats, and dogs) and plant species (e.g., bananas, breadfruit, taro, yam, and paper mulberry), either domesticated or managed, were introduced over time from different source regions (3, 8, 15). Although each of these species has been shown to have a different history (8), all of these “commensal” species were totally dependent upon humans for dispersal across major water gaps (6, 8, 16).
The continued presence of these species as living populations far outside their native ranges represents a legacy of the highly skilled seafaring and navigational abilities of the Austronesian voyagers.

Given their close association with and dependence on humans for their dispersal, phylogeographic analyses of these commensal species provide unique insights into the complexities of the Austronesian expansion and migrations (6, 8, 17). This “commensal approach,” first used to investigate the transport of the Pacific rat Rattus exulans (6), has also been applied to other intentionally transported animals such as pigs, chickens, and the tree snail Partula hyalina, as well as to organisms transported accidentally, such as the moth skink Lipinia noctua and the bacterial pathogen Helicobacter pylori (see refs. 2 and 8 for recent reviews).

Ancestors of the Polynesian settlers transported and introduced a suite of ∼70 useful plant species into the Pacific, but not all of these reached the most isolated islands (15). Most of the commensal plants, however, appear to have geographic origins on the Sahul Plate rather than being introduced from the Sunda Plate or East Asia (16). For example, Polynesian breadfruit (Artocarpus altilis) appears to have arisen over generations of vegetative propagation and selection from Artocarpus camansi, which is found wild in New Guinea (18). Kava (Piper methysticum), cultivated for its sedative and anesthetic properties, is distributed entirely within Oceania, from New Guinea to Hawaii (16). On the other hand, ti (Cordyline fruticosa), also a multifunctional plant in Oceania, has no apparent “native” distribution of its own, although its high morphological diversity in New Guinea suggests an origin there (19).
Other plants have a different history, such as sweet potato, which is of South American origin, was first introduced into Oceania in pre-Columbian times, and was secondarily transported across the Pacific by Portuguese and Spanish voyagers via historically documented routes from the Caribbean and Mexico (17).

Of all the commensal species introduced to Remote Oceania as part of the “transported landscapes” (1), paper mulberry (Broussonetia papyrifera; also called Wauke in Hawaii) is the only one with a temperate to subtropical East Asian origin (15, 20, 21). A wind-pollinated, dioecious tree species with globose syncarps of orange–red juicy drupes dispersed by birds and small mammals, paper mulberry is common in China, Taiwan, and Indochina, growing and often thriving in disturbed habitats (15, 20, 21). Because of its long fiber and ease of preparation, paper mulberry contributed to the invention of papermaking in China in A.D. 105 and continues to be a prime source of high-quality paper (20, 21). In A.D. 610, this hardy tree species was introduced to Japan for papermaking (21). Subsequently it was also introduced to Europe and the United States (21). Paper mulberry was introduced to the Philippines for reforestation and fiber production in A.D. 1935 (22). In these introduced ranges, paper mulberry often becomes naturalized and invasive (20–22). In Oceania, linguistic evidence strongly suggests an ancient introduction of paper mulberry (15, 20); its propagation and importance across the Remote Oceanic islands were well documented during Captain James Cook’s first voyage as the main material for making barkcloth (15, 20).

Barkcloth, generally known as tapa (or kapa in Hawaii), is a nonwoven fabric used by prehistoric Austronesians (15, 21).
Since the 19th century, the daily use of barkcloth has declined as it was replaced by introduced woven textiles; however, tapa remains culturally important for ritual and ceremony on several Pacific islands such as Tonga, Fiji, and Samoa and on the SE Asian island of Sulawesi (23). The symbolic status of barkcloth is also seen in recent revivals of traditional tapa making in several Austronesian cultures, such as those of Taiwan (24) and Hawaii (25). To make tapa, the inner bark is peeled off and the bark pieces are expanded by pounding (20, 21, 23). Many pieces of bark are assembled and felted together by additional pounding to create large textiles (23). The earliest stone beaters, a distinctive tool used for pounding bark fiber, were excavated in S China from a Late Paleolithic site in Guangxi dating back to ∼8,000 y B.P. (26) and from coastal Neolithic sites in the Pearl River Delta dating back to 7,000 y B.P. (27), providing the earliest known archaeological evidence for barkcloth making. Stone beaters dated to slightly later periods have also been excavated in Taiwan (24), Indochina, and SE Asia, suggesting the diffusion of barkcloth culture to these regions (24, 27). These archaeological findings suggest that barkcloth making was invented by Neolithic Austric-speaking peoples in S China long before the Han-Chinese influences that eventually replaced the proto-Austronesian language as well as its culture (27).

In some regions (e.g., the Philippines and the Solomon Islands), tapa is made from other species of the mulberry family (Moraceae), such as breadfruit and/or wild figs (Ficus spp.); however, paper mulberry remains the primary source of raw material for producing the softest and finest cloth (20, 23). Before its eradication and extinction on many Pacific islands due to the decline of tapa culture, paper mulberry was widely grown across the Pacific islands inhabited by Austronesians (15, 20).
Both the literature (15, 20) and our own observations (28–30) indicate that extant paper mulberry populations in Oceania are found only in cultivation or as feral populations in abandoned gardens, as on Rapa Nui (Easter Island), with naturalization known only from the Solomon Islands (20). For tapa making, its stems are cut and harvested before flowering, and, like the majority of Polynesian-introduced crops (16), paper mulberry is propagated clonally by cuttings or root shoots (15, 20), reducing the possibility of fruiting and dispersal via seeds. The clonal nature of the Oceanic paper mulberry has been shown by the lack of genetic variability in nuclear internal transcribed spacer (ITS) DNA sequences (31). With a few exceptions (30), authors have suggested that only male trees of paper mulberry were introduced to Remote Oceania in prehistoric times (15, 20). Furthermore, because paper mulberry has no close relative in Near and Remote Oceania (20), the absence of sexual reproduction precludes the possibility of introgression and makes paper mulberry an ideal commensal species for tracking Austronesian migrations (6, 30).

To increase our understanding of the prehistoric Austronesian expansion and migrations, we tracked the geographic origins of Oceanic paper mulberry, the only Polynesian commensal plant likely originating in East Asia, using DNA sequence variation of the maternally inherited (32) and hypervariable (SI Text) chloroplast ndhF-rpl32 intergenic spacer (33). We sampled broadly in East Asia (Taiwan, S China, and Japan) and SE Asia (Indochina, the Philippines, and Sulawesi) as well as on the Oceanic islands where traditional tapa making is still practiced. Historical herbarium collections (A.D. 1899–1964) from Oceania were also sampled to strengthen inferences regarding the geographic origins of Oceanic paper mulberry.
The employment of ndhF-rpl32 sequences and expanded sampling greatly increased the phylogeographic resolution beyond that attainable in a recent study (31) using nuclear ITS sequences (also see SI Text and Fig. S1) and inter-simple sequence repeat (ISSR) markers with much smaller sampling.

Fig. S1. ITS haplotype network (n = 17, A–Q) and haplotype distribution and frequency. The haplotype network was reconstructed using TCS (34), with alignment gaps treated as missing data. The sizes of the circles and pie charts are proportional to the frequency of each haplotype (shown in parentheses). Squares denote unique haplotypes (haplotypes found in only one individual).
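The first step of such a phylogeographic analysis, collapsing aligned sequences into haplotypes and tallying their frequency per region, can be sketched as follows. This is a minimal illustration: the region names, sequences, and `cp-` labels below are invented stand-ins, not the actual ndhF-rpl32 data or the paper's haplotype numbering.

```python
from collections import Counter, defaultdict

def haplotype_table(samples):
    """Collapse aligned sequences into haplotypes and tally their
    frequency per region. `samples` is a list of (region, sequence)
    pairs; identical sequences share one haplotype label."""
    labels = {}                 # sequence -> stable label (cp-1, cp-2, ...)
    table = defaultdict(Counter)
    for region, seq in samples:
        if seq not in labels:
            labels[seq] = f"cp-{len(labels) + 1}"
        table[region][labels[seq]] += 1
    return {region: dict(counts) for region, counts in table.items()}

# Hypothetical toy data: one haplotype shared between regions.
samples = [
    ("Taiwan", "ATCGA"), ("Taiwan", "ATCGA"), ("Taiwan", "ATCGT"),
    ("Remote Oceania", "ATCGA"), ("Remote Oceania", "ATCGA"),
]
print(haplotype_table(samples))
```

A shared, predominant haplotype across regions (as cp-17 is between Taiwan and Oceania in the study) would show up here as the same label dominating several regional tallies; network reconstruction (as with TCS) is a separate, later step.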

8.
Cellular respiration is powered by membrane-bound redox enzymes that convert chemical energy into an electrochemical proton gradient and drive energy metabolism. By combining large-scale classical and quantum mechanical simulations with cryo-electron microscopy data, we resolve here molecular details of the conformational changes linked to proton pumping in mammalian complex I. Our data suggest that complex I deactivation blocks water-mediated proton transfer between a membrane-bound quinone site and the proton-pumping modules, decoupling the energy-transduction machinery. We identify a putative gating region at the interface between the membrane domain subunits ND1 and ND3/ND4L/ND6 that modulates proton transfer through conformational changes in transmembrane helices and bulky residues. This region is perturbed by mutations linked to human mitochondrial disorders and is suggested to also undergo conformational changes during catalysis in simpler complex I variants that lack the “active”-to-“deactive” transition. Our findings suggest that conformational changes in transmembrane helices modulate proton transfer dynamics through wetting/dewetting transitions and provide important functional insight into the mammalian respiratory complex I.

In mitochondrial cellular respiration, the membrane-bound enzyme complexes I, III, and IV convert chemical energy into a flux of electrons toward dioxygen (1–6). The free energy of the process is transduced by pumping protons across the inner mitochondrial membrane (IMM), powering oxidative phosphorylation and active transport (7, 8). The electron transport process is initiated by respiratory complex I (NADH:ubiquinone oxidoreductase), a 45-subunit modular enzyme machinery that shuttles electrons from nicotinamide adenine dinucleotide (NADH) to ubiquinone (Q10) and transduces the free energy by pumping protons across the IMM, generating a proton motive force (pmf) (1, 4–6) (Fig. 1). This proton-coupled electron transfer reaction is fully reversible, and complex I can also operate in reverse electron transfer (RET) mode, powering ubiquinol oxidation by consumption of the pmf. Such RET modes become prevalent under hypoxic or anoxic conditions that may result, e.g., from stroke or tissue damage (9), during which electrons leak from complex I to molecular oxygen, resulting in the formation of reactive oxygen species (ROS) with physiologically harmful consequences (9–11). To regulate this potentially dangerous operation mode, mammalian complex I can transition into a “deactive” (D) state with low Q10-turnover activity (12, 13). Although some structural changes involved in the “active”-to-“deactive” (A/D) transition were recently resolved (14–16), the molecular details of how this transition regulates enzyme turnover and its relevance under in vivo conditions remain puzzling. Moreover, it is also debated whether the conformational changes linked to this transition are involved in the native catalytic cycle of all members of the complex I superfamily or whether the transition is specific to the mitochondrial enzyme (12, 13, 17).

Fig. 1. Structure and function of the mammalian complex I.
(A) Electron transfer from NADH reduces quinone (Q) to quinol (QH2) and triggers proton pumping across the membrane domain. (Inset) Closeup of the ND1/ND3/ND4L/ND6 interface involved in the “active”-to-“deactive” transition. TM3ND6, which has been linked to conformational changes in the A/D transitions, is marked. (B) An intact atomic model of the deactive state was constructed using MDFF based on the cryoEM structure of the “active” state and the density map of the “deactive” state. (C and D) Conformational changes during the A (pink)/D (brown) transition during the MDFF simulations at the TM3ND6 region, with the D-state density map shown. Refer to SI Appendix, Figs. S2, S3, and S12 for other conformational changes. (E) The dihedral angle, ϕ, for ND6 residues Leu51(Cβ)-Leu51(Cα)-Phe67(Cα)-Phe67(Cβ) during MD simulations in the “active” and “deactive” states in comparison to the refined cryoEM models.

The recently resolved cryo-electron microscopy (cryoEM) structures of the “deactive” mammalian complex I at around 4-Å resolution highlighted conformational changes around several subunits close to the interface between the hydrophilic and membrane domains of complex I. Particularly interesting are the conformational changes around the membrane domain subunits ND4L, ND3, and ND6 (ND for NADH dehydrogenase), which form a bundle of 11 transmembrane (TM) helices connected by long loop regions (14–16). Notably, it was observed that TM3 of ND6 transitions from a fully α-helical form in the “active” state to a π-bulge around residues 60 to 65 during the deactivation process (14–16). Although the exact relevance of these conformational transitions remains debated, it is notable that several point mutations in the vicinity of these regions have been linked to mitochondrial disease (11), supporting their possible functional relevance.
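A dihedral angle of the kind tracked in Fig. 1E, defined by four atoms such as Leu51(Cβ)-Leu51(Cα)-Phe67(Cα)-Phe67(Cβ), is a standard geometric quantity computed from Cartesian coordinates along an MD trajectory. A minimal sketch (the coordinates in the demo call are hypothetical, not taken from the actual structures):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four points,
    i.e., the angle between the planes (p0, p1, p2) and (p1, p2, p3)."""
    b0 = p1 - p0
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Project b0 and b2 onto the plane perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Hypothetical coordinates in a trans (180°) arrangement:
print(round(dihedral(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                     np.array([0.0, 0.0, 1.0]),
                     np.array([1.0, 0.0, 1.0])), 1))  # → 180.0
```

In practice this function would be evaluated on the four atom positions at every trajectory frame to produce the time series of ϕ compared between the “active” and “deactive” simulations.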
Structural changes during complex I deactivation, inferred from the lack of density in the cryoEM maps (14–16), were also suggested to take place in several loop regions of the membrane domain, near the Q10 binding-site tunnel, and around the supernumerary subunits NDUFA5/NDUFA10 (16, 18). However, the functional consequences of these structural changes and their coupling to biological activity remain unclear.

To probe how the structural changes linked to deactivation could affect the protonation and quinone dynamics of mammalian complex I, we combine here atomistic molecular dynamics (MD) simulations and hybrid quantum/classical (QM/MM) free energy calculations with cryoEM data (16, 19). Our combined findings suggest that conformational changes around the ND1/ND3/ND4L/ND6 interface and conserved loop regions could block the coupling between the proton-pumping and redox modules upon complex I deactivation. The molecular principles explored here are of general importance for elucidating energy-transduction mechanisms in the mammalian respiratory complex I and possibly other bioenergetic enzyme complexes, but also for understanding the development of mitochondrial diseases.

9.
Eutrophication is a major driver of species loss in plant communities worldwide. However, the underlying mechanisms of this phenomenon are controversial. Previous studies have raised three main explanations: 1) High levels of soil resources increase standing biomass, thereby intensifying competitive interactions (the “biomass-driven competition hypothesis”). 2) High levels of soil resources reduce the potential for resource-based niche partitioning (the “niche dimension hypothesis”). 3) Increasing soil nitrogen causes stress by changing the abiotic or biotic conditions (the “nitrogen detriment hypothesis”). Despite several syntheses of resource addition experiments, so far, no study has tested all of the hypotheses together. This is a major shortcoming, since the mechanisms underlying the three hypotheses are not independent. Here, we conduct a simultaneous test of the three hypotheses by integrating data from 630 resource addition experiments located in 99 sites worldwide. Our results provide strong support for the nitrogen detriment hypothesis, weaker support for the biomass-driven competition hypothesis, and negligible support for the niche dimension hypothesis. The results further show that the indirect effect of nitrogen through its effect on biomass is minor compared to its direct effect and is much larger than that of all other resources (phosphorus, potassium, and water). Thus, we conclude that nitrogen-specific mechanisms are more important than biomass or niche dimensionality as drivers of species loss under high levels of soil resources. This conclusion is highly relevant for future attempts to reduce biodiversity loss caused by global eutrophication.

A decline in species richness with increasing resource availability is a universal pattern in plant communities (1–3). This pattern is particularly common in herbaceous plant communities and has been documented in hundreds of experiments worldwide (3–10). The recognition that anthropogenic eutrophication is a major threat to global diversity (11, 12) has accelerated research into the extent and implications of this phenomenon (13, 14). Nevertheless, the mechanisms by which high levels of resources cause a decline in species richness are not fully understood (15–21).

Early attempts to explain the decrease in richness under high levels of soil resources attributed this pattern to an increase in biomass, leading to intensified interspecific competition (22, 23). According to this hypothesis (hereafter, the “biomass-driven competition hypothesis”), high levels of soil resources provide a competitive advantage to fast-growing and large species, excluding smaller and slow-growing species from the community (22–25). It has also been proposed (23) and demonstrated (26) that such competitive exclusion is primarily related to competition for light. Recent work attributes the pattern to the asymmetric nature of this competition [i.e., tall plants shade shorter ones but not the opposite (27)]. However, other work suggests that root competition may also contribute to species loss under high resource levels (15).

Another hypothesis that has gained support in the last decade has its roots in niche theory (28, 29). This hypothesis, known as the “niche dimension hypothesis” (30), is based on the idea that species coexistence requires niche partitioning via differences in resource requirements (29). According to this hypothesis, limiting resources function as “niche axes”; thus, high levels of soil resources reduce the number of limiting resources, thereby reducing the number of species that can coexist in the community (30).
The strongest support for this hypothesis comes from a global-scale experiment (8) in which the same experimental protocol was applied at all sites. This initiative is the most extensive experimental effort ever undertaken to evaluate diversity responses to resource addition (45 sites on five continents) and is unique in its factorial design: All communities at all sites received all possible combinations of nitrogen, phosphorus, and potassium (i.e., N, P, K, NP, NK, PK, and NPK). This factorial design allowed the authors to test the effect of the number of added resources on species richness. Consistent with their expectations, species loss in fertilized plots was strongly and positively related to the number of added resources. Similar results were found in other studies (30–32) and were interpreted as support for the niche dimension hypothesis (although see ref. 20).

A third hypothesis suggests that the decline in species richness under high levels of soil resources is specifically related to nitrogen (hereafter, N). High levels of N may reduce plant performance through several mechanisms, including ammonium toxicity (33), acidification (34), changes in the soil microbiome (35), and increased susceptibility to various stress agents (13, 14). This “nitrogen detriment hypothesis” is supported by studies showing that N addition has a stronger negative effect on species richness than other soil resources (refs. 4, 9, and 36, although see ref. 18).

In the last few decades, numerous studies, including a large number of meta-analyses, have investigated the drivers of species loss under high levels of resource availability (2, 3, 6–10, 37). However, each of these studies has focused on particular resources or mechanisms, and no study has attempted to test the three hypotheses simultaneously. This is a significant shortcoming because the mechanisms underlying the three hypotheses are not independent.
Such a lack of independence increases the likelihood of confounding effects and may result in biased conclusions concerning the effects of the underlying mechanisms.

Here, we test the three hypotheses together using an extensive dataset collected from 630 different resource addition experiments at 99 different sites worldwide (Fig. 1 and SI Appendix, Table S1). Our analysis was designed to explicitly test distinct predictions derived from the above hypotheses. The first, derived from the biomass-driven competition hypothesis, is a negative effect of biomass on species richness. The second, derived from the niche dimension hypothesis, is a negative effect of the number of added resources on species richness. The third, derived from the nitrogen detriment hypothesis, is a negative effect of the presence of N on species richness (with all other resources having much weaker effects).

Fig. 1. General characteristics of the data included in our meta-analysis. (A) Geographical distribution of the sites included in the meta-analysis [red, sites of the Nutrient Network included in Harpole et al.’s study (8); green, other sites]. (B) The experimental treatments included in the meta-analysis and their prevalence in the dataset.

As emphasized above, the three hypotheses are not mutually exclusive, and more than a single mechanism might be involved in causing richness decline in response to resource addition. Thus, rather than considering the three hypotheses as alternatives, we aimed to evaluate the degree to which each hypothesis receives support from previously published experiments. To this end, we analyzed the data in two steps. First, we tested each hypothesis separately to verify that the patterns obtained from our dataset are consistent with those obtained in previous studies when testing each hypothesis by itself.
Then, in a second step, we tested the three hypotheses simultaneously using two complementary approaches: multiple regression models and structural equation models. These approaches allowed us to quantitatively compare the effects of the three previously proposed drivers of species loss (biomass, number of resources, and presence of N) based on their predictive power and effect size, and to compare their direct vs. indirect effects on species richness.
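The multiple-regression step described above can be sketched in a few lines. This is an illustrative toy only: the data below are synthetic stand-ins for the 630-experiment dataset, and the variable names (`biomass_change`, `n_resources`, `has_n`) are hypothetical, not the authors' actual predictors. Standardizing the predictors makes the fitted coefficients directly comparable as effect sizes, which is the logic behind comparing the three drivers.

```python
# Toy sketch (synthetic data, hypothetical variable names), not the authors'
# actual analysis pipeline: compare standardized effects of three predictors
# of richness change via ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 630
biomass_change = rng.normal(1.0, 0.5, n)   # log response ratio of biomass
n_resources = rng.integers(1, 4, n)        # 1-3 added resources
has_n = rng.integers(0, 2, n)              # nitrogen added? 0/1

# Assume (for illustration) all three drivers depress richness; noise mimics
# between-experiment variance.
richness_change = (-0.30 * biomass_change
                   - 0.10 * n_resources
                   - 0.25 * has_n
                   + rng.normal(0, 0.1, n))

# Standardize predictors so coefficients are comparable effect sizes.
X = np.column_stack([biomass_change, n_resources, has_n]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])       # add intercept column

beta, *_ = np.linalg.lstsq(X, richness_change, rcond=None)
for name, b in zip(["intercept", "biomass", "n_resources", "has_N"], beta):
    print(f"{name:12s} {b:+.3f}")
```

A structural equation model would go further by also estimating the indirect path (e.g., added resources increasing biomass, which in turn suppresses richness), which a single regression cannot separate.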

10.
Empiricist philosophers such as Locke famously argued that people born blind might learn arbitrary color facts (e.g., marigolds are yellow) but would lack color understanding. Contrary to this intuition, we find that blind and sighted adults share causal understanding of color, despite not always agreeing about arbitrary color facts. Relative to sighted people, blind individuals are less likely to generate “yellow” for banana and “red” for stop sign but make similar generative inferences about real and novel objects’ colors, and provide similar causal explanations. For example, people infer that two natural kinds (e.g., bananas) and two artifacts with functional colors (e.g., stop signs) are more likely to have the same color than two artifacts with nonfunctional colors (e.g., cars). People develop intuitive and inferentially rich “theories” of color regardless of visual experience. Linguistic communication is more effective at aligning intuitive theories than knowledge of arbitrary facts.

What and how do we learn from others, and what must we see for ourselves? A common intuition is that sensory phenomena have to be experienced directly to be fully grasped. Locke (1) and Hume (2) argued that an understanding of color was inaccessible to people born blind. More recently, Frank Jackson (3, 4) suggested that Mary, a fictional color scientist living in a black-and-white room, would miss out on essential elements of color understanding that could only be gained through first-person experience (see also ref. 5). Many contemporary theories of cognition, including embodiment theories, link knowledge of sensory phenomena to first-person experience. According to such views, visual experience is central to concepts like “red” (6–12). Once created, the original sensory trace is activated by language and thinking. When one speaker says to another, “This car is red,” mutual understanding makes use of a sensory common ground (i.e., prior visual experiences of "red"). Consistent with this idea, hearing color words activates brain regions involved in color perception (e.g., refs. 13–15). Such views propose that people with different sensory experiences have different conceptual representations of sensory phenomena (e.g., each person’s concept of "red" reflects the specific "reds" they have seen) (7, 11, 14). Exactly what aspects of sensory knowledge come from sensory experience remains an open question.

In domains other than sensory phenomena, we gain much of our knowledge from other people through cultural transmission rather than from direct sensory experience (e.g., refs. 16 and 17). Humans are highly adept at sharing knowledge within a society and across generations (18–22). Part of what makes cultural transmission so effective is language, a uniquely human and remarkably efficient communication system.
Religious beliefs, internal contents of people’s minds, and social categories (e.g., gender) are among the many things we learn from others through language (e.g., refs. 23–27). Here, we ask what kind of understanding of sensory phenomena is transmitted via language by comparing knowledge of color among blind and sighted people living in the same culture.

As noted above, a longstanding view in philosophy and psychology is that color knowledge in blindness is fragmented and empty (1, 2, 28, 29). However, unlike Mary, the lone color scientist living in a black-and-white room, people born blind engage in ordinary linguistic communication with sighted people who experience color. What does such communication convey? Landau and Gleitman (30) were the first to challenge the idea of deficient “visual” knowledge in blindness, by showing that a congenitally blind 4-y-old, Kelli, applied color words to concrete objects but not mental entities (e.g., ideas) and understood that color could only be perceived visually, unlike texture or size. Blind and sighted adults also share knowledge of similarities between colors (e.g., "green" and "blue" are similar but different from "orange" and "red"), although this knowledge is more variable among blind individuals (31–33).

Potentially consistent with the idea that sensory experience is necessary, several recent studies have identified substantial differences in blind and sighted people’s color knowledge. Sighted people can report the colors of many objects (e.g., hippos are "gray", and strawberries are "red") and show high agreement; by contrast, agreement is lower among people who are blind and between sighted and blind people (29, 34). Moreover, agreement is lower among blind adults for color relative to other physical dimensions, such as shape, texture, and size (34).
Even when people who are blind agree with the sighted on the canonical color of an object (e.g., strawberries are "red"), blind individuals are less likely to use color as a dimension during semantic similarity judgments, leading to the suggestion that such knowledge is “merely stipulated” for blind but not sighted people (29). Converging evidence for the idea that language is limited in what it transmits about color comes from text corpus analyses, which are less successful at extracting color information from text, relative to other physical dimensions (e.g., shape) and abstract properties (e.g., taxonomy) (35, 36). One interpretation of these results is that despite a rich vocabulary of color terms in English, everyday linguistic communication is limited in what it conveys about color.

However, these prior studies may underestimate the capacity of language to transmit color information. Like most studies of color knowledge in sighted people, these studies focused on knowledge of associative color facts such as that strawberries are "red", rather than on inferentially rich, causal understanding of color (e.g., refs. 37–41). Such color factoids might be least likely to be culturally transmitted since, for both sighted and blind people alike, they are inferentially shallow and disconnected from other things we know about objects. Little follows specifically from the fact that strawberries are "red", as opposed to "blue" or "purple."

In addition to such associative links between objects and their colors, even young children have causal-explanatory intuitions about color (42, 43). These intuitions are a part of broader frameworks, often referred to as “intuitive theories” about physical objects (e.g., refs. 44–50). Children expect an object’s relationship with color to differ depending on whether it is a natural kind (e.g., plant, animal, gem) or an artifact (e.g., machine, tool).
In response to “Why is this object yellow?” children prefer explanations that appeal to biological mechanisms for natural kinds but human intentions for artifacts (43). In contrast to associative color facts, causal object–color links are both explanatory and can generate predictions about objects that have not previously been experienced. When asked, “Could something still be a Glick even if it was a different color?,” 5-y-old children are more likely to say yes for an artifact than for an animal. By contrast, two instances of a natural kind (e.g., two strawberries) and two instances of an artifact (e.g., two cars) are judged equally likely to have consistent shapes (42). Causal object–color relationships also differ among artifacts in ways that are related to human intentions, although this type of knowledge has not previously been tested. For artifacts such as stop signs and paper, color plays a functional role and is therefore consistent across tokens. Stop signs are red for visibility and recognizability, and paper is white to make markings visible. By contrast, for artifacts like cars and mugs, color is not related to function (e.g., transportation and holding liquid) and therefore can vary freely.

Is first-person sensory experience instrumental to acquiring such causal-explanatory color knowledge? One possibility is that seeing stop signs, paper, mugs, and cars is necessary for viewers to infer causal object–color relationships and to generalize such knowledge to novel instances, just as seeing animals appears to be highly useful for learning their specific colors (34). Here, we predicted instead that linguistic communication would be more effective at transmitting causal-explanatory color knowledge than associative color facts. Laboratory experiments suggest that children and adults are better at learning such causal-explanatory knowledge (51–53).
Adults remember lists of features better if they can be related to each other and recall the same events and facts better if they are presented as coherent stories with causal structure (52–56). People naturally search for explanatory information by asking “why” (57–63). The process of explaining itself can boost memory for causal information: After being prompted to explain, children remember an object’s features better when there is a link between the feature and how the object works, as opposed to when the relation is an arbitrary association (51, 64). These laboratory experiments suggest that causal-explanatory knowledge is learned more effectively than isolated facts. The case of color knowledge in blindness offers a test of whether linguistic communication transmits causal-explanatory knowledge more effectively in naturalistic settings.

In the current study, we probed sighted and congenitally blind people’s associative and causal-explanatory knowledge of color in three experiments. Experiment 1 first queried associative memory for real objects’ colors by asking participants to generate “a common color of X” (Fig. 1). We next asked participants to judge how likely two instances of the same object are to have the same color, for natural kinds (e.g., two bananas) and artifacts (e.g., two cars). We reasoned that if people share intuitive theories about the relationship between color and object kind, blind and sighted people would make similar inferences about color consistency, even while disagreeing on associative facts (i.e., the particular colors of objects). We predicted that people would judge natural kinds and artifacts with function-relevant color (e.g., stop signs), but not artifacts with function-irrelevant color (e.g., cars), to have high color consistency across instances.
For artifacts, to ask whether blind and sighted people make color consistency judgments by reasoning about the causal relationship between the object and its color, we additionally obtained judgments about the relevance of color to artifact function. We predicted that the color consistency ratings would correlate with functional relevance.

Fig. 1. Experimental conditions and trials for color consistency inference. Participants were asked about color and usage consistency for real (experiment 1) and novel (experiment 2) objects. In both experiments, color trials asked about natural kinds, artifacts with nonfunctional colors, and artifacts with functional colors, while usage trials asked about natural kinds and artifacts. Different items were used in every trial. For experiment 1, all items used are listed; for experiment 2, one sample trial is shown (the full list of trials can be found in SI Appendix).

The ability to support generalization to novel instances is a key test of whether knowledge is inferentially rich (e.g., refs. 47 and 65). In experiment 2, we thus asked participants to make inferences about color consistency for novel objects (natural kinds, artifacts with function-relevant color, and artifacts with function-irrelevant color) in an imaginary island scenario (Fig. 1). If knowledge about the origins and causes of color is shared, then blind and sighted participants might make systematic predictions about color consistency for novel objects on the basis of object category (e.g., creature, gem, gadget, or coin). Finally, in experiment 3, we elicited open-ended explanations for why objects have their colors (e.g., “Why is a carrot orange?”). This allowed us to probe the specific nature of blind and sighted people’s knowledge of the causal mechanisms that give rise to object colors.

11.
The Pleistocene global dispersal of modern humans required the transit of arid and semiarid regions where the distribution of potable water provided a primary constraint on dispersal pathways. Here, we provide a spatially explicit continental-scale assessment of the opportunities for Pleistocene human occupation of Australia, the driest inhabited continent on Earth. We establish the location and connectedness of persistent water in the landscape using the Australian Water Observations from Space dataset combined with the distribution of small permanent water bodies (springs, gnammas, native wells, waterholes, and rockholes). Results demonstrate a high degree of directed landscape connectivity during wet periods and a high density of permanent water points widely but unevenly distributed across the continental interior. A connected network representing the least-cost distance between water bodies and graded according to terrain cost shows that 84% of archaeological sites >30,000 y old are within 20 km of modern permanent water. We further show that multiple, well-watered routes into the semiarid and arid continental interior were available throughout the period of early human occupation. Depletion of high-ranked resources over time in these paleohydrological corridors potentially drove a wave of dispersal farther along well-watered routes to patches with higher foraging returns.

Considerable debate has surrounded the timing, routes, and mechanisms of early human colonization of the continent of Australia. Initial occupation from the north appears to have begun before 47 kyBP (1–8), with relatively rapid movement thereafter; for example, the Willandra Lakes region in the southeast of the continent may have been occupied within 1,000 y after the arrival of humans (1, 9, 10).
Birdsell (11) considered that dispersal occurred rapidly and throughout the continent, whereas Bowdler (12) considered that early dispersal took place along the coastlines, with limited initial occupation of the interior. Horton (13) and Tindale (14) added the postulates that, upon arrival in the northwest, or north, respectively, humans dispersed through the northern and eastern interior woodlands along riverine corridors and thence to the coast. These “end-member” dispersal scenarios (Fig. 1) subsequently have been reworked to include a more nuanced understanding of “the filling of the continent” (p. 453 in ref. 15) as variably dependent on a matrix of biogeographic (16), ecological/climatic (17), and sociological/technological (18, 19) facilitators of—or barriers to—dispersal from an initial point of entry in the north (20). The vast interior of the continent is now viewed as a mosaic of potential oases, corridors, and barriers, with the viability of a specific region for occupation or transit also depending on the trajectories of environmental change (21–24).

Fig. 1. Proposed colonization models for the Australian continent. (A) Birdsell (11). (B) Tindale (14). (C) Horton (13). (D) Bowdler (12).

O’Connell and Allen (1), building on previous work (25) and drawing on optimal foraging theory, propose a model of human dispersal throughout the continental interior driven by resource availability/depletion, with the major interior rivers/river basins representing the environments most attractive to human foragers; these environments extended into other areas for short periods, at times of rain-related resource “flushes.” Smith (26) attributes human dispersal through the desert to access to the food resources provided by stepping stones of small and variable water features, rather than to the resources themselves. All treatments of human dispersal in Pleistocene Sahul to date have lacked an explicit spatial dimension.
What potential dispersal routes were available, where, and under what circumstances? These questions relate specifically to water in the landscape, because water is critical for human survival (27–30), and three-quarters of Australia is semiarid or arid. In the absence of spatial information, discussion of the patterns of human colonization in Australia usually has been framed in general terms of aridity—the absence of water—although it is well known that even the driest deserts in Australia are periodically flooded (21, 31, 32). In the Western Desert, for example, Peterson (p. 65 in ref. 33) noted that “after substantial falls of rain the population disperses widely to the most ephemeral sources far out on the plains. As the water supplies disappear the people retreat back to the more permanent water supplies, where they may become trapped for a period” (34). It is perhaps for this reason that Gould (35) observed that indigenous Australians prioritize foraging near satellite water holes before settling closer to the main water hole. In the Western Desert, Veth (36) notes a positive correlation between the number of extractive artifacts and the permanency of water.

Aridity in isolation therefore is not necessarily a barrier either to habitation or to transit. It is the duration of inundation, the connectedness of water at times of inundation, and the location of permanent water in the landscape that dictate where, and for what length of time, humans could reside in or transit through most of interior Australia. O’Connell and Allen note that “terrestrial patch rank was determined primarily by the availability of freshwater, as measured by the volume and reliability of precipitation and/or local stream flow” (pp. 7–8 in ref. 25).

The Water Observations from Space (WOfS) dataset (37) allows an assessment of the spatial distribution and permanency of standing water in the modern Australian landscape.
Here we use this information, coupled with the distribution of small natural permanent water bodies (springs, gnammas, native wells, waterholes, and rockholes) compiled from the 1:250,000 topographic sheets (nationalmap.gov.au), to provide a spatially explicit assessment of the opportunities for Pleistocene human occupation of, and dispersal throughout, Australia. A connected network was produced representing the least-cost distance between water bodies, graded according to terrain cost.
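The network construction just described — least-cost distances between water points over a terrain-cost surface, with links retained below a cost threshold — can be sketched with a standard Dijkstra traversal. This is a toy illustration, not the authors' GIS pipeline: the grid values, water-point coordinates, and the linking threshold below are all invented.

```python
# Toy sketch (invented grid, points, and threshold) of least-cost linking of
# water points over a terrain-cost surface, via Dijkstra on a 4-connected grid.
import heapq

terrain = [                      # cost to enter each cell (higher = rougher)
    [1, 1, 2, 9, 1],
    [1, 2, 2, 9, 1],
    [1, 1, 1, 9, 1],
    [9, 9, 1, 1, 1],
]
water_points = [(0, 0), (2, 2), (3, 4)]   # hypothetical permanent water

def least_cost_from(src):
    """Dijkstra from `src`; returns dict mapping each cell to its least cost."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[(r, c)]:
            continue                       # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(terrain) and 0 <= nc < len(terrain[0]):
                nd = d + terrain[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

THRESHOLD = 20  # link water points reachable within this accumulated cost
edges = []
for i, a in enumerate(water_points):
    dist = least_cost_from(a)
    for b in water_points[i + 1:]:
        if dist[b] <= THRESHOLD:
            edges.append((a, b, dist[b]))
print(edges)
```

Note how the high-cost column acts like an arid barrier: paths between water points route around it, which is the spatial logic behind the paleohydrological corridors discussed above.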

12.
13.
Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom–up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. 
These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.

It is widely agreed (1, 2) that because of fundamental physical limits, the microelectronics industry is approaching the end of its present Roadmap (1) for the miniaturization of computer circuits based upon lithographically fabricated bulk-silicon (Si) transistors. Therefore, much effort has been invested in the nanoelectronics field in the development of novel, alternative, nanometer-scale electronic device and fabrication technologies that could serve as potential routes for ever-denser and more capable systems to enable continued technological and economic advancement (3–17). These efforts have yielded simple nanoelectronic circuits (3–5, 8–17) and more complex circuit systems (6, 7) that use novel nanomaterials but are not integrated on the nanometer scale. In this regard, building a nanocomputer that transcends the ultimate scaling limitations of conventional semiconductor electronics has been a central goal of the nanoscience field and a long-term objective of the computing industry.

A finite-state machine (FSM) is a representation for a nanocomputer in that it is a fundamental model for clocked, programmable logic circuits (18, 19) and integrates key arithmetic and memory logic elements. In general, a FSM must maintain its internal state, modify this state in response to external stimuli, and then output commands to the external environment on that basis (18, 19). A basic state transition diagram for the 2-bit four-state FSM investigated in our work (Fig. 1A) highlights the four binary representations “00,” “01,” “10,” and “11,” and the transition from one state to another triggered by a binary input signal, “0” or “1.” Larger, more complex FSMs may be constructed using longer binary representations.

Fig. 1. Architecture and fabrication of the FSM. (A) Logic diagram of the FSM, with the gray circles representing the states. Upon triggering, the straight arrows indicate the transition of the current state to the next one for an input of 1; the curved arrows indicate maintaining the current state for an input of 0. (B) Schematic of the three-tile circuit of the nanoFSM. Each tile consists of two blocks, and each block consists of a nanowire array (vertical) with lithography-defined top gate lines (horizontal). A1A0, Cin, and CLK correspond to the 2-bit state, control, and clock signal, respectively. The green dots indicate the programmed active transistor nodes. For simplicity, the circuit only shows the drain contacts (blue) but not the source contacts or load resistors. The arrows indicate external wirings, with the red ones indicating feedback loops. (C) Deterministic fabrication scheme. Key steps include (I) definition of the anchoring sites (gray stripes), (II) single-nanowire anchoring to the specific anchoring sites with highly directional alignment, (III) nanowire trimming to yield uniform lengths, and (IV) definition of contacts (light blue) and gates (orange) to the trimmed nanowires (dark blue) without registration. (D) SEM image of a 10 × 10 nanowire array from the nanoFSM circuit. The horizontal lines are metal gates, with the top and bottom pads the source and drain contacts. (Scale bar, 1 µm.) (E) SEM image of the entire three-tile/six-array nanoFSM circuit. The red enclosed region corresponds to the image area shown in D. (Scale bar, 10 µm.)

Previous efforts have yielded circuit elements that perform simple logic functions using small numbers of individual nanoelectronic devices (8–17), but have fallen far short of demonstrating the combination of arithmetic and register elements required to realize a FSM.
Specifically, integration of distinct functional circuit elements necessitates the capability to fabricate and precisely organize circuit systems that interconnect large numbers of addressable nanometer-scale electronic devices in a readily extensible manner. As a result, implementation of a nanoelectronic FSM (nanoFSM) via bottom–up assembly of individually addressable nanoscale devices has been well beyond the state of the art. Moreover, it represents a general gap between current single-unit circuits and modular architectures for increasingly complex and functional nanoelectronic systems (8, 20–24). Below we describe how we overcame the above challenges in design, assembly, and circuit fabrication to realize a nanoFSM in a programmable multitile architecture, which also provides a general paradigm for further cascading nanoelectronic systems from the bottom up.
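The logical behavior of the 2-bit, four-state machine described above (advance to the next state on input 1, hold on input 0) and of the 2-bit full adder the fabric was reprogrammed into can be modeled in a few lines. This is purely a software sketch of the logic, not of the nanowire crossbar implementation.

```python
# Behavioral model only (not the nanowire hardware): the 2-bit four-state FSM
# from the state diagram (advance on input 1, hold on input 0), plus the
# arithmetic of a 2-bit full adder.
STATES = ["00", "01", "10", "11"]

def step(state, bit):
    """One clocked transition: input 1 moves to the next state, 0 holds."""
    i = STATES.index(state)
    return STATES[(i + 1) % 4] if bit == 1 else STATES[i]

def run(state, bits):
    """Apply a sequence of clocked input bits to the FSM."""
    for b in bits:
        state = step(state, b)
    return state

def full_add_2bit(a, b, cin=0):
    """2-bit full adder arithmetic: returns (2-bit sum, carry out)."""
    total = a + b + cin
    return total % 4, total // 4

print(run("00", [1, 1, 0, 1]))   # three advances and one hold from "00"
print(full_add_2bit(3, 2))       # 3 + 2 on a 2-bit adder: sum and carry
```

The hardware version realizes `step` as programmed transistor nodes in the crossbar arrays, with the feedback wiring registering the next state on each clock edge.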

14.
Language’s expressive power is largely attributable to its compositionality: meaningful words are combined into larger/higher-order structures with derived meaning. Despite its importance, little is known regarding the evolutionary origins and emergence of this syntactic ability. Although previous research has shown a rudimentary capability to combine meaningful calls in primates, because of a scarcity of comparative data, it is unclear to what extent analog forms might also exist outside of primates. Here, we address this ambiguity and provide evidence for rudimentary compositionality in the discrete vocal system of a social passerine, the pied babbler (Turdoides bicolor). Natural observations and predator presentations revealed that babblers produce acoustically distinct alert calls in response to close, low-urgency threats and recruitment calls when recruiting group members during locomotion. On encountering terrestrial predators, both vocalizations are combined into a “mobbing sequence,” potentially to recruit group members in a dangerous situation. To investigate whether babblers process the sequence in a compositional way, we conducted systematic experiments, playing back the individual calls in isolation as well as naturally occurring and artificial sequences. Babblers reacted most strongly to mobbing sequence playbacks, showing a greater attentiveness and a quicker approach to the loudspeaker, compared with individual calls or control sequences. We conclude that the sequence constitutes a compositional structure, communicating information on both the context and the requested action. Our work supports previous research suggesting combinatoriality as a viable mechanism to increase communicative output and indicates that the ability to combine and process meaningful vocal structures, a basic syntax, may be more widespread than previously thought.

Syntax is often considered one of the key defining features of human language (1).
Through combining meaningful words together, larger sequences with related, compositional meaning can be constructed (2). One consequence of such productive compositional syntax in humans is that, with a finite inventory of words, an infinite range of ideas and concepts can be communicated (2, 3). Despite the central role that syntax plays in determining language’s generativity, very little is known about its evolutionary origins or early, rudimentary forms (4, 5). Elucidating the proto forms of compositional syntax, although nontrivial (5, 6), represents a key step in understanding the evolution of language more holistically.

One means of investigating early forms and function of compositionality is to assess analog examples in animals (5, 7). Indeed, recent observational and experimental work on two related guenon monkeys has shown the propensity to combine context-specific, “meaningful” signals into sequences that resemble compositional structures in language (8–10). Male Campbell’s monkeys (Cercopithecus campbelli), for example, produce predator-specific alarm calls that can be affixed with an acoustic modifier (8, 11). The affix acts to alter the “meaning” of the alarm calls in a predictable way, transforming them into general disturbance calls (8, 11, 12). Similarly, male putty-nosed monkeys (Cercopithecus nictitans) combine two predator-specific alarm calls into a higher-order sequence (9, 13). Although the two calls are generally associated with the presence of aerial or terrestrial predators, the resultant combination initiates group movement in nonpredatory contexts (9, 13).
Given the discrepancies between the responses elicited by the individual calls and the sequence, it remains unclear whether the putty-nosed monkey call sequence represents a basic form of compositional syntax or rather a combinatorial syntax, where the meaning of the whole is not directly related to the parts, akin to idiomatic expressions in language (i.e., “kick the bucket” for dying) (10, 13, 14). The existence of such “semantic combinations” (13) in primates has nevertheless been argued to support an evolutionarily ancient origin of human syntax rooted within the primate lineage (8, 15). However, it is unclear whether similar call concatenations and compositional processing of information might also exist in other lineages (see ref. 14 for review) and, if so, whether they take analogous forms and serve analogous functions (1).

The last 50 y of comparative research have shown that a number of nonprimate animals, particularly songbirds, are capable of stringing sounds together into larger, often more structurally complex sequences (16–18). However, there is no indication that any of these song sequences are compositional in structure, because the individual sounds composing the songs of birds and other animals do not convey any independent meaning (16–18), ultimately precluding any attempt to test for protosyntactic abilities in these species in the first place. Although the absence of compositional structure in songs might suggest that syntactic abilities are potentially confined to the primate lineage (8, 15), it may also be an artifact of limited focus on bird vocal systems other than song that are more likely to support the capacity for syntax (19).

Here, we address this ambiguity through investigating the prevalence of compositional vocal sequences in a highly social, nonsinging passerine bird that possesses a discrete vocal system: the cooperatively breeding southern pied babbler (Turdoides bicolor) (20, 21).
Pied babblers are territorial and live in stable groups of 3–15 individuals (22). Reproduction is usually restricted to the dominant pair of the group (23), with subordinate individuals engaging in a number of helping behaviors, such as territorial and nest defense, daytime incubation, and feeding of the offspring during the nestling and postfledgling stages (22). Individuals of the cohesive foraging group spend most of the time on the ground searching for invertebrates hidden in the substrate, which they excavate using their bill (22, 24). Consequently, most of the time, pied babblers forage in a head-down position within and around forbs and shrubs and hence rely heavily on vocalizations to keep track of changes in their surroundings (21, 25–29). As such, the pied babbler vocal system exhibits around 17 discrete vocalizations, including alarm calls and sentinel calls, as well as a diverse array of social calls produced during intra- and intergroup contexts (21, 25–29).

Observational work has indicated that pied babblers produce broadband, noisy alert calls in response to sudden but generally low-urgency threats (e.g., abruptly approaching animals) and more tonal, repetitive recruitment calls when recruiting group members to a new location or during locomotion, mainly in foraging or roosting contexts. Moreover, alert and recruitment calls can be combined into a sequence on encountering and mobbing, mainly terrestrial, predators (Fig. 1). Given the contexts in which the two independent calls are produced, we aimed to investigate whether the sequence might, therefore, function specifically to recruit group members in a dangerous situation (e.g., when mobbing a predator) by combining information on both the danger and the requested action.
Accordingly, the combination of alert and recruitment calls (hereafter termed the “mobbing sequence”) might constitute a rudimentary compositional structure, where the meaning of the whole is a product of the meaning of its parts (30).

Fig. 1. Spectrogram of a mobbing sequence composed of one alert and seven recruitment calls.

To verify the context-specific information conveyed by the independent vocalizations and test whether pied babblers extract the meaning of the sequence in a compositional way, we conducted additional natural observations combined with acoustic analyses and experimental manipulations. First, acoustic analyses were applied to confirm that alert and recruitment calls constitute two distinct vocalizations. Second, to determine the contexts in which the individual calls and the call sequence are produced, we conducted natural observations and predator presentation experiments combined with audio recordings. Third, we carried out systematic natural, artificial, and control playback experiments to investigate whether birds perceive the sequence compositionally. Key support for compositionality requires that the context in which mobbing sequences are produced and the responses of receivers to playbacks of these sequences are related to the information encoded in alert and recruitment calls (30, 31).

15.
The level of antagonism between political groups has risen in recent years. Supporters of a given party increasingly dislike members of the opposing group and avoid intergroup interactions, leading to homophilic social networks. While new connections offline are driven largely by human decisions, new connections on online social platforms are intermediated by link recommendation algorithms, e.g., “People you may know” or “Whom to follow” suggestions. The long-term impacts of link recommendation on polarization are unclear, particularly as exposure to opposing viewpoints has a dual effect: connections with out-group members can either lead to opinion convergence, preventing group polarization, or further separate opinions. Here, we provide a complex adaptive systems perspective on the effects of link recommendation algorithms. While several models justify polarization through rewiring based on opinion similarity, here we explain it through rewiring grounded in structural similarity, defined as similarity based on network properties. We observe that preferentially establishing links with structurally similar nodes (i.e., those sharing many neighbors) results in network topologies that are amenable to opinion polarization. Hence, polarization occurs not because of a desire to shield oneself from disagreeable attitudes but, instead, due to the creation of inadvertent echo chambers. When networks are composed of nodes that react differently to out-group contacts, either converging or polarizing, we find that connecting structurally dissimilar nodes moderates opinions. Overall, our study sheds light on the impacts of social-network algorithms and unveils avenues to steer the dynamics of radicalization and polarization in online social networks.

Online social networks are increasingly used to access political information (1), engage with political elites, and discuss politics (2). These new communication platforms can benefit democratic processes in several ways: they reduce barriers to information and, subsequently, increase citizen engagement, allow individuals to voice their concerns, help debunk false information, and improve accountability and transparency in political decision-making (3). In principle, individuals can use social media to access ideologically diverse viewpoints and make better-informed decisions (4, 5).

At the same time, the internet and online social networks reveal a dark side. There are mounting concerns over possible linkages between social media and affective polarization (6, 7). Rather than hosting healthy political deliberation, social networks can foster so-called “echo chambers” (8, 9) and “information cocoons” (3, 10), where individuals are only exposed to like-minded peers and homogeneous sources of information, which polarizes attitudes (for counterevidence, see ref. 5). As a result, social media can trigger political sectarianism (6, 7, 11–13) and fuel misinformation (14, 15). Averting the risks of online social networks for political institutions, and potentiating their advantages, requires multidisciplinary approaches and novel methods to understand long-term dynamics on social platforms.

That is not an easy task. As pointed out by Woolley and Howard, “to understand contemporary political communication we must now investigate the politics of algorithms and automation” (16). While traditional media outlets are curated by humans, online social media resorts to computer algorithms to personalize content through automatic filtering.
To understand information dynamics in online social networks, one needs to take into account the interrelated subtleties of human decision making [e.g., sharing only specific content (17), actively engaging with other users, following or befriending particular individuals, interacting offline] and the outcomes of automated decisions (e.g., news sorting and recommendation systems) (18, 19). In this regard, much attention has been placed on the role of news filtering and sorting (1, 18, 19). Shmargad and Klar (20) provide evidence that news-sorting algorithms impact the way users engage with and evaluate political news, likely exacerbating political polarization. Likewise, Levy (21) notes that social media algorithms can substantially affect users’ news consumption habits.

While past studies have examined how algorithms may affect which information appears on a person’s newsfeed, and subsequent polarization, social matching (22) or link recommendation (23) algorithms [also called user, contact, or people recommender systems (24, 25)] constitute another class of algorithms that can affect the way users engage in (and with) online social networks (examples of such systems appear in SI Appendix, Fig. S13). These algorithms recommend new online connections—“friends” or “followees”—to social network users, based on supposed offline familiarity, likelihood of establishing a future relation, similar interests, or the potential to serve as a source of useful information. Current data provide evidence that link recommendation algorithms impact network topologies and increase network clustering: Daly et al. (26) show that an algorithm recommending friends-of-friends, in an IBM internal social network platform, increases clustering and network modularity. Su et al.
(27) analyzed the Twitter graph before and after the platform implemented link recommendation algorithms and showed that the “Who To Follow” feature led to a sudden increase in edge growth and in the network clustering coefficient. Similarly, Zignani et al. (28) show that, in a small sample of the Facebook graph, the introduction of the “People You May Know” (PYMK) feature led to a sudden increase in the number of links and triangles [i.e., motifs comprising three nodes (A, B, C) where the links AB, AC, and BC all exist] in the network. The fact that PYMK is responsible for a significant fraction of link creations is alluded to in other works (29). Furthermore, recent work shows, through experiments with real social media users (30) and simulations (31), that link recommendation algorithms can effectively be used as an intervention mechanism to increase networks’ structural diversity (30, 31) and minimize disagreements (32). It is thereby relevant to understand: 1) how do algorithmic link recommendations interplay with opinion formation? and 2) what are the long-term impacts of such algorithms on opinion polarization?

Here, we tackle the previous questions from a complex adaptive systems perspective (33), designing and analyzing a simple model where individuals interact in a dynamic social network. While several models explain the emergence of polarization through link formation based on opinion similarity (34–41) and information exchange (42), here we focus instead on rewiring based on “structural similarity,” which is defined as similarity based on common features that exclusively depend on the network structure (43). This contrasts with the broader concept of homophily, which typically refers to similarity based on common characteristics besides network properties (e.g., opinions, taste, age, background).
Compared with rewiring based on homophily—which can also contribute to network fragmentation—rewiring based on structural similarity can be less restrictive in contexts where information about opinions and beliefs is not readily available to individuals before a connection is established. Furthermore, rewiring based on structural similarity is a backbone of link recommendation algorithms [e.g., “People you may know” or “Whom to follow” (25) suggestions], which rely on link prediction methods to suggest connections to users (43, 44). Importantly, our model combines three key ingredients: 1) links are formed according to structural similarity, based on common neighbors, which is one of the simplest link prediction methods (43); this way, we do not assume a priori that individuals with similar opinions are likely to become connected [as recent works underline, sorting can be incidental to politics (45, 46)]. 2) To examine opinion updating, we adapt a recent model that covers the interplay of social reinforcement and issue controversy in promoting radicalization on social networks (39). 3) Last, we explicitly consider that nodes can react differently to out-group links, either converging in their opinions (10, 47) or polarizing further (48–50).

We find that establishing links based on structural similarity alone [a process that is likely to be reinforced by link recommendation algorithms—see SI Appendix, Fig. S10 and previous work pointing out that such algorithms affect a social network’s topology and increase its clustering coefficient (26–28)] contributes to opinion polarization.
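The purely topological effect of common-neighbor ("structural similarity") rewiring can be sketched in a few lines. The snippet below is a minimal toy illustration, not the authors' actual model: starting from a random graph, each rewiring step reconnects a node to the non-neighbor with whom it shares the most neighbors and drops its least-embedded existing tie, and the average clustering coefficient rises as a result. The graph size, edge probability, and number of steps are arbitrary choices for the demo.

```python
import random

def make_random_graph(n, p, rng):
    """Erdos-Renyi random graph as a dict: node -> set of neighbors."""
    g = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                g[i].add(j)
                g[j].add(i)
    return g

def clustering(g):
    """Average local clustering coefficient over nodes with degree >= 2."""
    coeffs = []
    for u, nbrs in g.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in g[v])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def common_neighbors(g, u, v):
    return len(g[u] & g[v])

def rewire_step(g, rng):
    """Reconnect a random node to its most structurally similar non-neighbor."""
    u = rng.choice([x for x in g if len(g[x]) >= 2])
    candidates = [v for v in g
                  if v != u and v not in g[u] and common_neighbors(g, u, v) > 0]
    if not candidates:
        return
    v = max(candidates, key=lambda x: common_neighbors(g, u, x))  # best "friend of friends"
    w = min(g[u], key=lambda x: common_neighbors(g, u, x))        # weakest existing tie
    g[u].discard(w); g[w].discard(u)
    g[u].add(v); g[v].add(u)

rng = random.Random(1)
g = make_random_graph(60, 0.1, rng)
before = clustering(g)
for _ in range(400):
    rewire_step(g, rng)
after = clustering(g)
print(f"clustering before: {before:.3f}, after: {after:.3f}")
```

In the paper's model this rewiring is coupled with opinion dynamics; here it only demonstrates the topological side of the argument: dense, triangle-rich neighborhoods (the substrate of echo chambers) emerge without any reference to opinions at all.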
While our model sheds light on the effect of link recommendation algorithms on opinion formation and polarization dynamics, we also offer a justification for polarization emerging through structural similarity-based rewiring, in the absence of explicit opinion-similarity rewiring (34, 36, 39, 51), confidence bounds (37, 38, 40), or rewiring based on concordant messages (42).* Second, we find that the effects of structural similarity-based rewiring are exacerbated if even moderate opinions have high social influence. Finally, we combine nodes that react differently to out-group contacts: “converging” nodes, which converge if exposed to different opinions (10, 21, 52), and “polarizing” nodes, which diverge when exposed to different viewpoints (48–50). We observe that the coexistence of both types of nodes can contribute to moderating opinions: polarizing nodes develop radical opinions, while converging nodes, influenced by opposing viewpoints, yield more temperate ones. However, again, link recommendation algorithms impact this process: given the existence of communities isolated to a greater degree through link recommendation, converging nodes may find it harder to access diverse viewpoints, which, in general, contributes to increasing the adoption of extreme opinions.
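The contrast between "converging" and "polarizing" nodes can be illustrated with a deliberately simplified update rule. This is a hypothetical sketch, not the radicalization model of ref. 39 that the authors actually adapt: on each out-group contact (partners on opposite sides of the opinion axis), a converging node moves toward its partner's opinion while a polarizing node moves away from it, with opinions clipped to [−1, 1]; in-group contacts are ignored. Population size, step size, and iteration count are arbitrary demo parameters.

```python
import random

rng = random.Random(42)
N = 20           # population size (demo value)
MU = 0.05        # step size of each opinion update (demo value)
STEPS = 2000

# Half the nodes converge on out-group contact, half polarize.
kind = ["converging"] * (N // 2) + ["polarizing"] * (N // 2)
# Mildly mixed initial opinions in [-0.3, 0.3].
x = [rng.uniform(-0.3, 0.3) for _ in range(N)]

def clip(v):
    return max(-1.0, min(1.0, v))

for _ in range(STEPS):
    i, j = rng.sample(range(N), 2)
    if x[i] * x[j] < 0:  # out-group contact: opinions on opposite sides
        delta = MU * (x[j] - x[i])
        if kind[i] == "converging":
            x[i] = clip(x[i] + delta)  # move toward the partner
        else:
            x[i] = clip(x[i] - delta)  # move away from the partner

def mean_abs(idxs):
    return sum(abs(x[k]) for k in idxs) / len(idxs)

conv = mean_abs([k for k in range(N) if kind[k] == "converging"])
pol = mean_abs([k for k in range(N) if kind[k] == "polarizing"])
print(f"mean |opinion|  converging: {conv:.2f}  polarizing: {pol:.2f}")
```

Even in this well-mixed toy population (no network structure at all), polarizing nodes saturate at the extremes while converging nodes hover near the center, mirroring the qualitative division the abstract describes; the paper's contribution is to show how structural-similarity rewiring restricts which contacts the converging nodes actually get.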

16.
Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of “bite,” “dog,” and “man,” we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes “who did what to whom” in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables (“Who did it?” and “To whom was it done?”) in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning (“Who did what to whom?”) and (ii) predict affect-related amygdala responses that depend on this information (e.g., “the baby kicked the grandfather” vs. “the grandfather kicked the baby”). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current “agent” (“Who did it?”) and the current “patient” (“To whom was it done?”). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts.

Yesterday, the world’s tallest woman was serenaded by 30 pink elephants. The previous sentence is false, but perfectly comprehensible, despite the improbability of the situation it describes. It is comprehensible because the human mind can flexibly combine the meanings of individual words (“woman,” “serenade,” “elephants,” etc.)
to compose structured thoughts, such as the meaning of the aforementioned sentence (1, 2). How the brain accomplishes this remarkable feat remains a central, but unanswered, question in cognitive science.

Given the vast number of sentences we can understand and produce, it would be implausible for the brain to allocate individual neurons to represent each possible sentence meaning. Instead, it is likely that the brain employs a system for flexibly combining representations of simpler meanings to compose more complex meanings. By “flexibly,” we mean that the same meanings can be combined in many different ways to produce many distinct complex meanings. How the brain flexibly composes complex, structured meanings out of simpler ones is a matter of long-standing debate (3–10).

At the cognitive level, theorists have held that the mind encodes sentence-level meaning by explicitly representing and updating the values of abstract semantic variables (3, 5) in a manner analogous to that of a classical computer. Such semantic variables correspond to basic, recurring questions of meaning such as “Who did it?” and “To whom was it done?” On such a view, the meaning of a simple sentence is partly represented by filling in these variables with representations of the appropriate semantic components. For example, “the dog bit the man” would be built out of the same semantic components as “the man bit the dog,” but with a reversal in the values of the “agent” variable (“Who did it?”) and the “patient” variable (“To whom was it done?”). Whether and how the human brain does this remains unknown.

Previous research has implicated a network of cortical regions in high-level semantic processing. Many of these regions surround the left sylvian fissure (11–19), including regions of the inferior frontal cortex (13, 14), inferior parietal lobe (12, 20), much of the superior temporal sulcus and gyrus (12, 15, 21), and the anterior temporal lobes (17, 20, 22).
Here, we describe two functional magnetic resonance imaging (fMRI) experiments aimed at understanding how the brain (in these regions or elsewhere) flexibly encodes the meanings of sentences involving an agent (“Who did it?”), an action (“What was done?”), and a patient (“To whom was it done?”).

First, experiment 1 aims to identify regions that encode structure-dependent meaning. Here, we search for regions that differentiate between pairs of visually presented sentences, where these sentences convey different meanings using the same words (as in “man bites dog” and “dog bites man”). Experiment 1 identifies a region of left mid-superior temporal cortex (lmSTC) encoding structure-dependent meaning. Experiment 2 then asks how lmSTC represents structure-dependent meaning. Specifically, we test the long-standing hypothesis that the brain represents and updates the values of abstract semantic variables (3, 5): here, the agent (“Who did it?”) and the patient (“To whom was it done?”). We search for distinct neural populations in lmSTC that encode these variables, analogous to the data registers of a computer (5).
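The decoding logic behind MVPA can be sketched with simulated data. The snippet below is a toy stand-in for the actual analyses: it generates noisy "voxel" patterns for two conditions (say, two possible agents), then classifies held-out trials by correlating them with the mean training pattern of each condition, a simple correlation-based pattern classifier. The voxel count, trial count, and noise level are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40                  # voxels per pattern, trials per condition
templates = rng.normal(size=(2, n_vox))   # "true" pattern for each condition

# Simulate trials: condition template plus Gaussian noise.
trials = np.stack([t + rng.normal(scale=1.0, size=(n_trials, n_vox))
                   for t in templates])   # shape (2, n_trials, n_vox)

# Split into independent train/test halves (as with separate fMRI runs).
train, test = trials[:, : n_trials // 2], trials[:, n_trials // 2 :]
means = train.mean(axis=1)                # mean training pattern per condition

def corr(a, b):
    """Pearson correlation between two 1-D patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Classify each test trial by its correlation with the two training means.
correct = total = 0
for cond in range(2):
    for trial in test[cond]:
        pred = max(range(2), key=lambda c: corr(trial, means[c]))
        correct += int(pred == cond)
        total += 1
accuracy = correct / total
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

If a brain region's activity patterns carry information about the variable in question, accuracy on held-out data exceeds chance; in the real experiments this test is applied within searchlights or anatomically defined regions rather than to simulated patterns.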

17.
18.
A number of studies have shown that pupil size increases transiently during effortful decisions. These decision-related changes in pupil size are mediated by central neuromodulatory systems, which also influence the internal state of brain regions engaged in decision making. It has been proposed that pupil-linked neuromodulatory systems are activated by the termination of decision processes and, consequently, that these systems primarily affect the postdecisional brain state. Here, we present pupil results that run contrary to this proposal, suggesting an important intradecisional role. We measured pupil size while subjects formed protracted decisions about the presence or absence (“yes” vs. “no”) of a visual contrast signal embedded in dynamic noise. Linear systems analysis revealed that the pupil was significantly driven by a sustained input throughout the course of decision formation. This sustained component was larger than the transient component during the final choice (indicated by button press). The overall amplitude of pupil dilation during decision formation was bigger before yes than before no choices, irrespective of the physical presence of the target signal. Remarkably, the magnitude of this pupil choice effect (yes > no) reflected the individual criterion: it was strongest in conservative subjects choosing yes against their bias. We conclude that the central neuromodulatory systems controlling pupil size are continuously engaged during decision formation in a way that reveals how the upcoming choice relates to the decision maker’s attitude. Changes in brain state seem to interact with biased decision making in the face of uncertainty.

Changes in pupil size at constant luminance have long been used as a marker of central autonomic processes linked to cognition (1–4). Many studies over the past decades have reported that the pupil dilates while subjects engage in demanding perceptual, cognitive, or economic decision tasks (1–3, 5–17).
This decision-related pupil dilation has commonly been linked to the final choice terminating the decision process (6, 14, 16) and the consolidation of the committed decision (6, 16).

Changes in pupil size are also linked to changes in brain state. It has been proposed that decision-related pupil dilation tracks the activity of certain neuromodulatory systems of the brainstem—in particular, the noradrenergic locus coeruleus (5, 7–9, 18) and, possibly, the cholinergic basal forebrain (19) systems. These neuromodulatory systems also activate briefly (“phasically”) during perceptual decisions, such as visual target detection (5, 20–24), likely mediated via feedback connections from the prefrontal cortex (5, 25). The modulatory neurotransmitters released from the projections of these brainstem systems, in turn, shape the internal state of cortical networks, for instance, by boosting the gain of neural interactions (5, 7, 26). Thus, these brainstem systems might also shape decision computations in cortical networks—provided that they are activated already during decision formation. If so, these systems might affect the decision process, over and above shortening the time to respond. For instance, they might govern the decision maker’s ability to overcome his or her intrinsic bias.

Here, we addressed these issues noninvasively in humans by linking decision-related pupil dilation to the time course, outcome, and bias of a protracted perceptual decision process. Many perceptual decisions are not transient events but evolve gradually over several hundreds of milliseconds, due to the slow accumulation of noisy sensory information (27–33). Further, perceptual decisions are, like economic decisions (34), prone to strong biases that are not due to external asymmetries in the magnitude or probability of payoffs for certain choices. In particular, “yes” vs.
“no” detection decisions depend on the idiosyncratic (liberal or conservative) attitude of the decision maker with respect to saying “yes” or “no” (35, 36).

We thus measured pupil size in subjects performing a challenging yes–no visual contrast detection task at constant luminance (Fig. 1A). A general linear model (GLM) (37) allowed us to disentangle different temporal components of the neural input to the sluggish system controlling pupil size. This approach revealed that decision-related pupil dilation was driven not only by subjects’ final choice and the concomitant motor response, but also by a (stronger) sustained component throughout the preceding decision process. Further, the dilation amplitude was bigger for yes than for no choices. This pupil choice effect was due to the conservative subjects who decided yes against their bias. Taken together, our findings point to an intricate interplay between changes in internal brain state and biased decision making in the face of uncertainty.

Fig. 1. Task and behavioral results. (A) Sequence of events during a single trial. Dynamic noise is continuously present in a circular aperture around fixation. During the decision interval (onset cued by a tone), the subject searches for a faint grating signal superimposed onto the noise and indicates the yes or no choice by button press. The signal is shown at high contrast for illustration purposes only. In the actual experiment, its contrast was titrated to each individual’s detection threshold. (B) Stimulus types during the decision interval and possible choices of the subject, yielding the four trial categories of signal-detection theory. (C) Distribution of trial types, pooled across all subjects. (D) Reaction-time distributions for each trial type, pooled across all subjects. (E) Normalized reaction times, sorted by trial type and averaged across the group. RT, reaction time. Error bars, SEM.
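The "linear systems analysis" referred to above can be sketched as a GLM in which sustained (boxcar) and transient (impulse) neural inputs are convolved with a canonical pupil impulse-response function and fit to the measured trace by least squares. The sketch below uses the gamma-shaped pupil IRF of Hoeks and Levelt, t^n·exp(−nt/t_max) with n = 10.1 and t_max = 0.93 s, a common assumption in pupillometry (the paper's exact IRF and regressor set may differ), together with simulated data; the true amplitudes (1.5 sustained, 1.0 transient), trial timing, and noise level are invented for the demo.

```python
import numpy as np

dt = 0.05                      # sample interval (s)
t_trial = np.arange(0, 8, dt)  # one simulated 8-s trial
choice_t = 3.0                 # time of the button press (s)

# Canonical pupil impulse response (Hoeks & Levelt): t^n * exp(-n*t/t_max).
t_irf = np.arange(0, 4, dt)
n, t_max = 10.1, 0.93
irf = t_irf ** n * np.exp(-n * t_irf / t_max)
irf /= irf.max()

def regressor(inputs):
    """Convolve a neural input time course with the pupil IRF, peak-normalized."""
    r = np.convolve(inputs, irf)[: len(t_trial)]
    return r / r.max()

# Sustained input: boxcar spanning the decision interval; transient: impulse at choice.
sustained = regressor((t_trial <= choice_t).astype(float))
transient = regressor((np.abs(t_trial - choice_t) < dt / 2).astype(float))

# Simulate a measured pupil trace with a stronger sustained than transient drive.
rng = np.random.default_rng(3)
pupil = 1.5 * sustained + 1.0 * transient + rng.normal(scale=0.1, size=len(t_trial))

# GLM: estimate both amplitudes jointly by least squares.
X = np.column_stack([sustained, transient, np.ones_like(t_trial)])
betas, *_ = np.linalg.lstsq(X, pupil, rcond=None)
print(f"estimated amplitudes: sustained={betas[0]:.2f}, transient={betas[1]:.2f}")
```

Because the sluggish IRF smears both inputs into overlapping dilation, fitting them jointly is what allows the sustained intradecisional component to be separated from the transient choice-related component, which is the crux of the paper's argument against a purely postdecisional account.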

19.
This paper addresses an important debate in Amazonian studies; namely, the scale, intensity, and nature of human modification of the forests in prehistory. Phytolith and charcoal analysis of terrestrial soils underneath mature tierra firme (nonflooded, nonriverine) forests in the remote Medio Putumayo-Algodón watersheds, northeastern Peru, provide a vegetation and fire history spanning at least the past 5,000 y. A tree inventory carried out in the region enables calibration of ancient phytolith records with standing vegetation and estimates of palm species densities on the landscape through time. Phytolith records show no evidence for forest clearing or agriculture with major annual seed and root crops. Frequencies of important economic palms such as Oenocarpus, Euterpe, Bactris, and Astrocaryum spp., some of which contain hyperdominant species in the modern flora, do not increase through prehistoric time. This indicates pre-Columbian occupations, if documented in the region with future research, did not significantly increase the abundance of those species through management or cultivation. Phytoliths from other arboreal and woody species similarly reflect a stable forest structure and diversity throughout the records. Charcoal 14C dates evidence local forest burning between ca. 2,800 and 1,400 y ago. Our data support previous research indicating that considerable areas of some Amazonian tierra firme forests were not significantly impacted by human activities during the prehistoric era. Rather, it appears that over the last 5,000 y, indigenous populations in this region coexisted with, and helped maintain, large expanses of relatively unmodified forest, as they continue to do today.

More than 50 y ago, prominent scholars argued that, due to severe environmental constraints (e.g., poor natural resources), prehistoric cultures in the Amazon Basin were mainly small and mobile with little cultural complexity, and exerted low environmental impacts (1, 2). Contentious debates ensued and have been ongoing ever since. Empirical data accumulated during the past 10 to 20 y have made it clear that during the late Holocene, beginning about 3,000 y ago, dense, permanent settlements with considerable cultural complexity had developed along major watercourses and some of their tributaries, in seasonal savannas/areas of poor drainage, and in seasonally dry forest. These populations exerted significant, sometimes profound, regional-scale impacts on landscapes, including raised agricultural fields, fish weirs, mound settlements, roads, geometric earthworks called geoglyphs, and the presence of highly modified anthropic soils, called terra pretas or “Amazonian Dark Earths” (Fig. 1) (e.g., refs. 3–15).

Fig. 1. Location of study region (MP-A) and other Amazonian sites discussed in the text. River names are in blue. The black numbers represent major pre-Columbian archaeological sites with extensive human alterations (1, Marajó Island; 2, Santarém; 3, Upper Xingu; 4, Central Amazon Project; 5, Bolivian sites) (3, 5–10, 14, 15). ADE, terra preta locations (e.g., refs. 19 and 20); triangles are geoglyph sites (6, 8). The white circles are terrestrial soil locations previously studied by Piperno and McMichael (29, 31–33, 54) (Ac, Acre; Am, Amacayacu; Ay, Lake Ayauchi; B, Barcelos; GP, lakes Gentry-Parker; Iq, Iquitos to Nauta; LA, Los Amigos; PVM, Porto Velho to Manaus; T, Tefe).

An important, current debate that frames this paper centers not on whether some regions of the pre-Columbian Amazon supported large and complex human societies, but rather on the spatial scales, degrees, and types of cultural impacts across this continental-size landscape.
Some investigators, drawing largely on available archaeological data and studies of the modern floristic composition of selected forests, argue that heavily modified, “domesticated” landscapes were widespread across Amazonia at the end of prehistory, and that these impacts significantly structure the vegetation today, even promoting higher diversity than before (e.g., refs. 14–21). It is believed that widespread forms of agroforestry with planted, orchard-like formations or other forest management strategies involving the care and possible enrichment of several dozen economically important native species have left long-term legacies on forest composition (e.g., refs. 14–22). Some (20) propose that human influences played strong roles in the enrichment of “hyperdominant” trees, which are disproportionately common elements in the modern flora (sensu ref. 23). Some even argue that prehistoric fires and forest clearance were so spatially extensive that post-Columbian reforestation following the tragic consequences of European contact was a principal contributor to decreasing atmospheric CO2 levels and the onset of the “Little Ice Age” (24, 25).

However, modern floristic studies are often located in the vicinity of known archaeological sites and/or near watercourses (26). Many edible trees in these studies are early successional and would not be expected to remain significant forest elements for hundreds of years after abandonment. Historic-period impacts, well known in some regions to have been profound, have been paid little attention and may be mistaken for prehistoric legacies (26–28).
Moreover, existing phytolith and charcoal data from terrestrial soils underneath standing tierra firme forest in some areas of the central and western Amazon with no known archaeological occupations nearby exhibit little to no evidence for long-term human occupation, anthropic soils, agriculture, forest clearing or other significant vegetation change, or recurrent/extensive fires during the past several thousand years (Fig. 1) (29–33). Even such analyses of terrestrial soils of lake watersheds in western Amazonia known to have been occupied and farmed in prehistory revealed no spatially extensive deforestation of the watersheds, as significant human impacts most often occurred in areas closest to the lakes (Fig. 1) (34). Furthermore, vast areas have yet to be studied by archaeologists and paleoecologists, particularly the tierra firme forests that account for 95% of the land area of Amazonia.

To further inform these issues, we report here a vegetation and fire history spanning 5,000 y derived from phytolith and charcoal studies of terrestrial soils underneath mature tierra firme forest in northeastern Peru. Phytoliths, the silica bodies produced by many Neotropical plants, are well preserved in terrestrial soils, unlike pollen, and are deposited locally. They can be used to identify different tropical vegetational formations, such as old-growth forest, early successional vegetation typical of human disturbances including forest clearings, a number of annual seed and root crops, and trees thought to have been cultivated or managed in prehistory (e.g., refs. 29–33 and 35).

20.
The anterior end of the mammalian face is characteristically composed of a semimotile nose, not the upper jaw as in other tetrapods. Thus, the therian nose is covered ventrolaterally by the “premaxilla,” and the osteocranium possesses only a single nasal aperture because of the absence of medial bony elements. This stands in contrast to other tetrapods, in which the premaxilla covers the rostral terminus of the snout, providing a key to understanding the evolution of the mammalian face. Here, we show that the premaxilla in therian mammals (placentals and marsupials) is not entirely homologous to those of other amniotes; the therian premaxilla is a composite of the septomaxilla and the palatine remnant of the premaxilla of nontherian amniotes (including monotremes). By comparing the topographical relationships of craniofacial primordia and nerve supplies in various tetrapod embryos, we found that the therian premaxilla is predominantly of maxillary prominence origin and is associated with the mandibular arch. The rostral-most part of the upper jaw in nonmammalian tetrapods corresponds to the motile nose in therian mammals. During development, experimental inhibition of primordial growth demonstrated that the entire mammalian upper jaw mostly originates from the maxillary prominence, unlike in other amniotes. Consistently, cell lineage tracing in transgenic mice revealed a mammalian-specific rostral growth of the maxillary prominence. We conclude that the mammalian-specific face, the muzzle, is an evolutionary novelty obtained by overriding ancestral developmental constraints to establish a novel topographical framework in craniofacial mesenchyme.

In the movie For Whom the Bell Tolls (1943, Paramount), a girl says, “I do not know how to kiss, or I would kiss you. Where do the noses go?” (1) Nothing could reveal more vividly the curious morphological fact that it is the nose, not the tip of the upper jaw, that is the most protruding part of the mammalian face. Therian mammals are thus characterized by a protruding nose, representing a morphologically and functionally semi-independent module for tactile sensory detection and for mammalian olfactory function (Fig. 1A) (2–7). The topographical relationship between the nose and the cranial bones also shows an exceptional pattern in mammals: the rostral-most bone of the upper jaw, or premaxilla, is found on the ventrolateral sides of the external nostrils in therian mammals, unlike in other amniotes, in which the premaxilla covers the rostromedial tip of the snout (Fig. 1 A and B) (2–4, 7, 8). However, the evolutionary origin of this therian-specific face (the so-called muzzle) and the homology of the therian premaxilla (also known as the incisive bone) have not been examined for a long time (2–4, 7–9).

Fig. 1. Murine “premaxilla” develops differently from the premaxillae of other tetrapods. (A) The anatomy of the therian mammal’s face. (B) General scheme of craniofacial development in amniotes (10, 11, 15). (C) Three-dimensional models of tetrapod embryos. The murine premaxilla ossifies in the same topographical position as the septomaxilla (orange) of other species. The infraorbital branch (nerve branch for vibrissae) of V2 was removed in the 13.5 dpc mouse. The summary is shown in D. sn, solum nasi; nld, nasolacrimal duct; V1, ophthalmic nerve; V2, maxillary nerve.
(Not to scale.)

During vertebrate embryogenesis, the upper jaw is primarily formed by growth of the maxillary prominence of the mandibular arch, except for the premaxilla, the rostral midline part of the upper jaw, which develops by the convergence of the premandibular ectomesenchyme (frontonasal prominence) that initially develops rostral to the mandibular arch ectomesenchyme (Fig. 1B) (4, 10–12). This topographical configuration is recognized even in some placoderms; that is, the basic pattern of jaw morphology is thought to be constrained among the jawed vertebrates (12–14). However, the topographical position of the therian premaxilla suggests that this highly conserved pattern is disrupted in mammals in association with the evolution of the mammalian muzzle. Specifically, the innervation pattern of the homonymous “premaxilla” is significantly different in mammals (15), which is also suggestive of fundamental embryological changes.

In the present study, we conducted comparative experimental embryological analyses and cell lineage tracing of the facial primordia to investigate the origin of the mammalian face.
