81.
There is a growing body of research focused on developing and evaluating behavioral training paradigms meant to induce enhancements in cognitive function. It has recently been proposed that one mechanism through which such performance gains could be induced involves participants’ expectations of improvement. However, no work to date has evaluated whether it is possible to cause changes in cognitive function in a long-term behavioral training study by manipulating expectations. In this study, positive or negative expectations about cognitive training were both explicitly and associatively induced before either a working memory training intervention or a control intervention. Consistent with previous work, a main effect of the training condition was found, with individuals trained on the working memory task showing larger gains in cognitive function than those trained on the control task. Interestingly, a main effect of expectation was also found, with individuals given positive expectations showing larger cognitive gains than those who were given negative expectations (regardless of training condition). No interaction effect between training and expectations was found. Exploratory analyses suggest that certain individual characteristics (e.g., personality, motivation) moderate the size of the expectation effect. These results highlight aspects of methodology that can inform future behavioral interventions and suggest that participant expectations could be capitalized on to maximize training outcomes.
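As a minimal sketch of how a 2 (training condition) × 2 (expectation) between-subjects design like the one described above could be analyzed, the following snippet simulates additive main effects with no interaction and runs a two-way ANOVA on gain scores. The data, effect sizes, and column names are hypothetical illustrations, not the study's actual data or analysis pipeline.

```python
# Hypothetical sketch: 2 (training) x 2 (expectation) between-subjects ANOVA
# on pre-to-post cognitive gain scores. All data and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_cell = 30
rows = []
for training in ("wm_training", "control"):
    for expectation in ("positive", "negative"):
        # Simulate additive main effects and no interaction, mirroring the
        # pattern of results reported in the abstract (magnitudes are made up).
        gains = (
            rng.normal(0.0, 1.0, n_per_cell)
            + (0.5 if training == "wm_training" else 0.0)  # training main effect
            + (0.3 if expectation == "positive" else 0.0)  # expectation main effect
        )
        rows += [{"training": training, "expectation": expectation, "gain": g}
                 for g in gains]

df = pd.DataFrame(rows)

# Two-way ANOVA with an interaction term; a null interaction would mirror
# the reported absence of a training-by-expectation interaction.
model = smf.ols("gain ~ C(training) * C(expectation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```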

There is a great deal of current scientific interest in whether, and how, basic cognitive skills can be improved via dedicated behavioral training (1–3). This potential, if realized, could lead to substantial real-world impact. Indeed, effective training paradigms would have significant value not only for populations that show deficits in cognitive skills (e.g., individuals diagnosed with Attention Deficit Hyperactivity Disorder [ADHD] or Alzheimer’s disease and related dementias) but also for the general public, where core cognitive capacities underpin success in both academic and professional contexts (4–6). These possible translational applications, paired with an emerging understanding of how best to unlock neuroplastic change across the life span (7, 8), have spurred hundreds of behavioral intervention studies over the past few decades. While the results have not been uniformly positive (perhaps not surprising given the massive heterogeneity in theoretical approach, methods, etc.), multiple meta-analyses suggest that it is possible for cognitive functions to be improved via some forms of dedicated behavioral training (9–11). However, while these basic science results provide optimism that real-world gains could be realized [and in fact, real-world gain is already being realized in some spheres, such as a Food and Drug Administration (FDA)-cleared video game-based treatment supplement for ADHD (12, 13)], concerns have been raised as to whether the interventions that have produced positive outcomes are truly working via the proposed mechanisms or through other nonspecific third-variable mechanisms. Several factors have been proposed to explain improvements in behavioral interventions, including selective attrition, contextual factors, regression to the mean, and practice effects (14). Here, we focus on whether expectation-based (i.e., placebo) mechanisms can explain improvements in cognitive training (15–17).

In other domains, such as pharmaceutical clinical trials, expectation-based mechanisms are typically controlled for by making the experimental and control treatments perceptually indistinguishable (e.g., both might be clear fluids in an intravenous bag or a white unmarked pill). Because perceptual characteristics cannot be used to infer condition, this methodology is meant to ensure that expectations are matched between the experimental and control groups (both the expectations held by participants and those held by the research team members who interact with them). Under ideal circumstances, the use of such a “double-unaware” design ensures that expectations cannot be an explanatory mechanism underlying any differences between the groups’ outcomes [note that we use the double-unaware terminology in lieu of the more common “double-blind” terminology, which can be seen as ableist (18)].

It is unclear whether most pharmaceutical trials do, in fact, truly meet the double-unaware standard (e.g., despite being perceptually identical, active and control treatments often produce different patterns of side effects that could be used to infer condition) (19, 20). Yet meeting the double-unaware standard is particularly difficult in the case of cognitive training interventions (16).
Here, there is simply no way to make the experimental and control interventions perceptually indistinguishable while at the same time ensuring that the experimental condition contains an “active ingredient” that the control condition lacks. In behavioral interventions, whatever the active ingredient may be, it will necessarily produce a difference in look and feel compared with a training condition that lacks that ingredient.

Researchers designing cognitive training trials therefore typically attempt to use experimental and control conditions that, while differing in the proposed active ingredient, will nonetheless produce similar expectations about the likely outcomes (16, 21–24). This type of matching, however, is inherently difficult, as it is not always clear what expectations a given type of experience will induce. Consistent with this, there is reason to believe that expectations have not always been successfully matched. In multiple cases, despite attempts to match expectations across conditions, participants in behavioral intervention studies have nonetheless indicated the belief that the true active training task would produce more cognitive gains than the control task (25–27). Critically, the data as to whether such differential expectations in turn influence the observed outcomes are decidedly mixed. In some cases, participant expectations differed between training and control conditions, and these expectations were at least partially related to differences in behavior (25). In other cases, participants expected to improve but did not show any actual improvements in cognitive skill (28), or the degree to which they improved was unrelated to their stated expectations (29).

Regardless of the mixed nature of the data thus far, there is increasing consensus that training studies should 1) attempt to match the expectations generated by their experimental and control treatment conditions, 2) measure the extent to which this matching is successful, and 3) if the matching was not successful, evaluate the extent to which differential expectations explain differences in outcome (16, 30). Yet such methods are not ideal for addressing the core question of whether expectation-based mechanisms can, in fact, alter performance on cognitive tasks in the context of cognitive intervention studies in the first place. Indeed, there is a growing body of work suggesting that self-reported expectations do not necessarily fully reflect the types of predictions being generated by the brain (e.g., it is possible to produce placebo analgesia effects even in the absence of a self-reported expectation of pain relief) (31, 32). Instead, addressing this question would entail purposefully maximizing the differences in expectations between groups (i.e., rather than attempting to minimize differential expectations and then measuring the possible impact if the differences were not eliminated, as is done in most cognitive training studies).

One key question, then, is how to maximize such expectations. In general, in domains that have closely examined placebo effects, expectations are typically induced through two broad routes: an explicit route and an associative route. In the explicit route, as the name suggests, participants are explicitly told what behavioral changes they should expect (e.g., “this pill will improve your symptoms” or “this cognitive training will improve your cognition”) (33).
In the associative learning route, participants are made to experience a behavioral change associated with the expected outcome (e.g., improvement of symptoms or gains in cognition) through some form of deception (34). For example, in an explicit expectation induction study, participants may first have a hot temperature probe applied to their skin, after which they are asked to rate their pain level. An inert cream is then applied that is explicitly described as an analgesic before the hot temperature probe is reapplied. If participants indicate less pain after the cream is applied, this is taken as evidence of an explicit expectation effect. In the associative expectation version, the study proceeds identically except that when the hot temperature probe is applied the second time, it is at a physically lower temperature than it was initially (participants are not made aware of this fact). This is meant to create an associative pairing between the cream and a reduction in experienced pain (i.e., not only are participants told that the cream will reduce their pain, they are also provided “evidence” that the cream works as described). If, after reapplying the cream and applying the hot temperature probe a third time (this time at the same temperature setting as the first application), participants indicate even less pain than in the explicit condition, this is taken as evidence of an associative expectation effect. It remains to be clarified how associative learning approaches may best be applied to cognitive training; however, we suggest that a reasonable approach would be to provide test sessions in which test items are manipulated so that participants perceive themselves to be performing better (or, in the case of a nocebo, worse) than they did at the initial test session. Notably, while there are cases where strong placebo effects have been induced via only explicit (35) or only associative methods (36), in general the most consistent and robust effects have been induced when a combination of these methods has been utilized (37–39).

Within the cognitive training field, the corresponding literature is quite sparse. The few studies that have deliberately attempted to create differences in participant expectations have used the explicit expectation route alone, have implemented the manipulation in the context of rather short interventions (e.g., 20 min of “training” within a single session rather than the multiple hours typically implemented in actual training studies), or both. The results are again at best mixed, with one study suggesting that expectations alone can positively affect cognitive measures (40), while others have found no such effects (33, 41, 42). Given this critical gap in knowledge, we examined here the impact of manipulations deliberately designed to maximize differential expectations in the context of a long-term cognitive training study.
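As a rough sketch of the associative-induction idea suggested above (manipulating test items so that participants experience apparent improvement or decline), the snippet below draws items from an easier or harder difficulty band at the manipulated session. The item bank, difficulty weighting, and parameters are hypothetical and are not taken from the study's protocol.

```python
# Hypothetical sketch of associative expectation induction via item difficulty:
# at the manipulated session, items are drawn from an easier (placebo-like) or
# harder (nocebo-like) band than at baseline, so participants experience
# apparent gains or losses. Parameters are illustrative only.
import random

def draw_items(item_bank, difficulty_shift=0.0, n_items=20, seed=None):
    """Sample items whose difficulty is shifted relative to mid-difficulty.

    item_bank: list of (item_id, difficulty) tuples, difficulty in [0, 1].
    difficulty_shift: negative -> easier session (perceived improvement),
                      positive -> harder session (perceived decline).
    """
    rng = random.Random(seed)
    target = 0.5 + difficulty_shift
    # Weight items by closeness to the target difficulty.
    weights = [max(1e-3, 1.0 - abs(diff - target)) for _, diff in item_bank]
    return rng.choices(item_bank, weights=weights, k=n_items)

# Example: baseline session at median difficulty, then an "easier" session to
# pair the intervention with an experience of apparent improvement.
bank = [(i, i / 100.0) for i in range(100)]
baseline_items = draw_items(bank, difficulty_shift=0.0, seed=1)
placebo_items = draw_items(bank, difficulty_shift=-0.2, seed=2)
```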
82.
Understanding, prioritizing, and mitigating methane (CH4) emissions requires quantifying CH4 budgets from facility scales to regional scales with the ability to differentiate between source sectors. We deployed a tiered observing system for multiple basins in the United States (San Joaquin Valley, Uinta, Denver-Julesburg, Permian, Marcellus). We quantify strong point source emissions (>10 kg CH4 h−1) using airborne imaging spectrometers, attribute them to sectors, and assess their intermittency with multiple revisits. We compare these point source emissions to total basin CH4 fluxes derived from inversion of Sentinel-5p satellite CH4 observations. Across basins, point sources make up on average 40% of the regional flux. We sampled some basins several times across multiple months and years and find a distinct bimodal structure to emission timescales: the total point source budget is split nearly in half by short-lasting and long-lasting emission events. With the increasing airborne and satellite observing capabilities planned for the near future, tiered observing systems will more fully quantify and attribute CH4 emissions from facility to regional scales, which is needed to effectively and efficiently reduce methane emissions.

Due to its short atmospheric lifetime and strong contribution to global radiative forcing, methane (CH4) has been a focus for near-term climate mitigation efforts (1). Robust, unbiased accounting systems are requisite to prioritizing and validating CH4 mitigation, ideally drawing on multiple independent data streams. Atmospheric observations of CH4 can be key for mitigation, as observed CH4 concentrations are used to quantify emission rates and attribute emissions to sources. Findings from many independent research efforts have shown that CH4 emissions across multiple sectors follow heavy-tailed distributions (2–5), meaning that a small fraction of emission sources emits at disproportionately higher rates than the full population of emitters. CH4 sources can be intermittent or persistent in duration, which may be associated with short-lasting process-driven releases or long-lasting emissions due to abnormal or otherwise avoidable operating conditions such as malfunctions or leaks (5). Isolating populations of large emitters at varying levels of intermittency, while quantifying their contribution to regional budgets, creates a clear direction for mitigation focus. This tiered observing system strategy can be deployed in data-rich regions where multiple independent layers of observations are jointly leveraged to quantify and isolate emissions, and then drive action.

Advances in CH4 remote sensing have enabled quantification of emissions from global to facility scales. Generally, these observing systems operate by measuring solar backscattered radiance in shortwave infrared regions where CH4 is a known absorber. Global mapping satellite missions have been used to identify CH4 hotspots and infer global- to regional-scale CH4 emission fluxes (6–8). In particular, the TROPOspheric Monitoring Instrument [TROPOMI (9)] onboard the Sentinel-5p satellite has proven capable of quantifying fluxes at basin scales (10, 11). Due to the kilometer-scale resolution of measurements from these global mapping missions, further attribution to particular facilities or even emission sectors is often not feasible. Less precise, target-mode satellites [e.g., PRISMA (12), GHGSat (13)] have proven capable of quantifying very large emissions at an ∼30-m scale, allowing direct attribution of emissions to facilities or even subfacility-level infrastructure. However, the current generation of CH4 plume imaging satellites lacks the spatial and temporal coverage to provide quantification completeness across multiple basins. For global mapping, high-spatial-resolution multispectral satellites such as Sentinel-2 and Landsat are capable of CH4 detection (14, 15), but only for large emission sources (e.g., 2+ t h−1) over very bright surfaces.

Airborne imaging spectrometers with shortwave infrared sensitivities and sufficient instrument signal-to-noise ratios can also quantify column CH4 concentrations. These remote sensing platforms are capable of resolving CH4 concentrations at high spatial resolution (∼3 to 5 m, depending on flight altitude) and can quantify point source emissions as low as 5 to 10 kg h−1 (16, 17). These instruments are sensitive to concentrated point-source emissions and less sensitive to diffuse emissions spread over large areas (e.g., wetlands). Given the heavy-tailed nature of anthropogenic emissions, point-source detections above an imaging spectrometer’s detection limit may constitute a sizable fraction of the total regional CH4 flux, but independent measurements are needed to provide that context.
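To illustrate why heavy-tailed source populations make point-source imaging so informative, the sketch below estimates the share of total emissions contributed by sources above an airborne detection limit under an assumed lognormal emission-rate distribution. The distribution parameters and the 10 kg h−1 threshold are illustrative assumptions, not values derived in this study.

```python
# Illustrative sketch: fraction of total emissions contributed by sources above
# a detection threshold when emission rates are heavy-tailed (lognormal assumed).
# Parameters are assumptions for illustration, not values reported in the study.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 50,000 sources with lognormally distributed emission rates (kg CH4/h).
rates = rng.lognormal(mean=0.0, sigma=2.0, size=50_000)

detection_limit = 10.0  # kg CH4/h, roughly an airborne point-source sensitivity
above = rates >= detection_limit

share_of_sources = above.mean()
share_of_emissions = rates[above].sum() / rates.sum()

print(f"{share_of_sources:.1%} of sources exceed {detection_limit} kg/h "
      f"but contribute {share_of_emissions:.1%} of total emissions")
```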
Therefore, in this study, we flew a combination of the Global Airborne Observatory (GAO) and the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) over multiple CH4-emitting regions between 2019 and 2021, including the southern San Joaquin Valley (SJV), the Permian, the Denver-Julesburg (DJ), the Uinta, and the southwestern Pennsylvania portion of the Marcellus. We generally mapped each basin at least three times during each campaign to quantify the persistence of emission sources. For the Permian, DJ, and SJV, we surveyed each region again after several months to assess trends and identify long-lasting emission sources. We also performed simultaneous regional CH4 flux inversions based on TROPOMI CH4 retrievals to quantify the total CH4 flux for each survey and compared these against the quantified airborne point-source budgets. With this tiered approach, we are able to quantify the contribution of unique point sources, by sector, to the regional budget, thereby highlighting specific points of action for mitigation.
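A minimal sketch, under assumed data structures and hypothetical numbers, of how repeat-overflight results might be summarized: per-source persistence as the fraction of overpasses with a detection, a persistence-weighted average emission rate, and the resulting point-source share of a basin-total flux from an inversion. None of the field names or values come from the study.

```python
# Hypothetical summary of repeat-overflight data: per-source persistence
# (detections / overpasses) and the point-source share of a basin-total flux.
# All numbers and field names are illustrative, not measurements from the study.
from dataclasses import dataclass

@dataclass
class Source:
    source_id: str
    sector: str            # e.g., "oil_and_gas", "landfill", "dairy"
    overpasses: int        # number of times the location was imaged
    detections: int        # overpasses with a detected plume
    mean_rate_kg_h: float  # mean emission rate when detected

    @property
    def persistence(self) -> float:
        return self.detections / self.overpasses

    @property
    def time_averaged_rate(self) -> float:
        # Persistence-weighted rate approximates a long-term average emission.
        return self.persistence * self.mean_rate_kg_h

sources = [
    Source("A1", "oil_and_gas", overpasses=6, detections=6, mean_rate_kg_h=250.0),
    Source("B7", "landfill", overpasses=6, detections=2, mean_rate_kg_h=900.0),
]

point_source_flux = sum(s.time_averaged_rate for s in sources)  # kg/h
basin_total_flux = 1_500.0  # hypothetical basin flux from a TROPOMI-style inversion, kg/h
print(f"Point sources: {point_source_flux / basin_total_flux:.0%} of basin total")
```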
83.
Objective: The majority of tumor sequencing currently performed on cancer patients does not include a matched normal control, and in cases where germline testing is performed, it is usually run independently of tumor testing. The rates of concordance between variants identified via germline and tumor testing in this context are poorly understood. We compared tumor and germline sequencing results in patients with breast, ovarian, pancreatic, and prostate cancer who were found to harbor alterations in genes associated with homologous recombination deficiency (HRD) and increased hereditary cancer risk. We then evaluated the potential for a computational somatic-germline-zygosity (SGZ) modeling algorithm to predict germline status based on tumor-only comprehensive genomic profiling (CGP) results.

Methods: A retrospective chart review was performed using an academic cancer center’s databases of somatic and germline sequencing tests, and concordance between tumor and germline results was assessed. SGZ modeling from tumor-only CGP was compared to germline results to assess this method’s accuracy in determining germline mutation status.

Results: A total of 115 patients with 146 total alterations were identified. Concordance rates between somatic and germline alterations ranged from 0% to 85.7% depending on the gene and variant classification. After correcting for differences in variant classification and filtering practices, SGZ modeling was found to have 97.2% sensitivity and 90.3% specificity for the prediction of somatic versus germline origin.

Conclusions: Mutations in HRD genes identified by tumor-only sequencing are frequently germline. Providers should be aware that technical differences related to assay design, variant filtering, and variant classification can contribute to discordance between tumor-only and germline sequencing test results. In addition, SGZ modeling had high predictive power to distinguish between mutations of somatic and germline origin without the need for a matched normal control, and could potentially be considered to inform clinical decision-making.
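As a simplified illustration (not the published SGZ algorithm), the sketch below compares an observed variant allele fraction against the values expected for a heterozygous germline versus a heterozygous somatic variant at a given tumor purity, assuming a copy-number-neutral diploid locus, and then tallies sensitivity and specificity against known germline status. All names, thresholds, and example values are hypothetical.

```python
# Simplified illustration (not the published SGZ algorithm): classify variant
# origin from an observed variant allele fraction (VAF) given tumor purity,
# assuming a diploid, copy-number-neutral locus, then score predictions against
# known germline status. All data and thresholds are hypothetical.

def predict_origin(vaf: float, purity: float) -> str:
    """Return 'germline' or 'somatic' based on which expected VAF is closer."""
    expected_germline = 0.5           # heterozygous germline, present in all cells
    expected_somatic = purity / 2.0   # heterozygous somatic, tumor cells only
    return ("germline"
            if abs(vaf - expected_germline) < abs(vaf - expected_somatic)
            else "somatic")

def sensitivity_specificity(records):
    """records: list of (predicted, truth) pairs labeled 'somatic'/'germline'.
    Sensitivity/specificity here are defined for detecting somatic origin."""
    records = list(records)
    tp = sum(p == "somatic" and t == "somatic" for p, t in records)
    fn = sum(p == "germline" and t == "somatic" for p, t in records)
    tn = sum(p == "germline" and t == "germline" for p, t in records)
    fp = sum(p == "somatic" and t == "germline" for p, t in records)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: a variant at 48% VAF in a 40%-purity tumor is far more
# consistent with germline origin (expected ~50%) than somatic (~20%).
print(predict_origin(vaf=0.48, purity=0.40))  # -> 'germline'
```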

This study sought to describe concordance between germline and tumor testing among patients at the investigators' institution and queried whether results from one test type could be used to inform the other. The authors also investigated the potential of a computational modeling algorithm to predict germline status based on tumor-only comprehensive genomic profiling results alone.

Implications for Practice: The majority of tumor sequencing currently performed on cancer patients does not include a matched normal control, and in cases where germline testing is performed, it is usually run independently of tumor testing. The rates of concordance between variants identified via germline and tumor testing in this context are poorly understood. This study found that mutations in cancer-predisposing genes identified by tumor-only sequencing are frequently germline, but providers should be aware that technical differences related to assay design, variant filtering, and variant classification can contribute to discordance between tumor-only and germline sequencing test results. In addition, computational somatic-germline-zygosity (SGZ) modeling had high predictive power to distinguish between mutations of somatic and germline origin without the need for a matched normal control, and could potentially be considered to inform clinical decision-making.