11.
The self is a multifaceted phenomenon that integrates information and experience across multiple time scales. How temporal integration on the psychological level of the self is related to temporal integration on the neuronal level remains unclear. To investigate temporal integration on the psychological level, we modified a well-established self-matching paradigm by inserting temporal delays. On the neuronal level, we indexed temporal integration in resting-state EEG by two related measures of scale-free dynamics, the power-law exponent and the autocorrelation window. We hypothesized that the previously established self-prioritization effect, measured as decreased response times or increased accuracy for self-related stimuli, would change with the insertion of different temporal delays between the paired stimuli, and that these changes would be related to temporal integration on the neuronal level. We found a significant self-prioritization effect on accuracy in all conditions with delays, indicating stronger temporal integration of self-related stimuli. Further, we observed a relationship between temporal integration on psychological and neuronal levels: higher degrees of neuronal integration, that is, a higher power-law exponent and a longer autocorrelation window, during resting-state EEG were related to a stronger increase in the self-prioritization effect across longer temporal delays. We conclude that temporal integration on the neuronal level serves as a template for temporal integration of the self on the psychological level. Temporal integration can thus be conceived as the "common currency" of neuronal and psychological levels of self.
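For readers unfamiliar with these two indices, the sketch below shows one common way to estimate an autocorrelation window (ACW-50, the first lag at which the autocorrelation falls below 0.5) and a power-law exponent (the negative slope of the power spectrum in log-log coordinates) for a single channel. This is a minimal sketch, not the authors' pipeline; the sampling rate, frequency band, and synthetic test signal are assumptions for illustration.

```python
# Minimal sketch of the two neuronal integration indices named above,
# computed for a single signal. Sampling rate, frequency band, and the
# synthetic test data are assumptions, not the study's exact settings.
import numpy as np
from scipy.signal import welch

def autocorrelation_window(x, fs):
    """ACW-50: first lag (in seconds) where the autocorrelation drops below 0.5."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]                               # normalize so acf[0] == 1
    below = np.flatnonzero(acf < 0.5)
    return below[0] / fs if below.size else len(x) / fs

def power_law_exponent(x, fs, fmin=1.0, fmax=40.0):
    """PLE: negative slope of the power spectrum in log-log coordinates."""
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    return -slope          # larger PLE = longer-range temporal integration

# Demo on synthetic scale-free-like noise (integrated white noise)
rng = np.random.default_rng(0)
x, fs = np.cumsum(rng.standard_normal(5_000)), 250.0
print(autocorrelation_window(x, fs), power_law_exponent(x, fs))
```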
12.
This article discusses the development and implementation of New Zealand's booking system for publicly funded non-urgent surgical and medical procedures. The 'booking system' emerged out of New Zealand's core services debate and the government's desire to remove waiting lists. It was targeted for implementation by mid-1998. However, the booking system remains in an unsatisfactory state and a variety of problems have plagued its introduction. These include a lack of national consistency in the priority access criteria, failure to pilot the system and a shortfall in the levels of funding available to treat the numbers of patients whose priority criteria 'scores' deem them clinically eligible for surgery. The article discusses endeavours to address these problems. In conclusion, based on the New Zealand experience, the article provides lessons for policy-makers interested in introducing surgical booking systems.
13.
BACKGROUND: The purpose of the present paper was to evaluate the variability in the use of a visual analogue scale (VAS) and to assess the feasibility of a priority-setting scoring system for prioritizing elective cataract surgery. METHODS: Consecutive cases listed for cataract surgery were prospectively recruited. Ophthalmologists listed patients to undergo early or normal surgery and were asked to rate the urgency of surgery using a VAS. Patients were then reassessed, and a cataract surgery prioritization (CSP) score was calculated based on the New Zealand priority criteria for cataract surgery. Correlation coefficients between VAS and CSP scores were calculated to determine the variability among ophthalmologists in using the VAS to prioritize surgery. Further analyses were performed to assess the potential impact of implementing the CSP system. RESULTS: A total of 326 patients were recruited. There was a positive correlation between VAS and CSP scores (Spearman rho = 0.407, P < 0.001). A high degree of variation among ophthalmologists in the use of the VAS was found. Patients with poor binocular vision were not listed for early surgery, whereas patients with poor vision in the eye listed for cataract surgery but good vision in the fellow eye were more likely to be prioritized for early surgery. These findings suggest that patients with severe impairment of binocular visual function were not adequately accounted for during cataract surgery listing. CONCLUSIONS: The use of a VAS for prioritizing cataract surgery may be suboptimal owing to its high subjectivity. Adoption of an objective, criteria-validated, priority-setting scoring system may allow better stratification of patients and better service provision.
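As a hedged illustration of the correlation analysis described in the Methods, the snippet below computes a Spearman correlation between paired VAS and CSP scores and then repeats it within each ophthalmologist's list to expose between-rater variability. All score values and the surgeon grouping are invented for demonstration.

```python
# Illustrative sketch only: scores and rater assignments are hypothetical.
import numpy as np
from scipy.stats import spearmanr

vas = np.array([72, 40, 55, 90, 30, 65, 80, 20])  # hypothetical VAS urgency ratings
csp = np.array([68, 35, 60, 85, 45, 50, 75, 25])  # hypothetical CSP scores

rho, p = spearmanr(vas, csp)                      # study reports rho = 0.407
print(f"overall: rho = {rho:.3f}, P = {p:.4f}")

# Between-rater variability: repeat the correlation within each surgeon's list
surgeon = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # hypothetical grouping
for s in np.unique(surgeon):
    m = surgeon == s
    r_s, _ = spearmanr(vas[m], csp[m])
    print(f"ophthalmologist {s}: rho = {r_s:.3f}")
```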
14.
Occupational therapists in British community mental health teams have been debating how the most effective services can be targeted at the most needy clients. This paper presents the results of a quantitative study that examined 40 British occupational therapists' referral prioritization policies. Results showed that half of the participants felt their generic responsibilities, which included care co-ordination, were too large. Only 25% of participants co-ordinated care for clients whose needs were related to occupational dysfunction. Judgement analysis, which involved regressing the 40 individuals' prioritization decisions onto the 90 respective referral scenarios, was used to statistically model how referral information had been weighted. Group agreement on prioritization was moderate, with the reason for referral, history of violence, and diagnosis given the most weight. Consistency in policy application, measured by examining prioritization decisions on identical referrals, showed wide variability. Further research is required to identify the optimal and most stable policies within this group.
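A minimal sketch of the judgement-analysis (policy-capturing) step is given below: one therapist's prioritization decisions are regressed onto the cues contained in the referral scenarios, and the standardized beta weights indicate how heavily each piece of referral information was weighted. The cue set, coding, and data are hypothetical, not the study's instrument.

```python
# Hedged sketch of judgement analysis for a single judge: regress 90
# prioritization decisions onto referral cues. Cues and data are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 90                                             # referral scenarios, as in the study
X = rng.integers(0, 2, size=(n, 3)).astype(float)  # cue present/absent coding (assumed)
true_w = np.array([0.6, 0.3, 0.1])                 # hypothetical underlying policy
y = X @ true_w + rng.normal(0, 0.2, n)             # the judge's prioritization decisions

# Standardize, then solve ordinary least squares: the beta weights model
# how heavily each cue was weighted by this judge.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for cue, b in zip(["reason for referral", "history of violence", "diagnosis"], beta):
    print(f"{cue}: {b:+.2f}")
```

Comparing such fitted policies across the 40 therapists, and across the duplicated referrals, is what yields the agreement and consistency results reported above.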
15.
16.
17.
Rationale, aims and objectives This paper deals with the problem of surgical waiting lists and aims, in particular, to compare two different prioritization approaches: (1) the clinical assessment of treatment urgency, which categorizes patients into urgency-related groups (URGs), each with a recommended maximum waiting time for treatment; and (2) the implementation of an original prioritization scoring algorithm that determines the relative priority of each patient on the waiting list and the corresponding order of admission. Methods A modelling exercise based on a cohort of 236 patients enrolled on the waiting list of a surgical department in an Italian public university hospital, from 1 January to 30 June 2004, is presented. The comparison is based on a measure called need-adjusted-waiting-days, which takes proper account of both urgency and priority. Results The results show that both methods should be implemented simultaneously to increase the department's performance in terms of both efficiency (the outcome gained from a given amount of resources) and equity (how patients are admitted according to their need). Conclusions Waiting list prioritization should not be limited to classifying patients into URGs but should also use a scoring system, so that patient admissions are scheduled in an explicit and transparent way.
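The abstract does not reproduce the scoring algorithm itself, so the sketch below illustrates only the general idea under stated assumptions: each patient's elapsed waiting time is scaled by clinical need and by the URG's recommended maximum waiting time, and the list is admitted in descending score order. The class names, need weights, and URG limits are invented for illustration.

```python
# Hedged sketch of combining URG classification with a relative priority
# score. The actual Italian algorithm differs; all values here are assumed.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    urgency_class: str   # URG label
    days_waiting: int
    need: float          # clinical need weight, higher = greater need

MAX_WAIT = {"A": 30, "B": 60, "C": 180}  # recommended max days per URG (illustrative)

def priority_score(p: Patient) -> float:
    # Fraction of the recommended maximum already used, scaled by need, so
    # urgent, long-waiting, high-need patients rise to the top of the list.
    return p.need * p.days_waiting / MAX_WAIT[p.urgency_class]

wait_list = [Patient("pt1", "C", 120, 1.0),
             Patient("pt2", "A", 20, 1.5),
             Patient("pt3", "B", 55, 1.2)]
for p in sorted(wait_list, key=priority_score, reverse=True):
    print(p.name, round(priority_score(p), 2))   # explicit admission order
```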
18.
Objective The aim of the present study was to compare two priority tools used for joint replacement for patients on waiting lists, developed by two different methods. Methods Two prioritization tools developed and validated with different methodologies were applied to the same cohort of patients. The first, the IRYSS hip and knee priority score (IHKPS), developed by the RAND method, was applied while patients were on the waiting list. The other, the Catalonia hip–knee priority score (CHKPS), developed by conjoint analysis, was adapted and applied retrospectively. In addition, all patients completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) before the intervention. Correlation between the tools was assessed with the Pearson correlation coefficient (r). Agreement was analysed by means of the intra-class correlation coefficient (ICC), the Kendall coefficient, and Cohen's kappa. The relationships between IHKPS, CHKPS, and baseline WOMAC scores were also examined with r. Results The sample consisted of 774 consecutive patients. The Pearson correlation coefficient between IHKPS and CHKPS was 0.79. The agreement study showed an ICC of 0.74, a Kendall coefficient of 0.86, and a kappa of 0.66. Finally, correlations between CHKPS and baseline WOMAC ranged from 0.43 to 0.64, and those between IHKPS and WOMAC ranged from 0.50 to 0.74. Conclusions The results support the hypothesis that, although the two prioritization tools use different methodologies, they produce similar results when the final objective is to organize and sort patients on the waiting list.
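The snippet below sketches the four agreement statistics named in the Methods on hypothetical paired scores. The ICC form used here (two-way random, absolute agreement, single measure, i.e., ICC(2,1)) and the score binning used for kappa are assumptions, since the abstract does not specify them.

```python
# Sketch of the agreement analyses above on invented paired scores.
import numpy as np
from scipy.stats import pearsonr, kendalltau
from sklearn.metrics import cohen_kappa_score

ihkps = np.array([55, 70, 40, 85, 60, 30, 75, 50], float)  # hypothetical
chkps = np.array([50, 72, 45, 80, 58, 35, 70, 55], float)  # hypothetical

def icc_2_1(a, b):
    """ICC(2,1) via the two-way ANOVA mean squares (Shrout-Fleiss form)."""
    Y = np.column_stack([a, b]); n, k = Y.shape
    ms_r = k * Y.mean(1).var(ddof=1)              # between-subjects mean square
    ms_c = n * Y.mean(0).var(ddof=1)              # between-tools mean square
    sse = ((Y - Y.mean(1, keepdims=True) - Y.mean(0) + Y.mean()) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))              # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

r, _ = pearsonr(ihkps, chkps)
tau, _ = kendalltau(ihkps, chkps)
kappa = cohen_kappa_score(ihkps // 20, chkps // 20)  # crude priority bands (assumed)
print(f"r={r:.2f}  ICC={icc_2_1(ihkps, chkps):.2f}  tau={tau:.2f}  kappa={kappa:.2f}")
```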
19.
The protein-coding exome of a patient with a monogenic disease contains about 20,000 variants, only one or two of which are disease causing. We found that 58% of rare variants in the protein-coding exome of the general population are located in only 2% of the genes. Prompted by this observation, we aimed to develop a gene-level approach for predicting whether a given human protein-coding gene is likely to harbor disease-causing mutations. To this end, we derived the gene damage index (GDI): a genome-wide, gene-level metric of the mutational damage that has accumulated in the general population. We found that the GDI was correlated with selective evolutionary pressure, protein complexity, coding sequence length, and the number of paralogs. We compared GDI with the leading gene-level approaches, genic intolerance and de novo excess, and demonstrated that GDI performed best for the detection of false positives (i.e., removing exome variants in genes irrelevant to disease), whereas genic intolerance and de novo excess performed better for the detection of true positives (i.e., assessing de novo mutations in genes likely to be disease causing). The GDI server, data, and software are freely available to noncommercial users from lab.rockefeller.edu/casanova/GDI.

Germ-line mutations can contribute to the long-term adaptation of humans, but at the expense of causing a large number of genetic diseases (1). The advent of next-generation sequencing (NGS)-based approaches, including whole-exome sequencing (WES), whole-genome sequencing (WGS), and RNA-Seq, has facilitated the large-scale detection of gene variants at both the individual and population levels (2–6). In patients suffering from a monogenic disease, at most two variants are disease causing [true positives (TP)], and the other 20,000 or so protein-coding exome variants are false positives (FP; type I error). Several variant-level metrics predicting the biochemical impact of DNA mutations (7–9) can be used to prioritize candidate variants for a phenotype of interest (10, 11). Gene-level metrics aim to prioritize the genes themselves, providing information that can be used for the further prioritization of variants. There are currently fewer gene-level than variant-level computational methods. The two levels provide complementary information, as it is best to predict the impact of a variant while also taking into account population genetics data for its locus. Current gene-level methods include genic intolerance, as measured by the residual variation intolerance score (RVIS) (12), and de novo excess (DNE) (13). These metrics are particularly useful for determining whether a given gene (and, by inference, its variants) is a plausible candidate for involvement in a particular genetic disease (i.e., for the selection of a short list of candidate genes and variants, which include the TPs). However, owing to the large number and diversity of variants, the selection of a single candidate gene from the NGS data for a given patient with a specific disease remains challenging.

We reasoned that genes frequently mutated in healthy populations would be unlikely to cause inherited and rare diseases, but would probably make a disproportionate contribution to the variant calls observed in any given patient. Conversely, mutations in genes that are never or only rarely mutated under normal circumstances are more likely to be disease causing. Leading gene-level strategies are based on selective pressure (12) and de novo mutation rate estimates (13). These methods are tailored to detect genes likely to harbor TPs. However, they do not directly quantify the mutational load on human genes in the general (i.e., "healthy") population or the frequencies of mutant alleles. They may, therefore, not be optimal for filtering out highly mutated genes, which are likely to harbor many FPs. Moreover, there has been no formal comparison of the power of these gene-level methods, and of their combinations, for maximizing the discovery of FPs and TPs by NGS. We therefore aimed to generate a robust metric of the cumulative mutational damage to each human protein-coding gene, to make it easier to distinguish the FP variants harbored by highly damaged genes (e.g., genes under relaxed constraint or positive selection) from potential candidate genes and variants, including the TPs. By damaged genes, we refer to genes displaying many nonsynonymous mutations, which are not necessarily damaging biochemically or evolutionarily. We developed the gene damage index (GDI), which defines, in silico, the mutational damage accumulated by each protein-coding human gene in the general population, reflecting the combined influences of drift and selection. We then tested this approach with the WES data for 84 patients in our in-house database, each of whom had a known primary immunodeficiency (PID). Finally, we used receiver operating characteristic (ROC) curves for formal comparisons of performance between GDI and the existing gene-level RVIS and DNE approaches, and to assess the power of the gene-level methods for detecting enrichment in de novo mutations in cases versus controls. We also tested whether these methods could act in synergy to filter out FPs and select TPs.
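The published GDI computation is more elaborate than can be shown here; the sketch below conveys only the core idea of a gene-level, population-based damage metric: accumulate the predicted damage of nonsynonymous variants observed in the general population, weighted by allele frequency, then normalize across genes. The variant records and scores are hypothetical.

```python
# Simplified sketch in the spirit of the GDI described above; the real
# metric differs in detail, and all records below are invented.
from collections import defaultdict

# (gene, allele_frequency, predicted_damage) for variants seen in the
# general population; in practice these come from 1000 Genomes-scale data.
variants = [
    ("MUC5B", 0.02, 12.1), ("MUC5B", 0.01, 9.4), ("MUC5B", 0.03, 15.0),
    ("TLR3",  0.001, 22.5),
]

raw = defaultdict(float)
for gene, af, damage in variants:
    raw[gene] += af * damage   # common and damaging variants accumulate damage

# Normalize to make genes comparable: genes scoring high are likely to
# contribute false-positive candidate variants in any patient's exome.
top = max(raw.values())
gdi = {g: round(v / top, 3) for g, v in raw.items()}
print(gdi)   # the highly mutated gene scores far above the rarely mutated one
```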
20.
In many large ecosystems, conservation projects are selected by a diverse set of actors operating independently at spatial scales ranging from local to international. Although small-scale decision making can leverage local expert knowledge, it also may be an inefficient means of achieving large-scale objectives if piecemeal efforts are poorly coordinated. Here, we assess the value of coordinating efforts in both space and time to maximize the restoration of aquatic ecosystem connectivity. Habitat fragmentation is a leading driver of declining biodiversity and ecosystem services in rivers worldwide, and we simultaneously evaluate optimal barrier removal strategies for 661 tributary rivers of the Laurentian Great Lakes, which are fragmented by at least 6,692 dams and 232,068 road crossings. We find that coordinating barrier removals across the entire basin is nine times more efficient at reconnecting fish to headwater breeding grounds than optimizing independently for each watershed. Similarly, a one-time pulse of restoration investment is up to 10 times more efficient than annual allocations totaling the same amount. Despite widespread emphasis on dams as key barriers in river networks, improving road culvert passability is also essential for efficiently restoring connectivity to the Great Lakes. Our results highlight the dramatic economic and ecological advantages of coordinating efforts in both space and time during restoration of large ecosystems.

Habitat loss and fragmentation are leading drivers of declining biodiversity and ecosystem services worldwide (1–3). Landscape corridors and dam removals are popular and effective strategies for mitigating fragmentation (4, 5). To implement these projects efficiently, societies around the world are developing regional and even continental-scale plans for restoring ecosystem connectivity (6). These plans set ecosystem-level conservation objectives and identify priority regions for investment, but individual project selection (e.g., a specific dam removal or habitat corridor) is generally dictated by opportunism and politics. When poorly coordinated, these piecemeal mitigation efforts may be an inefficient means of achieving ecosystem-level objectives. Transboundary coordination is known to increase the cost-effectiveness of nature reserve networks (7–9), but the benefits of coordination are likely to be even greater for connectivity efforts in rivers because the dendritic nature of drainage basins makes them highly susceptible to fragmentation (10–12). Migratory fishes, which support major fisheries and ecosystem processes, are particularly vulnerable to life cycle disruption by the millions of dams and road crossings that fragment the world's rivers (13, 14).

Here, we investigate the value of coordinating restoration efforts in space and time to maximize ecological connectivity between the Laurentian Great Lakes and their tributaries. The Great Lakes Basin (GLB) contains 21% of the world's surface freshwater and is home to more than 33.5 million people (15). High societal dependence on lake-derived ecosystem services includes US$7 billion annually in economic activity related to recreational fishing (16). Historically, breeding migrations of dozens of native fish species formed an important ecological link between the Great Lakes and their tributaries (17). Today, hundreds of thousands of dams and road culverts partially or fully block historical fish migration routes (18). There is growing investment in removing or modifying these structures, but project selection has been largely opportunistic and driven by local priorities.

Barrier removal projects to restore tributary connectivity are selected and funded by a diverse set of actors operating independently at different spatial scales across the GLB. Most road crossings are managed by counties or states, whereas impacts of dams are addressed at the watershed, state, federal, or even international level. Funding to restore connectivity is often disbursed as small, one-time investments, but large pulses of public investment are occasionally available, as within the $1.2 billion Great Lakes Restoration Initiative (19). Although connectivity restoration efforts have been piecemeal, the GLB has a long history of collaborative management of shared resources, including binational treaties regarding fisheries, invasive species, and water quality (20). The success of these initiatives demonstrates that large-scale coordination is feasible and that large pulses of spending can be arranged when justified.

We used a return-on-investment framework to analyze potential efficiency gains from coordinating barrier removals at a range of spatial scales (county, tributary, state, lake, nation, or GLB-wide) and temporal scales (a single "pulse" of investment vs. the same amount allocated as a series of 2, 5, or 10 "trickle" investments). Return-on-investment approaches are known to outperform alternative strategies, such as purely minimizing cost or maximizing benefit irrespective of cost (21). Our mathematical optimization model identifies the portfolio of barrier removal projects that provides the greatest increase in total tributary channel length (hereafter "habitat") accessible to migratory fishes for a given budget. Channel length serves as a surrogate for gains in spawning habitat across the entire fish community and is widely used in restoration planning in lieu of high-resolution spawning habitat maps for individual species.

We applied this model to a comprehensive barrier inventory for the GLB, encompassing 6,692 dams and 232,068 road crossings georeferenced within the 661 largest tributary watersheds (18). For each of these structures, we estimated the direct economic cost of restoring full passability (removal of dams or retrofitting of road culverts) and the net upstream habitat that would become available, drawing on estimates of the current passability of each culvert (22). Barrier passability is defined as the proportion of fish able to pass through or over a barrier to migrate upstream. Because dozens of partially passable structures often separate headwater spawning grounds from the Great Lakes, we calculated the net probability that a migratory fish could reach the area upstream of a particular barrier as the product of that barrier's passability and the passability of all downstream barriers (hereafter, the "cumulative passability" of a barrier). Similarly, the net benefit of any barrier removal includes not only full access to the unobstructed area immediately upstream but also partial access to areas above successive upstream barriers, until cumulative passability declines to zero.
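The cumulative-passability definition above translates directly into code. The sketch below computes it recursively on a toy three-barrier network and shows the expected habitat gained by restoring one structure to full passability; the network topology, passabilities, and habitat lengths are invented for illustration, not drawn from the GLB inventory.

```python
# Direct sketch of the cumulative-passability calculation described above:
# a fish reaches the water upstream of a barrier with probability equal to
# the product of that barrier's passability and all downstream passabilities.
# The tiny river network and all values are hypothetical.

barriers = {
    # id: (downstream barrier id or None, passability, upstream habitat km)
    "dam_mouth":   (None,          0.0, 10.0),
    "culvert_mid": ("dam_mouth",   0.6,  5.0),
    "dam_head":    ("culvert_mid", 0.2,  8.0),
}

def cumulative_passability(bid):
    down, p, _ = barriers[bid]
    return p * (cumulative_passability(down) if down else 1.0)

def expected_habitat():
    # Habitat above each barrier counts in proportion to the chance of reaching it.
    return sum(cumulative_passability(b) * barriers[b][2] for b in barriers)

before = expected_habitat()
barriers["dam_mouth"] = (None, 1.0, 10.0)   # restore the mouth dam to full passability
print(f"habitat gain: {expected_habitat() - before:.2f} km")
```

Ranking candidate removals by this habitat gain per dollar of restoration cost is the essence of the return-on-investment optimization the authors describe.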