Full text (subscription): 7,225 articles; free: 670; free (domestic): 152.
By discipline: Otorhinolaryngology 32; Pediatrics 193; Obstetrics and Gynecology 37; Basic Medicine 737; Stomatology 164; Clinical Medicine 577; Internal Medicine 1,396; Dermatology 37; Neurology 452; Special Medicine 507; Surgery 691; General Medicine 759; Preventive Medicine 1,146; Ophthalmology 72; Pharmacy 720; uncategorized 1; Traditional Chinese Medicine 347; Oncology 179.
By publication year: 2024: 8; 2023: 102; 2022: 126; 2021: 408; 2020: 305; 2019: 246; 2018: 258; 2017: 245; 2016: 247; 2015: 293; 2014: 429; 2013: 535; 2012: 386; 2011: 428; 2010: 335; 2009: 304; 2008: 323; 2007: 309; 2006: 228; 2005: 251; 2004: 185; 2003: 187; 2002: 173; 2001: 165; 2000: 141; 1999: 125; 1998: 121; 1997: 105; 1996: 89; 1995: 99; 1994: 93; 1993: 87; 1992: 87; 1991: 73; 1990: 63; 1989: 72; 1988: 44; 1987: 51; 1986: 42; 1985: 35; 1984: 51; 1983: 37; 1982: 36; 1981: 29; 1980: 18; 1979: 24; 1978: 13; 1977: 6; 1976: 8; 1973: 7.
8,047 results retrieved (search time: 156 ms).
1.
2.
The mechanism of liver regeneration is highly complex. Insufficient energy supply caused by mitochondrial dysfunction is one of the factors that impair it, but the underlying mechanism remains to be clarified. In severe liver injury, hepatocyte ATP supply decreases and mitochondrial energy metabolism becomes abnormal, so liver regeneration is inhibited. Buzhong Yiqi Decoction (补中益气汤), formulated by Li Dongyuan, tonifies the middle, replenishes qi, and raises sunken yang; experiments have confirmed that it protects mitochondrial function and enhances mitochondrial energy metabolism, thereby promoting liver regeneration. This article reviews the protective effects of the full Buzhong Yiqi Decoction formula, and of the classes of herbal medicines within it, on mitochondrial energy metabolism, with the aim of suggesting new therapeutic approaches for promoting liver regeneration and improving patient prognosis.
3.
Objective: To investigate the correlation of multiple portal-venous-phase dual-source CT quantitative parameters with the pathological differentiation grade of gastric adenocarcinoma and with HER2 status. Methods: Imaging data were retrospectively analyzed for 48 patients with gastric adenocarcinoma confirmed by gastroscopic biopsy (n = 21) or surgical pathology (n = 27) and 30 subjects with normal stomachs who underwent dual-energy dual-source CT at Shaanxi Provincial People's Hospital between July 2018 and April 2019; HER2 status was known for 27 patients. Portal-venous-phase dual-energy images were acquired on a second-generation Siemens dual-source CT scanner, and the spectral curve slope, portal-venous-phase iodine concentration, and normalized iodine concentration were obtained with syngo.via software. Subjects were grouped as gastric adenocarcinoma versus normal gastric wall; as well-, moderately, and poorly differentiated adenocarcinoma; and as HER2-positive (+, ++, +++) versus HER2-negative (−). Statistical analyses used the Kappa consistency test, ROC curve analysis, the two-independent-samples t test, and analysis of variance. Results: Biopsy and postoperative pathology showed strong agreement (Kappa = 0.701), with no significant difference between them. Spectral curve slope (1.35 ± 0.24 vs. 2.19 ± 0.71) and normalized iodine concentration (0.31 ± 0.079 vs. 0.54 ± 0.157) differed significantly between gastric adenocarcinoma and normal gastric wall (P < 0.05), with areas under the curve of 0.992 and 0.919, respectively. Spectral curve slopes of poorly, moderately, and well-differentiated adenocarcinoma (3.07 ± 0.67, 2.63 ± 0.57, 2.01 ± 0.39) differed significantly both among and between groups (P < 0.05). Portal-venous-phase normalized iodine concentrations (0.60 ± 0.167, 0.52 ± 0.089, 0.36 ± 0.039) differed significantly among groups (P < 0.05); the difference between the moderately and poorly differentiated groups was not significant (P > 0.05), whereas the well-differentiated group differed significantly from both (P < 0.05). Spectral curve slope and normalized iodine concentration did not differ between the HER2-positive and HER2-negative groups (P > 0.05). Conclusion: Spectral curve slope and portal-venous-phase normalized iodine concentration can help diagnose gastric adenocarcinoma and predict its pathological differentiation grade; dual-source CT quantitative parameters show no correlation with the immunohistochemical marker HER2.
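For readers unfamiliar with the two dual-energy parameters above, a minimal Python sketch of how they are commonly derived follows. The 40/100 keV attenuation pair used for the slope, the normalization against the aorta, and all numeric values are illustrative assumptions, not taken from this study.

```python
def spectral_curve_slope(hu_40kev: float, hu_100kev: float) -> float:
    """Slope of the spectral HU curve: the HU drop between a low and a
    high virtual monoenergetic level, per keV."""
    return (hu_40kev - hu_100kev) / (100 - 40)

def normalized_iodine_concentration(lesion_ic_mg_ml: float,
                                    aorta_ic_mg_ml: float) -> float:
    """Iodine concentration in the lesion ROI normalized to the aorta,
    which reduces inter-patient variation in contrast delivery."""
    return lesion_ic_mg_ml / aorta_ic_mg_ml

# Hypothetical portal-venous-phase measurements, for illustration only
print(spectral_curve_slope(hu_40kev=180.0, hu_100kev=52.0))   # ~2.13 HU/keV
print(normalized_iodine_concentration(2.1, 4.0))              # 0.525
```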
4.
Objective: To survey bone mineral density (BMD) and the prevalence of osteopenia and osteoporosis in a health-checkup population in Beijing, as a reference for the prevention and treatment of osteoporosis. Methods: Subjects were recruited from the health examination center of China-Japan Friendship Hospital between January 2017 and December 2018. After excluding secondary osteoporosis and other factors affecting bone metabolism, 3,859 subjects were included: 2,067 men and 1,792 women, aged 20-83 years (mean 51.29 ± 11.18 years), stratified by sex and by 10-year age group. BMD of the lumbar spine (L1-L4, anteroposterior), femoral neck, and total hip was measured with a GE LUNAR Prodigy dual-energy X-ray absorptiometer. BMD at each site and the prevalence of abnormal bone mass (osteopenia plus osteoporosis) were analyzed for each group. SPSS 22.0 was used for statistical analysis, with P < 0.05 considered statistically significant. Results: (1) In men, lumbar (L1-L4) BMD peaked at 20-29 years, and femoral neck and total hip BMD peaked at 30-39 years; in women, BMD at all sites peaked at 30-39 years. (2) The prevalence of abnormal bone mass rose with age in both sexes; in women over 50 it rose sharply and was markedly higher than in men of the same age group. (3) In both men and women aged 30-59, the prevalence of abnormal bone mass was markedly higher at the lumbar spine than at the hip; in men over 70 and women over 60, it was markedly higher at the hip than at the lumbar spine. Conclusion: Middle-aged and elderly people, especially postmenopausal women, are at high risk of osteoporosis; osteoporosis screening in the elderly should preferentially consider hip BMD.
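The osteopenia/osteoporosis categories pooled above as "abnormal bone mass" are conventionally assigned from DXA T-scores using the WHO cut-offs; the sketch below illustrates that logic. The reference mean and SD are hypothetical placeholders, not the study's reference data.

```python
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """T-score: SDs from the young-adult reference mean for the same sex/site."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t: float) -> str:
    """WHO DXA classification: T >= -1 normal; -2.5 < T < -1 osteopenia;
    T <= -2.5 osteoporosis. 'Abnormal bone mass' pools the last two."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Hypothetical lumbar-spine reference values (g/cm^2), illustration only
t = t_score(bmd=0.92, young_adult_mean=1.18, young_adult_sd=0.12)
print(round(t, 2), who_category(t))   # -2.17 osteopenia
```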
5.
6.
Brain mitochondrial dysfunction has been implicated in several neurodegenerative diseases. The distribution and efficiency of mitochondria display large heterogeneity throughout the regions of the brain. This may imply that the selective regional susceptibility of neurodegenerative diseases could be mediated through inherent differences in regional mitochondrial function. To investigate regional cerebral mitochondrial energetics, the rates of oxygen consumption and adenosine-5′-triphosphate (ATP) synthesis were assessed in isolated non-synaptic mitochondria of the cerebral cortex, hippocampus, and striatum of the male mouse brain. Oxygen consumption rates were assessed using a Seahorse XFe96 analyzer, and ATP synthesis rates were determined by an online luciferin-luciferase coupled luminescence assay. Complex I- and complex II-driven respiration and ATP synthesis were investigated by applying pyruvate in combination with malate, or succinate, as the respiratory substrates, respectively. Hippocampal mitochondria exhibited the lowest basal and adenosine-5′-diphosphate (ADP)-stimulated rates of oxygen consumption when provided with pyruvate and malate. However, hippocampal mitochondria also exhibited an increased proton leak and an elevated relative rate of oxygen consumption in response to the uncoupler carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP), showing a large capacity for uncoupled respiration in the presence of pyruvate. When the complex II-linked substrate succinate was provided, striatal mitochondria exhibited the highest respiration and ATP synthesis rates, whereas hippocampal mitochondria had the lowest. However, mitochondrial efficiency, determined as ATP produced per O2 consumed, was similar across the three regions. This study reveals inherent differences in regional mitochondrial energetics and may serve as a tool for further investigations of regional mitochondrial function in relation to neurodegenerative diseases.
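The coupling and efficiency quantities discussed above can be computed directly from the measured rates; a minimal sketch with invented example rates follows (function names and numbers are illustrative, not from the paper).

```python
def atp_per_o2(atp_synthesis_rate: float, adp_stimulated_ocr: float) -> float:
    """Mitochondrial efficiency: ATP produced per O2 consumed during
    ADP-stimulated (state 3-like) respiration."""
    return atp_synthesis_rate / adp_stimulated_ocr

def respiratory_control_ratio(adp_stimulated_ocr: float, leak_ocr: float) -> float:
    """RCR: ADP-stimulated over leak respiration; higher means better coupled."""
    return adp_stimulated_ocr / leak_ocr

def spare_capacity(fccp_ocr: float, basal_ocr: float) -> float:
    """FCCP-uncoupled minus basal respiration: reserve respiratory capacity."""
    return fccp_ocr - basal_ocr

# Hypothetical rates (pmol/min per ug protein), pyruvate + malate as substrates
print(atp_per_o2(atp_synthesis_rate=520.0, adp_stimulated_ocr=210.0))      # ~2.48
print(respiratory_control_ratio(adp_stimulated_ocr=210.0, leak_ocr=35.0))  # 6.0
print(spare_capacity(fccp_ocr=260.0, basal_ocr=95.0))                      # 165.0
```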
7.
Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843–1864 (1960)] and [J. Sawada, D. S. Modha, "Synapse: Scalable energy-efficient neurosynaptic computing" in Application of Concurrency to System Design (ACSD) (2013), pp. 14–15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a 10⁸-fold discrepancy between biological and lowest possible values of a neuron's computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).

The purpose of the brain is to process information, but that leaves us with the problem of finding appropriate definitions of information processing. We assume that, given enough time and a sufficiently stable environment (e.g., the common internals of the mammalian brain), Nature's constructions approach an optimum. The problem is to find which function or combined set of functions is optimal when incorporating empirical values into these function(s). The initial example in neuroscience is ref. 1, which shows that information capacity is far from optimized, especially in comparison to the optimal information per joule, which is in much closer agreement with empirical values. Whenever we find such an agreement between theory and experiment, we conclude that this optimization, or near optimization, is Nature's perspective. Using this strategy, we and others seek quantified relationships with particular forms of information processing and require that these relationships are approximately optimal (1–7). At the level of a single neuron, a recent theoretical development identifies a potentially optimal computation (8). To apply this conjecture requires understanding certain neuronal energy expenditures. Here the focus is on the energy budget of the human cerebral cortex and its primary neurons. The energy audit here differs from the premier earlier work (9) in two ways: The brain considered here is human, not rodent, and the audit here uses a partitioning motivated by the information-efficiency calculations rather than the classical partitions of cell biology and neuroscience (9). Importantly, our audit reveals greater energy use by communication than by computation. This observation in turn generates additional insights into the optimal synapse number. Specifically, the bits-per-joule-optimized computation must provide sufficient bits per second to the axon and presynaptic mechanism to justify the great expense of timely communication. Simply put, from the optimization perspective, we assume evolution would not build a costly communication system and then fail to supply it with the bits per second needed to justify its costs. The bits per joule are optimized with respect to N, the number of synaptic activations per interpulse interval (IPI) for one neuron, where N happens to equal the number of synapses per neuron times the success rate of synaptic transmission (below).

To measure computation, and to partition out its cost, requires a suitable definition at the single-neuron level. Rather than the generic definition "any signal transformation" (3) or the neural-like "converting a multivariate signal to a scalar signal," we conjecture a more detailed definition (8). To move toward this definition, note two important brain functions: estimating what is present in the sensed world and predicting what will be present, including what will occur as the brain commands manipulations. Then, assume that such macroscopic inferences arise by combining single-neuron inferences. That is, conjecture a neuron performing microscopic estimation or prediction. Instead of sensing the world, a neuron's sensing is merely its capacitive charging due to recently active synapses. Using this sampling of total accumulated charge over a particular elapsed time, a neuron implicitly estimates the value of its local latent variable, a variable defined by evolution and developmental construction (8).
Applying an optimization perspective, which includes implicit Bayesian inference, a sufficient statistic, and maximum-likelihood unbiasedness, as well as energy costs (8), produces a quantified theory of single-neuron computation. This theory implies the optimal IPI probability distribution. Motivating IPI coding is this fact: The use of constant-amplitude signaling, e.g., action potentials, implies that all information can only be in IPIs. Therefore, no code can outperform an IPI code, and it can equal an IPI code in bit rate only if it is one-to-one with an IPI code. In neuroscience, an equivalent to IPI codes is the instantaneous rate code where each message is IPI⁻¹. In communication theory, a discrete form of IPI coding is called differential pulse position modulation (10); ref. 11 explicitly introduced a continuous form of this coding as a neuron communication hypothesis, and it receives further development in ref. 12.

Results recall and further develop earlier work concerning a certain optimization that defines IPI probabilities (8). An energy audit is required to use these developments. Combining the theory with the audit leads to two outcomes: 1) The optimizing N serves as a consistency check on the audit, and 2) future energy audits for individual cell types will predict N for that cell type, a test of the theory. Specialized approximations here that are not present in earlier work (9) include the assumptions that 1) all neurons of cortex are pyramidal neurons, 2) pyramidal neurons are the inputs to pyramidal neurons, 3) a neuron is under constant synaptic bombardment, and 4) a neuron's capacitance must be charged 16 mV from reset potential to threshold to fire.

Following the audit, the reader is given a perspective that may be obvious to some, but it is rarely discussed and seemingly contradicts the engineering literature (but see ref. 6). In particular, a neuron is an incredibly inefficient computational device in comparison to an idealized physical analog. It is not just a few bits per joule away from the optimum predicted by the Landauer limit, but off by a huge amount, a factor of 10⁸. The theory here resolves the efficiency issue using a modified optimization perspective. Activity-dependent communication and synaptic modification costs push optimal computational costs upward. In turn, the bit value of the computational energy expenditure is constrained by a central-limit-like result: Every doubling of N can produce no more than 0.5 bits. In addition to 1) explaining the 10⁸-fold excess energy use, other results here include 2) identifying the largest "noise" source limiting computation, which is the signal itself, and 3) partitioning the relevant costs, which may help engineers redirect focus toward computation and communication costs rather than the 20-W total brain consumption as their design goal.
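Two of the quantitative claims above can be checked with back-of-envelope arithmetic: the 35-fold communication/computation ratio follows from the 3.5 W and 0.1 W audit figures, and the "no more than 0.5 bits per doubling of N" bound is consistent with information growing as 0.5·log2(N). The sketch below assumes that functional form; only the wattages, the 0.5-bit bound, and the two predictions of N come from the abstract.

```python
import math

# Energy audit figures from the abstract: cortical computation vs communication
compute_w, comm_w = 0.1, 3.5
print(comm_w / compute_w)          # 35.0 -> the quoted 35-fold ratio

def bits(n: int) -> float:
    """Assumed central-limit-like scaling: information about the latent
    variable grows as 0.5*log2(N), so each doubling of N adds <= 0.5 bit."""
    return 0.5 * math.log2(n)

for n in (2_000, 2_500):           # the two predictions of N in the abstract
    print(n, round(bits(n), 2))    # ~5.48 and ~5.64 bits per IPI
print(bits(4_000) - bits(2_000))   # 0.5 -> the per-doubling increment
```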
8.
9.
Purpose: To assess the impact of dose reduction and of an advanced modeled iterative reconstruction algorithm (ADMIRE) on image quality in low-energy monochromatic images from a dual-source dual-energy CT (DSCT) platform.

Materials and methods: Acquisitions of an image-quality phantom were performed on DSCT equipment at 100/Sn150 kVp for four dose levels (CTDIvol: 20/11/8/5 mGy). Raw data were reconstructed at six energy levels (40/50/60/70/80/100 keV) using filtered back projection (FBP) and two levels of ADMIRE (A3/A5). The noise power spectrum (NPS) and task-based transfer function (TTF) were calculated on virtual monoenergetic images (VMIs). A detectability index (d′) was computed to model the detection task of two iodine-enhanced lesions as a function of keV.

Results: Noise magnitude was significantly reduced from 40 to 70 keV: by −56 ± 0% (SD) (range: −56% to −55%) with FBP, −56 ± 0% (SD) (range: −56% to −56%) with A3, and −57 ± 1% (SD) (range: −57% to −56%) with A5. The average spatial frequency of the NPS peaked at 70 keV and decreased as the ADMIRE level increased. TTF values at 50% were greatest at 40 keV and shifted toward lower frequencies as the keV increased. The detectability of both lesions increased with increasing dose level and ADMIRE level. For the simulated lesion with iodine at 2 mg/mL, d′ peaked at 70 keV for all reconstruction types, except for A3 at 20 mGy and A5 at 11 and 20 mGy, where d′ peaked at 60 keV. For the other simulated lesion, d′ was highest at 40 keV and decreased beyond.

Conclusion: At low keV in VMIs, this study confirms that iterative reconstruction reduces noise magnitude, improves spatial resolution, and increases the detectability of iodine-enhanced lesions.
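A detectability index of the kind reported above is commonly computed with a non-prewhitening (NPW) model observer that integrates a task function against the TTF and NPS; the sketch below shows that calculation on a toy frequency grid. The task, TTF, and NPS shapes are invented for illustration and are not the study's data.

```python
import numpy as np

def npw_detectability(task_w: np.ndarray, ttf: np.ndarray,
                      nps: np.ndarray, df: float) -> float:
    """NPW model observer over a 2-D frequency grid:
    d'^2 = [sum W^2 TTF^2 df^2]^2 / [sum W^2 TTF^2 NPS df^2]."""
    signal = np.sum(task_w**2 * ttf**2) * df**2
    noise = np.sum(task_w**2 * ttf**2 * nps) * df**2
    return signal / np.sqrt(noise)

# Toy 2-D frequency grid (cycles/mm for 0.5 mm pixels), illustration only
f = np.fft.fftfreq(128, d=0.5)
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)
task_w = np.exp(-(fr / 0.3)**2)                  # smooth low-frequency task
ttf = np.exp(-(fr / 0.6)**2)                     # falling transfer function
nps = 50.0 * fr * np.exp(-(fr / 0.8)**2) + 1.0   # band-pass-like CT noise
print(npw_detectability(task_w, ttf, nps, df=float(f[1] - f[0])))
```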
10.
India has set aggressive targets to install more than 400 GW of wind and solar electricity generation by 2030, with more than two-thirds of that capacity coming from solar. This paper examines the electricity and carbon mitigation costs of reliably operating India's grid in 2030 for a variety of wind and solar targets (200 GW to 600 GW) and the most promising options for reducing these costs. We find that systems where solar photovoltaic comprises only 25 to 50% of the total renewable target have the lowest carbon mitigation costs in most scenarios. This result invites a reexamination of India's proposed solar-majority targets. We also find that, compared to other regions and contrary to prevailing assumptions, meeting high renewable targets will avoid the construction of very few new fossil fuel (coal and natural gas) power plants, because of India's specific weather patterns and the need to meet peak electricity demand. However, building 600 GW of renewable capacity, with the majority being wind plants, reduces how often fossil fuel power plants run, and this amount of capacity can hold India's 2030 emissions below 2018 levels for less than the social cost of carbon. With likely wind and solar cost declines and increases in coal energy costs, balanced or wind-majority high renewable energy systems (600 GW, or a 45% share by energy) could result in electricity costs similar to a fossil fuel-dominated system. As an alternative strategy for meeting peak electricity demand, battery storage can avert the need for new fossil fuel capacity but is cost-effective only at low capital costs (≤ USD 150 per kWh).

India emitted 3.2 billion metric tons of CO2e in 2016, or 6% of annual global greenhouse gas emissions, placing it third only to China and the United States (1). One-third of these emissions were from coal-based electricity. At the same time, both per capita emissions and energy use remain well below global averages, suggesting a massive potential for growth of electricity generation and emissions (1). India's primary energy demand is expected to double by 2040 compared to 2017 (2). Whether this energy comes from fossil or low-carbon sources will significantly affect the ability to limit average global temperature rise to below 2 °C.

India is already pursuing significant technology-specific renewable energy targets—100 GW of solar and 60 GW of wind by 2022—and, in its Nationally Determined Contributions (NDC), committed to a 40% target for installed generation capacity from nonfossil fuel sources by 2030 (3). In 2019, in part to fulfill its NDC commitment, the Indian government proposed to install 440 GW of renewable energy capacity by 2030, with 300 GW of solar and 140 GW of wind capacity (4). Although costs of solar photovoltaic (PV) and wind technologies have declined significantly in recent years (5–7), the low cost of coal and the integration costs associated with variable renewable energy (VRE) technologies like wind and solar may hinder India's cost-effective transition to a decarbonized electricity system. This paper seeks to answer a number of questions that arise in the Indian context. What targets for wind and solar capacity have the lowest associated integration costs? Will these targets significantly offset the need to build fossil fuel generation capacity? What additional measures can we take to mitigate VRE integration costs?

Merely comparing the levelized costs of VRE with the costs of conventional generation ignores additional cost drivers, which depend on the timing of VRE production and other conditions in the power system (8, 9). Quantifying these drivers requires models that choose lowest-cost generation capacity portfolios and simulate optimal system operation with detailed spatiotemporal data. Several prior studies address these system-level integration costs in a capacity expansion planning framework (10–16), often making decisions based on a limited sample of representative hours. Other studies explicitly estimate the relationship between the long-run economic value of VRE (including integration costs) and VRE penetration levels (17, 18) but do not include VRE investment costs in their analysis. Few prior studies explore the impacts of high VRE penetration on India's electricity system, and those that do either use the capacity expansion framework and do not evaluate the economic value of multiple VRE targets (4, 19, 20) or do not optimize capacity build around proposed VRE targets (21).

Here we address this gap by estimating how different VRE targets affect the cost of reliably operating the Indian electricity system. To do so, we work with three interrelated models.
First, using a spatially explicit model for VRE site selection, we identify the lowest-levelized-cost wind and solar sites to meet different VRE capacity targets, and study how the resource quality—and corresponding levelized cost—of selected sites changes with increasing VRE targets.

Second, using a capacity investment model that accounts for VRE production patterns and optimal dispatch of hydropower and battery storage, we determine the capacity requirements and investment costs for coal, combined cycle gas turbines (CCGT), and combustion turbine (CT) peaker plants. Due to uncertainties in their future deployment (22), and because their current targets are relatively low (4), we did not consider new nuclear or hydro capacity in the main scenarios but include those in the sensitivity scenarios presented in SI Appendix, section 2. Third, we use a unit commitment and economic dispatch model to simulate hourly operation of the electricity system and estimate annual system operational costs. This model captures important technical constraints, including minimum operating levels, daily unit commitment for coal and natural gas plants, and energy limits on hydropower and battery storage. Rather than co-optimize VRE capacity, we compute the system-level economic value of a range of VRE targets by comparing the sum of the avoided new conventional capacity and energy generation costs to a no-VRE scenario. The net cost for a scenario is then the difference between the levelized cost of the VRE and the system-level economic value. Materials and Methods provides more detail on this process.

Our results show that, despite greater levelized cost reduction forecasts for solar PV compared to wind technologies, VRE targets with greater amounts of wind have the lowest projected net carbon mitigation costs. This finding is robust to a range of scenarios, including low-cost solar and storage, and lower minimum generation levels for coal generators.

We find that, although VRE production displaces energy production from conventional generators, it does very little to defer the need for capacity from those generators, due to the low correlation between VRE production and peak demand. Our findings suggest that VRE in India avoids far less conventional capacity than VRE in other regions of the world. These capacity requirements are slightly mitigated if India's demand patterns evolve to more closely resemble demand in its major cities. Overall, we conclude that the importance of choosing the right VRE mix is significant when measured in terms of carbon mitigation costs: Whereas most solar-majority scenarios we examined lead to costs greater than or equal to estimates of the social cost of carbon (SCC), wind-majority mixes all cost far less than the SCC.
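The scenario accounting described above reduces to simple arithmetic: net cost = VRE levelized cost − system-level economic value (avoided capacity plus avoided generation costs), and carbon mitigation cost = net cost / avoided emissions. A minimal sketch with placeholder numbers follows; all figures are invented, and only the formula structure follows the text.

```python
def net_cost_usd(vre_levelized_cost: float, avoided_capacity_cost: float,
                 avoided_energy_cost: float) -> float:
    """Net annual system cost of a VRE scenario vs. a no-VRE baseline (USD/yr):
    VRE cost minus the system-level economic value it provides."""
    economic_value = avoided_capacity_cost + avoided_energy_cost
    return vre_levelized_cost - economic_value

def mitigation_cost(net_cost: float, avoided_tco2: float) -> float:
    """Carbon mitigation cost in USD per tonne of CO2 avoided."""
    return net_cost / avoided_tco2

# Placeholder annual figures for one hypothetical 600 GW wind-majority scenario
nc = net_cost_usd(vre_levelized_cost=40e9, avoided_capacity_cost=6e9,
                  avoided_energy_cost=28e9)
print(mitigation_cost(nc, avoided_tco2=300e6))   # 6e9 / 3e8 = 20 USD/tCO2
```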