61.
Objective: Iterative reconstruction is an effective tool for low-dose CT imaging: it reduces image noise while preserving good image quality. For images acquired at very low scan settings with heavy noise, however, excessive iterative reconstruction may degrade spatial resolution and lose anatomical detail. How to image each patient individually, bringing the radiation dose to the lowest manageable level, is therefore a problem that must be solved before low-dose CT can be applied clinically. This study explores the feasibility of individualized ultra-low-dose chest CT imaging with adaptive iterative dose reduction 3D (AIDR 3D). Methods: From patients at our hospital who underwent a first routine-dose unenhanced chest CT with a definite diagnosis between December 2011 and February 2012, 48 consecutive patients requiring a follow-up unenhanced CT within one month were enrolled (25 men, 23 women; aged 13–84 years, mean 48.27 ± 17.63 years; BMI 15.62–30.85, mean 21.62 ± 3.38). The first, routine-dose examination used conventional automatic exposure control (AEC) with a target noise of SD 12.5 and filtered back projection (FBP) reconstruction. The follow-up used an individualized ultra-low-dose protocol: AEC integrated with the iterative algorithm (AIDR 3D) and a target noise of SD 25, with all other scan and reconstruction parameters unchanged; the acquired data were reconstructed with both AIDR and FBP. The two examinations thus yielded three data sets: routine-dose FBP (group A), low-dose AIDR (group B), and low-dose FBP (group C), each displayed in lung and mediastinal windows with the corresponding kernels. Image quality was compared across the three groups: objectively, noise was quantified as the standard deviation of CT values; subjectively, two radiologists scored the images independently and blindly on a 3-point scale (3 excellent, 2 acceptable, 1 poor). To evaluate tube-current modulation by AEC combined with AIDR 3D, the chest was divided into upper, middle, and lower sections, each assessed separately. Statistical analysis used the Friedman test for a randomized block design; cases scoring ≥2 in all three sections were defined as diagnostic, and the diagnostic rate was calculated. Radiation dose was obtained from the scanner-displayed CTDI and DLP, from which the effective dose ED was calculated (k = 0.014); follow-up and first-scan doses were compared. Results: Objectively, lung-window noise differed significantly among the three groups (P < 0.05): group B reduced noise markedly relative to group C (upper 66.58%; middle 39.62%; lower 48.55%), and relative to group A reduced upper- and lower-lung noise by 15.67% and 15.26% while mid-lung noise increased by 9.33%. Mediastinal-window noise also differed significantly among the three groups (P < 0.05): group B was markedly lower than group C (upper 66.58%; middle 39.62%; lower 48.56%) and slightly higher than group A (upper 39.95%; middle 79.43%; lower 76.35%). Subjectively, scores for group B showed no significant differences among the upper, middle, and lower sections in either the lung window (P > 0.05) or the mediastinal window (P > 0.05). The lung-window diagnostic rate (all sections ≥2 points) reached 100% in all three groups, with no significant difference in the distribution of scores (P > 0.05); the mediastinal-window diagnostic rate differed significantly, with group C clearly lower and groups B and A equal (95.83% vs 95.83% vs 56.25%, P < 0.05). In terms of radiation dose, the follow-up scan (groups B/C) reduced the effective dose by 87.05% relative to the first scan (group A) (0.715 vs 5.524 mSv, k = 0.014). Conclusion: AEC integrated with AIDR 3D enables individualized ultra-low-dose chest CT imaging for patients of different body habitus.
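The dose comparison above uses the standard conversion ED = k × DLP with the chest factor k = 0.014 mSv·mGy⁻¹·cm⁻¹. A minimal sketch of that arithmetic (the function names are illustrative, not from the study):

```python
def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.014) -> float:
    """Effective dose (mSv) from dose-length product (mGy*cm): ED = k * DLP."""
    return k * dlp_mgy_cm

def dose_reduction_pct(ed_initial: float, ed_followup: float) -> float:
    """Percent reduction of the follow-up dose relative to the initial scan."""
    return 100.0 * (1.0 - ed_followup / ed_initial)

# Reported effective doses: group A (routine) vs. groups B/C (ultra-low-dose)
ed_a, ed_bc = 5.524, 0.715  # mSv
print(f"dose reduction: {dose_reduction_pct(ed_a, ed_bc):.2f}%")  # ~87%, matching the reported 87.05%
```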
62.
Objective: To evaluate low tube voltage combined with iterative reconstruction in image space (IRIS) for improving image quality and reducing radiation dose in Flash CT coronary angiography of overweight patients. Methods: 100 patients with a body mass index (BMI) of 25–30 kg/m² underwent Flash CT and were randomized by tube voltage into group A (120 kV) and group B (100 kV); the group B data reconstructed with IRIS formed group C. Intraluminal CT value and noise (SD) were measured at the aortic root and the origins of the left and right coronary arteries, SNR and CNR were calculated, and the effective dose (ED) was recorded. Results: The rate of good-to-excellent images differed significantly among the three groups (χ² = 7.604, P < 0.05), as did intraluminal CT value, noise, SNR, and CNR (all P < 0.05). In pairwise comparisons, intraluminal CT values in groups B and C were higher than in group A; group B had the highest noise and the lowest CNR; group C had the highest SNR. ED was (8.6 ± 1.3) mSv in group A and (3.5 ± 0.7) mSv in group C, a significant difference (t = −16.91, P < 0.05). Conclusion: For overweight patients, low tube voltage combined with IRIS yields good image quality in Flash CT coronary angiography while markedly reducing radiation dose.
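The SNR and CNR reported here are the usual ratios of ROI attenuation to image noise. A minimal sketch (the ROI numbers below are hypothetical, chosen only for illustration):

```python
def snr(mean_hu: float, noise_sd: float) -> float:
    """Signal-to-noise ratio of a vessel ROI: mean attenuation over noise SD."""
    return mean_hu / noise_sd

def cnr(vessel_hu: float, background_hu: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio: lumen-to-background contrast over noise SD."""
    return (vessel_hu - background_hu) / noise_sd

# Hypothetical aortic-root ROI: 450 HU lumen, 80 HU perivascular fat, SD 25 HU
print(snr(450, 25), cnr(450, 80, 25))  # 18.0 14.8
```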
63.
On the Application of Cloud Computing in Libraries
This paper describes the feasibility and inevitability of applying cloud computing in libraries, along with its many advantages. Building on an analysis of the types of cloud computing services, it details several ways in which cloud computing technology can be applied in libraries.
64.
Most animals exhibit significant neurological and morphological change throughout their lifetime. No robots to date, however, grow new morphological structure while behaving. This is due to technological limitations, but also because it is unclear that morphological change provides a benefit to the acquisition of robust behavior in machines. Here I show that in evolving populations of simulated robots, if robots grow from anguilliform into legged robots during their lifetime in the early stages of evolution, and the anguilliform body plan is gradually lost during later stages of evolution, gaits are evolved for the final, legged form of the robot more rapidly, and the evolved gaits are more robust, compared to evolving populations of legged robots that do not transition through the anguilliform body plan. This suggests that morphological change, as well as the evolution of development, are two important processes that improve the automatic generation of robust behaviors for machines. It also provides an experimental platform for investigating the relationship between the evolution of development and robust behavior in biological organisms.
65.
Purpose: To assess the impact of dose reduction and the use of an advanced modeled iterative reconstruction algorithm (ADMIRE) on image quality in low-energy monochromatic images from a dual-source dual-energy CT (DSCT) platform. Materials and methods: Acquisitions on an image-quality phantom were performed using DSCT equipment with 100/Sn150 kVp at four dose levels (CTDIvol: 20/11/8/5 mGy). Raw data were reconstructed at six energy levels (40/50/60/70/80/100 keV) using filtered back projection (FBP) and two levels of ADMIRE (A3/A5). The noise power spectrum (NPS) and task-based transfer function (TTF) were calculated on virtual monoenergetic images (VMIs). A detectability index (d′) was computed to model the detection task of two enhanced iodine lesions as a function of keV. Results: Noise magnitude was significantly reduced between 40 and 70 keV: by −56 ± 0% (SD) (range: −56% to −55%) with FBP, −56 ± 0% (SD) (range: −56% to −56%) with A3, and −57 ± 1% (SD) (range: −57% to −56%) with A5. The average spatial frequency of the NPS peaked at 70 keV and decreased as the ADMIRE level increased. TTF values at 50% were greatest at 40 keV and shifted towards lower frequencies as the keV increased. The detectability of both lesions increased with increasing dose level and ADMIRE level. For the simulated lesion with iodine at 2 mg/mL, d′ values peaked at 70 keV for all reconstruction types, except for A3 at 20 mGy and A5 at 11 and 20 mGy, where d′ peaked at 60 keV. For the other simulated lesion, d′ values were highest at 40 keV and decreased beyond. Conclusion: At low keV on VMIs, this study confirms that iterative reconstruction reduces the noise magnitude, improves the spatial resolution and increases the detectability of enhanced iodine lesions.
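The NPS measured on a uniform phantom is conventionally estimated by averaging squared FFT magnitudes of mean-subtracted noise-only ROIs. A minimal 2D sketch of that standard estimator with synthetic white noise (NumPy assumed; an illustration, not the authors' exact pipeline):

```python
import numpy as np

def nps_2d(rois, pixel_mm: float = 1.0):
    """Average 2D noise power spectrum of mean-subtracted noise ROIs.

    Normalized so that the integral of the NPS over spatial frequency
    equals the noise variance (Parseval's theorem).
    """
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        d = roi - roi.mean()          # remove the DC component per ROI
        acc += np.abs(np.fft.fft2(d)) ** 2
    return acc / len(rois) * pixel_mm**2 / (nx * ny)

# Synthetic white noise with SD ~30 HU: the NPS should integrate to ~900 HU^2
rng = np.random.default_rng(0)
rois = [rng.normal(0.0, 30.0, (64, 64)) for _ in range(50)]
nps = nps_2d(rois)
var_est = nps.sum() / (64 * 64)  # integral over frequency (pixel_mm = 1)
```

With real CT noise the spectrum is shaped by the reconstruction kernel, which is what shifts the average NPS frequency across keV and ADMIRE levels in the study.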
66.
Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843–1864 (1960)] and [J. Sawada, D. S. Modha, “Synapse: Scalable energy-efficient neurosynaptic computing” in Application of Concurrency to System Design (ACSD) (2013), pp. 14–15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a 10⁸-fold discrepancy between biological and lowest possible values of a neuron’s computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).

The purpose of the brain is to process information, but that leaves us with the problem of finding appropriate definitions of information processing. We assume that given enough time and given a sufficiently stable environment (e.g., the common internals of the mammalian brain), then Nature’s constructions approach an optimum. The problem is to find which function or combined set of functions is optimal when incorporating empirical values into these function(s). The initial example in neuroscience is ref. 1, which shows that information capacity is far from optimized, especially in comparison to the optimal information per joule, which is in much closer agreement with empirical values. Whenever we find such an agreement between theory and experiment, we conclude that this optimization, or near optimization, is Nature’s perspective. Using this strategy, we and others seek quantified relationships with particular forms of information processing and require that these relationships are approximately optimal (1–7). At the level of a single neuron, a recent theoretical development identifies a potentially optimal computation (8). To apply this conjecture requires understanding certain neuronal energy expenditures. Here the focus is on the energy budget of the human cerebral cortex and its primary neurons. The energy audit here differs from the premier earlier work (9) in two ways: The brain considered here is human not rodent, and the audit here uses a partitioning motivated by the information-efficiency calculations rather than the classical partitions of cell biology and neuroscience (9). Importantly, our audit reveals greater energy use by communication than by computation. This observation in turn generates additional insights into the optimal synapse number. Specifically, the bits per joule optimized computation must provide sufficient bits per second to the axon and presynaptic mechanism to justify the great expense of timely communication.
Simply put from the optimization perspective, we assume evolution would not build a costly communication system and then not supply it with appropriate bits per second to justify its costs. The bits per joule are optimized with respect to N, the number of synaptic activations per interpulse interval (IPI) for one neuron, where N happens to equal the number of synapses per neuron times the success rate of synaptic transmission (below).

To measure computation, and to partition out its cost, requires a suitable definition at the single-neuron level. Rather than the generic definition “any signal transformation” (3) or the neural-like “converting a multivariate signal to a scalar signal,” we conjecture a more detailed definition (8). To move toward this definition, note two important brain functions: estimating what is present in the sensed world and predicting what will be present, including what will occur as the brain commands manipulations. Then, assume that such macroscopic inferences arise by combining single-neuron inferences. That is, conjecture a neuron performing microscopic estimation or prediction. Instead of sensing the world, a neuron’s sensing is merely its capacitive charging due to recently active synapses. Using this sampling of total accumulated charge over a particular elapsed time, a neuron implicitly estimates the value of its local latent variable, a variable defined by evolution and developmental construction (8). Applying an optimization perspective, which includes implicit Bayesian inference, a sufficient statistic, and maximum-likelihood unbiasedness, as well as energy costs (8), produces a quantified theory of single-neuron computation. This theory implies the optimal IPI probability distribution. Motivating IPI coding is this fact: The use of constant amplitude signaling, e.g., action potentials, implies that all information can only be in IPIs.
Therefore, no code can outperform an IPI code, and it can equal an IPI code in bit rate only if it is one to one with an IPI code. In neuroscience, an equivalent to IPI codes is the instantaneous rate code where each message is IPI⁻¹. In communication theory, a discrete form of IPI coding is called differential pulse position modulation (10); ref. 11 explicitly introduced a continuous form of this coding as a neuron communication hypothesis, and it receives further development in ref. 12.

Results recall and further develop earlier work concerning a certain optimization that defines IPI probabilities (8). An energy audit is required to use these developments. Combining the theory with the audit leads to two outcomes: 1) The optimizing N serves as a consistency check on the audit and 2) future energy audits for individual cell types will predict N for that cell type, a test of the theory. Specialized approximations here that are not present in earlier work (9) include the assumptions that 1) all neurons of cortex are pyramidal neurons, 2) pyramidal neurons are the inputs to pyramidal neurons, 3) a neuron is under constant synaptic bombardment, and 4) a neuron’s capacitance must be charged 16 mV from reset potential to threshold to fire.

Following the audit, the reader is given a perspective that may be obvious to some, but it is rarely discussed and seemingly contradicts the engineering literature (but see ref. 6). In particular, a neuron is an incredibly inefficient computational device in comparison to an idealized physical analog. It is not just a few bits per joule away from the optimum predicted by the Landauer limit, but off by a huge amount, a factor of 10⁸. The theory here resolves the efficiency issue using a modified optimization perspective. Activity-dependent communication and synaptic modification costs force upward optimal computational costs.
In turn, the bit value of the computational energy expenditure is constrained to a central-limit-like result: every doubling of N can produce no more than 0.5 bits. In addition to 1) explaining the 10⁸-fold excess energy use, other results here include 2) identifying the largest “noise” source limiting computation, which is the signal itself, and 3) partitioning the relevant costs, which may help engineers redirect focus toward computation and communication costs rather than the 20-W total brain consumption as their design goal.
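The audit's headline numbers are easy to reproduce: the communication-to-computation power ratio, and the Landauer bound kT ln 2 that defines the idealized per-bit floor against which the ~10⁸ inefficiency factor is measured. A sketch under stated assumptions (body temperature taken as 310 K; the wattage figures are those quoted in the abstract):

```python
import math

comp_W = 0.1   # ATP power partitioned to cortical computation (per the audit)
comm_W = 3.5   # long-distance communication: action potentials, transmitter release
print(comm_W / comp_W)  # 35-fold, as stated

# Landauer limit: minimum energy to process (erase) one bit at temperature T
k_B = 1.380649e-23                     # Boltzmann constant, J/K
T = 310.0                              # K, body temperature (assumption)
landauer_J_per_bit = k_B * T * math.log(2)  # on the order of 3e-21 J/bit
```

Dividing the 0.1 W computational budget by a biological bit rate and comparing with this floor is what yields the ~10⁸ discrepancy the abstract reports.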
67.
We derive a closed‐form solution for a well‐known fisheries harvesting model with an additional state constraint. The problem is linear in the control and previous solutions appearing in the literature have been numerical in nature. The so‐called direct adjoining approach is used in our derivation and the optimal solutions turn out to be a mixture of bang‐bang and boundary arcs. Copyright © 2013 John Wiley & Sons, Ltd.
68.
69.
Background: Advances in image reconstruction are necessary to decrease radiation exposure from coronary CT angiography (CCTA) further, but iterative reconstruction has been shown to degrade image quality at high levels. Deep-learning image reconstruction (DLIR) offers unique opportunities to overcome these limitations. The present study compared the impact of DLIR and adaptive statistical iterative reconstruction-Veo (ASiR-V) on quantitative and qualitative image parameters and on the diagnostic accuracy of CCTA, using invasive coronary angiography (ICA) as the standard of reference. Methods: This retrospective study includes 43 patients who underwent clinically indicated CCTA and ICA. Datasets were reconstructed with ASiR-V 70% (using standard [SD] and high-definition [HD] kernels) and with DLIR at different levels (i.e., medium [M] and high [H]). Image noise, image quality, and coronary luminal narrowing were evaluated by three blinded readers. Diagnostic accuracy was compared against ICA. Results: Noise did not differ significantly between ASiR-V SD and DLIR-M (37 vs. 37 HU, p = 1.000), but was significantly lower with DLIR-H (30 HU, p < 0.001) and higher with ASiR-V HD (53 HU, p < 0.001). Image quality was higher for DLIR-M and DLIR-H (3.4–3.8 and 4.2–4.6) than for ASiR-V SD and HD (2.1–2.7 and 1.8–2.2; p < 0.001), with DLIR-H yielding the highest image quality. Consistently across readers, no significant differences in sensitivity (88% vs. 92%; p = 0.453), specificity (73% vs. 73%; p = 0.583) or diagnostic accuracy (80% vs. 82%; p = 0.366) were found between ASiR-V HD and DLIR-H. Conclusion: DLIR significantly reduces noise in CCTA compared to ASiR-V, while yielding superior image quality at equal diagnostic accuracy.
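The per-reader diagnostic figures reported against ICA are standard confusion-matrix rates. A minimal sketch (the counts below are hypothetical, chosen only to land near the reported percentages; they are not the study's data):

```python
def diag_rates(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and overall accuracy vs. a reference standard."""
    sens = tp / (tp + fn)                  # true positives among all positives
    spec = tn / (tn + fp)                  # true negatives among all negatives
    acc = (tp + tn) / (tp + fp + tn + fn)  # correct calls among all cases
    return sens, spec, acc

# Hypothetical counts for 43 cases: 23/25 stenoses detected, 13/18 negatives correct
sens, spec, acc = diag_rates(tp=23, fp=5, tn=13, fn=2)
print(sens, spec, acc)  # 0.92, ~0.72, ~0.84
```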
70.
Neurons in the medial superior olive (MSO) detect 10 µs differences in the arrival times of a sound at the two ears. Such acuity requires exquisitely precise integration of binaural synaptic inputs. There is substantial understanding of how neuronal phase locking of afferent MSO structures and MSO membrane biophysics subserve such high precision. However, we still lack insight into how the entirety of excitatory inputs is integrated along the MSO dendrite under sound stimulation. To understand how the dendrite integrates excitatory inputs as a whole, we combined anatomic quantifications of the afferent innervation in gerbils of both sexes with computational modeling of a single cell. We present anatomic data from confocal and transmission electron microscopy showing that single afferent fibers follow a single dendrite mostly up to the soma and contact it at multiple (median 4) synaptic sites, each containing multiple independent active zones (the overall density of active zones is estimated as 1.375 per μm²). Thus, any presynaptic action potential may elicit temporally highly coordinated synaptic vesicle release at tens of active zones, thereby achieving secure transmission. Computer simulations suggest that such an anatomic arrangement boosts the amplitude and sharpens the time course of excitatory postsynaptic potentials by reducing current sinks and more efficiently recruiting subthreshold potassium channels. Both effects improve binaural coincidence detection compared with single large synapses at the soma. Our anatomic data further allow for estimation of a lower bound of 7 and an upper bound of 70 excitatory fibers per dendrite.

SIGNIFICANCE STATEMENT: Passive dendritic propagation attenuates the amplitude of postsynaptic potentials and widens their temporal spread. Neurons in the medial superior olive, with their large bilateral dendrites, can nevertheless detect coincidence of binaural auditory inputs with submillisecond precision, a computation in stark contrast to passive dendritic processing. Here, we show that dendrites can counteract amplitude attenuation and even decrease the temporal spread of postsynaptic potentials if active subthreshold potassium conductances are triggered in temporal coordination along the whole dendrite. Our anatomic finding that axons run in parallel to the dendrites and make multiple synaptic contacts supports such coordination, since incoming action potentials would depolarize the dendrite at multiple sites within a brief time interval.
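The dendritic attenuation the significance statement refers to follows, in the simplest steady-state approximation for an infinite passive cable, V(x) = V₀·e^(−x/λ). A sketch of that textbook relation (the amplitude, distance, and space constant below are illustrative, not taken from the authors' MSO model):

```python
import math

def passive_attenuation(v0_mv: float, x_um: float, lambda_um: float) -> float:
    """Steady-state EPSP amplitude at distance x along an infinite passive cable:
    V(x) = V0 * exp(-x / lambda), where lambda is the membrane space constant."""
    return v0_mv * math.exp(-x_um / lambda_um)

# Illustrative: a 10 mV input 150 um from the soma, space constant 300 um
v_soma = passive_attenuation(10.0, 150.0, 300.0)  # ~6.1 mV survives passively
```

Active subthreshold potassium conductances, the paper argues, let the real dendrite beat this passive prediction when inputs arrive in temporal coordination along its length.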