1.
A single-purpose analogue-computing device is described for the online assessment of the contractile state of the human myocardium from the left ventricular pressure (Plv) data available during routine cardiac catheterisation. Due attention has been paid to the design of the computer circuits so that they do not process pressure phenomena outside the isovolumic contraction period. Either a \(\left( \frac{1}{P_{lv}}\frac{dP_{lv}}{dt} \right)_{max}\) or a plain \(\left( \frac{dP_{lv}}{dt} \right)_{max}\) index is presented on a digital voltmeter display, obviating the need for any graphical extrapolation or additional computation.
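The two indices the device outputs can be sketched digitally. The following is a minimal illustration, not the device's circuitry: the function name and arguments are hypothetical, finite differences stand in for the analogue differentiator, and the gating to the isovolumic period is modeled as a caller-supplied sample window.

```python
def contractility_indices(pressure, dt, window):
    """Compute (dP/dt)_max and ((1/P) dP/dt)_max over the isovolumic window.

    pressure: sampled LV pressure (mmHg); dt: sample interval (s);
    window: (lo, hi) sample indices bounding isovolumic contraction.
    """
    lo, hi = window
    lo = max(1, lo)                   # central differences need a neighbour
    hi = min(len(pressure) - 1, hi)   # on each side of every sample
    dpdt = [(pressure[i + 1] - pressure[i - 1]) / (2 * dt)
            for i in range(lo, hi)]
    seg = pressure[lo:hi]
    return max(dpdt), max(d / p for d, p in zip(dpdt, seg) if p > 0)
```

The explicit `window` argument plays the role of the circuit design the abstract emphasizes: samples outside the isovolumic contraction period never enter either maximum.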
2.
The fundamental unit of quantum computing is the qubit, an isolated, controllable two-level system. However, in many proposed quantum computer architectures, especially photonic systems, qubits can be lost or can leak out of the desired two-level system, posing a significant obstacle to practical quantum computation. Here, we experimentally demonstrate, both in the quantum circuit model and in the one-way quantum computer model, the smallest nontrivial quantum codes that tackle this problem. In the experiment, we encode single-qubit input states into highly entangled multiparticle code words and test their ability to protect the encoded quantum information from a detected one-qubit loss error. Our results prove, in principle, the feasibility of overcoming qubit loss errors with quantum codes.
3.
4.
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.

Information technology has inundated and changed our world, as it is transforming the ways we live, work, play, learn, interact, and understand science and the world around us. One driving force behind this deluge is quite obvious: Computer hardware has become cheaper, faster, and more innovative over the past half century, riding as it does on the exponent of Moore’s law (1).
Progress in efficient algorithms—methods for solving computational problems in ways that take full advantage of fast hardware—is arguably of even greater importance.

Algorithms have been known since antiquity. In the third century BC, Euclid wrote about his algorithm for finding the greatest common divisor of two integers. The French scholar G. Lamé noted in 1845 (2) that Euclid’s algorithm is efficient, because it terminates after a number of arithmetic operations that grows proportionately to the length of the input—what we call today the number of bits of the two numbers. [In fact, one of the very few works on the subject of algorithms to have been published in PNAS is a 1976 article by Andrew Yao and Donald Knuth, revisiting and refining that analysis (3).] In the ninth century CE, the Arab mathematician Al Khwarizmi codified certain elementary algorithms for adding, dividing, etc., decimal numbers—the precise algorithms we learn today in elementary school. In fact, these simple and powerful algorithms were a major incentive for the eventual adoption of the decimal number system in Europe (ca. 1500 CE), an innovation that helped precipitate a social and scientific revolution comparable in impact to the one we are living through now.

The study of efficient algorithms—algorithms that perform the required tasks within favorable time limits—started in the 1950s, soon after the first computers, and is now a very well-developed mathematical field within computer science. By the 1960s, researchers had begun to measure algorithms by the criterion of polynomial time, that is, to consider an algorithm efficient, or satisfactory, if the total number of operations it performs is always bounded from above by a polynomial function (as opposed to an exponential function) of the size of the input.
For example, sorting n numbers can be done with about n log n comparisons, whereas discovering the best alignment of two DNA sequences with n nucleotides can take in the worst case time proportional to n² (but can be performed in linear time for sequences that do align well); both are considered “satisfactory” according to this criterion.
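Two of the algorithms discussed are short enough to sketch directly. The Python below is illustrative only: the function names are hypothetical, and the MWU form assumes the standard weak-selection update x_i ∝ x_i(1 + ε·f_i) rather than any formulation specific to this article.

```python
def euclid_gcd(a, b):
    """Euclid's algorithm; the number of division steps grows at most
    linearly in the bit length of the inputs (Lamé's 1845 analysis)."""
    steps = 0
    while b:
        a, b = b, a % b   # one division step
        steps += 1
    return a, steps

def mwu_step(freqs, fitness, eps=0.01):
    """One multiplicative-weights update of allele frequencies under weak
    selection: x_i <- x_i * (1 + eps * f_i), then renormalize to sum to 1."""
    w = [x * (1 + eps * f) for x, f in zip(freqs, fitness)]
    z = sum(w)
    return [wi / z for wi in w]
```

The worst case for Euclid's algorithm is consecutive Fibonacci numbers, which is the source of the linear-in-bit-length bound; the MWU update is the rule that, per the abstract, trades off cumulative fitness against the entropy of the allele distribution.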
5.
Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a “quantum channel,” quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10⁸ rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and for distributed quantum computing.
6.
Recent years have witnessed rapidly increasing interest in developing quantum theoretical models of human cognition. Quantum mechanisms have been taken seriously as descriptions of how the mind reasons and decides. Papers in this special issue report the newest results in the field. Here we discuss why the two levels of commitment, treating the human brain as a quantum computer and merely adopting abstract quantum probability principles to model human cognition, should be integrated. We speculate that quantum cognition models gain greater modeling power due to a richer representation scheme.
7.
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning to reach subjective goals. A fundamental challenge in neuroscience is: how can we use behavior and neural activity to understand this internal model and its dynamic latent variables? Here we interpret behavioral data by assuming an agent behaves rationally—that is, it takes actions that optimize its subjective reward according to its understanding of the task and its relevant causal variables. We apply a method, inverse rational control (IRC), to learn an agent’s internal model and reward function by maximizing the likelihood of its measured sensory observations and actions. This extracts rational and interpretable thoughts of the agent from its behavior. We also provide a framework for interpreting encoding, recoding, and decoding of neural data in light of this rational model for behavior. When applied to behavioral and neural data from simulated agents performing suboptimally on a naturalistic foraging task, this method successfully recovers their internal model and reward function, as well as the Markovian computational dynamics within the neural manifold that represent the task. This work lays a foundation for discovering how the brain represents and computes with dynamic latent variables.
8.
We present BioTCM Cloud, a cloud platform for discovering associations between traditional Chinese medicine (TCM) and Western medicine. The platform is built on a large body of open Linked Data and is motivated by the need for cross-domain knowledge integration. To cope with the massive volume of linked data, we propose a distributed semantic reasoning framework based on MapReduce to address rule-based knowledge inference over domain rules. Taking Chinese herbal medicine as a case study, the distributed reasoner establishes associations between TCM and Western medicine, facilitating communication and data sharing between the two fields.
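The abstract does not spell out the reasoning rules, but the shape of a MapReduce reasoning pass can be sketched. The following is an assumed illustration, not the paper's framework: one round of the RDFS subclass-transitivity rule (A subClassOf B and B subClassOf C imply A subClassOf C), with the shuffle phase simulated by a dictionary.

```python
from collections import defaultdict

def map_triple(triple):
    """Emit each subClassOf triple under both its subject and its object,
    so that chained triples meet at their shared middle term."""
    s, _, o = triple
    return [(o, ('left', s)), (s, ('right', o))]

def reduce_key(values):
    """Join 'left' subclasses with 'right' superclasses at the shared key."""
    lefts = [v for tag, v in values if tag == 'left']
    rights = [v for tag, v in values if tag == 'right']
    return [(a, 'subClassOf', c) for a in lefts for c in rights]

def mapreduce_infer(triples):
    groups = defaultdict(list)   # stands in for the distributed shuffle
    for t in triples:
        for k, v in map_triple(t):
            groups[k].append(v)
    inferred = []
    for vs in groups.values():
        inferred.extend(reduce_key(vs))
    return inferred
```

In the TCM case study, a chain such as a herb class linked to a Western treatment class would surface as one inferred triple per reducer join; a real deployment would iterate such rounds to a fixed point across the cluster.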
9.
Abstract: Using 200 SEPs from normal subjects, sex differences in the group-mean SEP were characterized after eliminating differences attributable to a peripheral factor related to nutritional condition, represented by Rohrer's index. Differences in the baseline amplitude of the group-mean SEP between males and females in the latency ranges of roughly 23–111 msec, around 330 msec, and beyond 389 msec were verified by subtracting the differences between groups with high versus low Rohrer's index; these differences are not attributable to nutritional condition and may be of central origin. With amplitude scaling applied, the differences in the 23–104 msec latency range were verified similarly, providing a more significant indication of the sex difference per se.
10.
Inverting large matrices in a serial CPU computation model is very time-consuming. To address this, we take a new programming approach based on CUDA (Compute Unified Device Architecture), the framework NVIDIA provides for its GPUs (graphics processing units), and use multithreaded GPU parallelism to carry out the data-intensive parts of the inversion in parallel, obtaining a substantial speedup. Based on the measured results, we analyze the GPU's single-precision and double-precision floating-point capabilities and their respective strengths and weaknesses. Finally, by analyzing the impact of data-transfer time on GPU performance, we summarize the algorithmic characteristics that suit the GPU.
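The paper's CUDA kernels are not reproduced here. As a point of reference, the serial CPU baseline that such a GPU port parallelizes can be sketched as Gauss-Jordan elimination with partial pivoting (an assumed choice of inversion algorithm; the independent row updates in the inner loop are exactly what a GPU would distribute across threads).

```python
def invert(mat):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(mat)
    # Augment [A | I]; reducing A to I turns the right half into A^-1.
    aug = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]   # swap in the pivot row
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]      # normalize the pivot row
        for r in range(n):            # eliminate column `col` elsewhere; these
            if r != col:              # row updates are mutually independent,
                f = aug[r][col]       # the natural unit of GPU parallelism
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]
```

On the CPU this runs in O(n³) time serially, which is the cost the paper's CUDA version attacks; the data-transfer overhead the abstract mentions corresponds to copying `aug` to and from device memory around the compute phase.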