Full-text access type
Paid full text | 171 articles |
Free | 16 articles |
Free (domestic) | 1 article |
Subject classification
Pediatrics | 1 article |
Obstetrics and gynecology | 1 article |
Basic medicine | 39 articles |
Clinical medicine | 3 articles |
Internal medicine | 35 articles |
Dermatology | 1 article |
Neurology | 35 articles |
Special medicine | 7 articles |
Surgery | 11 articles |
General | 16 articles |
Preventive medicine | 22 articles |
Ophthalmology | 4 articles |
Pharmacy | 10 articles |
Traditional Chinese medicine | 2 articles |
Oncology | 1 article |
Publication year
2024 | 1 article |
2022 | 2 articles |
2021 | 11 articles |
2020 | 3 articles |
2019 | 3 articles |
2018 | 7 articles |
2017 | 3 articles |
2016 | 4 articles |
2015 | 14 articles |
2014 | 16 articles |
2013 | 10 articles |
2012 | 12 articles |
2011 | 6 articles |
2010 | 10 articles |
2009 | 9 articles |
2008 | 8 articles |
2007 | 8 articles |
2006 | 3 articles |
2005 | 1 article |
2004 | 4 articles |
2003 | 5 articles |
2002 | 2 articles |
2001 | 4 articles |
2000 | 2 articles |
1996 | 3 articles |
1995 | 1 article |
1994 | 2 articles |
1993 | 5 articles |
1992 | 1 article |
1991 | 2 articles |
1989 | 3 articles |
1988 | 1 article |
1987 | 1 article |
1986 | 2 articles |
1985 | 2 articles |
1984 | 3 articles |
1983 | 1 article |
1982 | 1 article |
1981 | 2 articles |
1980 | 3 articles |
1978 | 1 article |
1976 | 4 articles |
1974 | 1 article |
1973 | 1 article |
Sort order: 188 results found (search time: 15 ms)
1.
A single-purpose analogue-computing device is described for the online assessment of the contractile state of the human myocardium from the left ventricular pressure (Plv) data available during routine cardiac catheterisation. Due attention has been paid to the design of the computer circuits so that they do not process pressure phenomena outside the isovolumic contraction period. Either a \(\left( \frac{1}{P_{lv}} \frac{dP_{lv}}{dt} \right)_{max}\) or a plain \(\left( \frac{dP_{lv}}{dt} \right)_{max}\) index is presented on a digital voltmeter display, thus obviating the need for any graphical extrapolation or additional computation.
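The two indices computed by the analogue device can also be approximated offline from sampled pressure data. The sketch below is a minimal digital analogue of that computation, not the device's actual circuitry; the sampling rate, window bounds, and function name are illustrative assumptions. It restricts the search to an assumed isovolumic window, mirroring the circuit design described above.

```python
import math

def contractility_indices(p_lv, fs, iso_start, iso_end):
    """Estimate (dP/dt)_max and ((1/P) dP/dt)_max from sampled LV pressure.

    p_lv      : sequence of left-ventricular pressure samples (mmHg)
    fs        : sampling rate (Hz)
    iso_start : start of the assumed isovolumic contraction window (s)
    iso_end   : end of that window (s)
    """
    dt = 1.0 / fs
    best_dpdt = float("-inf")
    best_norm = float("-inf")
    for i in range(1, len(p_lv) - 1):
        t = i * dt
        if not (iso_start <= t <= iso_end):
            continue  # ignore pressure phenomena outside the isovolumic window
        dpdt = (p_lv[i + 1] - p_lv[i - 1]) / (2 * dt)  # central difference
        best_dpdt = max(best_dpdt, dpdt)
        best_norm = max(best_norm, dpdt / p_lv[i])  # pressure-normalised, 1/s
    return best_dpdt, best_norm
```

For an exponentially rising pressure p(t) = p0·e^{kt}, the normalised index is the constant k, which gives a quick sanity check on the discretisation.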
2.
Lu CY, Gao WB, Zhang J, Zhou XQ, Yang T, Pan JW. Proceedings of the National Academy of Sciences of the United States of America. 2008;105(32):11050–11054
The fundamental unit for quantum computing is the qubit, an isolated, controllable two-level system. However, for many proposed quantum computer architectures, especially photonic systems, the qubits can be lost or can leak out of the desired two-level systems, posing a significant obstacle for practical quantum computation. Here, we experimentally demonstrate, both in the quantum circuit model and in the one-way quantum computer model, the smallest nontrivial quantum codes to tackle this problem. In the experiment, we encode single-qubit input states into highly entangled multiparticle code words, and we test their ability to protect encoded quantum information from a detected one-qubit loss error. Our results demonstrate, in principle, the feasibility of overcoming the qubit loss error by quantum codes.
3.
4.
Christos Papadimitriou. Proceedings of the National Academy of Sciences of the United States of America. 2014;111(45):15881–15887
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity, the antithesis of algorithms, is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal, and therefore less compelling, than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. Information technology has inundated and changed our world, as it is transforming the ways we live, work, play, learn, interact, and understand science and the world around us. One driving force behind this deluge is quite obvious: Computer hardware has become cheaper, faster, and more innovative over the past half century, riding as it does on the exponent of Moore's law (1).
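The multiplicative weights update rule mentioned above admits a very short sketch in the allele-frequency setting: each allele's frequency is scaled by a factor that grows with its marginal fitness, then the distribution is renormalised. The fitness values and learning rate below are illustrative assumptions, not taken from the paper; weak selection corresponds to a small learning rate.

```python
def mwu_step(freqs, fitness, eta=0.05):
    """One multiplicative-weights update on allele frequencies.

    freqs   : current allele frequencies (sum to 1)
    fitness : marginal fitness of each allele (illustrative values)
    eta     : learning rate; weak selection corresponds to small eta
    """
    weights = [f * (1 + eta * w) for f, w in zip(freqs, fitness)]
    total = sum(weights)
    return [w / total for w in weights]

# Iterating the rule: higher-fitness alleles gain frequency, while the
# entropy of the distribution decays only gradually under weak selection.
freqs = [0.25, 0.25, 0.25, 0.25]
for _ in range(50):
    freqs = mwu_step(freqs, fitness=[1.0, 0.9, 0.8, 0.7])
```

Under this update the allele ordering by fitness is preserved in the long-run frequencies, which is the qualitative behaviour the abstract describes.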
Progress in efficient algorithms, methods for solving computational problems in ways that take full advantage of fast hardware, is arguably of even greater importance. Algorithms have been known since antiquity. In the third century BC Euclid wrote about his algorithm for finding the greatest common divisor of two integers. The French scholar G. Lamé noted in 1845 (2) that Euclid's algorithm is efficient, because it terminates after a number of arithmetic operations that grows proportionately to the length of the input, what we call today the number of bits of the two numbers. [In fact, one of the very few works on the subject of algorithms that have been published in PNAS is a 1976 article by Andrew Yao and Donald Knuth, revisiting and refining that analysis (3).] In the ninth century CE, the Arab mathematician Al Khwarizmi codified certain elementary algorithms for adding, dividing, etc., decimal numbers, the precise algorithms we learn today at elementary school. In fact, these simple and powerful algorithms were a major incentive for the eventual adoption of the decimal number system in Europe (ca. 1500 CE), an innovation that helped precipitate a social and scientific revolution comparable in impact to the one we are living in now. The study of efficient algorithms, algorithms that perform the required tasks within favorable time limits, started in the 1950s, soon after the first computers, and is now a very well-developed mathematical field within computer science. By the 1960s, researchers had begun to measure algorithms by the criterion of polynomial time, that is, to consider an algorithm efficient, or satisfactory, if the total number of operations it performs is always bounded from above by a polynomial function (as opposed to an exponential function) of the size of the input.
For example, sorting n numbers can be done with about n log n comparisons, whereas discovering the best alignment of two DNA sequences with n nucleotides can take in the worst case time proportional to n² (but can be performed in linear time for sequences that do align well); these are both considered "satisfactory" according to this criterion.
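Lamé's observation about Euclid's algorithm can be made concrete by counting the arithmetic operations directly. The sketch below is a standard implementation with an added step counter (the counter and function name are my own, not from the article); the count grows roughly linearly in the bit length of the inputs, i.e. polynomially in the input size, which is exactly the efficiency criterion described above.

```python
def euclid_gcd(a, b):
    """Euclid's algorithm; returns (gcd, number of modulo operations)."""
    steps = 0
    while b:
        a, b = b, a % b  # replace the pair by (divisor, remainder)
        steps += 1
    return a, steps
```

For instance, `euclid_gcd(1071, 462)` finds the greatest common divisor 21 after only three modulo operations, far fewer than the magnitudes of the inputs would suggest.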
5.
Xiao-Hui Bao Xiao-Fan Xu Che-Ming Li Zhen-Sheng Yuan Chao-Yang Lu Jian-Wei Pan 《Proceedings of the National Academy of Sciences of the United States of America》2012,109(50):20347-20351
Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10⁸ rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.
6.
Recent years have witnessed rapidly increasing interest in developing quantum theoretical models of human cognition. Quantum mechanisms have been taken seriously as descriptions of how the mind reasons and decides. The papers in this special issue report the newest results in the field. Here we discuss why the two levels of commitment, treating the human brain as a quantum computer and merely adopting abstract quantum probability principles to model human cognition, should be integrated. We speculate that quantum cognition models gain greater modeling power due to a richer representation scheme.
7.
Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, Paul Schrater, Xaq Pitkow. Proceedings of the National Academy of Sciences of the United States of America. 2020;117(47):29311
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning to reach subjective goals. A fundamental challenge in neuroscience is: how can we use behavior and neural activity to understand this internal model and its dynamic latent variables? Here we interpret behavioral data by assuming an agent behaves rationally, that is, it takes actions that optimize its subjective reward according to its understanding of the task and its relevant causal variables. We apply a method, inverse rational control (IRC), to learn an agent's internal model and reward function by maximizing the likelihood of its measured sensory observations and actions. This thereby extracts rational and interpretable thoughts of the agent from its behavior. We also provide a framework for interpreting encoding, recoding, and decoding of neural data in light of this rational model for behavior. When applied to behavioral and neural data from simulated agents performing suboptimally on a naturalistic foraging task, this method successfully recovers their internal model and reward function, as well as the Markovian computational dynamics within the neural manifold that represent the task. This work lays a foundation for discovering how the brain represents and computes with dynamic latent variables.
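The likelihood-maximization idea at the core of IRC can be illustrated with a toy version, greatly simplified from the paper's method: every task detail below (the two-action matching task, the softmax-rational policy, the candidate parameter grid) is an invented assumption. We simulate an agent with a known reward parameter and then recover it by scoring candidates against the observed actions.

```python
import math
import random

def softmax_policy(theta, state, actions):
    """P(action | state) for an agent whose subjective reward is
    theta when the action matches the state, 0 otherwise (toy task)."""
    scores = [theta * (a == state) for a in actions]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

def log_likelihood(theta, trajectory, actions):
    """Log-probability of observed (state, action) pairs under theta."""
    ll = 0.0
    for state, action in trajectory:
        probs = softmax_policy(theta, state, actions)
        ll += math.log(probs[actions.index(action)])
    return ll

def infer_reward(trajectory, actions, candidates):
    """Toy inverse rational control: pick the reward parameter that
    maximizes the likelihood of the agent's observed actions."""
    return max(candidates, key=lambda th: log_likelihood(th, trajectory, actions))

# Simulate an agent with true theta = 2.0, then recover it from behavior.
random.seed(0)
actions = [0, 1]
true_theta = 2.0
trajectory = []
for _ in range(500):
    state = random.choice(actions)
    probs = softmax_policy(true_theta, state, actions)
    action = 0 if random.random() < probs[0] else 1
    trajectory.append((state, action))

estimate = infer_reward(trajectory, actions, candidates=[0.5, 1.0, 2.0, 4.0])
```

The paper's actual method additionally infers latent task dynamics and the agent's internal beliefs; this sketch only conveys the "rationalize the behavior by maximizing its likelihood" step.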
8.
9.
Differences in Human Group Mean SEP between Sexes: with Reference to Rohrer's Index* (cited by 1: 1 self-citation, 0 others)
Takumi Ikuta MD, Noriko Furuta, Atsushi Unzai MD, Kenji Kondo MD. Psychiatry and Clinical Neurosciences. 1981;35(2):147–158
Abstract: Using 200 SEPs from normal subjects, the differences in the Group Mean SEP between the sexes were defined after eliminating differences attributable to a peripheral factor related to nutritional condition, as represented by Rohrer's index. Differences in the baseline amplitude of the Group Mean SEP between males and females, in the sections roughly 23–111 msec, around 330 msec, and beyond 389 msec in latency, were verified by subtracting the differences between groups with high vs. low values of Rohrer's index; not being attributable to differing nutritional condition, these differences may be of central origin. Applying Amplitude Scaling, the differences in the sections 23–104 msec in latency were similarly verified, providing a more significant indication of the sex difference per se.
10.