  Fee-based full text   8898 articles
  Free   521 articles
  Free (domestic)   45 articles
Otorhinolaryngology   92 articles
Pediatrics   314 articles
Obstetrics and Gynecology   279 articles
Basic Medicine   839 articles
Stomatology   286 articles
Clinical Medicine   837 articles
Internal Medicine   2097 articles
Dermatology   248 articles
Neurology   705 articles
Special Medicine   321 articles
Surgery   1541 articles
General   140 articles
General Theory   7 articles
Preventive Medicine   527 articles
Ophthalmology   212 articles
Pharmaceutical Science   506 articles
Chinese Medicine   37 articles
Oncology   476 articles
  2024: 8 articles
  2023: 102 articles
  2022: 204 articles
  2021: 434 articles
  2020: 285 articles
  2019: 374 articles
  2018: 361 articles
  2017: 267 articles
  2016: 273 articles
  2015: 288 articles
  2014: 412 articles
  2013: 438 articles
  2012: 759 articles
  2011: 754 articles
  2010: 425 articles
  2009: 367 articles
  2008: 530 articles
  2007: 549 articles
  2006: 453 articles
  2005: 419 articles
  2004: 352 articles
  2003: 333 articles
  2002: 276 articles
  2001: 92 articles
  2000: 82 articles
  1999: 77 articles
  1998: 45 articles
  1997: 47 articles
  1996: 39 articles
  1995: 26 articles
  1994: 18 articles
  1993: 31 articles
  1992: 30 articles
  1991: 34 articles
  1990: 29 articles
  1989: 19 articles
  1988: 25 articles
  1987: 21 articles
  1986: 17 articles
  1985: 27 articles
  1984: 27 articles
  1983: 15 articles
  1982: 10 articles
  1981: 13 articles
  1980: 8 articles
  1979: 11 articles
  1978: 9 articles
  1977: 8 articles
  1974: 7 articles
  1973: 6 articles
Sort order: a total of 9464 query results, search time 15 ms
121.
Background: The Score Committee of the European Foot and Ankle Society (EFAS) developed, validated, and published the EFAS Score in nine European languages (English, German, French, Italian, Polish, Dutch, Swedish, Finnish, Turkish). Among the additional languages under validation, the Persian version has completed data acquisition and underwent further validation. Methods: The Persian version of the EFAS Score was developed and validated in three stages: 1) item (question) identification (completed during the initial validation study), 2) item reduction and scale exploration (completed during the initial validation study), and 3) confirmatory analyses and responsiveness of the Persian version (completed during the initial validation study in the nine other languages). Data were collected pre-operatively and post-operatively at a minimum follow-up of 3 months and a mean follow-up of 6 months. Item reduction, scale exploration, confirmatory analyses, and responsiveness were assessed using classical test theory and item response theory. Results: Internal consistency was confirmed in the Persian version (Cronbach's alpha 0.82). The standard error of measurement (SEM) was 0.38, similar to the other language versions. Between baseline and follow-up, 97% of patients showed an improvement in their EFAS Score, with excellent responsiveness (effect size 1.93). Conclusions: The Persian version of the EFAS Score was successfully validated in patients with a wide variety of foot and ankle pathologies. All score versions are freely available at www.efas.co.
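The two headline statistics in this abstract, Cronbach's alpha for internal consistency and a baseline-to-follow-up effect size for responsiveness, can be illustrated with a short sketch. This is not the authors' analysis code; the item matrix, the simulated numbers, and the particular effect-size definition (mean score change divided by the standard deviation of baseline scores) are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_patients, n_items) matrix of item responses."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

def effect_size(baseline: np.ndarray, follow_up: np.ndarray) -> float:
    """Mean score change divided by the SD of the baseline scores."""
    return (follow_up.mean() - baseline.mean()) / baseline.std(ddof=1)

# Purely simulated numbers, not study data:
rng = np.random.default_rng(0)
shared = rng.normal(0, 1, size=(200, 1))             # common underlying trait
items = shared + rng.normal(0, 0.8, size=(200, 6))   # six correlated items
baseline = rng.normal(50, 10, size=200)
follow_up = baseline + rng.normal(20, 8, size=200)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Effect size: {effect_size(baseline, follow_up):.2f}")
```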
122.
123.
Radioactivity in the soil of a tea garden in the Fatickchari area in Chittagong, Bangladesh, was measured using a high-resolution HPGe detector. The soil samples were collected from depths of up to 20 cm beneath the soil surface. The activity concentrations of naturally occurring ²³⁸U and ²³²Th were observed to be in the range of 27 ± 7 to 53 ± 8 Bq kg⁻¹ and 36 ± 11 to 72 ± 11 Bq kg⁻¹, respectively. The activity concentration of ⁴⁰K ranged from 201 ± 78 to 672 ± 81 Bq kg⁻¹, and the highest activity of fallout ¹³⁷Cs observed was 10 ± 1 Bq kg⁻¹. The average activity concentration observed for ²³⁸U was 39 ± 8 Bq kg⁻¹, for ²³²Th was 57 ± 11 Bq kg⁻¹, for ⁴⁰K was 384 ± 79 Bq kg⁻¹, and for ¹³⁷Cs was 5 ± 0.5 Bq kg⁻¹. The radiological hazard parameters (representative level index, radium equivalent activity, outdoor and indoor dose rates, outdoor and indoor annual effective dose equivalents, and radiation hazard index) were calculated from the radioactivity in the soil.
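The abstract does not give the formulas behind these hazard parameters, but two of them can be sketched with the widely used UNSCEAR-style expressions below. Treat the coefficients and the example values as illustrative assumptions rather than the paper's exact method.

```python
def radium_equivalent(c_ra: float, c_th: float, c_k: float) -> float:
    """Radium equivalent activity, Bq/kg, from 226Ra (238U series), 232Th and 40K."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def outdoor_dose_rate(c_ra: float, c_th: float, c_k: float) -> float:
    """Outdoor absorbed gamma dose rate in air, nGy/h, at 1 m above ground."""
    return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

# Plugging in the average concentrations quoted above (Bq/kg):
print(radium_equivalent(39, 57, 384))   # roughly 150 Bq/kg
print(outdoor_dose_rate(39, 57, 384))   # roughly 68 nGy/h
```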
124.
Cytokines are mediators for polarization of the immune response to vaccines. Studies show that co-immunization of DNA vaccines with granulocyte-macrophage colony-stimulating factor (GM-CSF) can increase immune responses. Here, experimental mice were immunized with an HIV-1 tat/pol/gag/env DNA vaccine plus GM-CSF and boosted with a recombinant vaccine. Lymphocyte proliferation (BrdU assay), CTL activity, IL-4, IFN-γ and IL-17 cytokines, total antibody, and IgG1 and IgG2a isotypes were assessed by ELISA. The results show that GM-CSF as an adjuvant in DNA immunization significantly increased lymphocyte proliferation and IFN-γ, whereas the CTL response increased only slightly. GM-CSF as an adjuvant also decreased IL-4 compared with the vaccine-only group. IL-17 was significantly increased in the group immunized with the DNA vaccine/GM-CSF mixture compared with the DNA vaccine group. Total antibody results show that GM-CSF increased the antibody response, with both IgG1 and IgG2a elevated. Overall, the results confirm the beneficial effect of GM-CSF as an adjuvant for increasing vaccine immunogenicity. The hallmark result of this study was the increase in IL-17 in the group immunized with the DNA vaccine/GM-CSF mixture. This study provides the first evidence of the potency of GM-CSF in inducing IL-17 in response to a vaccine, which is important for the control of infections such as HIV-1.
125.
126.
127.
128.
We report a case of isolated cleft mitral valve with two clefts in the posterior and one in the anterior leaflet. Our case adds to the few reports of posterior and multiple mitral valve clefts and to our knowledge is the first using real-time transoesophageal three-dimensional echocardiography (3DE) for assessment of isolated cleft mitral valve. (Echocardiography 2010;27:E50-E52)
129.
130.
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

The human brain is capable of remarkable acts of perception while consuming very little energy. The dream of brain-inspired computing is to build machines that do the same, requiring high-accuracy algorithms and efficient hardware to run those algorithms. On the algorithm front, building on classic work on backpropagation (1), the neocognitron (2), and convolutional networks (3), deep learning has made great strides in achieving human-level performance on a wide range of recognition tasks (4). On the hardware front, building on foundational work on silicon neural systems (5), neuromorphic computing, using novel architectural primitives, has recently demonstrated hardware capable of running 1 million neurons and 256 million synapses for extremely low power (just 70 mW at real-time operation) (6). Bringing these approaches together holds the promise of a new generation of embedded, real-time systems, but first requires reconciling key differences in the structure and operation between contemporary algorithms and hardware. Here, we introduce and demonstrate an approach we call Eedn, energy-efficient deep neuromorphic networks, which creates convolutional networks whose connections, neurons, and weights have been adapted to run inference tasks on neuromorphic hardware.

For structure, typical convolutional networks place no constraints on filter sizes, whereas neuromorphic systems can take advantage of blockwise connectivity that limits filter sizes, thereby saving energy because weights can now be stored in local on-chip memory within dedicated neural cores. Here, we present a convolutional network structure that naturally maps to the efficient connection primitives used in contemporary neuromorphic systems. We enforce this connectivity constraint by partitioning filters into multiple groups and yet maintain network integration by interspersing layers whose filter support region is able to cover incoming features from many groups by using a small topographic size (7).

For operation, contemporary convolutional networks typically use high precision (≥ 32-bit) neurons and synapses to provide continuous derivatives and support small incremental changes to network state, both formally required for backpropagation-based gradient learning.
In comparison, neuromorphic designs can use one-bit spikes to provide event-based computation and communication (consuming energy only when necessary) and can use low-precision synapses to colocate memory with computation (keeping data movement local and avoiding off-chip memory bottlenecks). Here, we demonstrate that by introducing two constraints into the learning rule (binary-valued neurons with approximate derivatives and trinary-valued {−1, 0, 1} synapses), it is possible to adapt backpropagation to create networks directly implementable using energy-efficient neuromorphic dynamics. This approach draws inspiration from the spiking neurons and low-precision synapses of the brain (8) and builds on work showing that deep learning can create networks with constrained connectivity (9), low-precision synapses (10, 11), low-precision neurons (12–14), or both low-precision synapses and neurons (15, 16).

For input data, we use a first layer to transform multivalued, multichannel input into binary channels using convolution filters that are learned via backpropagation (12, 16) and whose output can be sent on chip in the form of spikes. These binary channels, intuitively akin to independent components (17) learned with supervision, provide a parallel distributed representation to carry out high-fidelity computation without the need for high-precision representation.

Critically, we demonstrate that bringing the above innovations together allows us to create networks that approach state-of-the-art accuracy performing inference on eight standard datasets, running on a neuromorphic chip at between 1,200 and 2,600 frames/s (FPS), using between 25 and 275 mW. We further explore how our approach scales by simulating multichip configurations. Ease-of-use is achieved using training tools built from existing, optimized deep learning frameworks (18), with learned parameters mapped to hardware using a high-level deployment language (19). Although we choose the IBM TrueNorth chip (6) for our example deployment platform, the essence of our constructions can apply to other emerging neuromorphic approaches (20–23) and may lead to new architectures that incorporate deep learning and efficient hardware primitives from the ground up.
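As a rough illustration of the training idea described above (binary-valued activations trained through an approximate derivative, with weights constrained to {−1, 0, 1} in the forward pass while full-precision copies accumulate gradient updates), here is a minimal sketch. It is not the paper's Eedn/TrueNorth toolchain; the layer sizes, the surrogate-gradient window, the 0.5 ternarization threshold, and the use of PyTorch are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryAct(torch.autograd.Function):
    """Binary (spike / no-spike) activation with an approximate derivative."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through style surrogate: pass gradient only near the threshold.
        return grad_out * (x.abs() < 1.0).float()

def ternarize(w: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Quantize weights to {-1, 0, 1} in the forward pass; gradients flow
    to the underlying full-precision weights (straight-through estimator)."""
    w_t = torch.sign(w) * (w.abs() > threshold).float()
    return w + (w_t - w).detach()

class TernaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, ternarize(self.weight), self.bias)

# Tiny example: a layer with ternary weights feeding a binary activation.
layer = TernaryLinear(16, 32)
x = torch.randn(8, 16)                 # e.g. transduced input channels
spikes = BinaryAct.apply(layer(x))     # binary outputs, as would be sent on chip
loss = spikes.sum()
loss.backward()                        # gradients reach the real-valued weights
print(spikes.unique(), layer.weight.grad.shape)
```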