  Fee-based full text   3050 articles
  Free   341 articles
  Domestic free   123 articles
Otorhinolaryngology   8 articles
Pediatrics   3 articles
Obstetrics and Gynecology   4 articles
Basic Medicine   289 articles
Stomatology   19 articles
Clinical Medicine   210 articles
Internal Medicine   400 articles
Dermatology   7 articles
Neurology   66 articles
Special Medicine   268 articles
Foreign Ethnic Medicine   1 article
Surgery   318 articles
General   518 articles
Preventive Medicine   282 articles
Ophthalmology   14 articles
Pharmacy   692 articles
  2 articles
Chinese Medicine   268 articles
Oncology   145 articles
  2024   13 articles
  2023   77 articles
  2022   249 articles
  2021   255 articles
  2020   188 articles
  2019   130 articles
  2018   158 articles
  2017   121 articles
  2016   140 articles
  2015   140 articles
  2014   239 articles
  2013   276 articles
  2012   205 articles
  2011   201 articles
  2010   139 articles
  2009   129 articles
  2008   115 articles
  2007   132 articles
  2006   109 articles
  2005   91 articles
  2004   55 articles
  2003   39 articles
  2002   29 articles
  2001   27 articles
  2000   28 articles
  1999   21 articles
  1998   21 articles
  1997   21 articles
  1996   17 articles
  1995   21 articles
  1994   19 articles
  1993   21 articles
  1992   11 articles
  1991   8 articles
  1990   12 articles
  1989   8 articles
  1988   12 articles
  1987   8 articles
  1986   4 articles
  1985   8 articles
  1984   1 article
  1983   3 articles
  1982   3 articles
  1981   5 articles
  1980   2 articles
  1979   2 articles
  1976   1 article
Sort order:   A total of 3,514 results found; search took 15 ms
51.
Objective: To compare the content uniformity tests of the Chinese Pharmacopoeia (ChP) and the United States Pharmacopeia (USP) using the Monte Carlo method, and to propose directions for improvement. Methods: With the accuracy of the content uniformity test as the performance index, the Monte Carlo method was used to simulate variations in drug quality and to carry out simulated sampling inspections, allowing the performance of the ChP and USP content uniformity tests to be compared. A central composite experimental design was used to examine how the test parameters of the two pharmacopoeias affect accuracy. Results: Under different means and standard deviations, there was no significant difference in accuracy between the ChP and USP tests. The central composite design showed that the parameters of both tests can be further optimized to improve accuracy. Conclusion: The ChP and USP content uniformity tests have essentially the same accuracy, and a new approach to optimizing the test parameters is proposed.
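The sketch below illustrates this kind of Monte Carlo evaluation, assuming a simplified single-stage USP-style acceptance-value criterion (10 units, k = 2.4, limit L1 = 15); the actual pharmacopoeial procedures include additional stages and rules, so this is an illustration of the simulation idea rather than the authors' exact method.

```python
import numpy as np

def stage1_pass(units, k=2.4, l1=15.0):
    """Simplified first-stage acceptance-value test on 10 units (% of label claim)."""
    x_bar = units.mean()
    s = units.std(ddof=1)
    # Reference value M depends on where the sample mean falls relative to 98.5-101.5%.
    if x_bar < 98.5:
        m = 98.5
    elif x_bar > 101.5:
        m = 101.5
    else:
        m = x_bar
    av = abs(m - x_bar) + k * s  # acceptance value
    return av <= l1

def simulated_pass_rate(mu, sigma, n_batches=20_000, seed=0):
    """Monte Carlo estimate of the probability that a batch passes the test."""
    rng = np.random.default_rng(seed)
    passes = 0
    for _ in range(n_batches):
        units = rng.normal(mu, sigma, size=10)  # simulated content of 10 dosage units
        passes += stage1_pass(units)
    return passes / n_batches

if __name__ == "__main__":
    # Sweep batch quality (mean and standard deviation) and report pass rates.
    for mu in (100.0, 97.0):
        for sigma in (2.0, 6.0):
            rate = simulated_pass_rate(mu, sigma)
            print(f"mean={mu:5.1f}%  sd={sigma:4.1f}%  simulated pass rate={rate:.3f}")
```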
52.
Objective: To survey the varieties and supply of standard preparations made in our hospital's preparation room, and to provide a basis for the development of in-house preparations. Methods: The supplied varieties of standard preparations were summarized and analyzed; dispensing records from January 1 to December 31, 2007 were extracted from the HIS system, prices of related commercially available products were surveyed, and the variety structure and current use of in-house standard preparations were analyzed by category. Results: In 2007, 102 product specifications of standard preparations were compounded and supplied, with a total value of 1.113 million CNY. The top 10 preparations by expenditure accounted for 64.4% of the total. Conclusion: Hospital preparations are indispensable to medical care in our hospital, and the variety structure of in-house preparations needs further optimization.
53.
This research focuses on the single-step development of enteric microparticles of lansoprazole by spray drying and studies the effects of various formulation and process variables on entrapment efficiency and in vitro gastric resistance. Preliminary trials were undertaken to optimize the type of Eudragit and its levels. Further trials included the incorporation of the plasticizer triethyl citrate and combinations of other polymers with Eudragit S 100. Finally, various process parameters were varied to investigate their effects on microparticle properties. The results revealed Eudragit S 100 as the best-performing polymer, giving the highest gastric resistance in comparison with Eudragit L 100-55 and L 100 owing to its higher pH threshold and polymeric backbone. Incorporation of the plasticizer not only influenced entrapment efficiency but also severely diminished gastric resistance. In contrast, polymeric combinations reduced entrapment efficiency for both sodium alginate and glyceryl behenate, but significantly influenced gastric resistance only for sodium alginate and not for glyceryl behenate. The optimized process parameters comprised an inlet temperature of 150°C, atomizing air pressure of 2 kg/cm2, feed solution concentration of 6% w/w, feed solution spray rate of 3 ml/min, and aspirator volume of 90%. SEM analysis revealed smooth, spherical morphologies. DSC and PXRD studies revealed the amorphous nature of the drug. The product was stable for 3 months under accelerated and long-term stability conditions per ICH Q1A(R2) guidelines. Thus, the technique offers a simple means to generate polymeric enteric microparticles that are ready to formulate and can be directly filled into hard gelatin capsules.
54.
Objective: To optimize the fermentation of the marine-derived fungus Penicillium sp. for production of the antitumor natural product brefeldin A, so as to obtain a high yield of the target compound. Methods: A single-factor experimental design was used, focusing on culture mode, fermentation medium composition (carbon source, nitrogen source, salinity, etc.), and culture conditions (temperature, shaker speed). Results: The optimal fermentation conditions were potato 200 g/L, maltose 10 g/L, starch 20 g/L, sea salt 15 g/L, 28 °C, shaker speed 160 r/min, and a 14-day culture. Conclusion: Under these conditions the actual yield of brefeldin A reached 40 mg/L, nearly 10-fold higher than before optimization.
55.
Objective: To optimize the extraction process of cherry blossom polysaccharides by orthogonal test and to study their antioxidant activity, providing an experimental basis for research on the health-promoting functions of cherry blossom polysaccharides. Methods: The extraction process was optimized by orthogonal design; polysaccharide content under different extraction conditions was determined by the anthrone-sulfuric acid method, and antioxidant and free-radical scavenging assays were performed. Results and Conclusion: Analysis of the results showed the optimal process to be: 3 g of dried cherry blossom petals placed in a beaker with 240 mL of distilled water, extracted in a 90 °C water bath for 3 h, then filtered and centrifuged to obtain the supernatant. Polysaccharide extracts at all tested concentrations effectively scavenged superoxide anion radicals (O2-·), and the scavenging capacity increased with concentration; at 70.5 μg·L-1 the scavenging rate of the polysaccharide extract was 58.82%.
56.
Objective: To optimize the extraction process of Er Ke Ling syrup (儿咳灵糖浆). Methods: Using the transfer rates of amygdalin, ephedrine hydrochloride, and baicalin together with the dry extract yield as indices, an L9(3^4) orthogonal test was used to select the optimal water extraction process, which was then verified. Results: In the optimal process, the bitter apricot kernels are added to boiling water and decocted twice with 6 volumes of water, 1 h each time; the apricot kernel residue is then combined with the 13 crude drugs (ephedra, etc.) and decocted twice with 10 volumes of water, 1 h each time. Conclusion: The optimized extraction process is stable and feasible and can …
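For readers unfamiliar with how an L9(3^4) orthogonal test is evaluated, the following sketch performs a range analysis on synthetic data; the factor names, level assignments, and responses are hypothetical and only illustrate the technique, not the actual experiment reported above.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each (coded 0/1/2).
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])
factors = ["water volume", "decoction time", "number of decoctions", "soaking time"]  # hypothetical
response = np.array([62.1, 70.4, 68.9, 73.2, 75.8, 69.5, 71.0, 74.6, 72.3])  # hypothetical index (%)

# Range analysis: compute the mean response at each level of each factor.
# The factor with the largest range has the strongest effect; the best level maximizes the mean.
for j, name in enumerate(factors):
    level_means = [response[L9[:, j] == lvl].mean() for lvl in range(3)]
    spread = max(level_means) - min(level_means)
    best = int(np.argmax(level_means)) + 1
    print(f"{name:22s} level means={np.round(level_means, 2)}  range={spread:.2f}  best level={best}")
```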
57.
Objective: To optimize the electroporation conditions for gene transfection of K562 cells, using green fluorescent protein (GFP) as the readout. Methods: Transfection conditions such as suspension volume, temperature, buffer, and plasmid concentration were varied; plasmid DNA was electroporated into K562 cells under different combinations of conditions, and transfection efficiency was assessed by flow cytometry and fluorescence microscopy. Results: (1) At a voltage of 250 V and a capacitance of 950 μF, a maximum transfection rate of 58.6% was obtained in K562 cells. (2) With a cell suspension volume of 0.4 mL, plasmid concentrations ≥ 20 μg/mL gave relatively high transfection efficiency. (3) Temperature and the serum content of the buffer had little effect on electroporation efficiency. Conclusion: Electroporation is an efficient gene transfection method, and the transfection rate can be improved by optimizing the conditions.
58.
The purpose of the present article is to study the bending strength of glulam prepared from plane tree (Platanus orientalis L.) wood layers bonded with UF resin of different formaldehyde-to-urea molar ratios, containing a modified starch adhesive prepared with different NaOCl concentrations. An artificial neural network (ANN) was also used as a modern tool to predict this response. Multilayer perceptron (MLP) models were used to predict the modulus of rupture (MOR), and statistics including the coefficient of determination (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used to validate the predictions. By combining the ANN with a genetic algorithm using multi-objective and nonlinear constraint functions, the optimum point was determined from the experimental and estimated data, respectively. Characterization by FTIR and XRD was used to describe the effect of the inputs on the output. The results indicated excellent MOR predictions by the feed-forward neural network trained with the Levenberg–Marquardt algorithm. Comparison of the optimum obtained by the genetic algorithm with the multi-objective function on the actual values and the optimum obtained with the nonlinear constraint function on the estimated values showed a minimal difference between the two functions.
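As a rough illustration of the prediction step, the sketch below fits a small feed-forward network to synthetic data and reports R2, RMSE, and MAPE. It uses scikit-learn's MLPRegressor with the L-BFGS solver as a stand-in, since the Levenberg–Marquardt training used in the study is not available in scikit-learn; the two input features, their ranges, and the response are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Hypothetical inputs: formaldehyde/urea molar ratio and NaOCl concentration of the modified starch.
n = 200
X = np.column_stack([
    rng.uniform(1.0, 2.0, n),   # F/U molar ratio (assumed range)
    rng.uniform(0.0, 8.0, n),   # NaOCl concentration, % (assumed range)
])
# Synthetic MOR-like response with noise, standing in for measured bending strength.
y = 40 + 15 * X[:, 0] - 0.8 * (X[:, 1] - 4) ** 2 + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)

# Small MLP (one hidden layer) trained with L-BFGS as a stand-in for Levenberg–Marquardt.
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("R2  :", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("MAPE:", mean_absolute_percentage_error(y_test, pred) * 100, "%")
```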
59.
Connectome spectrum electromagnetic tomography (CSET) combines diffusion MRI-derived structural connectivity data with well-established graph signal processing tools to solve the M/EEG inverse problem. Using EEG signals simulated from fMRI responses and two EEG datasets of visual evoked potentials, we provide evidence that (i) CSET captures realistic neurophysiological patterns with better accuracy than state-of-the-art methods, and (ii) CSET reconstructs brain responses more accurately and with greater robustness to intrinsic noise in the EEG signal. These results demonstrate that CSET offers high spatio-temporal accuracy, enabling neuroscientists to extend their research beyond the current limitations of the low sampling frequency of functional MRI and the poor spatial resolution of M/EEG.
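The toy sketch below illustrates only the graph signal processing ingredient (a graph Fourier basis built from a structural connectome Laplacian), not the CSET algorithm itself; the connectivity matrix, signal, and choice of how many harmonics to keep are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical structural connectivity matrix over a few brain regions (symmetric, non-negative).
n_regions = 6
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Graph Laplacian and its eigendecomposition give the "connectome harmonics":
# eigenvectors ordered from smooth (low graph frequency) to oscillatory (high graph frequency).
D = np.diag(A.sum(axis=1))
L = D - A
eigvals, harmonics = np.linalg.eigh(L)

# Graph Fourier transform of a synthetic regional activity pattern...
signal = rng.normal(size=n_regions)
coeffs = harmonics.T @ signal

# ...and a low-graph-frequency reconstruction, keeping only the smoothest components.
k = 3
smooth_signal = harmonics[:, :k] @ coeffs[:k]
print("graph frequencies:", np.round(eigvals, 3))
print("original signal :", np.round(signal, 3))
print("smoothed signal :", np.round(smooth_signal, 3))
```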
60.
Recent research identifies and corrects bias, such as excess dispersion, in the leading sample eigenvector of a factor-based covariance matrix estimated from a high-dimension low sample size (HL) data set. We show that eigenvector bias can have a substantial impact on variance-minimizing optimization in the HL regime, while bias in estimated eigenvalues may have little effect. We describe a data-driven eigenvector shrinkage estimator in the HL regime called “James–Stein for eigenvectors” (JSE) and its close relationship with the James–Stein (JS) estimator for a collection of averages. We show, both theoretically and with numerical experiments, that, for certain variance-minimizing problems of practical importance, efforts to correct eigenvalues have little value in comparison to the JSE correction of the leading eigenvector. When certain extra information is present, JSE is a consistent estimator of the leading eigenvector.

Averaging is the most important tool for distilling information from data. To name just two of countless examples, batting average is a standard measure of the likelihood that a baseball player will get on base, and an average of squared security returns is commonly used to estimate the variance of a portfolio of stocks.

The average can be the best estimator of a mean in the sense of having the smallest mean squared error. But a strange thing happens when considering a collection of many averages simultaneously. The aggregate sum of mean squared errors is no longer minimized by the collection of averages. Instead, the error can be reduced by shrinking the averages toward a common target, even if, paradoxically, there is no underlying relation among the quantities.

For baseball players, since an individual batting average incorporates both the true mean and estimation error from sampling, the largest observed batting average is prone to be overestimated and the smallest underestimated. That is why the aggregate mean squared error is reduced when the collection of observed averages are all moved toward their center.

This line of thinking has been available at least since Sir Francis Galton introduced “regression towards mediocrity” in 1886. Still, Charles Stein surprised the community of statisticians with a sequence of papers about this phenomenon beginning in the 1950s. Stein showed that it is always possible to lower the aggregate squared error of a collection of three or more averages by explicitly shrinking them toward their collective average. In 1961, Stein improved and simplified the analysis in collaboration with Willard James. The resulting empirical James–Stein shrinkage estimator (JS) launched a new era of statistics.

This article describes “James–Stein for eigenvectors” (JSE), a recently discovered shrinkage estimator for the leading eigenvector of an unknown covariance matrix. A leading eigenvector is a direction in a multidimensional data set that maximizes explained variance. The variance explained by the leading eigenvector is the leading eigenvalue.

Like a collection of averages, a sample eigenvector is a collection of values that may be overly dispersed. This can happen in the high-dimension low sample size (HL) regime when the number of variables is much greater than the number of observations. In this situation, the JSE estimator reduces excess dispersion in the entries of the leading sample eigenvector. The HL regime arises when a relatively small number of observations are used to explain or predict complex high-dimensional phenomena, and it falls outside the realm of classical statistics. Examples of such settings include genome-wide association studies (GWAS), such as (1) and (2), in which characteristics of a relatively small number of individuals might be explained by millions of single nucleotide polymorphisms (SNPs); machine learning in domains with a limited number of high-dimensional observations, such as in (3); and finance, in which the number of assets in a portfolio can greatly exceed the number of useful observations.

We work in the context of factor models and principal component analysis, which are used throughout the physical and social sciences to reduce dimension and identify the most important drivers of complex outcomes. Principal component analysis (PCA) is a statistical technique that uses eigenvectors as factors. The results in this article are set in the context of a one-factor model that generates a covariance matrix with a single spike. This means that the leading eigenvalue is substantially larger than the others. We do not provide a recipe for practitioners working in higher-rank contexts; our goal is to describe these ideas in a setting in which we can report the current state of the theory. However, similar results are reported experimentally for multifactor models by Goldberg et al. (4), and continuing theoretical work indicates that the success of this approach is not limited to the one-factor case.

We begin this article by describing the JS and JSE shrinkage estimators side by side, in order to highlight their close relationship. We then describe three asymptotic regimes, low-dimension high sample size (LH), high-dimension high sample size (HH), and high-dimension low sample size (HL), in order to clarify the relationship between our work and the literature. Subsequently, we describe an optimization-based context in which a high-dimensional covariance matrix estimated with the JSE estimator performs substantially better than eigenvalue correction estimators coming from the HH literature. We describe both theoretical and numerical supporting results for performance metrics relevant to minimum variance optimization.

This article focuses on high-dimensional covariance matrix estimation via shrinkage of eigenvectors, rather than eigenvalues or the entire covariance matrix. It relies on results from the HL regime and emphasizes optimization-based performance metrics. The bulk of the existing high-dimensional covariance estimation literature concerns correction of biased eigenvalues, provides results only in the HH regime, or focuses on metrics that do not take account of the use of covariance matrices in optimization.
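To make the classical part of this idea concrete, here is a small sketch of empirical James–Stein shrinkage of a collection of averages toward their grand mean, on synthetic data with an assumed known sampling variance. It illustrates the JS estimator described above, not the JSE eigenvector estimator introduced in the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic setup: p true means, each estimated by an average with known sampling variance.
p = 50
true_means = rng.normal(0.30, 0.03, size=p)           # e.g. "true" batting abilities
sigma2 = 0.02 ** 2                                    # assumed known variance of each average
observed = true_means + rng.normal(0, np.sqrt(sigma2), size=p)

# Empirical James–Stein: shrink every observed average toward the grand mean.
grand_mean = observed.mean()
spread = np.sum((observed - grand_mean) ** 2)
shrinkage = max(0.0, 1.0 - (p - 3) * sigma2 / spread)  # positive-part JS factor
js_estimates = grand_mean + shrinkage * (observed - grand_mean)

def aggregate_squared_error(estimates):
    return np.sum((estimates - true_means) ** 2)

print("aggregate squared error, raw averages:", round(aggregate_squared_error(observed), 5))
print("aggregate squared error, JS estimates:", round(aggregate_squared_error(js_estimates), 5))
```

On typical draws the JS estimates have a smaller aggregate squared error than the raw averages, which is the phenomenon the article builds on when shrinking the entries of a leading sample eigenvector.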