  Fee-based full text   463 articles
  Free   64 articles
  Free (domestic)   3 articles
Otorhinolaryngology   3 articles
Pediatrics   3 articles
Obstetrics & Gynecology   1 article
Basic medicine   49 articles
Stomatology   6 articles
Clinical medicine   9 articles
Internal medicine   152 articles
Dermatology   5 articles
Neurology   14 articles
Special medicine   20 articles
Surgery   101 articles
General   16 articles
Preventive medicine   93 articles
Ophthalmology   5 articles
Pharmaceutical sciences   45 articles
Traditional Chinese medicine   1 article
Oncology   7 articles
  2024   1 article
  2023   10 articles
  2022   15 articles
  2021   33 articles
  2020   23 articles
  2019   18 articles
  2018   13 articles
  2017   15 articles
  2016   26 articles
  2015   25 articles
  2014   23 articles
  2013   24 articles
  2012   34 articles
  2011   26 articles
  2010   24 articles
  2009   29 articles
  2008   25 articles
  2007   27 articles
  2006   19 articles
  2005   19 articles
  2004   15 articles
  2003   22 articles
  2002   2 articles
  2001   6 articles
  2000   2 articles
  1999   3 articles
  1998   5 articles
  1997   5 articles
  1996   6 articles
  1995   3 articles
  1994   3 articles
  1993   2 articles
  1992   1 article
  1991   3 articles
  1989   5 articles
  1988   2 articles
  1987   4 articles
  1986   2 articles
  1985   1 article
  1983   2 articles
  1982   2 articles
  1981   1 article
  1980   2 articles
  1979   1 article
  1978   1 article
A total of 530 results were found (search time: 31 ms).
51.
Phase II trials often test the null hypothesis H(0): p <= p(0) against the alternative H(1): p >= p(1), where p is the true unknown proportion responding to the new treatment, p(0) is the greatest response proportion deemed clinically ineffective, and p(1) is the smallest response proportion deemed clinically effective. To expose the fewest patients to an ineffective therapy, phase II clinical trials should terminate early when the trial fails to produce sufficient evidence of therapeutic activity (i.e., if p <= p(0)); conversely, when there is sufficient evidence of activity (i.e., if p >= p(1)), the trial should declare the drug effective in the fewest patients possible, to allow advancement to a phase III comparative trial. Several statistical designs, including Simon's minimax and optimal designs, have been developed to meet these requirements. In this paper, we propose three alternative designs that rely upon stochastic curtailment based on conditional power. We compare and contrast the properties of the three approaches, (1) stochastically curtailed (SC) binomial tests, (2) SC Simon's optimal design, and (3) SC Simon's minimax design, with those of Simon's minimax and optimal designs. For each design we compare the number of opportunities for study termination, the expected sample size under the null hypothesis (p = p(0)), and the probability of early termination, and we discuss the practical advantages of stochastic curtailment using conditional power.
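The curtailment rule hinges on conditional power: the probability, given the interim data, of still reaching the success criterion by the planned end of the trial. A minimal sketch for a single-arm binomial test (the function name, thresholds, and numbers below are illustrative, not the authors' designs):

```python
from scipy.stats import binom

def conditional_power(n_total, r_crit, m_obs, r_obs, p_assumed):
    """P(>= r_crit total responses by trial end | r_obs responses seen in
    m_obs patients), when remaining patients respond with rate p_assumed."""
    needed = r_crit - r_obs        # further responses still required
    remaining = n_total - m_obs    # patients yet to be observed
    if needed <= 0:
        return 1.0                 # success already assured
    if needed > remaining:
        return 0.0                 # success no longer attainable
    return binom.sf(needed - 1, remaining, p_assumed)  # P(X >= needed)

# Stop for futility if, even at the clinically effective rate p(1), the
# chance of ultimately rejecting H(0) has dropped below a threshold gamma.
cp = conditional_power(n_total=43, r_crit=13, m_obs=20, r_obs=3, p_assumed=0.35)
print(cp)  # curtail if cp < gamma, e.g. gamma = 0.2
```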
52.
We propose a transition model for analysing data from complex longitudinal studies. Because missing values are practically unavoidable in large longitudinal studies, we also present a two-stage imputation method for handling general patterns of missing values in both the outcome and the covariates, combining multiple imputation with stochastic regression imputation. Our model is a time-varying auto-regression on the past innovations (residuals), and it can be used in cases where general dynamics must be taken into account and where model selection is important. The entire estimation process was carried out using available procedures in statistical packages such as SAS and S-PLUS. To illustrate the viability of the proposed model and the two-stage imputation method, we analyse data collected in an epidemiological study that focused on various factors relating to childhood growth. Finally, we present a simulation study to investigate the behaviour of our two-stage imputation procedure.
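The stochastic-regression-imputation component can be pictured as follows: each missing value is replaced by its regression prediction plus a randomly drawn residual, so that imputed values retain the observed variability instead of shrinking to the fitted mean. A minimal sketch of that single step (our own simplification; the paper's two-stage procedure combines it with multiple imputation in SAS and S-PLUS):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def stochastic_regression_impute(X, y, rng):
    """Fill missing y with the fitted mean plus a drawn normal residual."""
    obs = ~np.isnan(y)
    model = LinearRegression().fit(X[obs], y[obs])
    resid = y[obs] - model.predict(X[obs])
    sd = resid.std(ddof=X.shape[1] + 1)          # residual standard deviation
    y_new = y.copy()
    y_new[~obs] = model.predict(X[~obs]) + rng.normal(0.0, sd, (~obs).sum())
    return y_new

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -0.5]) + rng.normal(0.0, 0.3, 100)
y[rng.choice(100, size=20, replace=False)] = np.nan  # knock out 20 outcomes
y_completed = stochastic_regression_impute(X, y, rng)
```

Repeating the random draw M times with different seeds yields the multiple-imputation half of such a two-stage scheme.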
53.
Purpose: The concept of a vibrating wristband to improve dexterous hand function in stroke survivors was recently proposed with clinical results; it is referred to as 'TheraBracelet' in this paper. The purpose of this study was to demonstrate the feasibility of a portable, wearable TheraBracelet, to apply usability evaluation techniques to assess potential demand for TheraBracelet, and to identify critical improvement needs of the prototype. Method: A prototype was developed with a vibrating element housed in an elastic wristband and connected to a wearable electronics box via a cable. Expectations for TheraBracelet and evaluations of the prototype were obtained from 10 chronic stroke survivors, using surveys administered before and after use of the prototype together with a House of Quality analysis. Results: The expectation survey showed stroke survivors' willingness to try TheraBracelet at a low cost. The prototype evaluation survey showed that the current prototype was satisfactory overall, with a mean rating of 3.7 out of 5. The House of Quality analysis revealed that the priority improvement needs are to establish clinical knowledge of long-term effectiveness, reduce cost, ease donning/doffing, and add waterproofing.

Conclusions: This study demonstrates the potential of a low-cost wearable hand orthotic that stroke survivors find acceptable.

Implications for Rehabilitation:
  • Feasibility of a portable, wearable, wristband-type hand orthotic was demonstrated.
  • The survey showed stroke survivors are willing to try such an orthotic at low cost.
  • The current prototype was rated satisfactory overall by stroke survivors.
54.
55.
56.
Because different proteins compete for the proton gradient across the inner mitochondrial membrane, an efficient mechanism is required for allocating the associated chemical potential to distinct demands, such as ATP production, thermogenesis, and regulation of reactive oxygen species (ROS). Here, we used the superresolution technique dSTORM (direct stochastic optical reconstruction microscopy) to visualize several mitochondrial proteins in primary mouse neurons and to test the hypothesis that uncoupling protein 4 (UCP4) and F0F1-ATP synthase are spatially separated to eliminate competition for the proton motive force. We found that UCP4, F0F1-ATP synthase, and the mitochondrial marker voltage-dependent anion channel (VDAC) have varying expression levels in different mitochondria, supporting the hypothesis of mitochondrial heterogeneity. Our experimental results further revealed that UCP4 is preferentially localized in close vicinity to VDAC, presumably at the inner boundary membrane, whereas F0F1-ATP synthase is located more centrally at the cristae membrane. The data suggest that UCP4 cannot compete for protons because of its spatial separation from both the proton pumps and the ATP synthase. Thus, mitochondrial morphology precludes UCP4 from acting as an uncoupler of oxidative phosphorylation but is consistent with the view that UCP4 may dissipate an excessive proton gradient, which is usually associated with ROS production.

Mitochondria are involved in a wide range of cell functions, including fatty acid oxidation, calcium homeostasis, apoptosis, reactive oxygen species (ROS) signaling, and, above all, production of ATP (1, 2). In neurons, these organelles are transported along neuronal processes to provide energy for areas of high energy demand, such as synapses (3). To support their functions, mitochondria exhibit a complex morphology consisting of a separate and functionally distinct outer mitochondrial membrane (OMM) and inner mitochondrial membrane (IMM). The latter is structurally organized into two domains: an inner boundary membrane (IBM) and a cristae membrane (CM) (4). Current hypotheses imply that the morphology/topology of the IMM is tightly related to the biochemical function, energy state, and pathophysiological state of mitochondria (5). Whereas the OMM contains porins [e.g., the voltage-dependent anion channel (VDAC)], which mediate its permeability to molecules up to 10 kDa, the IMM topology is highly complex. It comprises different transport proteins, the ATP synthase (complex V), and complexes I, III, and IV of the electron transport chain, which are responsible for generating the proton motive force (pmf); the pmf is the driving force not only for ATP synthesis but also for other protein-mediated transport activities (for example, phosphate, pyruvate, and glutamate transport). Uncoupling protein 1 (UCP1; thermogenin), a member of the UCP subfamily, is known to dissipate the inner membrane proton gradient for heat production. One widely discussed function of UCP4, another member of the same subfamily that is localized in neurons and neurosensory cells (6-9), is the regulation of ROS by decreasing the pmf (10, 11). Although there is no unambiguous evidence revealing the exact function of UCP4, it was shown that UCP4 transports protons similarly to UCP1 (12). It is, therefore, assumed that UCP4 and other UCPs possibly compete for protons with other proton-consuming proteins, including ATP synthase, but this phenomenon has not yet been studied in detail (13).

Knowledge about exact protein localization at the mitochondrial inner membrane is of utmost importance for understanding the mechanisms behind the allocation of electrochemical potential to various demands, such as ATP production, thermogenesis, and ROS regulation. Because of resolution limitations, current data about IMM protein topography are scarce. Using immuno-EM, EM tomography, and live-cell fluorescence microscopy, it was found that the IBM and CM have different protein compositions (14-17). Few studies using superresolution microscopy have investigated nanoscale protein distribution, mainly focusing on respiratory chain proteins (18). In particular, there is strong evidence obtained in yeast, the fibroblast-like COS cell line, and heart and liver mitochondria that ATP synthase and complexes I, III, and IV are mainly localized on the CM (14, 15, 19, 20). No data are available on the exact localization of UCPs along the IMM.

In this study, we test the hypothesis that the proton gradient-consuming proteins UCP4 and F0F1-ATP synthase are spatially separated within and/or between individual neuronal mitochondria. To this end, we performed a two-color analysis of pairwise fluorescence-labeled mitochondrial proteins UCP4, VDAC, and F0F1-ATP synthase at 30 nm spatial resolution using superresolution imaging by direct stochastic optical reconstruction microscopy (dSTORM).
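A rough sketch of how spatial association between two labeled species might be quantified from two-channel localization coordinates: nearest-neighbor distances computed with a k-d tree (our illustration on synthetic points, not the authors' analysis pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distances(points_a, points_b):
    """Distance from each localization in channel A to its nearest neighbor
    in channel B (same units as the input coordinates, e.g. nm)."""
    d, _ = cKDTree(points_b).query(points_a, k=1)
    return d

# Synthetic stand-ins for dSTORM localizations (x, y in nm)
rng = np.random.default_rng(1)
ucp4 = rng.uniform(0.0, 1000.0, size=(200, 2))
vdac = rng.uniform(0.0, 1000.0, size=(300, 2))
print(np.median(nn_distances(ucp4, vdac)))  # small medians suggest proximity
```

Comparing the observed median against randomized controls (e.g., shuffled coordinates) guards against apparent proximity that arises by chance at high localization densities.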
57.
58.
Populations can evolve to adapt to external changes, and this capacity to evolve and adapt makes successful treatment of infectious diseases and cancer difficult. Indeed, therapy resistance has become a key challenge for global health, so ideas for how to control evolving populations to overcome this threat are valuable. Here we use the mathematical concepts of stochastic optimal control to study what is needed to control evolving populations. Following established routes to calculating control strategies, we first study how a polymorphism can be maintained in a finite population by adaptively tuning selection. We then introduce a minimal model of drug resistance in a stochastically evolving cancer cell population and compute adaptive therapies. When decisions are based in this manner on monitoring the response of the tumor, they can outperform established therapy paradigms. For both case studies, we demonstrate the importance of high-resolution monitoring of the target population for achieving a given control objective, thus quantifying the intuition that to control, one must monitor.

The progression of cancer is an evolutionary process of cells driven by genetic alterations and selective forces (1). The frequent failure of cancer therapies, despite a host of new targeted cancer drugs, is largely caused by the emergence of drug resistance (2). Cancer therapy faces a real dilemma: the more effective a new treatment is at killing cancerous cells, the more selective pressure it provides for cells resistant to the drug to take over the cancer population, in a process called competitive release (3, 4).

A genetic innovation conferring resistance can either be already present as standing variation or be within close evolutionary reach via de novo mutations. The probability of these events is often proportional to the genetic diversity of the tumor; resistance is therefore a problem especially for genetically heterogeneous cancers (5). This diversity can be the result of a variable microenvironment, with different pockets of acidity, blood supply, and geometrical constraints of the surrounding tissue (2). Also, late-stage cancers not only carry the cumulative archaeological record of their evolutionary history (6) but can also become genetically unstable and fall victim to chromothripsis (7), kataegis (8), and other disruptive mutational processes (9, 10). Thus, the probability of treatment success is higher in genetically homogeneous and/or early-stage cancers (11). Taken together, these considerations place emphasis on early detection of tumors.

In cases where early detection is not achieved, the pertinent question is how to avoid treatment failure in the presence of genetic heterogeneity, which seems to be the norm for most solid cancers. One obvious attempt is to make treatments more complex and thus put the resistance mechanisms out of reach of the tumor. In combination therapy, the tumor is simultaneously treated with two or more drugs that would require different, possibly mutually exclusive, escape mechanisms for cells to become resistant. This approach has proven successful in the treatment of HIV and is discussed as a possible model for cancer as well (12). In the context of cancer, this form of personalized therapy is not yet widely realized, mainly because of the much richer repertoire of genetic variation and adaptability of cancer cells and a comparative shortage of drugs targeting distinct biological pathways. For a recent study of the conditions under which combination therapy is expected to be successful in cancer, see ref. 13.

For application of single drugs, a number of studies concentrate on how the therapeutic protocol itself can be optimized. It was realized that all-out maximum tolerated dose chemotherapy is not the only, or necessarily the best, treatment strategy (14). Alternative dosing schedules have been proposed, such as drug holidays, metronome therapy (15), and adaptive therapy (16). The realization of Gatenby et al. in ref. 16 is that cancer, as a dynamic evolutionary process, can be better controlled by dynamically changing the therapy depending on the response of the tumor. Their protocol of reducing the dose while the tumor shrinks and increasing it under tumor growth showed a drastic improvement in life expectancy in mouse models of ovarian cancer (16). Furthermore, Gatenby et al. made the important conceptual step of reformulating cancer therapy to be not necessarily about tumor eradication: dynamic maintenance of a stable tumor size can also be a preferable outcome.

Motivated by this experiment, we conjecture that there are substantial, as yet underexploited, therapy gains in optimal application of existing drugs. As a first step toward using this potential, we formalize the intuition of Gatenby et al. and establish a theoretical framework for the adaptive control of evolving populations. In particular, we connect the idea of adaptive therapy to the paradigm of stochastic optimal control, also known as Markov decision problems. For other applications of stochastic control in the context of evolution by natural selection, see refs. 17 and 18. A stochastic treatment is necessary because of the nature of evolutionary dynamics, in which fluctuations (so-called genetic drift) matter even in large populations; for instance, the dynamics of a new beneficial mutation is initially dominated by genetic drift before the mutation becomes established (19). Stochastic control is a well-established field of research that provides not only a natural language for framing the task of cancer therapy but also a set of general-purpose techniques for computing an optimal control or therapy regimen for a given dynamical system and a given control objective. Although we demonstrate the main steps in this program, we focus on the detrimental effect of imperfect information and the loss of control it entails, thus quantifying the intuition that to control, one must monitor. The informational value of continued monitoring is a natural concept for controlled stochastic systems, whereas in deterministic models successful control usually does not rely on sustained observations.

We first introduce the concepts of stochastic optimal control using a minimal evolutionary example: how to keep a finite population polymorphic under Wright–Fisher evolution by influencing the selective difference between two alleles. If perfect information about the population is available, the polymorphism can be maintained for a very long time. We show how imperfect information due to finite monitoring can lead to a quick loss of control, and how some of it can be partially reclaimed by informed preemptive control strategies. We then move to our main problem and introduce a minimal stochastic model of drug resistance in cancer that incorporates features such as variable population size, drug-sensitive and drug-resistant cells, a carrying capacity, mutation, selection, and genetic drift. After computing the optimal control strategies for a few important settings under perfect information, we demonstrate the effect of imperfect monitoring. If only the total tumor size can be monitored, we show how a control strategy emerges that can adaptively infer, and thus exploit, the inner composition of the tumor into susceptible and resistant cells.
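The first case study, maintaining a polymorphism by tuning selection, maps onto a textbook Markov decision problem. A minimal sketch with our own toy parameterization (population size, control set, and horizon are illustrative, not the paper's computation):

```python
import numpy as np
from scipy.stats import binom

N = 20                                  # haploid population size
controls = [-0.2, 0.0, 0.2]             # selection coefficients we may apply
states = np.arange(N + 1)               # possible counts of allele A

def next_gen_dist(i, s):
    """Wright-Fisher transition: next-generation A count is Binomial(N, p),
    where p is the selection-weighted frequency of allele A."""
    w = i * (1.0 + s)
    p = w / (w + (N - i))
    return binom.pmf(states, N, p)

# Backward induction: reward 1 for each generation the polymorphism survives;
# the monomorphic states 0 and N are absorbing with value 0.
V = np.zeros(N + 1)
policy = np.zeros(N + 1)
for _ in range(200):                    # 200-generation planning horizon
    V_new = np.zeros(N + 1)
    for i in range(1, N):
        values = [next_gen_dist(i, s) @ V for s in controls]
        V_new[i] = 1.0 + max(values)
        policy[i] = controls[int(np.argmax(values))]
    V = V_new

print(policy)  # the optimal control typically favors whichever allele is rarer
```

This perfect-information policy is the benchmark; the paper's central question is how much of its value survives when the state is observed only intermittently or through a noisy aggregate.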
59.
60.
Calculating the probability of each possible outcome for a patient at any time in the future is currently possible only in the simplest cases: short-term prediction in acute diseases of otherwise healthy persons. This problem is to some extent analogous to predicting the concentrations of species in a reactor, knowing the initial concentrations and the reaction rates at the level of individual molecules. The existing theoretical framework behind predicting contagion and the immediate outcome of acute diseases in previously healthy individuals is largely analogous to the deterministic kinetics of chemical systems consisting of one or a few reactions. We show that current statistical models commonly used in chronic disease epidemiology correspond to a simple stochastic treatment of single-reaction systems; the general problem corresponds to the stochastic kinetics of complex reaction systems. We attempt to formulate epidemiologic problems related to chronic diseases in chemical kinetics terms, and we review methods that may be adapted for use in epidemiology. We show that some reactions cannot fit into the mass-action law paradigm and that solutions to these systems would frequently exhibit an antiportfolio effect. We provide a complete example application of stochastic kinetics modeling for a deductive meta-analysis of two papers on atrial fibrillation incidence, prevalence, and mortality.

Much of the medical progress over the last century can be attributed to the objective assessment of the effect of treatments on the evolution of specific diseases. Treatment effect is measured as the rate of an event, such as recovery, in a sample of the patient population. Relatively immediate results were obtained from studies involving acute diseases occurring in previously healthy individuals, in which recovery could be clearly identified. This resulted in the development of effective treatments for most acute diseases affecting children and younger adults and a substantial prolongation of life expectancy (1). As many acute diseases came to be treated effectively, a more complex situation arose, in which an elderly population suffers from a combination of chronic conditions. Few older people are strictly healthy, and besides the evolution of the chronic conditions themselves, acute diseases occurring in this setting do not always evolve as they would in a young, healthy population. This combination of chronic conditions and risk factors amounts to more heterogeneous populations; samples therefore need to be larger to allow reproducible predictions, compared with those for acute diseases occurring in a young and previously healthy population. The predictions, which are also more complex (there is no strict "recovery"), apply to a limited range of cases.

The concepts used by clinicians and epidemiologists to describe the health status of individuals and their prevalence in the population, as well as the rates of change in this status and the general predictive laws, are quite analogous to the concepts used by chemists for predicting the future concentrations of species in a reactor. Early works on the spread of epidemics of communicable diseases (2, 3) made reference to this analogy, unlike later developments in mathematical epidemiology (4, 5). The purpose of current mathematical modeling in epidemiology is mostly the kinetics (or "dynamics", as it is frequently called) of the spread of a communicable disease in an acute epidemic (6), an important and pressing problem when such epidemics occur. The primary phenomenon represented in these models is contagion, and the models used are typically deterministic, self-catalytic kinetic models of the whole population under the mass-action law assumption.

The focus of clinical studies in chronic diseases is the risk of various possible outcomes for the patient; these are usually distinct from a complete recovery and sometimes of a quantitative nature, for example, how much of the function of an organ is preserved. Kinetics-type mathematical support for this purpose is rarely available, other than a simple statement of risk or relative risk directly inferred from a study in a sample.

Deterministic event rates are the basis of virtually all clinical judgment. A typical example: what is the yearly risk of stroke in patients with atrial fibrillation on either of two treatments, such as warfarin or aspirin? (More individual parameters are usually taken into account when classifying each patient.) A lower yearly risk rate in one of the treatment categories is an argument for choosing that treatment for a particular patient. This risk rate is usually the rate of stroke that has been directly measured in a study where a sample of patients belonging to certain classes (for example, middle-aged males with atrial fibrillation and no history of stroke) has been followed for some time. The observed event rates are taken as the best estimates for the population sharing the same characteristics as the patients in the sample, a population that is presumed infinite. The event rates are inferred, however, from observations made in finite, small ensembles of individuals (case series). Thus, inference is always probabilistic, as event rates in the population can be estimated only with some uncertainty, even in homogeneous populations.

In populations in which individuals have various combinations of underlying pathologies that may each influence future event rates, this approach may frequently lead to unreproducible results (7, 8). Aiming to predict events in the more distant future, as needed with chronic disease, further complicates the problem: the longer the prediction time, the greater the number of other events that may intervene and invalidate the prediction.

Both epidemiology and chemical kinetics have evolved independently over recent decades, each developing its own stochastic methods with a specific terminology that frequently refers to somewhat similar concepts. Prediction of event rates from relatively small samples, using probabilistic models, is of primary importance for epidemiology. Half a century ago, the factors influencing recovery from acute diseases in otherwise healthy (that is, homogeneous) populations were the main concern, so the problem was to estimate event rates in otherwise simple systems. Models of the epidemiologic equivalent of a single reaction, with a few other parameters, were adequate for this. Probabilistic issues were mostly related to the errors associated with the limited sizes of the samples used, but the inferred rates were typically deterministic. In chemistry at that time, model development focused on identifying the relatively complex reaction mechanisms that occur even when only a few initial species are involved, and on describing their kinetics, typically with systems of deterministic differential equations adjusted using macroscopic measurements of species concentrations; uncertainty due to small molecule numbers was not usually involved. Over the last 70 y, however, chemical kinetics has developed new methods, such as models of more complex systems that do not rely on the mass-action law (9) and models of single-molecule kinetics, that might be closer to the problem of predicting clinical evolution in individual patients. Also, issues that occur in the biochemical kinetics of more complex systems, such as crowding (10), are to some extent analogous to event prediction in heterogeneous populations.

The development of numerical methods allows a practical approach to problems involving systems that are both complex and stochastic, and the exploration of uncertain phenomena at both individual and population levels (11). An important clinical problem that cannot, in general, be solved without such a systematic approach is to compute, for an individual, the risks of each possible disease over the next time interval (such as 1 y), given what we know about his or her health status and history and based on currently available epidemiologic data. Solving this problem would allow much more accurate planning of clinical interventions than is possible today.

In this paper, we attempt to compare the concepts, methods, and models developed in the two fields by reformulating the epidemiologic approaches in chemical kinetics terms, to identify chemical kinetics methods that might be adaptable for epidemiologic use. In Supporting Information, we show an example of a deductive meta-analysis of two epidemiology papers using a stochastic kinetic system.
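The chemical-kinetics analogy can be made concrete with Gillespie's stochastic simulation algorithm, treating health states as species and state transitions as first-order reactions. A minimal sketch with made-up rates (the states and numbers are illustrative only, not estimates from the cited atrial fibrillation papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order transitions, rate per person-year (illustration only)
rates = {("H", "AF"): 0.02,   # healthy -> atrial fibrillation
         ("AF", "D"): 0.08}   # atrial fibrillation -> death

def gillespie(counts, t_max):
    """Exact stochastic simulation of first-order transitions between states."""
    t, history = 0.0, [(0.0, dict(counts))]
    while t < t_max:
        props = {k: r * counts[k[0]] for k, r in rates.items()}  # propensities
        total = sum(props.values())
        if total == 0.0:
            break                              # no event can occur anymore
        t += rng.exponential(1.0 / total)      # waiting time to next event
        u, acc = rng.uniform(0.0, total), 0.0
        for (src, dst), a in props.items():    # pick event proportional to rate
            acc += a
            if u < acc:
                counts[src] -= 1
                counts[dst] += 1
                break
        history.append((t, dict(counts)))
    return history

trace = gillespie({"H": 1000, "AF": 50, "D": 0}, t_max=10.0)
print(trace[-1])  # state counts after roughly ten simulated years
```

Unlike the deterministic mass-action treatment, repeated runs of such a simulation expose the between-realization variability that matters for small, heterogeneous cohorts.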