1.
In this paper we describe Bonferroni‐based multiple testing procedures (MTPs) as strategies to split and recycle test mass. Here, ‘test mass’ refers to (parts of) the nominal level α at which the family‐wise error rate is controlled. Briefly, test mass is split between different null hypotheses, and whenever a null hypothesis is rejected, the part of α allocated to it may be recycled to the testing of other hypotheses. These recycling MTPs are closed testing procedures based on raw p‐values associated with testing the individual null hypotheses, and the class of such MTPs includes, for example, serial and parallel gatekeeping, fallback and Holm procedures. Graphical displays and a concise algebraic notation are provided for such MTPs. This recycling approach has pedagogical advantages and may facilitate the tailoring of MTPs for different purposes. Copyright © 2009 John Wiley & Sons, Ltd.
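The split-and-recycle idea admits a compact implementation as a graphical weighted-Bonferroni procedure. The sketch below is illustrative only (the function name and data layout are assumptions, not the paper's algebraic notation): each hypothesis holds a share `w[i]` of α, and when it is rejected that share flows to the remaining hypotheses along transition weights `g[i][j]`.

```python
def graphical_mtp(pvals, weights, g, alpha=0.05):
    """Weighted-Bonferroni MTP with alpha recycling along a graph.
    weights[i]: share of alpha allocated to H_i (shares sum to <= 1);
    g[i][j]: fraction of H_i's share passed to H_j when H_i is rejected.
    Returns the set of rejected hypothesis indices."""
    w = list(weights)
    g = [row[:] for row in g]
    active = set(range(len(pvals)))
    rejected = set()
    while True:
        j = next((i for i in active if pvals[i] <= w[i] * alpha), None)
        if j is None:
            return rejected
        rejected.add(j)
        active.remove(j)
        old = [row[:] for row in g]  # update from a snapshot of the graph
        for l in active:
            w[l] += w[j] * old[j][l]  # recycle the freed test mass
            for k in active:
                if l != k:
                    denom = 1.0 - old[l][j] * old[j][l]
                    g[l][k] = ((old[l][k] + old[l][j] * old[j][k]) / denom
                               if denom > 0 else 0.0)
        w[j] = 0.0
```

With equal initial weights 1/m and transition weights 1/(m-1), this reduces to the Holm procedure; serial gatekeeping corresponds to a chain with weights (1, 0, 0, ...) where each rejected hypothesis passes all of its mass to the next.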
2.
Hilde M. Huizenga Joost A. Agelink van Rentergem Raoul P. P. P. Grasman Dino Muslimovic Ben Schmand 《Journal of clinical and experimental neuropsychology》2016,38(6):611-629
Introduction: In neuropsychological research and clinical practice, a large battery of tests is often administered to determine whether an individual deviates from the norm. We formulate three criteria for such large-battery normative comparisons. First, the familywise false-positive error rate (i.e., the complement of specificity) should be controlled at, or below, a prespecified level. Second, sensitivity to detect genuine deviations from the norm should be high. Third, the comparisons should be easy enough for routine application, not only in research, but also in clinical practice. Here we show that these criteria are satisfied by current procedures used to assess an overall deviation from the norm, that is, a deviation given all test results. However, we also show that these criteria are not satisfied by current procedures used to assess test-specific deviations, which are required, for example, to investigate dissociations in a test profile. We therefore propose several new procedures to assess such test-specific deviations, which are expected to satisfy all three criteria.
Method: In Monte Carlo simulations and in an applied example pertaining to Parkinson disease, we compare current procedures to assess test-specific deviations (uncorrected and Bonferroni normative comparisons) to new procedures (Holm, one-step resampling, and step-down resampling normative comparisons).
Results: The new procedures are shown to: (a) control the familywise false-positive error rate, whereas uncorrected comparisons do not; (b) have higher sensitivity than Bonferroni-corrected comparisons, with step-down resampling being especially favorable in this respect; (c) be easy to apply, as they are implemented in a user-friendly normative comparisons website and the required normative data are provided by a database.
Conclusion: These new normative comparison procedures, especially step-down resampling, are valuable additional tools for assessing test-specific deviations from the norm in large test batteries.
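The Holm normative comparisons mentioned in this abstract reduce to a simple step-down adjustment of the raw p-values. The sketch below is a generic implementation of Holm-adjusted p-values, not the code behind the authors' website:

```python
def holm_adjusted(pvals):
    """Holm step-down adjusted p-values: H_i is rejected at familywise
    level alpha iff its adjusted p-value is <= alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # the smallest p-value is multiplied by m, the next by m-1, ...
        running_max = max(running_max, (m - rank) * pvals[i])  # keep monotone
        adjusted[i] = min(1.0, running_max)
    return adjusted
```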
3.
4.
MULTIPLE COMPARISON PROCEDURES UPDATED
John Ludbrook 《Clinical and experimental pharmacology & physiology》1998,25(12):1032-1037
1. A common statistical flaw in articles submitted to or published in biomedical research journals is to test multiple null hypotheses that originate from the results of a single experiment without correcting for the inflated risk of type 1 error (false positive statistical inference) that results from this. Multiple comparison procedures (MCP) are designed to minimize this risk. The present review focuses on pairwise contrasts, the most common sort of multiple comparisons made by biomedical investigators. 2. In an earlier review a variety of MCP were described and evaluated. It was concluded that an effective MCP should control the risk of family-wise type 1 error, so as to ensure that not more than one hypothesis within a single family is falsely rejected. One-step procedures based on the Bonferroni or Šidák inequalities do this. For continuous data and under normal distribution theory, so does the Tukey-Kramer procedure for all possible pairwise contrasts of means and the Dunnett procedure for all possible pairwise contrasts of means with a control mean. 3. There is now a new class of MCP, based on the Bonferroni or Šidák inequalities but performed in a step-wise fashion. The members of this class have certain desirable properties. They: (i) control the family-wise type 1 error rate as effectively as the one-step procedures; (ii) are more powerful than the one-step Bonferroni or Šidák procedures, especially when hypotheses are logically related; and (iii) can be applied not only to continuous data but also to ordinal or categorical data. 4. Of the new step-wise MCP, Holm's step-down procedures are commended for their combination of accuracy, power and versatility. They also have the virtue of simplicity. Given the raw P values that result from conventional tests of significance, the adjustments for multiple comparisons can be made by hand or hand-held calculator. 5. Despite the corrective abilities of the new step-wise MCP, investigators should try to design their experiments and analyses to test a single, global hypothesis rather than multiple ones.
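The one-step adjustments contrasted in the review are easy to state in code. A minimal sketch follows; under independence the Šidák adjustment is exact, and its adjusted p-values are never larger than Bonferroni's (since 1 - (1-p)^m <= m*p):

```python
def bonferroni_adjust(pvals):
    """One-step Bonferroni adjustment: multiply each raw p-value by m."""
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def sidak_adjust(pvals):
    """One-step Sidak adjustment: 1 - (1 - p)^m, slightly less
    conservative than Bonferroni for every p."""
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]
```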
5.
Methods of multiple comparisons were applied to linkage analysis in the context of genome scanning. Data for Problem 2A were used. P-values were calculated for all 440,400 possible tests of linkage. Plots of distribution functions and false discovery rate are shown. © 1997 Wiley-Liss, Inc.
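The false discovery rate control referred to above is typically the Benjamini-Hochberg step-up rule. A minimal sketch, assuming the standard BH definition rather than any problem-specific variant used by the authors:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Indices rejected by the Benjamini-Hochberg step-up procedure,
    which controls the false discovery rate at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest 1-based rank r with p_(r) <= r*q/m
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return set(order[:k])  # reject the k smallest p-values
```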
6.
Compatible simultaneous lower confidence bounds for the Holm procedure and other Bonferroni-based closed tests
We consider the problem of simultaneously testing multiple one-sided null hypotheses. Single-step procedures, such as the Bonferroni test, are characterized by the fact that the rejection or non-rejection of a null hypothesis does not take the decision for any other hypothesis into account. For stepwise test procedures, such as the Holm procedure, the rejection or non-rejection of a null hypothesis may depend on the decision of other hypotheses. It is well known that stepwise test procedures are by construction more powerful than their single-step counterparts. This power advantage, however, comes only at the cost of increased difficulties in constructing compatible simultaneous confidence intervals for the parameters of interest. For example, such simultaneous confidence intervals are easily obtained for the Bonferroni method, but surprisingly hard to derive for the Holm procedure. In this paper, we discuss the inherent problems and show that ad hoc solutions used in practice typically do not control the pre-specified simultaneous confidence level. Instead, we derive simultaneous confidence intervals that are compatible with a certain class of closed test procedures using weighted Bonferroni tests for each intersection hypothesis. The class of multiple test procedures covered in this paper includes gatekeeping procedures based on Bonferroni adjustments, fixed sequence procedures, the simple weighted or unweighted Bonferroni procedure by Holm and the fallback procedure. We illustrate the results with a numerical example.
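The "easy" single-step case mentioned in this abstract can be sketched directly: Bonferroni-compatible one-sided lower bounds simply widen each marginal interval to level 1 - α/m. The code below assumes z-based bounds with known standard errors for hypotheses H_i: θ_i ≤ 0; the quantile routine is a crude bisection, adequate for illustration but not the paper's Holm-compatible construction:

```python
import math

def z_quantile(p):
    """Standard normal quantile via bisection on the normal CDF
    (a sketch; a library routine would be used in practice)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bonferroni_lower_bounds(estimates, ses, alpha=0.05):
    """One-sided simultaneous lower confidence bounds compatible with
    the single-step Bonferroni test of H_i: theta_i <= 0."""
    m = len(estimates)
    z = z_quantile(1.0 - alpha / m)
    return [e - z * s for e, s in zip(estimates, ses)]
```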
7.
While data sets based on dense genome scans are becoming increasingly common, there are many theoretical questions that remain unanswered. How can a large number of markers in high linkage disequilibrium (LD) and rare disease variants be simulated efficiently? How should markers in high LD be analyzed: individually or jointly? Are there fast and simple methods to adjust for correlation of tests? What is the power penalty for conservative Bonferroni adjustments? Assuming that association scans are adequately powered, we attempt to answer these questions. Performance of single‐point and multipoint tests, and their hybrids, is investigated using two simulation designs. The first simulation design uses theoretically derived LD patterns. The second design uses LD patterns based on real data. For the theoretical simulations we used polychoric correlation as a measure of LD to facilitate simulation of markers in LD and rare disease variants. Based on the simulation results of the two studies, we conclude that statistical tests assuming only additive genotype effects (i.e. Armitage and especially multipoint T2) should be used cautiously due to their suboptimal power in certain settings. A false discovery rate (FDR)‐adjusted combination of tests for additive, dominant and recessive effects had close to optimal power. However, the common genotypic χ2 test performed adequately and could be used in lieu of the FDR combination. While some hybrid methods yield (sometimes spectacularly) higher power they are computationally intensive. We also propose an “exact” method to adjust for multiple testing, which yields nominally higher power than the Bonferroni correction. Genet. Epidemiol. 2008. © 2008 Wiley‐Liss, Inc.
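The Armitage (Cochran-Armitage) trend test discussed above, for a 2 × k case/control genotype table with additive scores, can be sketched in its standard textbook form (this is the single-point additive test, not the authors' multipoint T2 variant):

```python
import math

def armitage_trend(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test for a 2 x k table (e.g. genotype
    counts under an additive coding). Returns the 1-df chi-square
    statistic and its p-value."""
    n1, n2 = sum(cases), sum(controls)
    n = n1 + n2
    cols = [a + b for a, b in zip(cases, controls)]
    t_stat = sum(t * (a * n2 - b * n1)
                 for t, a, b in zip(scores, cases, controls))
    var = (n1 * n2 / n) * (
        sum(t * t * c * (n - c) for t, c in zip(scores, cols))
        - 2 * sum(scores[i] * scores[j] * cols[i] * cols[j]
                  for i in range(len(cols))
                  for j in range(i + 1, len(cols))))
    chi2 = t_stat * t_stat / var
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square(1 df) tail
    return chi2, p
```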
8.
《Gait & posture》2020
Background: In statistical analysis of time series, researchers often pick key points from curves and run the venerable analysis of variance (ANOVA) to determine if a difference exists between groups. However, this approach fails to compare most of the data across time and thereby may discard potentially valuable inferences.
Research question: This study illustrates a novel method termed LOESS alpha-adjusted serial t-testing (LAAST). LAAST employs locally weighted scatterplot smoothing (LOESS) on the data, serial correlation to make alpha adjustments, and point-wise Welch's t-tests to determine regional significance when comparing groups of time-dependent data. It was expected that LAAST gives similar results to random field theory (RFT)-based inferences while overcoming RFT's shortcomings with respect to longitudinal data analysis.
Methods: Two data sets were analyzed with LAAST and RFT. The first contained two groups of five simulated random sinusoidal waveforms, representing both inline time-series and equivalent time-offset longitudinal conditions. The second data set comprised publicly available medial gastrocnemius forces from individuals with (N = 27) and without (N = 16) pain.
Results: Both data sets yielded similar corrected alpha levels regardless of analysis type, but the alpha-level corrections applied by LAAST were less conservative than RFT or Holm-Bonferroni corrections, though often more conservative than Hochberg corrections.
Significance: Analysis methods employing functional ANOVA and RFT have enabled researchers to effectively run comparisons between groups at all points within the time series and are gaining popularity. However, in some correction methods for multiple comparisons, the alpha-level correction can in turn inflate type II error. These results suggest that LAAST is comparable to RFT while also being appropriate for longitudinal time-series data analysis. Additionally, its use of Welch's t-tests improves its validity on non-normally distributed data.
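The point-wise Welch's t-test at the core of LAAST needs only the Welch statistic and the Welch-Satterthwaite degrees of freedom at each time point. A minimal sketch follows; p-values would then come from the t distribution, and the serial-correlation alpha adjustment that LAAST adds is not shown:

```python
import math

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples (applied point-wise along a time series)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2x, se2y = vx / nx, vy / ny
    t = (mx - my) / math.sqrt(se2x + se2y)
    df = (se2x + se2y) ** 2 / (se2x ** 2 / (nx - 1) + se2y ** 2 / (ny - 1))
    return t, df
```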
9.
10.
《Statistics In Biopharmaceutical Research》2013,5(2):320-335
Many questions in biomedical research can be addressed effectively with simultaneous confidence intervals for multiple contrasts. While procedures for normal outcome data are readily available, there is still a need for developing practical methods for binary outcomes. In this article, we construct simultaneous confidence intervals for multiple contrasts of binomial proportions using the two-step method of variance estimates recovery (Zou and Donner 2008; Zou 2008; Zou et al. 2009a). First, we obtain confidence limits about single proportions using critical values from the multivariate normal distribution that account for correlations among contrasts. Second, we set confidence limits for these contrasts using variance estimates recovered from the limits. Simulation results show this approach performs well in small to moderate sample sizes when either the Wilson or Jeffreys method is used for constructing confidence limits about a single proportion. We illustrate the procedure with examples.
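The two-step variance estimates recovery (MOVER) construction for a difference of two proportions can be sketched as follows. This simplified version uses the plain normal critical value z and the Wilson interval for each single proportion, whereas the article draws critical values from the multivariate normal to account for correlation among contrasts:

```python
import math

def wilson_interval(x, n, z=1.959964):
    """Wilson score confidence interval for a single binomial proportion."""
    p = x / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def mover_diff(x1, n1, x2, n2, z=1.959964):
    """MOVER confidence interval for p1 - p2: variance estimates are
    recovered from the single-proportion Wilson limits (Zou and Donner)."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_interval(x1, n1, z)
    l2, u2 = wilson_interval(x2, n2, z)
    lower = p1 - p2 - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = p1 - p2 + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```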