1.
Purpose. Highly variable drugs pose a problem in bioequivalence assessment because they often fail to meet the current regulatory acceptance criteria for average bioequivalence (80–125%). This paper examines alternative approaches to establishing bioequivalence. Methods. Suggested solutions have included alternative study designs (e.g., replicate and multiple-dose studies), reducing the level of the confidence interval, and widening the acceptance limits. We focus on the latter approach. Results. A rationale is presented for defining wider acceptance limits for highly variable drugs. Two previously described methods are evaluated, and a new method with more desirable properties is proposed. Conclusions. We challenge the current one-size-fits-all definition of bioequivalence acceptance limits for highly variable drugs, proposing alternative limits, or "goal posts", that vary with the intrasubject variability of the reference product.
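To make the idea of variability-scaled acceptance limits concrete, here is a minimal sketch (not the paper's exact proposal) in which the limits widen with the intrasubject coefficient of variation (CV) of the reference product, in the spirit of reference-scaled average bioequivalence. The scaling constant, the CV cap, and the 80–125% floor are illustrative assumptions.

```python
# Illustrative sketch only: acceptance limits that widen with the
# within-subject variability of the reference product.  The constant k,
# the 50% CV cap, and the 80-125% floor are assumptions for illustration.
import math

def scaled_limits(cv_wr: float, k: float = math.log(1.25) / 0.25, cv_cap: float = 0.50):
    """Return (lower, upper) acceptance limits for a given reference CV.

    cv_wr  -- intrasubject coefficient of variation of the reference (0.35 for 35%)
    k      -- scaling constant (assumed value shown)
    cv_cap -- CV beyond which the limits stop widening (assumed)
    """
    cv = min(cv_wr, cv_cap)
    sigma_wr = math.sqrt(math.log(1.0 + cv ** 2))   # within-subject SD on the log scale
    upper = math.exp(k * sigma_wr)
    lower = 1.0 / upper
    # never narrower than the conventional 80-125% limits
    return min(lower, 0.80), max(upper, 1.25)

for cv in (0.20, 0.30, 0.40, 0.60):
    lo, hi = scaled_limits(cv)
    print(f"CV_WR = {cv:.0%}:  limits = {lo:.3f} - {hi:.3f}")
```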
2.
Background and Objective: High levels of sedentary behavior are prevalent among people with stroke and contribute to elevated risk for recurrent stroke, yet few interventions reduce sedentary behavior post-stroke. The ABLE intervention aims to reduce sedentary behavior by using activity monitoring, activity scheduling, problem-solving, and self-assessment to promote engagement in meaningful daily activities. The purpose of this study was to assess the feasibility (tolerability, acceptability, reliability, safety) of the ABLE intervention after stroke and to describe trends in sedentary behavior at baseline and at 4 weeks.

Clinical Presentation: Participants (n = 5) were 6 months to 2 years post-stroke, ambulatory, and reported ≥6 h of daily sitting time.

Intervention: Twelve ABLE intervention sessions (3x/week for 4 weeks) were conducted in participants' homes. The ABLE intervention includes activity monitoring, activity scheduling, self-assessment, and collaborative problem-solving.

Results: All feasibility benchmarks were met for three participants. Two participants met tolerability and safety benchmarks but did not meet acceptability and reliability benchmarks. Variability in feasibility and sedentary behavior outcomes may be related to baseline levels of sedentary behavior and social support.

Conclusions: The ABLE intervention was tolerable and safe. The intervention protocol was refined to enhance reliability and acceptability. Future studies should estimate the effects of the ABLE intervention.

3.

Purpose: To define semi-supervised machine learning (SSML) and explore current and potential applications of this analytic strategy in rehabilitation research.

Method: We conducted a scoping review using PubMed, Google Scholar, and Medline. Studies were included if they (1) described a semi-supervised approach to applying machine learning algorithms during data analysis and (2) examined constructs encompassed by the International Classification of Functioning, Disability and Health (ICF). The first two authors reviewed identified articles and recorded study and participant characteristics. The ICF domain addressed in each study was also identified.

Results: After combining information from the eight studies, we established that SSML was a feasible approach for analysis of complex data in rehabilitation research. We also determined that semi-supervised approaches may be more accurate than supervised machine learning approaches.

Conclusions: A semi-supervised approach to machine learning has the potential to enhance our understanding of complex data sets in rehabilitation science. SSML mirrors the iterative process of rehabilitation, making this approach well suited to calibrating devices, classifying activities, or identifying just-in-time interventions. Rehabilitation scientists who are interested in conducting SSML should collaborate with data scientists to advance the application of this approach within our field (a brief illustrative sketch appears after the implications below).
  • Implications for rehabilitation
  • Semi-supervised machine learning may be a feasible approach for analyzing complex data sets in rehabilitation research.

  • Semi-supervised machine learning approaches use a combination of labelled and unlabelled data to produce accurate predictive models, requiring less user-labelled data than other machine learning approaches (i.e., supervised or unsupervised) and thereby reducing resource cost and user burden.

  • Semi-supervised machine learning is an iterative process that, when applied to rehabilitation assessment and outcomes, could produce accurate personalized models for treatment.

  • Rehabilitation researchers and data scientists should collaborate to implement semi-supervised machine learning approaches in rehabilitation research, harnessing the power of the large datasets that are becoming more readily available within the field (e.g., EEG signals, sensors, smart homes).

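As an illustration of the general strategy described in this abstract, the following sketch applies self-training, one common semi-supervised approach, using scikit-learn. The synthetic data and the 10% labelling fraction are assumptions for illustration only, not taken from any of the reviewed studies.

```python
# Minimal self-training example: unlabelled samples are marked with -1,
# the convention scikit-learn uses for semi-supervised input.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only ~10% of the training data are labelled (assumed fraction).
rng = np.random.default_rng(0)
y_semi = y_train.copy()
y_semi[rng.random(len(y_semi)) > 0.10] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X_train, y_semi)
print("accuracy on held-out data:", model.score(X_test, y_test))
```

In a rehabilitation setting the features might instead be wearable-sensor summaries and the labels activity categories; the iterative pseudo-labelling step is what makes the approach attractive when manual labelling is the bottleneck.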
4.
A recent conference report described a decision rule, hereafter referred to as the 4-6-20 rule, for acceptance or rejection of analytical runs in bioavailability, bioequivalence, and pharmacokinetic studies. This procedure requires that quality control specimens at three concentrations (low, medium, and high) be assayed in duplicate in each run. For run acceptance, at least four of the six assay values must be within ±20% of their respective nominal concentrations, and at least one of the two values at each concentration must be within these limits. An inherent flaw in this decision rule is that the risk of rejecting runs when assay performance has in fact not deteriorated varies from assay to assay and is neither known nor controlled. In this paper, simulation methods are used to evaluate the operating characteristics of the 4-6-20 rule in comparison with those of classical statistical quality control procedures.
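A small Monte Carlo sketch of the kind of simulation described above: quality-control duplicates at three concentrations are generated under an assumed unbiased assay with a given relative SD, the 4-6-20 rule is applied as stated in the abstract, and the run-rejection rate is estimated.

```python
# Monte Carlo evaluation of the 4-6-20 run-acceptance rule.
# Assay errors are modelled as unbiased with constant relative SD (an assumption).
import numpy as np

rng = np.random.default_rng(1)

def run_accepted(rel_sd: float) -> bool:
    # duplicate QCs at low / medium / high, expressed as observed/nominal ratios
    ratios = rng.normal(loc=1.0, scale=rel_sd, size=(3, 2))
    within = np.abs(ratios - 1.0) <= 0.20
    # at least 4 of 6 within limits, and at least one passing value per level
    return within.sum() >= 4 and within.any(axis=1).all()

n_sim = 20_000
for rel_sd in (0.05, 0.10, 0.15):
    rejected = sum(not run_accepted(rel_sd) for _ in range(n_sim)) / n_sim
    print(f"relative SD {rel_sd:.0%}: estimated run-rejection rate = {rejected:.3%}")
```

Running this for several relative SDs makes the abstract's point visible: the rejection rate changes with the assay's precision, so the producer's risk is not fixed by the rule itself.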
5.
A procedure for constructing two-sided beta-content, gamma-confidence tolerance intervals is proposed for general random effects models, in both balanced and unbalanced data scenarios. The proposed intervals are based on the concept of effective sample size and modified large sample methods for constructing confidence bounds on functions of variance components. The performance of the proposed intervals is evaluated via simulation techniques. The results indicate that the proposed intervals generally maintain the nominal confidence and content levels. Application of the proposed procedure is illustrated with a one-fold nested design used to evaluate the performance of a quantitative bioanalytical method.
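For readers unfamiliar with beta-content, gamma-confidence tolerance intervals, the following sketch shows the simplest case: a single normal sample with Howe's approximate k-factor. The paper's extension to random effects models via an effective sample size and modified large sample (MLS) variance bounds is not reproduced here.

```python
# Two-sided beta-content, gamma-confidence tolerance interval for one
# normal sample (Howe's approximation for the k-factor).
import numpy as np
from scipy.stats import chi2, norm

def tolerance_interval(x, beta=0.90, gamma=0.95):
    """Interval expected to contain a proportion beta of the population,
    with confidence gamma (normal data assumed)."""
    x = np.asarray(x, dtype=float)
    n, df = x.size, x.size - 1
    z = norm.ppf((1.0 + beta) / 2.0)
    chi2_low = chi2.ppf(1.0 - gamma, df)     # lower-tail chi-square quantile
    k = z * np.sqrt(df * (1.0 + 1.0 / n) / chi2_low)
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

rng = np.random.default_rng(2)
sample = rng.normal(loc=100.0, scale=5.0, size=20)   # e.g. assay results at one level
print(tolerance_interval(sample))
```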
6.
The ICH E14 guidance recommends the use of a time-matched baseline, while others recommend alternative baseline definitions, including a day-averaged baseline. In this article we consider six models that adjust for baseline. We derive the explicit covariances and compare the models' power under various conditions; simulation results are provided. We conclude that type I error rates are controlled, but that one model outperforms the others in statistical power under certain conditions. In general, the analysis of covariance (ANCOVA) model using a day-averaged baseline is preferred. If the time-matched baseline must be used at the request of regulatory agencies, the by-time-point analysis using an ANCOVA model is recommended.
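A hedged sketch of the by-time-point ANCOVA mentioned in the conclusion: at each post-dose time point, the change from baseline in QTc is regressed on a treatment indicator with the (day-averaged) baseline as a covariate. The synthetic data, effect sizes, and column names are assumptions for illustration only.

```python
# By-time-point ANCOVA on synthetic QTc data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
time_points = [1, 2, 4, 8]                               # hours post-dose (assumed)
rows = []
for arm, true_effect in (("placebo", 0.0), ("drug", 6.0)):
    baselines = rng.normal(400.0, 15.0, size=30)         # day-averaged baseline QTc (ms)
    for b in baselines:
        for t in time_points:
            change = true_effect - 0.1 * (b - 400.0) + rng.normal(0.0, 8.0)
            rows.append({"drug": 1 if arm == "drug" else 0,
                         "baseline": b, "time": t, "qtc_change": change})
df = pd.DataFrame(rows)

# ANCOVA fitted separately at each time point: change ~ treatment + baseline
for t, sub in df.groupby("time"):
    fit = smf.ols("qtc_change ~ drug + baseline", data=sub).fit()
    ci = fit.conf_int().loc["drug"]
    print(f"hour {t}: treatment effect = {fit.params['drug']:.2f} ms "
          f"(95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```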
7.
Purpose Typical acceptance criteria for analytical methods are not chosen with regard to the concept of method suitability and are commonly based on ad-hoc rules. Such approaches yield unknown and uncontrolled risks of accepting unsuitable analytical methods and rejecting suitable analytical methods. This paper proposes a formal statistical framework for the validation of analytical methods, which incorporates the use of total error and controls the risks of incorrect decision-making. Materials and Methods A total error approach for method validation based on the use of two-sided β-content tolerance intervals is proposed. The performance of the proposed approach is compared to the performance of current ad-hoc approaches via simulation techniques. Results The current ad-hoc approaches for method validation fail to control the risk of incorrectly accepting unsuitable analytical methods. The proposed total error approach controls the risk of incorrectly accepting unsuitable analytical methods and provides adequate power to accept truly suitable methods. Conclusion Current ad-hoc approaches to method validation are inconsistent with ensuring method suitability. A total error approach based on the use of two-sided β-content tolerance intervals was developed. The total error approach offers a formal statistical framework for assessing analytical method performance. The approach is consistent with the concept of method suitability and controls the risk of incorrectly accepting unsuitable analytical methods.
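One way a total-error acceptance decision can be framed with a two-sided β-content tolerance interval is sketched below: the method is accepted at a concentration level only if the interval for the relative error (bias and precision combined) lies entirely within the acceptance limits. The content level, confidence level, and ±15% limits are illustrative assumptions, not values taken from the paper.

```python
# Total-error style acceptance decision using a beta-content tolerance interval.
import numpy as np
from scipy.stats import chi2, norm

def method_accepted(relative_errors, beta=0.667, gamma=0.90, limit=0.15):
    e = np.asarray(relative_errors, dtype=float)
    n, df = e.size, e.size - 1
    z = norm.ppf((1.0 + beta) / 2.0)
    k = z * np.sqrt(df * (1.0 + 1.0 / n) / chi2.ppf(1.0 - gamma, df))  # Howe's approximation
    lower = e.mean() - k * e.std(ddof=1)
    upper = e.mean() + k * e.std(ddof=1)
    # accept only if the whole interval sits inside the acceptance limits
    return (-limit <= lower) and (upper <= limit)

rng = np.random.default_rng(4)
errors = rng.normal(loc=0.02, scale=0.05, size=24)   # 2% bias, 5% precision (assumed)
print("method accepted at this level:", method_accepted(errors))
```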
8.
The proposed guidelines for assessing the effect of new pharmaceutical agents on the QT interval (from the beginning of the QRS complex to the end of the T wave on the electrocardiogram) are based on the maximum of a series over time of simple one-sided 95 per cent upper confidence bounds. As a way of obtaining a 95 per cent bound on the maximum of the population parameters, this procedure is typically very conservative. This paper proposes new bounds for the maximum, both analytical and bootstrap-based, that are lower but still achieve correct coverage in the context of crossover and parallel designs for the most realistic portions of the parameter space.
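The contrast described above can be illustrated with synthetic parallel-group data: the guidance-style bound takes the maximum of pointwise one-sided 95 per cent upper bounds, while a simple (naive percentile) bootstrap bound targets the maximum mean directly. The bootstrap scheme shown is one reasonable choice for illustration, not necessarily the construction developed in the paper.

```python
# Comparing the "max of pointwise upper bounds" with a naive bootstrap bound
# on the maximum mean QTc effect over time points (synthetic data, assumptions only).
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(5)
n_subjects, n_times = 40, 6
# per-subject placebo-corrected QTc differences (ms) at each time point
data = rng.normal(loc=3.0, scale=8.0, size=(n_subjects, n_times))

# (a) guidance-style bound: maximum of the pointwise one-sided 95% upper bounds
means = data.mean(axis=0)
sds = data.std(axis=0, ddof=1)
pointwise_upper = means + t.ppf(0.95, n_subjects - 1) * sds / np.sqrt(n_subjects)
print("max of pointwise upper bounds:", round(float(pointwise_upper.max()), 2))

# (b) naive percentile bootstrap bound on the maximum mean itself
boot_max = np.empty(5000)
for b in range(boot_max.size):
    resample = data[rng.integers(0, n_subjects, n_subjects)]
    boot_max[b] = resample.mean(axis=0).max()
print("bootstrap 95th percentile of the maximum mean:",
      round(float(np.quantile(boot_max, 0.95)), 2))
```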