Particle size analysis in the pharmaceutical industry has long been a source of debate regarding how best to define measurement accuracy: the degree to which the result of a measurement or calculation conforms to the true value. Defining a "true" value for the size of a particle is challenging because the output of its measurement will differ with variations in measurement approach, instrument and calculation method. Consequently, for "real" particles a universal "true" value does not exist, and accuracy is therefore not a definable characteristic; precision then becomes a measure of the ability to reproducibly achieve a measurement of unknown relevance. This article proposes, in place of accuracy, a means to define the "appropriateness" of a measurement in line with the critical quality attributes (CQAs) of the material being characterized. The decision as to whether a measurement is correct should involve a link to the CQA; that is, correlation should be demonstrated, without which the measured particle size cannot be defined as a critical material attribute. Correspondingly, methods should also provide sufficient precision to demonstrate discrimination relating to variation in the CQA. The benefits and challenges of this approach are discussed.
Introduction
Predicting pathological complete response (pCR) for patients receiving neoadjuvant chemotherapy (NAC) is crucial in establishing individualized treatment. Whole-slide images (WSIs) of tumor tissues reflect the histopathologic information of the tumor, which is important for assessing therapeutic response. In this study, we aimed to investigate whether predictive information for pCR could be detected from WSIs.
Materials and methods
We retrospectively collected data from four cohorts of 874 patients with biopsy-proven breast cancer. A deep learning pathological model (DLPM) was constructed to predict pCR using biopsy WSIs in the primary cohort, and it was then validated in three external cohorts. The DLPM generates a deep learning pathological score (DLPs) for each patient; stromal tumor-infiltrating lymphocytes (TILs) were selected for comparison with DLPs.
Results
The WSI feature-based DLPM showed good predictive performance, with the highest area under the curve (AUC) of 0.72 among the cohorts. The combination of the DLPM and clinical characteristics offered better prediction performance (AUC > 0.70) in all cohorts. We also evaluated the performance of the DLPM in three breast cancer subtypes, with the best prediction for the triple-negative breast cancer (TNBC) subtype (AUC: 0.73). Moreover, the DLPM combined with clinical characteristics and stromal TILs achieved the highest AUC in the primary cohort (AUC: 0.82) and validation cohort 1 (AUC: 0.80).
Conclusion
Our study suggests that WSIs integrated with deep learning could potentially predict pCR to NAC in breast cancer. The predictive performance is improved by combining clinical characteristics, and DLPs from the DLPM provide more information than stromal TILs for pCR prediction.
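As a point of reference for the AUC values reported in the abstract above, the AUC equals the probability that a randomly chosen responder receives a higher model score than a randomly chosen non-responder. A minimal pure-Python sketch of this rank-based (Mann-Whitney) definition, using made-up scores rather than data from the study:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs ranked correctly, ties counting half."""
    wins = sum(1 for p in pos_scores for n in neg_scores if p > n)
    ties = sum(1 for p in pos_scores for n in neg_scores if p == n)
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores: pCR patients vs. non-pCR patients
print(auc([0.9, 0.8, 0.7], [0.1, 0.4, 0.75]))  # 8 of 9 pairs correct, ≈ 0.89
```

A score of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation.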
Purpose of the study: The aim of this study was to synthesize prefrontal cortex (PFC) functional near-infrared spectroscopy (fNIRS) outcomes on the effects of cognitive tasks compared to resting/baseline tasks in healthy adults from studies utilizing a pre/post design.
Material and methods: Original research studies were searched for in seven databases (MEDLINE, EMBASE, CENTRAL, CINAHL, SCOPUS, PEDro and PubMed). Two independent reviewers then screened titles and abstracts, followed by full-text reviews, to assess the studies' eligibility.
Results: Eleven studies met the inclusion criteria and had their data abstracted and quality assessed. Although methodology varied considerably, cognitive tasks resulted in ΔO2Hb increasing in 8 of the 11 studies and ΔHHb decreasing in all 8 studies that reported this outcome. The cognitive tasks in 10 of the 11 studies were classified as "Working Memory" or "Verbal Fluency" tasks.
Conclusions: Although data comparison was challenging given the heterogeneity in methodology, the results across studies were similar.
Introduction
Although blood transfusion is common in burns, data on appropriate transfusion thresholds are lacking. It has been reported that a restrictive blood transfusion policy decreases blood utilization and improves outcomes in critically ill adults, but the impact of a restrictive blood transfusion policy in burn patients is unclear. We decided to investigate the outcome of decreasing the blood transfusion threshold.
Material and methods
Eighty patients with total body surface area (TBSA) burns > 20% who met our inclusion criteria were included and randomly divided into control and intervention groups. The intervention (restrictive) group received packed cells only when hemoglobin declined to less than 8 g/dL at routine laboratory evaluations, whereas the control (liberal) group received packed cells when hemoglobin declined to less than 10 g/dL. The total number of packed-cell units received before, during and after any surgical procedure was recorded. The outcome was measured by evaluation of the infection rate and other complications.
Results
The mean hemoglobin level before transfusion was 7.7 ± 0.4 g/dL in the restrictive group and 8.8 ± 0.7 g/dL in the liberal group. The mean number of RBC units transfused per patient in the restrictive group was significantly lower than in the liberal group (3.28 ± 2.2 vs. 5.9 ± 3.7 units; p = 0.006). The total number of RBC units transfused differed significantly between the two groups (p = 0.014), as did the number of RBC units transfused outside the operating room (restrictive: 2.8 ± 1.4 vs. liberal: 4.4 ± 2.6 units; p = 0.004). We did not find any significant difference in mortality rate or other outcome measures between groups.
Conclusion
Applying a restrictive transfusion strategy in thermal burn patients, who are highly prone to infection, does not adversely impact patient outcome, and results in significant cost savings to the institution and a lower rate of infection. We conclude that restrictive transfusion practice during burn excision and grafting is well tolerated and effective in reducing the number of transfusions without increasing complications.
Clinical Trial Registration Reference
IRCT20190209042660N1.
Purpose
Attempts by magnetic resonance (MR) manufacturers to help imaging centres improve patient throughput have led to the development of more automated acquisition software. This software is capable of customizing individual scan alignment, potentially improving imaging efficiency and standardizing protocols. However, substantial investments are required to introduce such systems, potentially deterring their widespread adoption. This study assessed the implementation costs and reduction in examination durations for automated knee MR imaging (MRI) software.
Materials and Methods
Research activities were performed at a community-based academic centre on a 3-Tesla (3-T) system using Siemens' Day Optimizing Throughput (Dot) knee software. Examination acquisition times were extracted from the system before and after software implementation. Fiscal year 2012/13 finances were used to determine the average hourly cost of MRI utilization, and the costs associated with implementing the automated software were also calculated. Finally, the number of knee scans required to achieve a positive return on investment with the software was established.
Results and Discussion
The mean (standard deviation, sample size) pre- and post-Dot scan times were 23.20 (4.18, n = 266) and 21.94 (4.51, n = 59) minutes, respectively, for a routine knee scan and 11.88 (1.60, n = 74) and 11.24 (1.51, n = 27) minutes, respectively, for a fast knee scan. The overall weighted average amounted to a 64-second time savings per automated knee examination; a time savings this small would be extremely difficult to exploit clinically. Dot did, however, simplify 29 unique knee protocols to two, improving the consistency of knee examinations. Current Dot software is not compatible with all patients and therefore has limitations that concern MR technologists.
Conclusion
Adoption of automated knee systems could assist in standardizing protocols; however, the cost of implementation and the difficulty of modifying patient scheduling to reflect the minimal time savings make a financial return unlikely at small- and medium-sized institutions.
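The return-on-investment logic described in this abstract can be sketched as a break-even calculation: value each scan's time savings at the hourly MRI utilization cost and divide it into the implementation cost. The dollar figures below are hypothetical, as the abstract does not report them; only the 64-second savings per scan comes from the study.

```python
import math

def break_even_scans(implementation_cost, hourly_mri_cost, seconds_saved_per_scan):
    """Number of scans at which cumulative time savings (valued at the
    hourly MRI utilization cost) pay back the software implementation cost."""
    saving_per_scan = hourly_mri_cost * seconds_saved_per_scan / 3600.0
    return math.ceil(implementation_cost / saving_per_scan)

# Hypothetical figures: $15,000 implementation cost, $600/h MRI utilization,
# and the study's 64 s saved per automated knee examination.
print(break_even_scans(15000, 600, 64))  # 1407 scans
```

With savings of roughly $10.67 per scan under these assumptions, well over a thousand knee examinations would be needed before the software pays for itself, which illustrates why the authors judge a financial return unlikely at smaller institutions.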
Introduction
This study aims to construct learning curves for the completion of standardized postprocessing by radiographer students and to discuss their use and interest.
Materials and Methods
This study was carried out with 21 French students in their third year of training. Two CT postprocessing protocols (#1 traumatic shoulder; #2 petrous bone) were each repeated 15 times by every student. Each attempt was timed to obtain overall learning curves, and the accuracy of each repetition was also assessed for each student.
Results
The learning rates for the two protocols are 63% and 56%, respectively. The number of repetitions needed to reach the reference time is 11 and 12, respectively. In both protocols, the standard deviations decrease significantly and stabilize over the repetitions. Mean accuracy progresses more quickly in protocol #1.
Discussion
The measured learning rates reflect a rapid learning process for each protocol. The analysis of the standard deviations shows that the students reached a homogeneous level, and the average times and accuracies measured during the last repetitions show that the group reached a high level of performance. Building learning curves helps students measure their progress and motivates them.
Conclusion
Obtaining learning curves allows trainers/supervisors to qualify the learning difficulty of a task while motivating students/radiographers. The use of learning curves is in line with the competency-based training paradigm.
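Learning rates like the 63% and 56% reported above are conventionally read through the Wright power-law learning-curve model, in which each doubling of the number of repetitions multiplies task time by the learning rate r. A minimal sketch under that assumption (the study's exact fitting procedure is not stated, and the times below are illustrative, not the study's data):

```python
import math

def time_at_repetition(n, first_time, learning_rate):
    """Wright model: T_n = T_1 * n**log2(r), so T_(2n) = r * T_n."""
    return first_time * n ** math.log2(learning_rate)

def repetitions_to_reach(target_time, first_time, learning_rate):
    """Smallest repetition count whose predicted time is <= target_time."""
    b = math.log2(learning_rate)
    return math.ceil((target_time / first_time) ** (1 / b))

# Illustrative: 100 s first attempt, 63% learning rate, 20 s reference time
print(time_at_repetition(2, 100, 0.63))     # ≈ 63.0 s: doubling reps scales time by r
print(repetitions_to_reach(20, 100, 0.63))  # 12 repetitions
```

Under these illustrative numbers, a 63% learning rate brings the predicted time below one fifth of the first attempt within about a dozen repetitions, consistent in spirit with the 11-12 repetitions the study reports.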