Similar Articles

20 similar articles found.
1.
Abstract

This study investigated the inter-rater and intra-rater reliability of subjective judgements of cough in patients following inhalation of citric acid. Eleven speech-language pathologists (SLPs) currently using cough reflex testing in their clinical practice (experienced raters) and 34 SLPs with no experience using cough reflex testing (inexperienced raters) were recruited to the study. Participants rated 10 video segments of cough responses elicited by inhalation of nebulized citric acid as strong, weak, or absent. The same video segments, presented in a different sequence, were re-evaluated by the same clinicians following a 15-minute break. Inter-rater reliability for experienced raters yielded a Fleiss' generalized kappa of .487; intra-rater reliability was higher, with a kappa of .700. Inexperienced raters showed similar reliability, with kappa values of .363 (inter-rater) and .618 (intra-rater). In conclusion, SLPs demonstrate only fair-to-moderate reliability in subjectively judging a patient's cough response to citric acid. Experience in making cough judgements does not significantly improve inter-rater reliability. Further validity and reliability research, including an evaluation of the effect of training on judgement reliability, would help guide clinical policies.
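The abstract above summarizes multi-rater agreement with Fleiss' generalized kappa, which extends Cohen's kappa to more than two raters. A minimal pure-Python sketch of that statistic, using invented rating counts rather than data from the study:

```python
# Hedged illustration (invented counts, not data from the study above):
# Fleiss' generalized kappa for agreement among n raters on N subjects.

def fleiss_kappa(table):
    """table: one row per subject, giving how many raters chose each
    category; every row must sum to the same number of raters."""
    n_subjects = len(table)
    n_raters = sum(table[0])

    # Observed agreement: mean over subjects of pairwise rater agreement
    p_subject = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_subject) / n_subjects

    # Chance agreement from the marginal category proportions
    n_categories = len(table[0])
    p_cat = [
        sum(row[j] for row in table) / (n_subjects * n_raters)
        for j in range(n_categories)
    ]
    p_e = sum(p * p for p in p_cat)

    return (p_bar - p_e) / (1 - p_e)

# 3 raters classify 4 cough videos into 2 categories (counts per category)
ratings = [[2, 1], [1, 2], [3, 0], [0, 3]]
print(round(fleiss_kappa(ratings), 4))  # 0.3333
```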

2.
The intra- and inter-rater reliability of a motor function evaluation of stroke patients, based on the Bobath approach, was studied. The intraclass correlation coefficient (ICC) was used to determine the degree of agreement between repeated measurements on the same patient taken by the same rater and between measurements taken by three raters on the same patient. In the intra-rater study, each of 19 patients was evaluated in three different sessions by one of 19 raters. In the inter-rater study 18 patients were each evaluated by three different raters. The intra-rater data were highly reliable, with ICCs of 0.95 and 0.97 for the upper and lower limbs respectively. For the inter-rater study, the ICCs were 0.79 and 0.77 for the upper and lower limbs respectively. It can therefore be concluded that this instrument, previously demonstrated to quantify patient progress, is also reliable both in intra- and inter-rater dimensions.
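Many of these studies report intraclass correlation coefficients. A sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single measures), computed from the ANOVA mean squares; the scores below are invented, and the exact ICC model each study used may differ:

```python
import numpy as np

# Hedged sketch of ICC(2,1) from a subjects x raters score matrix.
# The data are invented; studies may use other ICC forms (e.g. ICC(3,k)).

def icc_2_1(scores):
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    rows = y.mean(axis=1)   # subject means
    cols = y.mean(axis=0)   # rater means

    msr = k * ((rows - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)   # between raters
    resid = y - rows[:, None] - cols[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))    # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

scores = [[4, 4], [2, 3], [5, 5], [3, 3], [1, 2]]  # 5 patients, 2 raters
print(round(icc_2_1(scores), 3))  # 0.897
```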

3.
OBJECTIVE: To examine the reliability of the Assessment of Capacity for Myoelectric Control (ACMC) in children and adults with a myoelectric prosthetic hand. DESIGN: Intra-rater and inter-rater reliability estimated from repeated assessments by 3 different raters. PATIENTS: A convenience sample of 26 subjects (11 males, 15 females) with upper limb reduction deficiency or amputation and myoelectric prosthetic hands were video-taped during a regular clinical visit for ACMC. Participants' ages ranged from 2 to 40 years. METHODS: After instruction, 3 occupational therapists with no, 10 weeks' and 15 years' clinical experience of myoelectric prosthesis training and follow-up independently rated the 30 ACMC items for each patient. The ratings were repeated after 2-4 weeks. Inter- and intra-rater reliability in items was examined using weighted kappa statistics and Rasch-measurement analyses. RESULTS: The mean intra-rater agreement in items was excellent (kappa 0.81) in the more experienced raters. Fit statistics showed too much variation in the least experienced rater, who also had only good (kappa 0.65) agreement in items. The stability of rater calibrations between first and second assessment showed that no rater varied beyond chance (>0.50 logit) in severity. The mean inter-rater agreement in items was fair: kappa 0.60 between the experienced raters and kappa 0.47 between the raters with no and 10 weeks' experience. CONCLUSION: Overall, agreement was higher among the more experienced raters, indicating that reliable ACMC measures require clinical experience of myoelectric prosthesis training.

4.
Background: Preseason movement screening can identify modifiable risk factors, deterioration of function, and potential for injury in baseball players. Limited resources and time-intensive testing procedures prevent high school coaches from accurately performing frequent movement screens on their players. Purpose: To establish the intra-rater and inter-rater reliability of a novel arm care screening tool, based on the concepts of the Functional Movement Screen (FMS™) and Selective Functional Movement Assessment (SFMA™), in high school coaches. Study Design: Methodological intra- and inter-rater reliability study. Methods: Thirty-one male high school baseball players (15.9 ± 1.06 years) were independently scored on the Arm Care Screen (ACS) by three examiners (two coaches, one physical therapist) in real time and again seven days later by reviewing video recordings of each player's initial screening performance. Results from each examiner were compared within and between raters using Cohen's kappa and percent absolute agreement. Results: Substantial to excellent intra-rater and inter-rater reliability was established among all raters for each component of the ACS. The mean Cohen's kappa coefficient for intra-rater reliability was 0.76 (95% confidence interval, 0.54-0.95), and percent absolute agreement ranged from 0.82-0.94 among all raters. Inter-rater reliability demonstrated a mean Cohen's kappa value of 0.89 (95% confidence interval, 0.77-0.99), while percent absolute agreement between raters ranged from 0.81-1.00. Intra- and inter-rater reliability did not differ between raters with various levels of movement screening experience (p>0.05). Conclusions: High school baseball coaches with limited movement screening experience can reliably score all three components of the ACS in less than three minutes with minimal training. Level of Evidence: Level 3, Reliability study.
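The ACS study above compares raters using Cohen's kappa and percent absolute agreement. A hedged sketch of both statistics for two raters; the pass/fail scores are invented, not ACS data:

```python
# Hedged sketch (invented scores): Cohen's kappa and percent agreement
# between two raters scoring the same items.

def percent_agreement(r1, r2):
    """Proportion of items on which two raters gave the same score."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa: chance-corrected two-rater agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

coach = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # rater 1 pass/fail scores
pt    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # rater 2 pass/fail scores
print(percent_agreement(coach, pt))       # 0.8
print(round(cohen_kappa(coach, pt), 4))   # 0.5833
```

Percent agreement ignores chance agreement, which is why kappa (0.58 here) is lower than raw agreement (0.80) whenever the category marginals make some agreement likely by luck.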

5.
《Disability and rehabilitation》2013,35(19-20):1797-1804
The WorkHab Functional Capacity Evaluation (FCE) is widely used in Australian workplace injury management and occupational rehabilitation arenas; however, there is a lack of published literature regarding its reliability and validity.

Purpose. This study investigated the intra- and inter-rater reliability of the manual handling component of this FCE.

Method. A DVD was produced containing footage of the manual handling components of the WorkHab conducted with four injured workers. Therapist raters (n = 17) who were trained and accredited in use of the WorkHab FCE scored these components, and 14 raters re-evaluated them after approximately 2 weeks. Ratings were compared using intraclass correlation coefficients (ICCs), paired sample t-tests (intra-rater), chi-squared tests (inter-rater) and percentage agreement.

Results. Intra-rater agreement was high, with ICCs for the manual handling components and manual handling score showing excellent reliability (0.94–0.98) and good reliability for identification of the safe maximal lift (ICC: 0.81). Overall inter-rater agreement ranged from good to excellent for the manual handling components and safe maximal lift determination (ICC > 0.9). Agreement for safe maximal lift identification was good.

Conclusions. Ratings demonstrated substantial levels of intra-rater and inter-rater reliability for the lifting components of the WorkHab FCE.

6.
Objective: To investigate the inter-rater reliability, intra-rater reliability, and internal consistency of the Action Research Arm Test (ARAT) in patients with stroke. Methods: Two raters independently assessed 30 patients with chronic stroke using the ARAT, and the inter-rater reliability, intra-rater reliability, and internal consistency of the test were examined. Results: The ICC values for inter-rater and intra-rater reliability of the ARAT were 0.992 and 0.987, respectively. Cronbach's α for internal consistency was 0.936. Conclusion: The ARAT shows good inter-rater reliability, intra-rater reliability, and internal consistency in patients with stroke.

7.
Abstract

Purpose: The reliability of the Modified Rivermead Mobility Index (MRMI) has not previously been investigated in the very early post-stroke phase. The aim of the study was to evaluate inter-rater and intra-rater reliability and internal consistency in patients 1–14 days post-stroke. Method: A cohort study with repeated measures within 24 h was conducted on 37 patients, 1–14 days post-stroke. Inter-rater (two raters) and intra-rater (one rater) reliability was analyzed using weighted kappa (κ) statistics, and internal consistency with Cronbach's alpha and intra-class correlation, ICC(3,k). Results: Inter-rater and intra-rater reliability was excellent (ICC coefficients 0.97 and 0.99) for the MRMI summary score. Intra-rater exact agreement for separate items was between 77% and 97%; κ between 0.81 and 0.96. Inter-rater exact agreement for separate items was between 68% and 92%; κ 0.59–0.87. Internal consistency was high (α 0.96; ICC(3,k) 0.99). Conclusion: The MRMI is a reliable measure of physical mobility in the early post-stroke phase.
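Item-level agreement in studies like the one above is reported as weighted kappa, which gives partial credit to near-miss disagreements on ordinal scores. A pure-Python sketch with linear disagreement weights; the item scores are invented, not MRMI data:

```python
# Hedged sketch (invented ordinal scores): Cohen's linearly weighted kappa
# for two raters scoring items on an ordinal 0..n_cats-1 scale.

def weighted_kappa(r1, r2, n_cats):
    """Linear-weighted kappa: 1 - observed/expected weighted disagreement."""
    n = len(r1)
    m1 = [r1.count(c) / n for c in range(n_cats)]  # rater 1 marginals
    m2 = [r2.count(c) / n for c in range(n_cats)]  # rater 2 marginals
    obs_dis = sum(abs(a - b) for a, b in zip(r1, r2)) / n
    exp_dis = sum(
        abs(i - j) * m1[i] * m2[j]
        for i in range(n_cats) for j in range(n_cats)
    )
    return 1 - obs_dis / exp_dis

rater1 = [0, 1, 2, 2, 3, 3, 1, 0]
rater2 = [0, 1, 2, 3, 3, 2, 1, 1]
print(round(weighted_kappa(rater1, rater2, 4), 4))  # 0.6842
```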

8.
Background: Three-dimensional (3D) motion analysis is considered the gold standard for evaluating human movement. However, its clinical utility is limited by cost, the operating expertise required, and lengthy data processing time. Numerous qualitative scoring systems have been introduced to assess trunk and lower extremity biomechanics during functional tasks, but the reliability of qualitative scoring systems for evaluating cutting movements is understudied. Purpose/Hypotheses: To assess the inter-rater and intra-rater reliability of the Cutting Alignment Scoring Tool (CAST) among sports medicine providers and to evaluate rater agreement on each component of the CAST. The hypotheses were: 1) there would be good-to-excellent inter-rater and intra-rater reliability among sports medicine providers; 2) there would be good to almost perfect agreement for cut width and trunk lean variables and moderate to good agreement for valgus variables of the CAST. Study Design: Repeated measures. Methods: Ten videos of a 45-degree side-step cut performed by adolescent athletes were independently rated on two occasions by six raters (2 medical doctors, 2 physical therapists, and 2 athletic trainers). The variables assessed were trunk lean in the direction opposite the cut, increased cut width, knee valgus at initial load acceptance (static), and knee valgus throughout the task (dynamic). Variables were scored as either present (score of "1") or not present (score of "0"). Video sequence was randomized in each rating session, and a two-week wash-out period was given. Results: The cumulative inter-rater and intra-rater reliabilities were good (ICC: 0.808 and ICC: 0.753). Almost perfect kappa coefficients were recorded for cut width (k=0.949). Moderate kappa coefficients were found for trunk lean (k=0.632), and fair kappa coefficients were noted for dynamic and static valgus (k=0.462 and k=0.533, respectively). Conclusion: These findings suggest that the CAST is a reliable tool for sports medicine providers to evaluate trunk and lower extremity (LE) alignment during a cutting task. Level of Evidence: Level 2, Diagnosis.

9.
OBJECTIVE: To translate the Basic Care Needs (BCN) section of the Northwick Park Dependency Score (NPDS) and to test its inter-rater and intra-rater reliability and concurrent validity. DESIGN: Test-retest reliability and validity testing. Observed data were collected by the staff (nursing staff, occupational therapists). SETTING: Three rehabilitation units. SUBJECTS: Forty inpatients between 16 and 65 years of age with brain injury were included. MAIN MEASURES: Inter-rater and intra-rater reliability was calculated by percentage agreement (PA) and the unweighted kappa measure. Concurrent validity was examined by computing Goodman-Kruskal's gamma, a nonparametric statistic for degree of association, between the BCN score and the total score of the Functional Independence Measure (FIM). RESULTS: Inter-rater reliability showed good percentage agreement between nursing staff. Between nursing staff and occupational therapists the percentage agreement was lower, especially on one item. Intra-rater reliability showed good percentage agreement for all assessors. Concordance was good, with a gamma of -0.83 and an asymptotic error (ase) of 0.04 for nursing staff, and -0.87 (ase 0.04) for occupational therapists. CONCLUSION: The BCN section of the NPDS was found to have inter-rater and intra-rater reliability and concurrent validity. The instrument can be used to assess dependency in individual patients with brain injury and to provide information when patients transfer between caregivers. Further studies are needed on clinical utility and should investigate the sensitivity of the instrument.

10.
OBJECTIVE: To investigate the intra-rater and inter-rater reliability of the Erasmus MC modifications to the Nottingham Sensory Assessment (EmNSA). SUBJECTS: A consecutive sample of 18 inpatients, with a mean age of 57.7 years, diagnosed with an intracranial disorder and referred for physiotherapy. SETTING: The inpatient neurology and neurosurgery wards of a university hospital. DESIGN: Through discussions between four experienced neurophysiotherapists, the testing procedures of the revised Nottingham Sensory Assessment were further standardized. Subsequently, the intra-rater and inter-rater reliabilities of the EmNSA were investigated. RESULTS: The intra-rater reliability of the tactile sensations, sharp-blunt discrimination and proprioception items of the EmNSA was generally good to excellent for both raters, with weighted kappa coefficients ranging between 0.58 and 1.00. Likewise, the inter-rater reliabilities of these items were predominantly good to excellent, with weighted kappa coefficients ranging between 0.46 and 1.00. An exception was two-point discrimination, which had poor to good reliability, with ranges of 0.11 to 0.63 for intra-rater reliability and -0.10 to 0.66 for inter-rater reliability. CONCLUSION: The EmNSA is a reliable screening tool to evaluate primary somatosensory impairments in neurological and neurosurgical inpatients with intracranial disorders. Further research is necessary to consolidate these results and establish the validity and responsiveness of the Erasmus MC modifications to the NSA.

11.
[Purpose] The aim of this study was to determine the inter-rater and intra-rater reliability of mandibular range of motion (ROM) measurements taken in the neutral craniocervical position. [Subjects and Methods] The sample consisted of 50 asymptomatic subjects. Two raters measured four mandibular ROMs (maximal mouth opening (MMO), lateral excursions, and protrusion) using the craniomandibular scale. Subjects alternated between raters, receiving two complete trials per day, two days apart. Intra- and inter-rater reliability was determined using intra-class correlation coefficients (ICCs). Bland-Altman analysis was used to assess reliability, bias, and variability. Finally, the standard error of measurement (SEM) and minimal detectable change (MDC) were analyzed to measure responsiveness. [Results] Reliability was good for MMO (inter-rater, ICC = 0.95–0.96; intra-rater, ICC = 0.95–0.96) and for protrusion (inter-rater, ICC = 0.92–0.94; intra-rater, ICC = 0.93–0.96). Reliability was moderate for lateral excursions. The MMO and protrusion SEMs ranged from 0.74 to 0.82 mm and from 0.29 to 0.49 mm, while the MDCs ranged from 1.73 to 1.91 mm and from 0.69 to 0.14 mm, respectively. The analysis showed no random or systematic error, suggesting that a learning effect did not affect reliability. [Conclusion] A standardized protocol for assessment of mandibular ROM in a neutral craniocervical position obtained good inter- and intra-rater reliability for MMO and protrusion and moderate inter- and intra-rater reliability for lateral excursions. Key words: Reliability, Range of motion, Temporomandibular joint
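The SEM and MDC values above follow the usual formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A short sketch; the SD and ICC inputs below are invented examples, not values from the study:

```python
import math

# Hedged sketch of the standard SEM and MDC95 formulas reported alongside
# reliability studies; the SD and ICC values here are invented examples.

def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem_value

s = sem(3.2, 0.95)  # e.g. a hypothetical mouth-opening SD of 3.2 mm, ICC 0.95
print(round(s, 2), round(mdc95(s), 2))  # 0.72 1.98
```

A change smaller than the MDC cannot be distinguished from measurement noise, which is why responsiveness analyses report both numbers together.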

12.
Reliable methods of measuring turnout in dancers and comparing active turnout (used in class) with functional (uncompensated) turnout are needed. Authors have suggested measurement techniques but there is no clinically useful, easily reproducible technique with established inter-rater and intra-rater reliability. We adapted a technique based on previous research, which is easily reproducible. We hypothesized excellent inter-rater and intra-rater reliability between experienced physical therapists (PTs) and a briefly trained faculty member from a university’s department of dance. Thirty-two participants were recruited from the same dance department. Dancers’ active and functional turnout was measured by each rater. We found that our technique for measuring active and functional turnout has excellent inter-rater and intra-rater reliability when performed by two experienced PTs and by one briefly trained university-level dance faculty member. For active turnout, inter-rater reliability was 0.78 among all raters and 0.82 among only the PT raters; intra-rater reliability was 0.82 among all raters and 0.85 among only the PT raters. For functional turnout, inter-rater reliability was 0.86 among all raters and 0.88 among only the PT raters; intra-rater reliability was 0.87 among all raters and 0.88 among only the PT raters. The measurement technique described provides a standardized protocol with excellent inter-rater and intra-rater reliability when performed by experienced PTs or by a briefly trained university-level dance faculty member.

13.
The use of ultrasound (US) to perform quantitative measurements of musculoskeletal tissues requires accurate and reliable measurements between investigators and ultrasound machines. The objective of this study was to evaluate inter-rater and intra-rater reliability of patellar tendon measurements between providers with different levels of US experience and inter-machine reliability of US machines. Sixteen subjects without a history of knee pain were evaluated with US examinations of the patellar tendon. Each tendon was scanned independently by two investigators using two different ultrasound machines. Tendon length and cross-sectional area (CSA) were obtained, and examiners were blinded to each other's results. Tendon length was measured using a validated system involving surface markers and calipers, and CSA was measured using each machine's measuring software. Intra-class correlation coefficients (ICCs) were used to determine reliability of measurements between observers, where ICC > 0.75 was considered good and ICC > 0.9 was considered excellent. Inter-rater reliability between sonographers was excellent and revealed an ICC of 0.90 to 0.92 for patellar tendon CSA and an ICC of 0.96 for tendon length. ICC for intra-rater reliability of tendon CSA was also generally excellent, with ICC between 0.87 and 0.96. Inter-machine reliability was excellent, with ICC of 0.91–0.98 for tendon CSA and 0.96–0.98 for tendon length. Bland–Altman plots were constructed to measure validity and demonstrated a mean difference between sonographers of 0.03 mm2 for CSA measurements and 0.2 mm for tendon length. Using well-defined scanning protocols, a novice and an experienced musculoskeletal sonographer attained high levels of inter-rater agreement, with similarly excellent results for intra-rater and inter-machine reliability. To our knowledge, this study is the first to report inter-machine reliability in the setting of quantitative musculoskeletal ultrasound.

14.
OBJECTIVE: Physiological track-and-trigger warning systems are used to identify patients on acute wards at risk of deterioration, as early as possible. The objective of this study was to assess the inter-rater and intra-rater reliability of the physiological measurements, aggregate scores and triggering events of three such systems. DESIGN: Prospective cohort study. SETTING: General medical and surgical wards in one non-university acute hospital. PATIENTS AND PARTICIPANTS: Unselected ward patients: 114 patients in the inter-rater study and 45 patients in the intra-rater study were examined by four raters. MEASUREMENTS AND RESULTS: Physiological observations obtained at the bedside were evaluated using three systems: the medical emergency team call-out criteria (MET); the modified early warning score (MEWS); and the assessment score of sick-patient identification and step-up in treatment (ASSIST). Inter-rater and intra-rater reliability were assessed by intra-class correlation coefficients, kappa statistics and percentage agreement. There was fair to moderate agreement on most physiological parameters, and fair agreement on the scores, but better levels of agreement on triggers. Reliability was partially a function of simplicity: MET achieved a higher percentage of agreement than ASSIST, and ASSIST higher than MEWS. Intra-rater reliability was better than inter-rater reliability. Using corrected calculations improved the level of inter-rater agreement but not intra-rater agreement. CONCLUSION: There was significant variation in the reproducibility of different track-and-trigger warning systems. The systems examined showed better levels of agreement on triggers than on aggregate scores. Simpler systems had better reliability. Inter-rater agreement might improve by using electronic calculations of scores.

15.
OBJECTIVE: To study the concurrent validity and the inter-rater reliability of the Post-Concussion Symptoms Questionnaire. DESIGN: The approach was to study the concurrent validity of the Post-Concussion Symptoms Questionnaire when used as an interview questionnaire compared with a self-report questionnaire administered by the patients. The inter-rater reliability was also studied when 2 different raters administered the Post-Concussion Symptoms Questionnaire interview. PATIENTS: Thirty-five patients with mild traumatic brain injury were consecutively contacted by telephone and asked whether they would be willing to participate in a follow-up intervention. METHODS: The Post-Concussion Symptoms Questionnaire was completed by the patients, who answered "Yes" or "No" to the standardized questions. The patients were then interviewed, 0-10 days after completing the first Post-Concussion Symptoms Questionnaire, to verify their "Yes" or "No" answers. The raters filled in their ratings independently. RESULTS: The concurrent validity of answers in the questionnaire compared with those in the interview ranged from 82% to 100% agreement. The inter-rater reliability results ranged from 93% to 100% agreement between the raters. CONCLUSION: The Post-Concussion Symptoms Questionnaire with answers of "Yes" or "No" is a valid instrument. High reliability was found between the raters.

16.
Background: Current clinical screening tools assessing risky movements during cutting maneuvers do not adequately address sagittal plane foot and ankle evaluations. The Cutting Alignment Scoring Tool (CAST) is reliable in evaluating frontal plane trunk and lower extremity alignment during a 45-degree side-step cut. The Expanded Cutting Alignment Scoring Tool (E-CAST) adds two sagittal plane variables, knee flexion and ankle plantarflexion angle. Hypothesis/Purpose: To assess the inter- and intra-rater reliability of the E-CAST for evaluating trunk and lower extremity alignment during a 45-degree side-step cut. Study Design: Repeated measures. Methods: Participants included 25 healthy females (13.8 ± 1.4 years) regularly participating in cutting or pivoting sports. Participants were recorded performing a side-step cut in the frontal and sagittal planes. One trial was randomly selected for analysis. Two physical therapists independently scored each video using the E-CAST on two separate occasions, with randomization and a two-week wash-out between rounds. Observed movement variables were awarded a score of "1", with higher scores representing poorer technique. Intraclass correlation coefficients (ICC) and 95% confidence intervals (95% CI) were calculated for the total score, and a kappa coefficient (k) was calculated for each variable. Results: The cumulative intra-rater reliability was good (ICC=0.78, 95% CI 0.59-0.96) and the cumulative inter-rater reliability was moderate (ICC=0.71, 95% CI 0.50-0.91). Intra-rater kappa coefficients ranged from moderate to excellent for all variables (k=0.50-0.84), and inter-rater kappa coefficients ranged from slight to excellent for all variables (k=0.20-0.90). Conclusion: The addition of two sagittal plane variables resulted in a lower inter-rater ICC compared to the CAST (ICC=0.81, 95% CI 0.64-0.91). The E-CAST is a reliable tool to evaluate trunk and LE alignment during a 45-degree side-step cut, with good intra-rater and moderate inter-rater reliability. Level of Evidence: Level 2, Diagnosis.

17.
The Posture Pro software is used for photogrammetry assessment of posture and has been commercially available for several years. Along with symmetry-related measures, a Posture Number® is calculated to reflect the sum of postural deviations. Our aim was to investigate the intra- and inter-rater reliability of measures extracted using the Posture Pro 8 software without using reference markers on subjects. Four raters assessed the standing posture of 40 badminton players (20 males, 20 females) from anterior, lateral, and posterior photographs. Thirty-three postural measures were extracted using visual landmarks as guide. Reliability was quantified using intra-class correlation coefficient (ICC) and typical error of measurement (TEM). Overall, the intra-rater reliability was considered good to excellent for nearly all measures. However, only two measures had excellent inter-rater reliability, with 13 and 18 measures exhibiting good and fair inter-rater reliability, respectively. Posture Pro specific measures (n = 9) exhibited good-to-excellent intra-rater and fair-to-excellent inter-rater reliability, with small-to-moderate and small-to-large TEM, respectively. Overall, the Posture Pro 8 software can be considered a reliable tool for assessing a range of posture-relevant measures from photographs, particularly when performed by the same examiner. The Posture Number® demonstrated generally acceptable intra- and inter-rater reliability. Nonetheless, investigations on the validity, sensitivity, and interpretation of this measure are essential to confirm its clinical relevance.

18.
BACKGROUND: The Naranjo criteria are frequently used for determination of causality for suspected adverse drug reactions (ADRs); however, the psychometric properties have not been studied in the critically ill. OBJECTIVE: To evaluate the reliability and validity of the Naranjo criteria for ADR determination in the intensive care unit (ICU). METHODS: All patients admitted to a surgical ICU during a 3-month period were enrolled. Four raters independently reviewed 142 suspected ADRs using the Naranjo criteria (review 1). Raters evaluated the 142 suspected ADRs 3-4 weeks later, again using the Naranjo criteria (review 2). Inter-rater reliability was tested using the kappa statistic. The weighted kappa statistic was calculated between reviews 1 and 2 for the intra-rater reliability of each rater. Cronbach alpha was computed to assess the inter-item consistency correlation. The Naranjo criteria were compared with expert opinion for criterion validity for each rater and reported as a Spearman rank (r(s)) coefficient. RESULTS: The kappa statistic ranged from 0.14 to 0.33, reflecting poor inter-rater agreement. The weighted kappa within raters was 0.5402-0.9371. The Cronbach alpha ranged from 0.443 to 0.660, which is considered moderate to good. The r(s) coefficient range was 0.385-0.545; all r(s) coefficients were statistically significant (p < 0.05). CONCLUSIONS: Inter-rater reliability is marginal; however, within-rater evaluation appears to be consistent. The inter-item correlation is expected to be higher since all questions pertain to ADRs. Overall, the Naranjo criteria need modification for use in the ICU to improve reliability, validity, and clinical usefulness.
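Cronbach's alpha, used above for inter-item consistency, can be computed directly from item variances and the variance of the total score. A hedged sketch with invented item scores, not Naranjo-criteria data:

```python
import numpy as np

# Hedged sketch (invented binary item scores): Cronbach's alpha from a
# subjects x items score matrix.

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

answers = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0]]
print(round(cronbach_alpha(answers), 3))  # 0.462
```

Alpha rises toward 1 as items covary more strongly; perfectly redundant items give exactly 1.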

19.

Objectives:

To evaluate intra-rater and inter-rater reliability and measurement error in glenohumeral range of motion (ROM) measurements using a standard goniometer.

Study design:

17 adult subjects with and without shoulder pathology were evaluated for active and passive range of motion. Fifteen shoulder motions were assessed by two raters to determine reliability. The intra-class correlation coefficients (ICC) were calculated and examined to determine if reliability of ICC ≥ 0.70 existed. The standard error of measurement (SEM) and the minimal clinical difference (MCD) were also calculated.

Results:

The criterion reliability was achieved in both groups for intra-rater reliability of standing AROM abduction; supine AROM and PROM abduction, flexion, and external rotation at 0° abduction; and for inter-rater reliability of supine AROM and PROM abduction and external rotation at 0° abduction. The SEM ranged from 4°-7° for intra-rater and 6°-9° for inter-rater agreement on movements that achieved the criterion reliability. The MCD ranged from 11°-16° for a single evaluator and 14°-24° for two evaluators.

Conclusions:

Assessment of AROM and PROM in supine achieves superior reliability. The use of either a single or multiple raters affects the number of movements that achieved clinically meaningful reliability. Some movements consistently did not achieve the criterion and may not be the best movements to monitor treatment outcome.

20.
OBJECTIVE: To assess the face validity and the inter-rater reliability of the Vein Assessment Tool (VAT) for classifying veins according to their level of intravenous insertion difficulty. DESIGN: Prospective observational study. PARTICIPANTS: Eight nurses and two radiographers from the Medical Imaging Department and five nurses from the Haematology Day Patient Unit of a large tertiary hospital. INTERVENTION: Assessments of veins in the upper limb were undertaken independently by nurses from two departments of a major tertiary hospital. MAIN OUTCOME MEASURE: Level of inter-rater agreement assessed using intraclass correlation coefficients (ICC). RESULTS: A total of 125 independent assessments were made by 15 nurses. The mean percentage agreement between raters from Medical Imaging was 84% (SD 10.7; range 60% to 100%) and between raters from Oncology was 92% (SD 17.9; range 60% to 100%). The inter-rater reliability was very high for the ten medical imaging raters, 0.83 (95% CI 0.61-0.95), and for the Oncology raters, 0.93 (95% CI 0.77-0.99). CONCLUSION: The Vein Assessment Tool (VAT) has been validated by a sample of nurses with cannulating experience. Following broader testing it may be useful for research studies or by nurses who wish to objectively describe the condition of a vein for clinical purposes.
