Similar articles (20 records found)
1.
OBJECTIVE: To develop and evaluate the validity and reliability of a multidimensional balance scale, the Fullerton Advanced Balance (FAB) scale, suitable for use with functionally independent older adults. DESIGN: Psychometric evaluation of the scale's content and convergent validity, test-retest and intra- and interrater reliability, and internal rater consistency. SETTING: Urban community. PARTICIPANTS: Forty-six community-residing older adults (mean ± SD, 75±6.2y), with identified balance problems (n=31) and without (n=15), participated in the study. Four physical therapists with expertise in the assessment and treatment of balance disorders in older adults also participated in the content validity and/or reliability phases of the study. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Spearman rank correlation coefficients for convergent validity, test-retest, intra- and interrater reliability, and homogeneity coefficient values for rater consistency. RESULTS: Test-retest reliability for the total balance scale score was high (rho = .96). Interrater reliability for the total score ranged from .94 to .97, whereas intrarater reliability coefficients ranged from .97 to 1.00. Homogeneity (H) coefficients were greater than .90 for 6 of the 10 individual test items, and all 10 test items had H coefficients greater than .75 for both rating sessions. CONCLUSIONS: Preliminary results suggest that the FAB scale is a valid and reliable assessment tool suitable for use with functionally independent older adults residing in the community.
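The test-retest and convergent-validity coefficients in this abstract are Spearman rank correlations between paired scores. A minimal sketch of how such a coefficient can be computed for two rating sessions; the scores below are hypothetical, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical total FAB scores for the same subjects on two occasions
# (illustrative values only).
session_1 = [34, 28, 39, 22, 31, 36, 25, 30]
session_2 = [33, 29, 40, 21, 30, 36, 26, 31]

rho, p_value = spearmanr(session_1, session_2)
print(f"test-retest Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```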

2.
OBJECTIVE: To examine the reliability of the Wolf Motor Function Test (WMFT) for assessing upper extremity motor function in adults with hemiplegia. DESIGN: Interrater and test-retest reliability. SETTING: A clinical research laboratory at a university medical center. PATIENTS: A sample of convenience of 24 subjects with chronic hemiplegia (onset >1yr), showing moderate motor impairment. INTERVENTION: The WMFT includes 15 functional tasks. Performances were timed and rated by using a 6-point functional ability scale. The WMFT was administered to subjects twice with a 2-week interval between administrations. All test sessions were videotaped for scoring at a later time by blinded and trained experienced therapists. MAIN OUTCOME MEASURE: Interrater reliability was examined by using intraclass correlation coefficients and internal consistency by using Cronbach's alpha. RESULTS: Interrater reliability was .97 or greater for performance time and .88 or greater for functional ability. Internal consistency for test 1 was .92 for performance time and .92 for functional ability; for test 2, it was .86 for performance time and .92 for functional ability. Test-retest reliability was .90 for performance time and .95 for functional ability. Absolute scores for subjects were stable over the 2 test administrations. CONCLUSION: The WMFT is an instrument with high interrater reliability, internal consistency, test-retest reliability, and adequate stability.
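Internal consistency here is Cronbach's alpha across the WMFT item scores. A minimal sketch of the standard alpha formula applied to a hypothetical subjects-by-items score matrix (values are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; `item_scores` is an (n_subjects, n_items) array."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical functional-ability ratings (0-5) for 6 subjects on 4 tasks.
ratings = np.array([
    [3, 4, 3, 4],
    [1, 2, 1, 2],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [0, 1, 1, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```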

3.
Reliability of the dynamic gait index in people with vestibular disorders
OBJECTIVE: To examine the interrater reliability of the Dynamic Gait Index (DGI) when used with patients with vestibular disorders and with previously published instructions. DESIGN: Correlational study. SETTING: Outpatient physical therapy clinic. PARTICIPANTS: Subjects included 30 patients (age range, 27-88y) with vestibular disorders, who were referred for vestibular rehabilitation. INTERVENTIONS: Subjects' performance on the DGI was concurrently rated by 2 physical therapists experienced in vestibular rehabilitation to determine interrater reliability. MAIN OUTCOME MEASURES: Percentage agreement, kappa statistics, and the ratio of subject variability to total variability were calculated for individual DGI items. Kappa statistics for individual items were averaged to yield a composite kappa score of the DGI. Total DGI scores were evaluated for interrater reliability by using the Spearman rank-order correlation coefficient. RESULTS: Interrater reliability of individual DGI items varied from poor to excellent based on kappa values (kappa range, .35-1.00). Composite kappa values showed good overall interrater reliability (kappa=.64) of total DGI scores. The Spearman rho demonstrated excellent correlation (r=.95) between total DGI scores given concurrently by the 2 raters. CONCLUSION: DGI total scores, administered by using the published instructions, showed moderate interrater reliability with subjects with vestibular disorders. The DGI should be used with caution in this population at this time, because of the lack of strong reliability.
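The per-item statistics described here (percentage agreement and kappa, with item kappas averaged into a composite) can be reproduced along these lines. The item names and ratings below are hypothetical, and averaging item kappas follows the abstract's description rather than a general convention:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical item scores (0-3) given by two raters to the same subjects.
rater_a = {"gait level surface": [3, 2, 3, 1, 2, 3, 0, 2],
           "gait with head turns": [2, 2, 3, 1, 1, 3, 0, 2]}
rater_b = {"gait level surface": [3, 2, 3, 1, 2, 3, 1, 2],
           "gait with head turns": [2, 1, 3, 1, 1, 3, 0, 2]}

item_kappas = []
for item in rater_a:
    a, b = np.array(rater_a[item]), np.array(rater_b[item])
    pct_agree = np.mean(a == b) * 100          # percentage agreement
    kappa = cohen_kappa_score(a, b)            # unweighted Cohen kappa
    item_kappas.append(kappa)
    print(f"{item}: {pct_agree:.0f}% agreement, kappa = {kappa:.2f}")

print(f"composite kappa (mean of item kappas) = {np.mean(item_kappas):.2f}")
```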

4.
OBJECTIVES: To estimate the test-retest and interrater reliability of the center of pressure-center of mass (COP-COM) variable of postural control in the elderly. DESIGN: The biomechanical variable COP-COM, which represents the distance between the COP and the COM, was determined from 2 AMTI force platforms and 3 OPTOTRAK position sensors. Measurements were taken in quiet position, double leg stance, and eyes open and eyes closed conditions. SETTING: Laboratory environment. PARTICIPANTS: Forty-five healthy subjects, 8 patients with diabetic neuropathy, and 7 stroke survivors, all of whom were at least 60 years old. INTERVENTIONS: Subjects were evaluated on 2 separate occasions within 7 days by the same evaluator to determine test-retest reliability. Interrater reliability was determined the same day. MAIN OUTCOME MEASURES: The biomechanical variable COP-COM, expressed in terms of root mean square. The mean of 4 trials of the COP-COM variable for each condition was used for statistical analysis. Intraclass correlation coefficients (ICCs) were used. RESULTS: The COP-COM variable has good reliability for both the test-retest and interrater studies, but its reliability varies according to the direction of the COP-COM. For the test-retest and interrater studies, the ICC ranged from .89 to .93 in the anteroposterior direction and from .74 to .79 in the mediolateral direction. CONCLUSION: The equivalence of the test-retest and interrater coefficients obtained suggests that the measurement error of the COP-COM variable is mainly linked to the biologic variability of this measure over a short period of time. Using the mean of 4 trials stabilizes the COP-COM variable enough to be potentially used to evaluate clinical change.
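The outcome here is a root mean square (RMS) of the instantaneous COP-COM distance, averaged over 4 trials per condition. A minimal sketch with simulated signals; the signal parameters and values are illustrative only, not the study's recordings:

```python
import numpy as np

def cop_com_rms(cop, com):
    """Root mean square of the instantaneous COP-COM distance for one trial."""
    d = np.asarray(cop, dtype=float) - np.asarray(com, dtype=float)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(0)
# Four simulated 30-s trials sampled at 100 Hz (anteroposterior direction, mm).
trials = []
for _ in range(4):
    com = np.cumsum(rng.normal(0, 0.05, 3000))   # slow COM drift
    cop = com + rng.normal(0, 2.0, 3000)         # COP oscillates about the COM
    trials.append(cop_com_rms(cop, com))

print(f"COP-COM RMS per trial (mm): {np.round(trials, 2)}")
print(f"mean of 4 trials (mm): {np.mean(trials):.2f}")
```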

5.
Purpose. To establish interrater and test–retest reliability of a clinical assessment of motor and sensory upper limb impairments in children with hemiplegic cerebral palsy aged 5–15 years.

Method. The assessments included passive range of motion (PROM), the Modified Ashworth Scale (MAS), manual muscle testing (MMT), grip strength, the House thumb and Zancolli classifications, and sensory function. Interrater reliability was investigated in 30 children, test–retest reliability in 23 children.

Results. For PROM, interrater reliability varied from moderate to moderately high (correlation coefficients 0.48–0.73) and test–retest reliability was very high (>0.81). For the MAS and MMT, the total score and the subscores for shoulder, elbow, and wrist showed moderately high to very high interrater reliability (0.60–0.91) and coefficients of >0.78 for test–retest reliability. Reliability for the individual muscles varied from moderate to high. The Jamar dynamometer was found to be highly reliable. The House thumb classification showed substantial reliability and the Zancolli classification almost perfect reliability. All sensory modalities showed good agreement.

Conclusions. For all motor and sensory assessments, interrater and test–retest reliability was moderate to very high. Test–retest reliability was clearly higher than interrater reliability. To improve interrater reliability, it was recommended to strictly standardize the test procedure, refine the scoring criteria, and provide intensive rater training.

6.
BACKGROUND AND PURPOSE: The Functional Gait Assessment (FGA) is a 10-item gait assessment based on the Dynamic Gait Index. The purpose of this study was to evaluate the reliability, internal consistency, and validity of data obtained with the FGA when used with people with vestibular disorders. SUBJECTS: Seven physical therapists from various practice settings, 3 physical therapist students, and 6 patients with vestibular disorders volunteered to participate. METHODS: All raters were given 10 minutes to review the instructions, the test items, and the grading criteria for the FGA. The 10 raters concurrently rated the performance of the 6 patients on the FGA. Patients completed the FGA twice, with an hour's rest between sessions. Reliability of total FGA scores was assessed using intraclass correlation coefficients (2,1). Internal consistency of the FGA was assessed using the Cronbach alpha and confirmatory factor analysis. Concurrent validity was assessed using the correlation of the FGA scores with balance and gait measurements. RESULTS: Intraclass correlation coefficients of .86 and .74 were found for interrater and intrarater reliability of the total FGA scores. Internal consistency of the FGA scores was .79. Spearman rank order correlation coefficients of the FGA scores with balance measurements ranged from .11 to .67. DISCUSSION AND CONCLUSION: The FGA demonstrates what we believe is acceptable reliability, internal consistency, and concurrent validity with other balance measures used for patients with vestibular disorders.
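The reliability coefficient used here is the ICC(2,1) form (two-way random effects, absolute agreement, single rating). A minimal sketch of the Shrout and Fleiss formula, applied to a hypothetical patients-by-raters score matrix rather than the study's data:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rating
    (Shrout & Fleiss). `ratings` is an (n_subjects, k_raters) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical total FGA scores (0-30) for 6 patients scored by 3 raters.
scores = np.array([[24, 25, 23],
                   [12, 14, 13],
                   [28, 27, 28],
                   [ 9, 10,  9],
                   [18, 17, 19],
                   [22, 22, 21]])
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```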

7.
Objective: The reliability of the Functional Independence Measure (FIM) for adults was examined using procedures of meta-analysis. Data Sources: Eleven published studies reporting estimates of reliability for the FIM were located using computer searches of Index Medicus, Psychological Abstracts, the Functional Assessment Information Service, and citation tracking. Study Selection: Studies were identified and coded based on type of reliability (interrater, test-retest, or equivalence), method of data analysis, size of sample, and training or experience of raters. Data Extraction: Information from the articles was coded by two independent raters. Interrater reliability for coding all elements included in the analysis ranged from .89 to 1.00. Data Synthesis: The 11 investigations included a total of 1,568 patients and produced 221 reliability coefficients. The majority of the reliability values (81%) were from interrater reliability studies, and the intraclass correlation coefficient (ICC) was the most commonly used statistical procedure to compute reliability. The reported reliability values were converted to a common correlation metric and aggregated across the 11 studies. The results revealed a median interrater reliability for the total FIM of .95 and median test-retest and equivalence reliability values of .95 and .92, respectively. The median reliability values for the six FIM subscales ranged from .95 for Self-Care to .78 for Social Cognition. For the individual FIM items, median reliability values varied from .90 for Toilet Transfer to .61 for Comprehension. Median and mean reliability coefficients for FIM motor items were generally higher than for items in the cognitive or communication subscales. Conclusions: Based on the 11 studies examined in this review, the FIM demonstrated acceptable reliability across a wide variety of settings, raters, and patients.

8.
The primary purposes of this article are to review the literature on seating assessment and to describe the development of a clinical evaluation scale, the Seated Postural Control Measure (SPCM), for use with children requiring adaptive seating systems. The SPCM is an observational scale of 22 seated postural alignment items and 12 functional movement items, each scored on a four-point, criterion-referenced scale. A secondary purpose of this article is to report the reliability of the seven-point Level of Sitting Scale (LSS). Interrater and test-retest reliability of the SPCM items and the one-item LSS were evaluated on a sample of 40 children with developmental disabilities who sat with and without their seating systems. Kappa values of .75 or higher were considered excellent, .40 to .74 as fair to good, and less than .40 as poor. The interrater reliability tests for the two seated conditions and the two test sessions conducted 3 weeks apart yielded overall item Kappa coefficient means of .45 for the alignment section and .85 for the function section. Test-retest results for the SPCM items were less satisfactory, with item Kappa coefficient means for the two seating conditions and raters of .35 and .29 for alignment and function, respectively. Reliability results did not appear to be consistently better among seating conditions, raters, or test sessions. Kappa coefficients for the LSS were fair to good for both interrater and test-retest reliability. Plans for future development of the SPCM and LSS are discussed.

9.
Objective: The aim was to determine the interrater and intrarater reliability of navicular drop (NDP), navicular drift (NDT), and the Foot Posture Index-6 (FPI-6), and the test–retest reliability of the static arch index (SAI) and dynamic arch index (DAI). Methods: Sixty healthy individuals were assessed for intrarater and test–retest reliability. Of the 60 participants, 30 individuals were assessed for interrater reliability. A digital caliper was used to measure NDP and NDT. Electronic pedography was used to calculate SAI and DAI. The FPI-6 was also performed. All assessments were performed on the dominant foot. The NDP, NDT, SAI, and DAI were repeated 3 times. The NDP and NDT were analyzed separately using both the first measurement and the average, but the SAI and DAI were analyzed using only the average. The NDP, NDT, and FPI-6 were conducted by 2 raters to determine interrater reliability and were repeated by a single rater 5 days after the initial assessment to determine intrarater reliability. The SAI and DAI were also repeated after 5 days to determine test–retest reliability. Results: Intrarater intraclass correlation coefficients (ICCs) were 0.934 and 0.970 for NDP, 0.724 and 0.850 for NDT, and 0.945 for the FPI-6. Interrater ICCs were 0.712 and 0.811 for NDP, 0.592 and 0.797 for NDT, and 0.575 for the FPI-6. Test–retest ICCs of the SAI and DAI were 0.850 and 0.876, respectively. Conclusion: Navicular drop is relatively more reliable than the other traditional techniques. The FPI-6 has excellent intrarater reliability but only moderate interrater reliability. The results can provide clinicians and researchers with a reliable way to implement foot posture assessment.

10.
Neuromotor problems such as hypotonia, incoordination, and impaired sensory-motor integration lead to significant delays in motor skills and balance development in individuals with Down Syndrome (DS). Balance control is essential for performing many motor skills independently and safely. Standardised testing of balance control can contribute significantly to the rehabilitation of individuals with DS. The purpose of this study was to determine the intrarater and interrater reliability of the Modified Star Excursion Balance Test (SEBT) for individuals with DS. Thirteen individuals with DS were recruited in this study. Intraclass correlation coefficients (ICC [3,1]) with 95% confidence intervals, the standard error of measurement (SEM), the smallest detectable difference (SDD), and the Spearman rank correlation coefficient were calculated. In all directions of the Modified SEBT, no statistically significant difference was found between the two raters' first and second measurements (p > 0.05). Interrater reliability for all reach directions of the Modified SEBT was high, with ICCs ranging from 0.990 to 0.998. The 95% confidence intervals, SEM, and SDD ranged from 0.924 to 0.999, 0.180–2.434, and 3.270–6.747, respectively. The Modified SEBT is reliable for evaluating dynamic balance in individuals with DS aged between 6 and 24 years.
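SEM and SDD are derived from the reliability coefficient and the sample variability; a commonly used form is SEM = SD·sqrt(1 − ICC) and SDD = 1.96·sqrt(2)·SEM. A minimal sketch under that assumption, with hypothetical reach values rather than the study's data:

```python
import numpy as np

def sem_and_sdd(scores, icc):
    """Standard error of measurement and smallest detectable difference,
    assuming SEM = SD * sqrt(1 - ICC) and SDD = 1.96 * sqrt(2) * SEM."""
    sd = np.std(scores, ddof=1)
    sem = sd * np.sqrt(1 - icc)
    sdd = 1.96 * np.sqrt(2) * sem
    return sem, sdd

# Hypothetical normalized reach distances (% leg length) in one direction.
reach = [72.5, 65.0, 80.3, 58.9, 70.1, 77.6, 62.4, 69.8]
sem, sdd = sem_and_sdd(reach, icc=0.99)
print(f"SEM = {sem:.2f}, SDD = {sdd:.2f} (% leg length)")
```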

11.
OBJECTIVES: To determine the intra- and interrater reliability of the Action Research Arm (ARA) test, to assess its ability to detect a minimal clinically important difference (MCID) of 5.7 points, and to identify less reliable test items. DESIGN: Intrarater reliability of the sum scores and of individual items was assessed by comparing (1) the ratings of the laboratory measurements of 20 patients with the ratings of the same measurements recorded on videotape by the original rater, and (2) the repeated ratings of videotaped measurements by the same rater. Interrater reliability was assessed by comparing the ratings of the videotaped measurements of 2 raters. The resulting limits of agreement were compared with the MCID. PATIENTS: Stratified sample, based on the intake ARA score, of 20 chronic stroke patients (median age, 62yr; median time since stroke onset, 3.6yr; mean intake ARA score, 29.2). MAIN OUTCOME MEASURES: Spearman's rank-order correlation coefficient (Spearman's rho); intraclass correlation coefficient (ICC); mean difference and limits of agreement, based on ARA sum scores; and weighted kappa, based on individual items. RESULTS: All intra- and interrater Spearman's rho and ICC values were higher than .98. The mean difference between ratings was highest for the interrater pair (.75; 95% confidence interval, .02-1.48), suggesting a small systematic difference between raters. Intrarater limits of agreement were -1.66 to 2.26; interrater limits of agreement were -2.35 to 3.85. Median weighted kappas exceeded .92. CONCLUSION: The high intra- and interrater reliability of the ARA test was confirmed, as was its ability to detect a clinically relevant difference of 5.7 points.
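The limits of agreement reported here follow the Bland-Altman approach (mean difference ± 1.96 SD of the paired differences), judged against the 5.7-point MCID. A minimal sketch with hypothetical paired sum scores, not the study's data:

```python
import numpy as np

# Hypothetical ARA sum scores from two raters scoring the same sessions.
rater_1 = np.array([29, 41, 12, 55, 33, 20, 47, 8, 36, 25])
rater_2 = np.array([30, 42, 12, 54, 35, 21, 47, 9, 37, 27])

diff = rater_2 - rater_1
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
mcid = 5.7

print(f"mean difference = {mean_diff:.2f}")
print(f"limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
print(f"narrower than the {mcid}-point MCID: {max(abs(loa[0]), abs(loa[1])) < mcid}")
```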

12.
OBJECTIVE: To verify the interrater reliability, test-retest reliability, internal consistency, and split-half reliability of the Functional Gait Assessment (FGA) in patients with Parkinson's disease (PD) and to provide a clinical assessment tool. METHODS: 121 hospitalized PD patients (mean age 61.9 years) were enrolled. Two raters simultaneously scored each patient's FGA performance for the interrater reliability analysis. The sessions were also video-recorded, and 4 weeks later one of the raters re-scored the recordings for the test-retest reliability analysis. Internal consistency was evaluated with Cronbach's α. For split-half reliability, the FGA items were divided into odd-numbered and even-numbered halves. RESULTS: Interrater and test-retest reliability of the total FGA score were both 0.99; interrater reliability of individual items ranged from 0.49 to 0.98 and test-retest reliability from 0.91 to 0.99. Internal consistency (Cronbach α) was 0.94 and split-half reliability was 0.97. CONCLUSION: The FGA shows excellent interrater reliability, test-retest reliability, internal consistency, and split-half reliability for assessing balance and gait disorders in patients with PD.
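Split-half reliability as described (odd items vs. even items) is typically the correlation between the two half-test scores stepped up with the Spearman-Brown formula; a minimal sketch under that assumption, with hypothetical item scores rather than the study's data:

```python
import numpy as np

# Hypothetical FGA item scores (0-3) for 8 patients on the 10 items.
rng = np.random.default_rng(1)
base = rng.integers(0, 4, size=(8, 1))                    # overall patient level
items = np.clip(base + rng.integers(-1, 2, size=(8, 10)), 0, 3)

odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...

r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_half / (1 + r_half)   # Spearman-Brown correction
print(f"half-test correlation = {r_half:.2f}, split-half reliability = {split_half:.2f}")
```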

13.
OBJECTIVE: To determine the measurement properties and diagnostic utility of the JFK Coma Recovery Scale-Revised (CRS-R). DESIGN: Analysis of interrater and test-retest reliability, internal consistency, concurrent validity, and diagnostic accuracy. SETTING: Acute inpatient brain injury rehabilitation hospital. PARTICIPANTS: Convenience sample of 80 patients with severe acquired brain injury admitted to an inpatient Coma Intervention Program with a diagnosis of either vegetative state (VS) or minimally conscious state (MCS). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: The CRS-R, the JFK Coma Recovery Scale (CRS), and the Disability Rating Scale (DRS). RESULTS: Interrater and test-retest reliability were high for CRS-R total scores. Subscale analysis showed moderate to high interrater and test-retest agreement although systematic differences in scoring were noted on the visual and oromotor/verbal subscales. CRS-R total scores correlated significantly with total scores on the CRS and DRS indicating acceptable concurrent validity. The CRS-R was able to distinguish 10 patients in an MCS who were otherwise misclassified as in a VS by the DRS. CONCLUSIONS: The CRS-R can be administered reliably by trained examiners and repeated measurements yield stable estimates of patient status. CRS-R subscale scores demonstrated good agreement across raters and ratings but should be used cautiously because some scores were underrepresented in the current study. The CRS-R appears capable of differentiating patients in an MCS from those in a VS.

14.
Fall events and fear of falling increase with age in healthy and frail older people. Fear of falling has been identified as a significant falls risk factor. The aims of this study were to establish the interrater and test-retest reliability and the predictive validity of the fear of falling scale (FOFS). Sixty-nine subjects (55 female, 14 male), aged 65–97 years, were included. Subjects were asked to respond to the FOFS on three occasions to test interrater and test-retest reliability. In the absence of a suitable comparative test for concurrent validity, balance predictive formulae were developed and the relationships to Functional Reach Test, Timed Up & Go, and Step Test performance were compared. Intraclass correlation coefficients (2,1) for interrater and test-retest reliability were 0.96 and 0.94, respectively. The Cronbach α for the FOFS scores collected ranged from 0.94 to 0.97. From the results of the multiple regression analyses, prediction formulae for the Functional Reach Test, Timed Up & Go test, and Step Test from the FOFS scores were generated. These formulae then predicted balance performance for the subjects. The interrater and test-retest reliability and the predictive validity of the FOFS were established.
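The predictive-validity step fits a regression of each balance test on the FOFS score and then applies the fitted formula to predict performance. A minimal single-predictor sketch of that idea (the study used multiple regression); all values below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical FOFS scores and Timed Up & Go times (s) for 10 subjects.
fofs = np.array([10, 14, 18, 22, 25, 28, 31, 35, 38, 42], dtype=float)
tug = np.array([9, 10, 12, 13, 15, 16, 18, 20, 21, 24], dtype=float)

# Least-squares fit: TUG ~ intercept + slope * FOFS
X = np.column_stack([np.ones_like(fofs), fofs])
(intercept, slope), *_ = np.linalg.lstsq(X, tug, rcond=None)
print(f"prediction formula: TUG = {intercept:.2f} + {slope:.2f} * FOFS")

predicted = intercept + slope * fofs
r = np.corrcoef(predicted, tug)[0, 1]
print(f"correlation between predicted and observed TUG: r = {r:.2f}")
```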

15.
Lark SD, McCarthy PW, Rowe DA. Reliability of the parallel walk test for the elderly.

Objective

To determine interrater agreement and test-retest reliability of the parallel walk test (PWT), a simple method of measuring dynamic balance in the elderly during gait.

Design

Cohort study.

Setting

Outpatient clinic.

Participants

Elderly fallers (N=34; mean ± SD age, 81.3±5.4y) registered at a falls clinic participated in this study; inclusion was based on Mini-Mental State Examination and Barthel Index scores.

Interventions

Subjects were timed as they walked 6m between 2 parallel lines on the floor at 3 different widths (20, 30.5, 38cm) while wearing their own footwear. They were scored for foot placement on the lines (1 point) or outside the lines (2 points) by 2 separate raters. Fifteen subjects were retested 1 week later.

Main Outcome Measures

Footfall score and time to complete the PWT. Intraclass correlation coefficients (ICCs) and 95% limits of agreement were calculated for interrater and test-retest reliability.

Results

For widths of 20, 30.5, and 38cm, interrater reliability ICCs ranged from .93 to .99 and test-retest ICCs ranged from .63 to .90.

Conclusions

The PWT was implemented easily by 2 raters with a high degree of interrater reliability. Test-retest reliability was not as high, possibly because of the high susceptibility to week-to-week variation in frail elderly subjects. The 20- and 30.5-cm widths are recommended for future use of the PWT.

16.
OBJECTIVE: The aim of this study was to evaluate interrater and intrarater reliability for the Assisting Hand Assessment. METHOD: For interrater reliability, two designs were used: 2 occupational therapists rated the same 18 children, and 20 occupational therapists rated the same 8 children. For intrarater reliability, 20 raters each rated one child twice. Both English and Swedish versions of the instrument were used. Intraclass correlation coefficients (ICCs) and the standard error of measurement (SEM) were calculated. RESULTS: Interrater ICCs for the sum score were 0.98 (two raters) and 0.97 (20 raters); the intrarater ICC was 0.99. SEM was 1.5 for the interrater and 1.2 for the intrarater study, which gave error intervals of ±3 raw scores for interrater and ±2.4 raw scores for intrarater reliability. CONCLUSION: This study shows excellent interrater and intrarater reliability for sum scores.

17.
OBJECTIVES: This study validates and tests the reliability of an audit instrument constructed to evaluate the content of nursing discharge notes. DESIGN: Instrument validation and reliability testing. MAIN OUTCOME MEASURES: Factor analysis identifying structure through data summarization of the instrument, association between scores in test-retest, and interrater reliability between auditors. RESULTS: Validity: Three factors emerged in the factor analysis: 'General information', 'Planning', and 'Assessment', accounting for 76% of the variance for the quantitative aspect and 79% of the variance for the qualitative aspect, confirming the distinctiveness of the factors. Reliability: The Spearman rank-order correlation coefficient calculated per item in the test-retest ranged from 0.72 to 1.0 (p=0.01). The correlation coefficient for the total score was 0.98 (p=0.01). There were no differences in item scores between the test and retest in 93% of the comparisons (n=486). Between the two auditors, the Spearman rank-order correlation coefficient for each item ranged from 0.83 to 1.00 (p=0.01) and weighted kappa values from 0.70 to 1.00, with the exception of one item in both calculations. The correlation coefficient for the auditors' total score was 0.99 (p=0.01). Student's paired t-test comparing the two auditors' mean values in five different parts of the instrument showed no significant differences in score. CONCLUSION: The Cat-ch-Ing EPI instrument shows high reliability and validity as an audit instrument for evaluating the content of nursing discharge notes.
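Weighted kappa for ordinal item scores, as used between the two auditors, penalizes large disagreements more than small ones. A minimal sketch with linear weights and hypothetical item scores, not the study's data (quadratic weights are a common alternative):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal audit scores (0-3) given to one item by two auditors.
auditor_1 = [3, 2, 3, 1, 0, 2, 3, 1, 2, 3]
auditor_2 = [3, 2, 2, 1, 0, 2, 3, 2, 2, 3]

kappa_weighted = cohen_kappa_score(auditor_1, auditor_2, weights="linear")
kappa_unweighted = cohen_kappa_score(auditor_1, auditor_2)
print(f"weighted kappa (linear) = {kappa_weighted:.2f}, "
      f"unweighted kappa = {kappa_unweighted:.2f}")
```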

18.
OBJECTIVE: To evaluate the intra- and interrater reliability of tests from the Ergo-Kit (EK) functional capacity evaluation method in adults without musculoskeletal complaints. DESIGN: Within-subjects design. SETTING: Academic medical center in The Netherlands. PARTICIPANTS: Twenty-seven subjects without musculoskeletal complaints (15 men, 12 women). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Seven EK tests (2 isometric, 3 dynamic lifting, 2 manipulation tests) were each assessed 3 times (over 4 days), twice by 1 rater (R1) and once by another rater (R2). Intrarater reliability was calculated using the EK test scores assessed by R1. Interrater reliability was calculated using the EK test scores assessed by both raters. Counterbalancing the rater order made possible the calculation of 2 interrater reliability levels (at time intervals of 4 and 8d). All reliability levels were expressed as intraclass correlation coefficients (ICCs). RESULTS: Intrarater and interrater reliability (8-d time interval) was high (ICC, >.80) for the isometric lifting tests, moderate (ICC range, .50-.80) for the dynamic lifting tests, and low (ICC, <.50) for the manipulation tests. The interrater reliability of the isometric and dynamic lifting tests (4-d time interval) was high (ICC, >.80), and it was moderate (ICC range, .50-.80) for both manipulation tests. CONCLUSIONS: The isometric and dynamic lifting tests of the EK have a moderate to high level of reliability; the manipulation tests have a low level of reliability.

19.
Objective: The purposes of this study were to determine the intrarater and interrater reliability of quantitative measurements of craniocervical posture in the sagittal view on photographs and radiographs, and to determine the agreement of the visual assessment of posture between raters. Methods: One photograph and 1 radiograph of the sagittal craniocervical posture were taken simultaneously from 39 healthy female subjects. Three angles were measured on the photographs and 10 angles on the radiographs of 22 subjects using Alcimage software (Alcimage; Uberlândia, MG, Brazil). Two repeated measurements were performed by 2 raters. The measurements were compared within and between raters to test intrarater and interrater reliability, respectively. Intraclass correlation coefficients and SEM were used. Kappa agreement between the 2 raters was calculated for the visual assessment of 39 subjects using photographs and radiographs. Results: Good to excellent intrarater and interrater intraclass correlation coefficient values were found on both photographs and radiographs. Interrater SEM was large and clinically significant for cervical lordosis photogrammetry and for 1 angle measuring cervical lordosis on radiographs. Interrater kappa agreement for the visual assessment using photographs was poor (κ = 0.37). Conclusion: The raters reliably measured angles on photographs and radiographs to quantify craniocervical posture, with the exception of 2 angles measuring lordosis of the cervical spine when compared between raters. The visual assessment of posture between raters was not reliable.

20.
Objective: The purpose of this study was to assess the intrarater and interrater reliability of marking 2 angles with the TEMPLO software and to provide relevant information for clinical practice. Methods: A prospective test–retest study was conducted. Four raters took measurements on 2 days, with 2 weeks in between. The craniovertebral angle and trunk forward lean were drawn on 22 video frames using TEMPLO. Reliability was examined using intraclass correlation coefficients, with standard errors of measurement and minimal detectable change values as measures of precision expressed in the unit of the test (°). Results: Intraclass correlation coefficients for intrarater and interrater reliability ranged from 0.98 to 1.00. Standard errors of measurement and minimal detectable change values ranged from 0.4° to 0.8° and 0.8° to 2.3°, respectively. Conclusion: These results indicate excellent reliability for the craniovertebral angle and trunk forward lean assessed with the TEMPLO software. Changes exceeding 2.3° may be expected to fall outside the test’s variability.
