Similar Literature
A total of 20 similar documents were retrieved.
1.
Introduction: Physiotherapists commonly use the manual inclinometer and the Flexicurve for the clinical measurement of thoracic spinal posture. The aim of this study was to examine the concurrent validity of the Flexicurve and the manual inclinometer in relation to the radiographic Cobb angle for the measurement of thoracic kyphosis. Methods: Eleven subjects (seven males, four females) underwent a sagittal-plane spinal radiograph. Immediately following the radiograph, and before the subjects moved from position, a physiotherapist measured thoracic kyphosis using the Flexicurve and the manual inclinometer. Cobb angles were subsequently measured from the radiographs by an independent examiner. Results: A strong correlation was demonstrated between the Cobb angle and the Flexicurve angle (r = 0.96) and between the Cobb angle and the manual inclinometer angle (r = 0.86). On observation of the Bland–Altman plots, the inclinometer showed good agreement with the Cobb angle (mean difference 4.8° ± 8.9°). However, the Flexicurve angle was systematically smaller than the Cobb angle (mean difference 20.3° ± 6.1°), which reduces its validity. Conclusion: The manual inclinometer is recommended as a valid instrument for measuring thoracic kyphosis, with good agreement with the gold standard. While the Flexicurve is highly correlated with the gold standard, the two have poor agreement; physiotherapists should therefore interpret its results with caution.
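The agreement figures quoted above (a mean difference with its spread) come from a Bland–Altman analysis. As a minimal illustrative sketch only, and not the authors' analysis code, the following Python snippet shows how the bias and 95% limits of agreement are typically computed from paired measurements; the two arrays are hypothetical values, not the study data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return the mean difference (bias) and 95% limits of agreement
    between two paired measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b                       # paired differences
    bias = diff.mean()                 # systematic difference between methods
    sd = diff.std(ddof=1)              # SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical Cobb vs. inclinometer kyphosis angles (degrees)
cobb = [42, 38, 55, 47, 60, 35, 50, 44, 39, 58, 46]
inclinometer = [38, 35, 49, 41, 54, 31, 45, 40, 33, 52, 42]
bias, (lo, hi) = bland_altman(cobb, inclinometer)
print(f"bias = {bias:.1f} deg, 95% limits of agreement = [{lo:.1f}, {hi:.1f}] deg")
```

The same arithmetic applies to any pair of instruments: the narrower the limits of agreement around a near-zero bias, the better the agreement with the reference method.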

2.
Background: Technological resources, such as smartphones, can contribute to the quantitative assessment of posture. Purpose: To test the validity and reliability of a postural assessment application for quantifying frontal-plane knee posture in standing (orthostatism), and to test the influence of external markers on the precision of this measure. Design: Methodological study. Methods: The frontal-plane knee posture of 30 volunteers was analyzed by two independent examiners. Photographs were taken with different external marker arrangements and analyzed at two time points using the Kinovea software and the PhysioCode Posture (PCP) application. Reliability was analyzed using the intraclass correlation coefficient (ICC) between measures taken with each instrument at two time points separated by 7 days. Concurrent validity of the PCP against the Kinovea measure was analyzed using Pearson's correlation coefficient. The standard error of measurement (SEM), minimum detectable change (MDC), and Bland–Altman plots were also analyzed. Results: The PCP demonstrated excellent intra-rater (ICC = 0.92, 95% confidence interval [CI] 0.90–0.93) and inter-rater (ICC = 0.88, 95% CI 0.85–0.90) reliability. Concurrent validity analysis showed excellent agreement between the PCP and the Kinovea software (r = 0.88). The use of markers, regardless of positioning, did not influence the measurement properties of either tool. The SEM was below 1.2°, and the MDC was below 2.85°. No systematic errors were observed in the Bland–Altman plots. Conclusions: The PCP application was valid for measuring knee posture and demonstrated excellent intra- and inter-rater reliability. The use of external markers did not influence the measurement.
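The SEM and MDC thresholds reported here are commonly derived from the reliability coefficient and the between-subject variability, typically as SEM = SD × √(1 − ICC) and MDC95 = 1.96 × √2 × SEM. The short sketch below illustrates that arithmetic with invented numbers; it is not the study's code, and the SD value is an assumption for demonstration only.

```python
import math

def sem_and_mdc95(sd_between_subjects, icc):
    """Standard error of measurement and 95% minimal detectable change
    derived from the between-subject SD and a reliability coefficient (ICC)."""
    sem = sd_between_subjects * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc95

# Hypothetical inputs: SD of knee frontal-plane angles = 4.0 degrees, ICC = 0.92
sem, mdc = sem_and_mdc95(sd_between_subjects=4.0, icc=0.92)
print(f"SEM = {sem:.2f} deg, MDC95 = {mdc:.2f} deg")
```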

3.
Background: Walking speed measurements are clinically important, but varying test procedures may influence the measurements and impair clinical utility. This study assessed the concurrent validity of walking speed in individuals with chronic stroke measured during the 10-m walk test with variations in 1) the presence of an electronic mat, 2) the speed measurement device, and 3) the measurement distance relative to the total test distance. Methods: Twenty-five individuals with chronic stroke performed walking tests at comfortable and maximal walking speeds under three conditions: 1) 10-m walk test (without an electronic mat) measured by stopwatch, 2) 10-m walk test (partially over an electronic mat) measured by software, and 3) 10-m walk test (partially over an electronic mat) measured by stopwatch. Analyses of systematic bias, proportional bias, and absolute agreement were performed to determine concurrent validity between conditions. Findings: Walking speeds did not differ between measurements (P ≥ 0.11), except that maximal walking speed was faster when measured with software than with a stopwatch (P = 0.002). Absolute agreement between measurements was excellent (ICC ≥ 0.97, P < 0.001). There was proportional bias between software and stopwatch (R² ≥ 0.19, P ≤ 0.03) and between tests with and without the electronic mat (R² = 0.27, P = 0.008). Comparisons between conditions revealed that walking speed and concurrent validity may be influenced by walking test distance, the presence of an electronic mat, the speed measurement device, and the relative measurement distance. Interpretation: Walking test procedures influence walking speed and the concurrent validity between measurements. Test procedures should match those used to establish normative data, or be kept identical between repeated measurements, to optimize validity.

4.
Objective: To synthesize evidence regarding the psychometric properties of the Brief-Balance Evaluation Systems Test (BESTest) in assessing postural control across various populations. Data Sources: Articles were searched in 9 databases from inception to March 2020. Study Selection: Two reviewers independently screened titles, abstracts, and full-text articles to include studies that reported at least 1 psychometric property of the Brief-BESTest. There were no language restrictions. Data Extraction: The 2 independent reviewers extracted data (including psychometric properties of the Brief-BESTest) from the included studies. The methodological quality of the included studies was appraised with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist, and the quality of statistical outcomes was assessed with the Terwee et al. method. A best-evidence synthesis for each measurement property of the Brief-BESTest in each population was conducted. Data Synthesis: Twenty-four studies encompassing 13 populations were included. There was moderate to strong positive evidence to support the internal consistency (Cronbach α > 0.82), criterion validity (ρ ≥ 0.73, r ≥ 0.71), and construct validity (ρ ≥ 0.66, r ≥ 0.50, area under the curve > 0.72) of the Brief-BESTest in different populations. Moderate to strong positive evidence supported the responsiveness of the Brief-BESTest in detecting changes in postural control of patients 4 weeks after total knee arthroplasty and of patients with subacute stroke after 4-week rehabilitation. However, there was strong negative evidence for the structural validity of this scale in patients with various neurologic conditions. The evidence for the reliability of individual items and for measurement error remains unknown. Conclusions: The Brief-BESTest is a valid (criterion- and construct-related) tool for assessing postural control in multiple populations. However, further studies on the reliability of individual items and the minimal clinically important difference of the Brief-BESTest are warranted before recommending it as an alternative to the BESTest and Mini-BESTest in clinical research and practice.

5.
Introduction: The knee extension prone test (KEPT) may be a low-cost, accessible alternative for assessing knee hyperextension deficit. Objective: To analyze the concurrent validity and reliability of a new method for assessing knee extension in prone (the knee extension prone test; KEPT). Methods: Participants were divided into two groups: Group 1 comprised healthy participants (HG) and Group 2 comprised participants with a history of knee injury (IG). Two examiners performed the following evaluations: (1) lateral knee goniometry, (2) anterior tibial inclinometry, (3) lateral photogrammetry in supine, (4) lateral photogrammetry in prone, and (5) the KEPT. Concurrent validity was analyzed with Pearson's linear correlation coefficient (r), and intra- and inter-examiner reliability were analyzed with the intraclass correlation coefficient (ICC). Results: The KEPT demonstrated good intra-examiner (ICC = 0.85, 95% CI = 0.75–0.89) and excellent inter-examiner (ICC = 0.92, 95% CI = 0.88–0.94) reliability. The standard error of measurement was 0.47° and 1.30°, and the minimum detectable change was 2.35° and 6.5°, for intra- and inter-examiner agreement, respectively. Concurrent validity of the KEPT ranged from moderate to good (r = 0.54–0.78, p < 0.01). Conclusion: The KEPT is a valid and reliable method for assessing knee hyperextension deficit in both healthy individuals and patients with knee injuries.

6.
7.
Objective: To analyze the concurrent validity of the Digital Image-based Postural Assessment (DIPA) method for identifying the magnitude and classification of thoracic kyphosis in adults. Methodology: On the same day and in the same place, thoracic kyphosis was assessed in 68 adults using 2 methods: the DIPA software protocol and radiography. The DIPA software provided angular values of thoracic kyphosis based on trigonometric relations, while for the radiograph the curvature was calculated using the Cobb method. The following tests were applied in the statistical analysis: Pearson's correlation, Bland–Altman graphical representation, root mean square error, and the receiver operating characteristic (ROC) curve (α = 0.05). The reference angular values for standard thoracic posture used in DIPA were determined with the ROC curve based on the Cobb angles. Results: The correlation between the thoracic kyphosis angles obtained with the DIPA and Cobb methods was high (r = 0.813, P < .001), and the accuracy was ±4°. According to the Bland–Altman representation, the magnitudes provided by the DIPA software were in agreement with those of the Cobb method. For the reference values used to determine standard thoracic spine posture, the ROC curve indicated good accuracy in diagnosing decreased thoracic kyphosis (cutoff of 33.9°) and excellent accuracy in diagnosing thoracic hyperkyphosis (cutoff of 39.9°) when using DIPA. Conclusion: The DIPA postural assessment method is valid in the sagittal plane for identifying the magnitude of thoracic kyphosis in adults. Furthermore, it is accurate in diagnosing alterations in thoracic kyphosis.
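The reference cutoffs above (33.9° for decreased kyphosis, 39.9° for hyperkyphosis) were derived from ROC curves against the Cobb angle. A hedged sketch of how such a cutoff can be chosen with the Youden index is shown below; it relies on scikit-learn's roc_curve and uses entirely made-up labels and angles, not the study's dataset.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical reference classification: 1 = hyperkyphosis on the radiograph, 0 = not
labels = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
# Corresponding (hypothetical) photograph-based kyphosis angles in degrees
angles = np.array([31.0, 35.2, 37.8, 41.5, 38.9, 44.0, 40.2, 36.1, 42.7, 45.3])

fpr, tpr, thresholds = roc_curve(labels, angles)
youden = tpr - fpr                        # Youden's J for each candidate threshold
cutoff = thresholds[np.argmax(youden)]    # threshold maximizing sensitivity + specificity - 1
print(f"Suggested cutoff: {cutoff:.1f} degrees")
```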

8.
Introduction: Movement compensations during internal rotation of the shoulder can provoke pain. Reliably observing and measuring compensations in the shoulder using visual and palpatory methods could lead to more efficacious treatment of shoulder pathology. Despite this, the reliability of these measures and the relationship between them are unknown. Methods: Bilateral shoulders of 33 Doctor of Physical Therapy (DPT) students were measured. Two third-year DPT student examiners used visual inspection and physical palpation to identify the first signs of internal rotation (IR) passive stiffness. Measurements were taken and recorded by a third examiner using the GetMyROM (Version 1.1) iPhone application. Results: Good intra-rater reliability for both examiners was identified for physical palpation (ICC = 0.896, 95% CI = 0.830–0.936; ICC = 0.901, 95% CI = 0.839–0.939) and visual inspection (ICC = 0.813, 95% CI = 0.699–0.884; ICC = 0.782, 95% CI = 0.667–0.880). Moderate inter-rater reliability was found between the examiners for physical palpation (ICC = 0.681, 95% CI = 0.479–0.797), while poor inter-rater reliability was found for visual inspection (ICC = 0.481, 95% CI = 0.234–0.648). The correlation between physical palpation and visual inspection was r = 0.815 (p = 0.01) and r = 0.832 (p = 0.01) for the two examiners. Conclusion: The findings of this study indicate that both physical palpation and visual inspection are reliable methods for measuring the relative flexibility of shoulder IR when performed by the same examiner. However, the reliability of both methods decreases when performed by different examiners. Additionally, a strong correlation was found between the two measures.
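Both reliability analyses above are summarized with intraclass correlation coefficients. As an illustrative sketch only, and not the authors' analysis pipeline, a two-way random-effects, absolute-agreement, single-measure ICC(2,1) can be computed directly from a subjects × raters matrix; the ratings below are hypothetical shoulder IR angles.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` is an (n subjects) x (k raters) array of ratings."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                       # per-subject means
    col_means = x.mean(axis=0)                       # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                          # between-subjects mean square
    msc = ss_cols / (k - 1)                          # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))               # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 5 shoulders, each rated by 2 examiners (degrees)
ratings = [[55, 57], [48, 50], [62, 60], [51, 54], [45, 46]]
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```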

9.
Manual Therapy, 2014, 19(2): 90–96
Joint mobilizations are commonly used by clinicians to decrease pain and restore joint arthrokinematics following musculoskeletal injury. The force applied during a joint mobilization treatment is at the discretion of the individual clinician but may affect patient outcomes. The purpose of this systematic review was to critically appraise and synthesize studies examining the reliability of clinicians' force application during joint mobilization. A systematic search of the PubMed and EBSCOhost databases from inception to March 1, 2013 was conducted to identify studies assessing the reliability of force application during joint mobilizations. Two reviewers used the Quality Appraisal of Reliability Studies (QAREL) assessment tool to determine the quality of the included studies. The relative reliability of the included studies was examined through intraclass correlation coefficients (ICC) to synthesize study findings. All results were collated qualitatively with a level-of-evidence approach. A total of seven studies met the eligibility criteria and were included. Five studies assessed inter-clinician reliability, and six studies assessed intra-clinician reliability. The overall level of evidence for inter-clinician reliability was strong for poor-to-moderate reliability (ICC = −0.04 to 0.70). The overall level of evidence for intra-clinician reliability was strong for good reliability (ICC = 0.75–0.99). This systematic review indicates that there is variability in force application between clinicians, but that individual clinicians apply forces consistently. The results suggest that innovative instructional methods are needed to improve the consistency of, and to validate, the forces applied during joint mobilization treatments. This is particularly evident for improving the consistency of force application across clinicians.

10.
Objectives: There are limited non-invasive methods for assessing lower extremity arterial injuries in the emergency department (ED) and pre-hospital setting. The ankle-brachial index (ABI) requires careful auscultation by Doppler, an approach made difficult in noisy environments. We sought to determine the agreement of the ABI measured using the pulse oximeter plethysmograph waveform (Pleth) with auscultation by Doppler in a controlled setting. A secondary outcome examined the agreement of the ABI by automated oscillometric sphygmomanometer (AOS) with Doppler. Methods: We measured blood pressure in the right upper and lower extremities of healthy volunteers using: (1) Doppler and manual sphygmomanometer; (2) Pleth and manual sphygmomanometer; and (3) AOS. The Bland–Altman approach was used to assess agreement between methods, comparing the mean differences between ABI pairs with their means for Doppler versus Pleth and Doppler versus AOS. The intraclass correlation coefficient (ICC) from mixed-effects models was used to examine intra- and inter-rater reliability. Results: Among 100 participants with a normal ABI, the mean ABI (95% CI) was 1.11 (0.90–1.33) for Doppler, 1.10 (0.91–1.30) for Pleth, and 1.10 (0.90–1.30) for AOS. The mean ABI differences (95% limits of agreement) were 0.01 (−0.20, 0.18) for Doppler–Pleth and 0.02 (−0.26, 0.22) for Doppler–AOS. The ICC for the Doppler–Pleth comparison (ICC = 0.56, 95% CI 0.47–0.63) was greater than for Doppler–AOS (ICC = 0.32, 95% CI 0.19–0.43). Conclusions: The ABI measured using the Pleth has a high level of agreement with measurement by Doppler. The AOS and Doppler have good agreement but greater measurement variability. Pleth and AOS may be reasonable alternatives to Doppler for ABI measurement.
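The index itself is simple arithmetic: the ankle systolic pressure divided by the brachial systolic pressure, whichever device supplies the two readings. A trivial sketch with made-up pressures (not study data) is shown below.

```python
def ankle_brachial_index(ankle_systolic_mmhg, brachial_systolic_mmhg):
    """ABI = ankle systolic pressure / brachial systolic pressure."""
    return ankle_systolic_mmhg / brachial_systolic_mmhg

# Hypothetical readings in mmHg; the healthy-volunteer means above were around 1.1
abi = ankle_brachial_index(ankle_systolic_mmhg=132, brachial_systolic_mmhg=120)
print(f"ABI = {abi:.2f}")
```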

11.
Background: Functional performance tests are inexpensive, accessible, and easy-to-apply tools that can help practitioners in the daily decision-making process. The purpose of this study was to evaluate the reliability and validity of the One Arm Hop Test (OAHT) and the Seated Medicine Ball Throw Test (SMBT) in young adults. Methods: Cross-sectional study with a sample of 59 young adults. The subjects performed the OAHT and SMBT at two time points separated by seven days, assessed by two examiners. The Closed Kinetic Chain Upper Extremity Stability Test (CKCUEST) was performed at the second time point. The time in the OAHT, the distance in the SMBT, and the mean number of touches, normalized score, and power of the CKCUEST were measured. Reliability was determined using the intraclass correlation coefficient (ICC) and Bland–Altman plots. Validity was assessed via Pearson's correlation coefficient (r) between these tests and the CKCUEST. Results: We found good reliability of the OAHT between different raters (dominant limb: ICC = 0.83; non-dominant limb: ICC = 0.80) and moderate reliability within the same rater (dominant limb: ICC = 0.63; non-dominant limb: ICC = 0.62). For the SMBT, we found good inter-examiner (ICC = 0.84) and intra-examiner (ICC = 0.77) reliability. Low to moderate correlations with the CKCUEST were found (r < 0.70; p < 0.05). Conclusions: The OAHT and the SMBT show moderate to good intra- and inter-examiner reliability; however, these tests are poorly correlated with the CKCUEST. The SMBT presented higher ICC values than the OAHT. A combination of the SMBT and CKCUEST is recommended in clinical practice.

12.
Introduction: Although the pressure biofeedback unit (PBU) is used for muscular assessment and training, there is little evidence of its reproducibility and repeatability. Objective: This study aimed to assess the intra- and inter-rater reproducibility and the repeatability of the PBU in the assessment of the transverse abdominal (TrA), internal oblique (IO), low back multifidi, and deep neck flexor (DNF) muscles. Methods: Fifty individuals had three muscular groups tested: TrA/IO, low back multifidi, and DNF. For repeatability, one rater performed three consecutive measures; for intra-rater reproducibility, the same rater performed two measures with a seven-day interval; and for inter-rater reproducibility, three raters performed the measures on the same day. Data were analyzed with the intraclass correlation coefficient (ICC), standard error of measurement (SEM), and minimal detectable change (MDC) (α = 0.05). Results: Repeatability: TrA/IO (ICC = 0.847), multifidi (ICC = 0.860), DNF (ICC = 0.831). Inter-rater reproducibility: TrA/IO (ICC = 0.876), multifidi (ICC = 0.508), DNF (ICC = 0.442). Intra-rater reproducibility: TrA/IO (ICC = 0.747), multifidi (ICC = 0.293), DNF (ICC = 0.685). Except for the multifidi, all SEM values were less than 10 mmHg and all MDC values were less than 15 mmHg. Conclusions: The PBU can be used reliably by different evaluators, although its use for evaluating the multifidi is not recommended.

13.
Context: High-quality advance care planning (ACP) discussions are important to ensure patient receipt of goal-concordant care; however, there is no existing tool for assessing ACP communication quality. Objectives: The objective of this study was to develop and validate a novel instrument that can be used to assess the ACP communication skills of clinicians and trainees. Methods: We developed a 20-item ACP Communication Assessment Tool (ACP-CAT) plus two summative items. Randomized rater pairs used the ACP-CAT to assess residents' performances in video-recorded standardized patient encounters before and after an ACP training program. We tested the tool for its 1) discriminating ability, 2) interrater reliability, 3) concurrent validity, 4) feasibility, and 5) raters' satisfaction. Results: Fifty-eight pre-/post-training video recordings from 29 first-year internal medicine residents at Mount Sinai Hospital were evaluated. The ACP-CAT reliably discriminated performance before and after training (median score 6 vs. 11, P < 0.001). For both pre- and post-training encounters, interrater reliability was high for ACP-CAT total scores (intraclass correlation coefficient [ICC] = 0.83 and 0.82) and for the summative items "Overall impression of ACP communication skills" (ICC = 0.73 and 0.80) and "Overall ability to respond to emotion" (ICC = 0.83 and 0.82). Concurrent validity was shown by the strong correlation between the ACP-CAT total score and both summative items. Raters spent an average of 4.8 minutes completing the ACP-CAT, found it feasible, and were satisfied with its use. Conclusion: The ACP-CAT provides a validated measure of ACP communication quality for assessing video-recorded encounters and can be further studied for its applicability with clinicians in different clinical contexts.

14.
15.
Objective: The purpose of this study was to test the validity and determine the accuracy of surface topography in relation to photogrammetry for measuring the thoracic kyphosis angle in patients with scoliosis. Methods: This was a prospective, cross-sectional study of diagnostic accuracy that followed the guidelines recommended by the Standards for Reporting Diagnostic Accuracy. We consecutively included 51 participants aged 7 to 18 years. Exclusion criteria were surgical treatment of the spine, neurological disease, lower limb discrepancy greater than 1.5 cm, and body mass index above 29 kg/m². Each participant was evaluated using both a surface topography scan and photogrammetry in random order. The measurement obtained through photogrammetry was used as the reference in this study. For statistical purposes, Pearson's correlation test, Bland–Altman graphical analysis, and the receiver operating characteristic (ROC) curve (P < .05) were performed. Results: The correlation between the measurements was strong and significant (r = 0.76, P < .001), with an average difference of 0.4° in the Bland–Altman analysis. The ROC curve area was excellent for hypokyphosis (93.4%) and good for hyperkyphosis (86.4%), both being significant (P < .005). Conclusion: The agreement and strong correlation between the 2 methods indicate the validity of surface topography to measure the thoracic kyphosis angle. Surface topography provides accurate measures for the thoracic kyphosis angle, with cutoff points for hypokyphosis (33.3°) and hyperkyphosis (40.8°) for individuals with scoliosis.

16.
Objective: Measuring muscle quantity and quality is very important because their loss is associated with several adverse effects, specifically in older people. Ultrasound is widely used to measure muscle quantity and quality; however, its limited field of view makes it impossible to assess certain muscles. In this study, we aimed to evaluate the intra- and inter-rater reliability of extended-field-of-view (EFOV) ultrasound for the measurement of muscle quantity and quality in nine muscles of the limbs and trunk. Methods: Two examiners each took two EFOV ultrasound images with a linear probe from each of the muscle sites. The intraclass correlation coefficient (ICC) was used, and the standard error of measurement and coefficient of variation were calculated. Results: Intra-rater reliability was good to excellent (ICC = 0.2–1.00) for all muscle measurements. Inter-rater reliability for most of the muscle measurements was good to excellent (ICC = 0.82–0.98), and was moderate (ICC = 0.58–0.72) for some muscle quantity measurements of the tibialis anterior, gastrocnemius, rectus femoris, biceps femoris, and triceps brachii muscles. Conclusion: Muscle quantity and quality can be measured reliably using EFOV ultrasound.

17.
Background: Postural control deficits are among the most common impairments treated in pediatric physiotherapy practice. Adequate evaluation is imperative to identify these deficits, plan treatment, and assess efficacy. Currently, there is no gold-standard evaluation for postural control deficits; however, the number of studies investigating the psychometric properties of functional pediatric postural control tests has increased significantly. Objective: To facilitate the selection of an appropriate pediatric functional postural control test in research and clinical practice. Methods: Systematic review following the PRISMA guidelines. PubMed, Web of Science, and Scopus were systematically searched (last update: June 2022; PROSPERO: CRD42021246995). Studies were selected using the PICOs method: pediatric populations (P), functional assessment tools for postural control (I), and psychometric properties (O). The risk of bias was rated with the COSMIN checklist and the level of evidence was determined with GRADE. For each test, the postural control systems were mapped and the psychometric properties were extracted. Results: Seventy studies investigating 26 different postural control tests were included. Most children were healthy or had cerebral palsy. Overall, the evidence for all measurement properties was low to very low. Most tests (95%) showed good reliability (ICC > 0.70), but inconsistent validity results. Structural validity, internal consistency, and responsiveness were only available for 3 tests. Only the Kids-BESTest and the FAB covered all postural control systems. Conclusion: Currently, 2 functional tests encompass the entire construct of postural control. Although reliability is overall good, validity results depend on task, age, and pathology. Future research should focus on test batteries and should particularly explore structural validity and responsiveness in different populations with methodologically strong study designs.

18.
19.

Objective

The purpose of this study was to review reference values for thoracic kyphosis and lumbar lordosis obtained by radiography and photogrammetry, and to search for information about their interrater and intrarater reliability.

Methods

The databases PubMed/Medline and LILACS were searched using the following keywords: radiograph and posture, postural alignment, and photogrammetry or photometry or biophotogrammetry. Studies containing values of thoracic kyphosis and lumbar lordosis or a reliability test assessed by radiography and photogrammetry were selected. Random numbers were generated in MATLAB from each study individually to establish normative values for the thoracic kyphosis and lumbar lordosis for both methods. After that, frequencies (median, first quartile, and third quartile) were obtained in SPSS 20.0 (IBM Corp, Armonk, New York).

Results

Twenty-six articles were selected, of which 23 studies contained values for thoracic kyphosis and lumbar lordosis and 10 tested the intra- and interrater reliability of both methods. For the studies with radiography that calculated the angle by the same method of assessment, the mean was 44.07° (4.75) for L1 to L5 and 58.01° (5.75) for L1 to S1, and for T1 to T12 the mean was 48.33° (6.24). Most studies used the intraclass correlation coefficient test, showing strong reliability.

Conclusion

No concordance between the results of the two methods was shown. It was also not possible to perform the same procedure with the photogrammetry studies because of the great discrepancy in procedures and angle calculations. To assess reliability, it is necessary to use the proper statistical test.

20.
Objective: To assess the measurement properties of the Mayo-Portland Adaptability Inventory—version 4 (MPAI-4) and related measures and the quality of evidence supporting these results, and to identify the interpretability and feasibility of the MPAI-4 and related measures. Data Sources: We conducted a systematic review according to COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines. We searched 9 electronic databases and registries, and hand-searched reference lists of included articles. Study Selection: Two independent reviewers screened and selected all articles. From 605 retrieved articles, 48 were included. Data Extraction: Two independent reviewers appraised the evidence quality and rated the extracted classical test theory and Rasch results from each study. Data Synthesis: We used meta-analysis and COSMIN's approach to synthesize measurement properties evidence (insufficient, sufficient), and the modified Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to synthesize evidence quality (very low, low, moderate, high) by diagnosis (traumatic brain injury [TBI], stroke) and setting (inpatient, outpatient). The MPAI-4 and its subscales are sufficiently comprehensible (GRADE: very low), but there is currently no other content validity evidence (relevance, comprehensiveness). The MPAI-4 and its participation index (M2PI) have sufficient interrater reliability for stroke and TBI outpatients (GRADE: moderate), whereas interrater reliability between TBI inpatients and clinicians is currently insufficient (GRADE: moderate). There is no evidence for measurement error. For stroke and TBI outpatients, the MPAI-4 and M2PI have sufficient construct validity (GRADE: high) and responsiveness (GRADE: moderate-high). For TBI inpatients, the MPAI-4 and M2PI have mixed indeterminate/sufficient construct validity and responsiveness evidence (GRADE: moderate-high). There is 1 study with mixed insufficient/sufficient evidence for each MPAI-4 adaptation (21- and 22-item MPAI, 9-item M2PI) (GRADE: low-high). Conclusion: Users can be most confident in using the MPAI-4 and M2PI in TBI and stroke outpatient settings. Future research is needed on the reliability, measurement error, predictive validity, and content validity of the MPAI-4 and its related measures across populations and settings.
