Similar Literature
20 similar articles found (search time: 593 ms)
1.
The aim of this study was to measure the inter- and intraobserver variation, as well as the integrality, of the Zwipp, Crosby-Fitzgibbons, Sanders, and Eastwood-Atkins classification systems based on more accurate CT scans. Five hundred and forty-nine patients with intra-articular calcaneal fractures from January 2018 to December 2019, taken from a database at our level-I trauma center (3 affiliated hospitals), were included. For each case, standard thin-slice CT scans (1 mm slices) were available. Four different observers reviewed all CT scans twice, within a 2-month interval, according to these 4 most prevalent fracture classification systems (FCSs). For each FCS, the kappa (κ) coefficient was used to evaluate interobserver reliability and intraobserver reproducibility, and the percentage of fractures that could be classified was used to indicate integrality. The κ values were measured for Zwipp (κ = 0.38 interobserver, κ = 0.61 intraobserver), Crosby-Fitzgibbons (κ = 0.48 interobserver, κ = 0.79 intraobserver), Sanders (κ = 0.40 interobserver, κ = 0.57 intraobserver), and Eastwood-Atkins (κ = 0.44 interobserver, κ = 0.72 intraobserver). Furthermore, the integralities were calculated for Zwipp (100%), Crosby-Fitzgibbons (100%), Sanders (92%), and Eastwood-Atkins (89.6%). Compared with previous literature, CT scanning with higher accuracy can significantly improve the intraobserver reproducibility of the Zwipp and Eastwood-Atkins FCSs, but it has no positive effect on the variability of the Sanders FCS or on the interobserver reliability of the Crosby-Fitzgibbons FCS. In terms of integrality, the Zwipp and Crosby-Fitzgibbons FCSs appear to be superior to the other 2 FCSs.
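The unweighted Cohen's kappa reported throughout these abstracts corrects raw percent agreement between two observers for the agreement expected by chance. A minimal sketch (not any study's actual code; the observer ratings below are invented for illustration):

```python
# Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance agreement
# computed from each rater's marginal label frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of cases both raters labeled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal proportions, summed over labels
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers assigning Sanders types to 10 fractures
a = ["I", "II", "II", "III", "IV", "II", "III", "I", "II", "IV"]
b = ["I", "II", "III", "III", "IV", "II", "II", "I", "II", "III"]
print(round(cohens_kappa(a, b), 2))  # → 0.58, "moderate" per Landis and Koch
```

On the conventional Landis and Koch scale cited in several of these studies, κ below 0.20 is slight, 0.21 to 0.40 fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial, and above 0.80 almost perfect.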

2.
The aim of the present study was to assess the reliability of commonly used intra-articular calcaneal fracture classification systems and to compare them with the newer AO Integral Classification of Injuries (ICI) system. Forty computed tomography and radiographic images of 40 intra-articular calcaneal fractures were reviewed independently by 3 reviewers on 2 separate occasions and classified according to the Essex-Lopresti, Atkins, Zwipp and Tscherne, Sanders, and AO-ICI classification systems. The reviewers were unaware of the patients' identity and all aspects of clinical care. The data were analyzed using kappa (κ) statistics to assess the intra- and interobserver reliability. The κ values were calculated for Essex-Lopresti (κ = 0.85 intraobserver, κ = 0.78 interobserver), Atkins (κ = 0.42 intraobserver, κ = 0.73 interobserver), Zwipp and Tscherne (κ = 0.40 intraobserver, κ = 0.47 interobserver), Sanders (κ = 0.31 intraobserver, κ = 0.35 interobserver), and AO-ICI (κ = 0.41 intraobserver, κ = 0.33 interobserver). The AO-ICI classification system had levels of reproducibility similar to that of the Sanders classification, currently the most widely used system. The Essex-Lopresti classification demonstrated improved reliability compared with that reported in previous studies. This can be attributed to using sagittal computed tomography images, in addition to the originally described plain radiographs, for assessment. This improvement is relevant because of its accepted prognostic predictability.

3.
Background

The Sanders classification, based on the number of displaced fracture fragments of the posterior facet, can predict the prognosis of intraarticular calcaneal fractures. The aim of this study was to assess not only the intraobserver reproducibility and interobserver reliability of the Sanders classification but also the agreement between the preoperative type reported on computed tomography (CT) scans and direct observation during surgery.

Methods

In this cross-sectional study, preoperative CT scans of 100 patients with intra-articular calcaneal fractures operated on by a single surgeon were reviewed by two orthopedic trauma surgeons (A and B), twice, with an interval of three weeks. Their results were compared with each other and with the number of displaced fracture fragments recorded in the operation notes. The quadratic weighted kappa test was used to assess agreement between the two observers and between the observers and the surgeon.

Results

Intraobserver reproducibility for the Sanders classification of intraarticular calcaneal fractures was found to be good to excellent (A1–A2: 0.91 and B1–B2: 0.75). There was moderate agreement between the two observers (A1–B1: 0.56, A1–B2: 0.58, A2–B1: 0.48, and A2–B2: 0.51). Agreement between the reported Sanders types and the number of displaced fracture fragments seen during surgery was fair (A1–surgeon: 0.27, A2–surgeon: 0.29, B1–surgeon: 0.38, and B2–surgeon: 0.50).

Conclusions

Agreement between the Sanders classification and what is actually observed during surgery is only fair. Hence, the Sanders classification, as determined on the widest coronal CT cut extended posteriorly, should be interpreted cautiously when planning surgery.
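The quadratic weighted kappa used in the study above penalizes disagreements by the squared distance between ordinal categories, so confusing Sanders type I with type IV costs more than confusing type I with type II. A minimal sketch (not the study's code; categories are encoded 0 to k-1 and the example ratings are invented):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, k):
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    n = len(a)
    # Observed k x k confusion matrix between the two raters
    O = np.zeros((k, k))
    for i, j in zip(a, b):
        O[i, j] += 1
    # Expected matrix under independence: outer product of marginals / n
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / n
    # Quadratic disagreement weights: distant categories cost more
    idx = np.arange(k)
    W = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    return 1 - (W * O).sum() / (W * E).sum()

# Hypothetical example: two observers grading 8 fractures into 4 ordinal types
a = [0, 1, 1, 2, 3, 2, 1, 0]
b = [0, 1, 2, 2, 3, 3, 1, 0]
print(round(quadratic_weighted_kappa(a, b, 4), 2))  # → 0.89
```

scikit-learn's `cohen_kappa_score(a, b, weights="quadratic")` computes the same quantity, which reduces to the unweighted kappa when all disagreements are penalized equally.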

4.
The reliability of the AO/Orthopaedic Trauma Association classification system has not been evaluated for diaphyseal fractures or fractures attributable to gunshot injuries. Therefore, the current authors assessed its reliability for diaphyseal femur fractures and investigated the effect of a gunshot mechanism of injury. Forty-seven diaphyseal femur fractures, 23 caused by gunshots and 24 caused by blunt trauma, were classified by four observers on two occasions. The interobserver and intraobserver reliability of each level of the AO/Orthopaedic Trauma Association classification was assessed with kappa statistics. Determination of fracture type had substantial interobserver and intraobserver reliability for gunshot and blunt injuries. Reliability decreased at the subsequent levels of the classification. Fractures caused by gunshots, compared with those caused by blunt trauma, were characterized by significantly lower interobserver agreement on fracture group (κ = 0.26 versus 0.45) and subgroup (κ = 0.21 versus 0.38). The AO/Orthopaedic Trauma Association classification system has substantial interobserver and intraobserver reliability when evaluating the type of diaphyseal femur fractures. Determination of fracture group and subgroup, however, progressively reduces the reliability of the classification, especially for fractures caused by a gunshot. Because of their fracture patterns, diaphyseal femur fractures caused by gunshots cannot be classified reliably with the AO/Orthopaedic Trauma Association classification system.

5.
We examined the added value of 3-dimensional (3D) prints in improving the interobserver reliability of the Sanders classification of displaced intraarticular calcaneal fractures. Twenty-four observers (radiologists, trainees, and foot surgeons) were asked to rate 2-dimensional (2D) computed tomography images and 3D prints of a series of 11 fractures, selected from cases treated at our level I trauma center between 2014 and 2016. The interobserver reliability for the Sanders classification was assessed using kappa coefficients. Three versions of the Sanders classification were considered: the Sanders classification with subclasses, the Sanders classification without subclasses, and the combination of Sanders types III and IV because of the high incidence of comminution in both types. The reference standard for classification was the perioperative findings of a single surgeon. The 3D prints consistently yielded higher values for agreement and chance-corrected agreement. The Brennan-Prediger weighted kappa equaled 0.35 for the 2D views and 0.63 for the 3D prints for the Sanders classification with subclasses (p = .004), 0.55 (2D) and 0.76 (3D) for the classification without subclasses (p = .003), and 0.58 (2D) and 0.78 (3D) for the fusion of Sanders types III and IV (p = .027). Greater agreement was also found between the perioperative evaluation and the 3D prints (88% versus 65% for the 2D views; p < .0001). However, a greater percentage of Sanders types III-IV were classified with 2D than with 3D (56% versus 32%; p < .0001). The interobserver agreement for the evaluation of calcaneal fractures was improved by the use of 3D prints after "digital disarticulation."

6.
BACKGROUND: Complex fractures of the distal part of the humerus can be difficult to characterize on plain radiographs and two-dimensional computed tomography scans. We tested the hypothesis that three-dimensional reconstructions of computed tomography scans improve the reliability and accuracy of fracture characterization, classification, and treatment decisions. METHODS: Five independent observers evaluated thirty consecutive intra-articular fractures of the distal part of the humerus for the presence of five fracture characteristics: a fracture line in the coronal plane; articular comminution; metaphyseal comminution; the presence of separate, entirely articular fragments; and impaction of the articular surface. Fractures were also classified according to the AO/ASIF Comprehensive Classification of Fractures and the classification system of Mehne and Matta. Two rounds of evaluation were performed and then compared. Initially, a combination of plain radiographs and two-dimensional computed tomography scans (2D) was evaluated; then, two weeks later, a combination of radiographs, two-dimensional computed tomography scans, and three-dimensional reconstructions of computed tomography scans (3D) was assessed. RESULTS: Three-dimensional computed tomography improved both the intraobserver and the interobserver reliability of the AO classification system and the Mehne and Matta classification system. Three-dimensional computed tomography reconstructions also improved the intraobserver agreement for all fracture characteristics, from moderate (average kappa2D = 0.554) to substantial agreement (kappa3D = 0.793). The addition of three-dimensional images had limited influence on the interobserver reliability and diagnostic characteristics (sensitivity, specificity, and accuracy) for the recognition of specific fracture characteristics. Three-dimensional computed tomography images improved intraobserver agreement (kappa2D = 0.62 compared with kappa3D = 0.75) but not interobserver agreement (kappa2D = 0.24 compared with kappa3D = 0.28) for treatment decisions. CONCLUSIONS: Three-dimensional reconstructions improve the reliability, but not the accuracy, of fracture classification and characterization. The influence of three-dimensional computed tomography was much more notable for intraobserver comparisons than for interobserver comparisons, suggesting that different observers see different things in the scans, most likely a reflection of the training, knowledge, and experience of the observer with regard to these relatively uncommon and complex injuries.

7.
The purpose of this study was to establish the interobserver reliability and intraobserver reproducibility of the staging of Kienböck's disease according to Lichtman's classification. Posteroanterior and lateral wrist radiographs of 64 patients with a diagnosis of Kienböck's disease and 10 control subjects were reviewed independently by 4 observers on 2 separate occasions. The reviewers included 3 hand fellowship-trained surgeons and 1 orthopedist who was not fellowship-trained in hand surgery. A stage was assigned to each set of radiographs according to the Lichtman classification. Paired comparisons for reliability among the 4 observers showed an average absolute percentage agreement of 74% and an average paired weighted kappa coefficient of 0.71. Furthermore, all the controls were correctly classified as stage I, which is in accordance with the Lichtman system. With regard to reproducibility, observers duplicated their initial readings 79% of the time, with an average weighted kappa coefficient of 0.77. These results indicate substantial reliability and reproducibility of the Lichtman classification for Kienböck's disease.

8.
Our study was undertaken to assess the inter- and intra-observer variability of the classification system of Sanders for calcaneal fractures. Five consultant orthopaedic surgeons with different subspecialty interests classified CT scans of 28 calcaneal fractures using this classification system. After six months, they reclassified the scans. Kappa statistics were used to analyse the two groups. The interobserver variability of the classification system was 0.32 (95% confidence interval (CI) 0.26 to 0.38). The subclasses were then combined and assessment of agreement between the general classes as a whole gave a kappa value of 0.33 (95% CI 0.25 to 0.41). The mean kappa value for intra-observer variability of the classification system was 0.42 (95% CI 0.22 to 0.62). When the subclasses were combined, it was 0.45 (95% CI 0.21 to 0.65). Our results show that, despite its popularity, the classification system of Sanders has only fair agreement among users.

9.
This study assessed the reliability and validity of a new classification system for fractures of the femur after hip arthroplasty. Forty radiographs were evaluated by 6 observers, 3 experts and 3 nonexperts. Each observer read the radiographs on 2 separate occasions and classified each case as to its type (A, B, C) and subtype (B1, B2, B3). Reliability was assessed by looking at the intraobserver and interobserver agreement using the kappa statistic. Validity was assessed within the B group by looking at the agreement between the radiographic classification and the intraoperative findings. Our findings suggest that this classification system is reliable and valid. Intraobserver agreement was consistent across observers, ranging from 0.73 to 0.83. There was a negligible difference between experts and nonexperts. Interobserver agreement was 0.61 for the first reading and 0.64 for the second reading by kappa analysis, indicating substantial agreement between observers. Validity analysis revealed an observed agreement kappa value of 0.78, indicating substantial agreement. This study has shown that this classification is reliable and valid.

10.
Fracture-classification systems are used to recommend treatment and predict outcomes. In this study, a modified Gartland classification system of supracondylar humerus fractures in children was assessed for intraobserver and interobserver variability. Five observers classified radiographs of 50 consecutive children with extension supracondylar humerus fractures on three separate occasions. After a 2-week interval, 90% of fractures were classified the same on both readings, with an intraobserver kappa value of 0.84. After a 36-week interval, 89% of the fractures were classified the same, with a kappa value of 0.81. Interobserver reliability was evaluated by pairwise comparison among observers, resulting in an overall kappa value of 0.74. The reliability of the Gartland classification for supracondylar humerus fractures in children is better than that published for other fracture-classification systems. However, 10% of the time, a second reading by the same observer is different. This makes treatment recommendations based only on fracture type imprecise.

11.
In order to assess the interobserver and intraobserver reliability of an evaluation system of the International Clubfoot Study Group, 30 children treated for unilateral clubfoot and their radiographs were examined by three different observers. The mean intraobserver kappa value was 0.62, and the mean interobserver kappa value was 0.73; both correspond to a substantial degree of agreement. Interobserver reliability for all subgroup evaluations (morphologic, functional, and radiological) and for total scores was 90% or higher, which also indicates good interobserver reliability. In conclusion, the Bensahel et al. and International Clubfoot Study Group outcome evaluation system can be used reliably to assess the outcome of clubfoot treatment.

12.
BACKGROUND: The reproducibility and repeatability of modern systems for classification of thoracolumbar injuries have not been sufficiently studied. We assessed the interobserver and intraobserver reproducibility of the AO (Arbeitsgemeinschaft für Osteosynthesefragen) classification and compared it with that of the Denis classification. Our purpose was to determine whether the newer AO system had better reproducibility than the older Denis classification. METHODS: Anteroposterior and lateral radiographs and computerized tomography scans (axial images and sagittal reconstructions) of thirty-one acute traumatic fractures of the thoracolumbar spine were presented to nineteen observers, all trained spine surgeons, who classified the fractures according to both the AO and the Denis classification systems. Three months later, the images of the thirty-one fractures were scrambled into a different order, and the observers repeated the classification. The Cohen kappa (κ) test was used to determine interobserver and intraobserver agreement, which was measured with regard to the three basic classifications in the AO system (types A, B, and C) as well as the nine subtypes of that system. We also measured the agreement with regard to the four basic types in the Denis classification (compression, burst, seat-belt, and fracture-dislocation) and with regard to the sixteen subtypes of that system. RESULTS: The AO classification was fairly reproducible, with an average kappa of 0.475 (range, 0.389 to 0.598) for the agreement regarding the assignment of the three types and an average kappa of 0.537 for the agreement regarding the nine subtypes. The average kappa for the agreement regarding the assignment of the four Denis fracture types was 0.606 (range, 0.395 to 0.702), and it was 0.173 for agreement regarding the sixteen subtypes. The intraobserver agreement (repeatability) was 82% and 79% for the AO and Denis types, respectively, and 67% and 56% for the AO and Denis subtypes, respectively. CONCLUSIONS: Both the Denis and the AO system for the classification of spine fractures had only moderate reliability and repeatability. The tendency for well-trained spine surgeons to classify the same fracture differently on repeat testing is a matter of some concern.

13.
Background

This study assessed the reliability and validity of the modified Unified Classification System for femur fractures after hip arthroplasty.

Methods

Four hundred and two cases were evaluated by 6 observers: 3 experts and 3 trainee surgeons. Each observer read the radiographs on 2 separate occasions and classified each case as to its type. Reliability was assessed through intraobserver and interobserver agreement, and validity was assessed within the B group by comparing the radiographic classification with the intraoperative findings. Agreement and validity were analyzed using weighted kappa statistics.

Results

The mean kappa value for interobserver agreement was 0.882 (0.833–0.929) for consultants (almost perfect agreement) and 0.776 (0.706–0.836) for trainees (substantial agreement). Intraobserver kappa values ranged from 0.701 to 0.972, showing substantial to almost perfect agreement. Validity analysis of 299 type B cases revealed 89.854% agreement, with a mean kappa value of 0.849 (0.770–0.946) (almost perfect agreement).

Conclusions

This study has shown that the modified Unified Classification System is reliable and valid. We believe it is useful for judging implant stability and for establishing the therapeutic strategy for periprosthetic femoral fractures.

14.
15.
The purpose of the current investigation was to determine the interobserver and intraobserver reliability of the classification system of Steinberg et al for osteonecrosis of the femoral head. Sixty-five anteroposterior and lateral radiographs of hips were selected randomly from a pool of patients with confirmed osteonecrosis of the femoral head. Six fellowship-trained observers viewed the radiographs (Reading 1). The observers used the six main stages of the classification, excluding the A, B, and C subgroups. The same observers viewed the radiographs 4 months later in reverse order (Reading 2). Reading 1 was used to calculate interobserver kappa values, and Reading 2 was used to calculate intraobserver kappa values. Stage-specific kappa values for interobserver variation among all viewers were as follows: Stage I, κ = 0.64; Stage II, κ = 0.51; Stage III, κ = 0.21; Stage IV, κ = 0.49; Stage V, κ = 0.36; and Stage VI, κ = 0.80. Stage-specific kappa values for intraobserver variation among all viewers were as follows: Stage I, κ = 0.74; Stage II, κ = 0.60; Stage III, κ = 0.46; Stage IV, κ = 0.59; Stage V, κ = 0.27; and Stage VI, κ = 0.78. An average of 10 of 21 (48%) interobserver errors involved Stage III, and an average of 6.3 of 21 (30%) intraobserver errors involved Stage V. The presence of the crescent sign in Stage III and joint space narrowing in Stage V markedly diminished the overall reliability of any four- to six-stage classification system. Based on the authors' experience and analysis of the current classifications of osteonecrosis of the femoral head, an easy and reproducible Pittsburgh classification system is proposed.

16.

Background

Although the reliability of determining acromial morphology has been examined, to date, there has not been an analysis of interobserver and intraobserver reliability on determining the presence and measuring the size of an acromial enthesophyte.

Questions/Purposes

The hypothesis of this study was that there will be poor intraobserver and interobserver reliability in the (1) determination of the presence of an acromial enthesophyte, (2) determination of the size of an acromial enthesophyte, and (3) determination of acromial morphology.

Patients and Methods

Fifteen fellowship-trained orthopedic shoulder surgeons reviewed the radiographs of 15 patients at two different intervals. Measurement of acromial enthesophytes was performed using two techniques: (1) enthesophyte length and (2) enthesophyte–humeral distance. Acromial morphology was also determined. Interobserver and intraobserver agreement was determined using intraclass correlation and kappa statistical methods.

Results

The interobserver reliability was fair to moderate and the intraobserver reliability moderate for determining the presence of an acromial enthesophyte. The measurement of the enthesophyte length showed poor interobserver and intraobserver reliability. The measurement of the enthesophyte–humeral distance showed poor interobserver reliability and moderate intraobserver reliability. The interobserver and intraobserver reliability in determining acromial morphology was found to be moderate and good, respectively.

Conclusions

There is fair to moderate reliability among fellowship-trained shoulder surgeons in determining the presence of an acromial enthesophyte. However, there is poor reliability among observers in measuring the size of the enthesophyte. This study suggests that the enthesophyte–humeral distance may be more reliable than the enthesophyte length when measuring the size of the enthesophyte.

17.
The aim of this study was to assess inter- and intraobserver agreement of the traditional systems (Ruedi-Allgower, AO [Arbeitsgemeinschaft für Osteosynthesefragen], and Topliss) and the newly proposed Leonetti classification system of pilon fractures. We studied all patients at our center who underwent pilon fracture surgery over a 2-year period: 68 patients (70 legs) were included. Four observers independently classified each pilon fracture according to the Ruedi-Allgower, AO, Topliss, and Leonetti systems by evaluating radiographs and computed tomography images on 2 occasions. The inter- and intraobserver agreements were calculated using the Fleiss kappa test. Interobserver reliability was good for AO types (A, B, and C) and Ruedi-Allgower (κ = 0.71 and 0.61, respectively), whereas the interobserver reliability was moderate for AO groups (A1, A2, A3, B1, B2, B3, C1, C2, and C3), Topliss families, Topliss subfamilies, Leonetti types, and Leonetti subtypes. Intraobserver reproducibility was excellent for the Ruedi-Allgower classification, AO types, and Topliss families and good for AO groups, Topliss subfamilies, and Leonetti types and subtypes. Ruedi-Allgower and AO classification systems are the most reliable among those currently used for pilon fractures, but with lower agreement at the AO group level. The use of Topliss and Leonetti classification systems is not recommended because of less favorable results.
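The Fleiss kappa used in the study above generalizes Cohen's kappa to pool agreement across more than two observers at once, working from per-item category counts rather than paired ratings. A hedged sketch (not the study's code; the rating counts below are invented for illustration):

```python
import numpy as np

def fleiss_kappa(ratings):
    # ratings[i][c] = number of raters who assigned item i to category c;
    # every item is assumed to be rated by the same number of raters m.
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[0]
    m = ratings[0].sum()
    # Overall category proportions across all ratings
    p_c = ratings.sum(axis=0) / (n_items * m)
    # Per-item agreement: fraction of concordant rater pairs for that item
    P_i = ((ratings ** 2).sum(axis=1) - m) / (m * (m - 1))
    P_bar = P_i.mean()
    P_e = (p_c ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 observers classifying 3 fractures into 3 categories
counts = [
    [4, 0, 0],  # unanimous agreement
    [2, 2, 0],  # split between two categories
    [1, 1, 2],  # widely dispersed ratings
]
print(round(fleiss_kappa(counts), 2))  # → 0.12, i.e. only slight agreement
```

The same statistic is available as `statsmodels.stats.inter_rater.fleiss_kappa` for anyone who prefers a library implementation.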

18.
Introduction

We compared the intra- and interobserver reproducibility of the classifications of tibial plateau fractures most commonly used in our clinical practice: the AO and Schatzker classifications.

Patients and methods

Agreement was measured using kappa coefficients on data obtained from three observers reviewing 30 fractures, and the values were interpreted according to Landis and Koch.

Results

Both classifications were substantially reliable with regard to intraobserver reliability, but the Schatzker system was only fairly reliable and the AO classification moderately reliable with regard to interobserver reliability. Breaking down the AO classification for intraobserver reliability, the AO group was substantially reliable and the AO type excellently reliable. For interobserver reliability, the AO group was moderately reliable while the AO type was substantially reliable.

Conclusion

For tibial plateau fractures seen on plain radiographs, the AO classification is more reliable between observers than the Schatzker classification.

19.
Treatment recommendations for metacarpal neck fractures of the small finger are generally based on the degree of apex dorsal angulation at the fracture site. We evaluated the variability of measurement of fracture angulation and the effect this variability has on treatment recommendations for these injuries. A total of 96 radiographs (anteroposterior, lateral, oblique views) of 32 patients with fractures of the small finger metacarpal neck were evaluated independently by 3 fellowship-trained orthopedic hand surgeons. Treatment recommendations for each fracture were tabulated. This process was repeated 6 weeks later to evaluate intraobserver variability. Kappa coefficients of inter- and intraobserver reliability of fracture angulation measurement and treatment plans were generated. The mean reliability coefficient of the measurement of fracture angulation between the 3 different observers was slight. Similarly, the reproducibility of fracture angulation measurement within observers was fair. Agreement between observers for appropriate treatment recommendations for each fracture was fair and agreement within observers for treatment was only slightly better. The measurement of fracture angulation of small finger metacarpal neck fractures seems to be subject to a high degree of inter- and intraobserver variability.

20.
The impact of the Garden classification on proposed operative treatment
The current study evaluates the interobserver reliability and intraobserver reproducibility of the Garden classification of femoral neck fractures, assesses the influence of a lateral radiograph on a fracture's classification, and determines the classification's impact on the surgeon's choice of operative treatment. Forty radiographs of femoral neck fractures were evaluated independently by five orthopaedic surgeons. Kappa values were calculated for interobserver reliability and intraobserver variability with respect to the readers' ability to assess the fractures using the Garden classification and to determine fracture displacement with and without access to a lateral radiograph. In 69% of the instances in which a reader changed the classification of a fracture, the proposed treatment of the fracture did not change. The Garden classification has poor interobserver reliability but good intraobserver reproducibility. The addition of a lateral radiograph does not seem to improve the reliability of the current Garden classification system but may improve the reader's ability to determine fracture displacement. To improve the reliability and usefulness of the Garden classification, the authors suggest that the classification should be modified to have only two stages (Garden A, nondisplaced or valgus impacted, and Garden B, displaced) and to include the use of a lateral radiograph.
