Similar articles
 17 similar articles were retrieved (search time: 125 ms)
1.
Objective: To investigate and compare the interobserver reliability and intraobserver reproducibility of the AO classification, the Denis classification, and the TLICS scoring system in the diagnosis of thoracolumbar fractures. Methods: Thirty-one patients with thoracolumbar fractures and complete clinical and imaging data (radiographs, CT, MRI) were selected, and their data were provided to 6 orthopedic surgeons, who classified each fracture using the AO classification, the Denis classification, and the TLICS scoring system. Classification was repeated 3 months later. Weighted Cohen's kappa coefficients were used to evaluate interobserver reliability and intraobserver reproducibility. Results: The mean interobserver kappa coefficients for the AO, Denis, and TLICS systems were 0.517, 0.639, and 0.713, respectively; the mean intraobserver kappa coefficients were 0.766, 0.832, and 0.804, respectively. Conclusion: Of the three thoracolumbar fracture classification systems, the TLICS scoring system showed the highest reliability and reproducibility, followed by the Denis classification, with the AO classification performing worst; the TLICS system therefore has the greatest practical clinical value.
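Nearly every study in this list quantifies agreement with Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement between two raters and p_e is the agreement expected by chance. As a point of reference only (the snippet below is not taken from any of the cited studies), here is a minimal sketch of computing unweighted and linearly weighted kappa for two raters, assuming scikit-learn is available and using invented ratings:

```python
# Hypothetical example: Cohen's kappa for two raters classifying the same
# fractures. The rating data below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

# Each element is the type assigned to one fracture (e.g., AO types A/B/C).
rater_1 = ["A", "A", "B", "C", "A", "B", "B", "C", "A", "C"]
rater_2 = ["A", "B", "B", "C", "A", "B", "A", "C", "A", "B"]

# Unweighted kappa: all disagreements count equally (suitable for nominal types).
kappa = cohen_kappa_score(rater_1, rater_2)

# Weighted kappa: partial credit for "near" disagreements on an ordered scale,
# e.g., TLICS total scores; linear weights are used here.
tlics_rater_1 = [2, 4, 5, 7, 3, 2, 5, 6, 4, 2]
tlics_rater_2 = [2, 5, 5, 7, 4, 2, 4, 6, 4, 3]
weighted_kappa = cohen_kappa_score(tlics_rater_1, tlics_rater_2, weights="linear")

print(f"unweighted kappa = {kappa:.3f}")
print(f"linearly weighted kappa = {weighted_kappa:.3f}")
```

The unweighted form is appropriate for nominal categories such as fracture types; the weighted form is the usual choice for ordered scores, which is why several of the studies below report weighted kappa.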

2.
The clinical value of two commonly used classification systems for thoracolumbar fractures (total citations: 2; self-citations: 2; citations by others: 0)
Objective: To evaluate the clinical value of the Denis and Gertzbein classification systems for thoracolumbar fractures. Methods: Radiographs and CT scans of 40 acute thoracolumbar fractures were provided to 10 orthopedic surgeons, who classified them with both the Denis and Gertzbein systems. Classification was repeated 3 months later. Kappa (κ) statistics were used to assess interobserver and intraobserver reliability. Results: The mean interobserver kappa was 0.588 for the four Denis types and 0.342 for the 16 subtypes; for the three Gertzbein types it was 0.603, and 0.420 for the 9 subtypes. The mean intraobserver kappa was 0.706 for the four Denis types and 0.432 for the 16 subtypes; for the three Gertzbein types it was 0.746, and 0.511 for the 9 subtypes. Conclusion: Both the Denis and the Gertzbein classification systems showed only moderate agreement and reproducibility, and both have shortcomings of varying degrees.

3.
Objective: To evaluate the reliability and reproducibility of the new AO classification system for thoracolumbar injuries and to explore the main factors affecting classification agreement. Methods: Five surgeons independently classified 70 patients with thoracolumbar injuries using the new AO system, based on preoperative anteroposterior and lateral radiographs, CT, and MRI. For a given patient, a classification round was considered discordant if even one of the five surgeons assigned a different type. Six weeks later, the cases were reordered and classified again. All materials were free of any markings related to classification. Cohen's kappa coefficients were used to evaluate interobserver reliability and intraobserver reproducibility. Results: The interobserver kappa for the new AO system was 0.602, and the mean intraobserver kappa was 0.782. Among the three main fracture types, compression (type A) and translation (type C) injuries were identified with moderate-to-high reliability and excellent reproducibility, with interobserver kappa values of 0.604 and 0.662 and mean intraobserver kappa values of 0.787 and 0.761, respectively. Agreement was relatively poor for distraction (type B) injuries, with an interobserver kappa of 0.362 and a mean intraobserver kappa of 0.657. Across all subtypes, the interobserver kappa was 0.526 and the mean intraobserver kappa was 0.701; agreement was worst for subtype B2 (interobserver kappa 0.214; mean intraobserver kappa 0.633), followed by subtype A4 (interobserver kappa 0.322; mean intraobserver kappa 0.685). Conclusion: The new AO classification system for thoracolumbar injuries shows moderate-to-high agreement and excellent reproducibility, but its reliability is poor for A4 and B2 fractures.

4.
Preliminary evaluation of the thoracolumbar injury classification and severity score system (total citations: 1; self-citations: 0; citations by others: 1)
Objective: To assess the interobserver reliability and intraobserver reproducibility of the Thoracolumbar Injury Classification and Severity Score (TLICS) and its value in guiding the treatment of thoracolumbar injuries. Methods: Forty-eight patients with thoracolumbar fractures treated between January 2006 and June 2007 underwent radiography, CT, and MRI of the thoracolumbar spine and were assessed with the TLICS system; the assessment was repeated 3 months later. Cohen's kappa coefficients were used to analyze interobserver agreement and intraobserver reproducibility. Results: The kappa coefficients for the TLICS subcategories fell between moderate and substantial reliability (0.43–0.72), and the kappa coefficients for reproducibility fell between moderate and substantial (0.55–0.76). The diagnostic accuracy of the TLICS system was 95.1%, with a sensitivity of 87.5% and a specificity of 96.8%. Conclusion: The TLICS system has high reliability and reproducibility, is simple to use and easy to master, provides a relatively comprehensive and accurate assessment of thoracolumbar injuries, and can serve as a basis for selecting clinical treatment.

5.
Objective: To compare the intraobserver reproducibility and interobserver reliability of the French classification and the CARDS classification for degenerative lumbar spondylolisthesis, and to explore the value of the two classifications in these patients. Methods: A retrospective analysis was performed on 118 patients with degenerative lumbar spondylolisthesis (L4/5 in 91 cases, L5/S1 in 27 cases) treated between January 2012 and June 2016, including 26 men and 92 women with a mean age of 61.1±8.1 years. Three spine surgeons independently graded each patient's preoperative radiographs twice, using the French classification and the clinical and radiographic degenerative spondylolisthesis (CARDS) classification. The results were collected for analysis of intraobserver reproducibility and interobserver reliability, and kappa values were used to compare the two classifications. Results: The three observers performed 708 classifications (118 cases × 3 observers × 2 rounds) with the French system: type 1 in 261, type 2 in 107, type 3 in 83, type 4 in 54, and type 5 in 203. Intraobserver agreement was 80.5%–86.4% (kappa 0.740–0.815) and interobserver agreement was 79.7%–82.2% (kappa 0.728–0.758), both in the "substantially reliable" range. Measuring and classifying one patient took about 138 s on average. Of the 708 classifications with the CARDS system, type A (A1) accounted for 19, type B for 149 (B1 90, B2 59), type C for 399 (C1 291, C2 108), and type D for 141 (D1 98, D2 43). Overall intraobserver agreement was 90.7%–93.2% (kappa 0.878–0.911) and interobserver agreement was 88.1%–94.1% (kappa 0.844–0.921), both in the "fully reliable" range. Measuring and classifying one patient took about 67 s on average. Conclusion: Both classification systems have high reproducibility and reliability, and the CARDS classification outperforms the French classification in both respects.

6.
Objective: To compare the intraobserver reproducibility and interobserver reliability of the thoracolumbar osteoporotic fracture severity assessment system (TLOFSAS) and the osteoporotic fracture (OF) classification in assessing osteoporotic vertebral compression fractures (OVCF) of the thoracolumbar spine. Methods: 146 patients with thoracolumbar OVCF treated at Bazhong Central Hospital between August 2017 and May 2019 were enrolled. Using the TLOFSAS and the OF classification as the standards, six spine surgeons independently evaluated the clinical data of each patient and repeated the evaluation 2 weeks later. The results were analyzed for intraobserver reproducibility and interobserver reliability, and kappa agreement statistics were used to compare the two systems. Results: The six surgeons performed 1752 (146 × 6 × 2) TLOFSAS evaluations: a score of <4 in 876, 4 in 582, and >4 in 294. Mean interobserver agreement was 77.4%, with a mean reliability κ of 0.76; mean intraobserver agreement was 79.6%, with a mean reproducibility κ of 0.786. The six surgeons also performed 1752 OF classifications: type 1 in 224, type 2 in 694, type 3 in 644, type 4 in 154, and type 5 in 36. Mean interobserver agreement was 84.3%, with a mean reliability κ of 0.79; mean intraobserver agreement was 85.2%, with a mean reproducibility κ of 0.796. The rate of patients selected for surgery was higher with the TLOFSAS (90.62%, 58/64) than with the OF classification (85.29%, 58/68), but the difference was not statistically significant (P>0.05). Conclusion: Both the TLOFSAS and the OF classification show good reproducibility and reliability in assessing thoracolumbar OVCF and are worth applying in clinical practice.

7.
Objective: To compare the reliability and reproducibility of the King, Lenke, and PUMC (Peking Union Medical College) classification systems for adolescent idiopathic scoliosis and to explore the clinical value of the PUMC classification. Methods: One hundred patients with adolescent idiopathic scoliosis treated surgically between January 2002 and December 2004 were randomly selected, including 22 boys and 78 girls (mean age, 14.9 years). The Cobb angle of the main curve ranged from 40° to 75° (mean, 52°). Each patient had complete preoperative radiographs, including standing full-spine anteroposterior and lateral views, supine left and right bending views, and pelvic radiographs; none of the films were pre-measured. Four spine surgeons experienced in classification independently assigned the King, Lenke, and PUMC classifications and repeated the classification 2 weeks later. The results were collected for analysis of reliability and reproducibility, and agreement was assessed with the kappa statistic. Results: The mean reliability of the King, Lenke, and PUMC classifications was 81.2% (kappa = 0.773), 60.5% (kappa = 0.560), and 84.3% (kappa = 0.819), respectively; the mean reproducibility was 91.5% (kappa = 0.897), 81.8% (kappa = 0.796), and 92% (kappa = 0.907), respectively. Conclusion: The PUMC classification captures the deformity of scoliosis in all three planes, is comprehensive and easy to master, shows good reliability and reproducibility, and is well suited to three-dimensional correction of scoliosis.

8.
Objective: To compare the reliability and reproducibility of the AO, Schatzker, and Hohl and Moore classifications of tibial plateau fractures. Methods: Four observers of different seniority classified 60 tibial plateau fractures using the three systems, and classified the same cases again, in randomized order, 8 weeks later. Reliability and reproducibility were analyzed with kappa values and the mean percentage of classification agreement. Results: The interobserver kappa values for the AO, Schatzker, and Hohl and Moore classifications were 0.502, 0.675, and 0.391, respectively, with mean agreement on the final classification of 70.6%, 87.9%, and 56.3%. The intraobserver kappa values between the two rounds were 0.807, 0.926, and 0.739, respectively, with mean agreement of 89.1%, 94.8%, and 82.1%. Conclusion: None of the three systems is an ideal classification, but the Schatzker classification is clearly superior to the AO and Hohl and Moore classifications in reliability and reproducibility, is easy to master, and is better suited to guiding the treatment of tibial plateau fractures. A more precise method for the comprehensive assessment of fracture classification still needs to be developed.

9.
[Objective] To evaluate the Letournel classification system for acetabular fractures and analyze its clinical value. [Methods] From a database of 265 surgically treated acetabular fractures, 6 cases of each of the 10 subtypes of the Letournel classification were randomly selected and divided into two groups: a plain-film group of 30 cases (anteroposterior pelvic, iliac oblique, and obturator oblique radiographs) and a CT group of 30 cases (radiographs plus two-dimensional CT). Nine orthopedic trauma surgeons reviewed both groups and made a diagnosis according to the Letournel classification; each observer read the images without any other clinical information. In a second stage, 3 months later, the same material was reviewed again. Kappa values were calculated to assess interobserver reliability and intraobserver reproducibility. [Results] Interobserver reliability in the two rounds was 0.65 (0.70) for the plain-film group and 0.66 (0.71) for the CT group; intraobserver reproducibility between the two readings was 0.74 for the plain-film group and 0.77 for the CT group. [Conclusion] The Letournel classification yields fairly consistent diagnoses of acetabular fractures; although CT is of great value in guiding treatment, it does not markedly improve the reliability of Letournel classification.

10.
The classification of thoracolumbar fractures is inseparable from their clinical treatment, and the classic classification systems have many shortcomings. The Thoracolumbar Injury Severity Score (TLISS) and the Thoracolumbar Injury Classification and Severity Score (TLICS) combine fracture morphology with neurologic status to evaluate thoracolumbar fractures comprehensively; they are comprehensive, highly reliable, and highly reproducible, and are currently the most dependable classification and scoring systems.
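For orientation, the TLICS total referred to above and in several of the abstracts is the sum of three sub-scores (fracture morphology, neurologic status, and posterior ligamentous complex integrity), conventionally read as ≤3 non-operative, 4 surgeon's choice, and ≥5 operative. A minimal sketch of that arithmetic follows, using the commonly published point values rather than data from any study cited here:

```python
# Hypothetical TLICS calculator based on the commonly published point values;
# verify against the original TLICS publication before any clinical use.
MORPHOLOGY = {"compression": 1, "burst": 2, "translation/rotation": 3, "distraction": 4}
NEUROLOGY = {"intact": 0, "nerve root": 2, "complete cord": 2, "incomplete cord": 3, "cauda equina": 3}
PLC = {"intact": 0, "indeterminate": 2, "injured": 3}

def tlics_total(morphology: str, neurology: str, plc: str) -> int:
    """Sum the three TLICS components."""
    return MORPHOLOGY[morphology] + NEUROLOGY[neurology] + PLC[plc]

def tlics_recommendation(total: int) -> str:
    """Map the total score to the conventional treatment recommendation."""
    if total <= 3:
        return "non-operative"
    if total == 4:
        return "operative or non-operative (surgeon's choice)"
    return "operative"

score = tlics_total("burst", "incomplete cord", "indeterminate")  # 2 + 3 + 2 = 7
print(score, tlics_recommendation(score))  # -> 7 operative
```

Because the total is an ordered score rather than a nominal category, weighted kappa (as in abstract 1 above) is the natural agreement statistic for it.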

11.
BACKGROUND: The reproducibility and repeatability of modern systems for classification of thoracolumbar injuries have not been sufficiently studied. We assessed the interobserver and intraobserver reproducibility of the AO (Arbeitsgemeinschaft für Osteosynthesefragen) classification and compared it with that of the Denis classification. Our purpose was to determine whether the newer AO system had better reproducibility than the older Denis classification. METHODS: Anteroposterior and lateral radiographs and computerized tomography scans (axial images and sagittal reconstructions) of thirty-one acute traumatic fractures of the thoracolumbar spine were presented to nineteen observers, all trained spine surgeons, who classified the fractures according to both the AO and the Denis classification systems. Three months later, the images of the thirty-one fractures were scrambled into a different order, and the observers repeated the classification. The Cohen kappa (kappa) test was used to determine interobserver and intraobserver agreement, which was measured with regard to the three basic classifications in the AO system (types A, B, and C) as well as the nine subtypes of that system. We also measured the agreement with regard to the four basic types in the Denis classification (compression, burst, seat-belt, and fracture-dislocation) and with regard to the sixteen subtypes of that system. RESULTS: The AO classification was fairly reproducible, with an average kappa of 0.475 (range, 0.389 to 0.598) for the agreement regarding the assignment of the three types and an average kappa of 0.537 for the agreement regarding the nine subtypes. The average kappa for the agreement regarding the assignment of the four Denis fracture types was 0.606 (range, 0.395 to 0.702), and it was 0.173 for agreement regarding the sixteen subtypes. The intraobserver agreement (repeatability) was 82% and 79% for the AO and Denis types, respectively, and 67% and 56% for the AO and Denis subtypes, respectively. CONCLUSIONS: Both the Denis and the AO system for the classification of spine fractures had only moderate reliability and repeatability. The tendency for well-trained spine surgeons to classify the same fracture differently on repeat testing is a matter of some concern.

12.

Purpose

The objective of this study was to analyze the interobserver reliability and intraobserver reproducibility of the new AOSpine thoracolumbar spine injury classification system in young Chinese orthopedic surgeons with different levels of experience in spinal trauma. Previous reports suggest that the new AOSpine thoracolumbar spine injury classification system demonstrates acceptable interobserver reliability and intraobserver reproducibility. However, there are few studies in Asia, especially in China.

Methods

The AOSpine thoracolumbar spine injury classification system was applied to 109 patients with acute, traumatic thoracolumbar spinal injuries by two groups of spinal surgeons with different levels of clinical experience. The Kappa coefficient was used to determine interobserver reliability and intraobserver reproducibility.

Results

The overall Kappa coefficient for all cases was 0.362, which represents fair reliability. The Kappa statistic was 0.385 for A-type injuries and 0.292 for B-type injuries, which represents fair reliability, and 0.552 for C-type injuries, which represents moderate reliability. The Kappa coefficient for intraobserver reproducibility was 0.442 for A-type injuries, 0.485 for B-type injuries, and 0.412 for C-type injuries. These values represent moderate reproducibility for all injury types. The raters in Group A provided significantly better interobserver reliability than Group B (P < 0.05). There were no between-group differences in intraobserver reproducibility.

Conclusions

This study suggests that the new AOSpine injury classification system may be applied in day-to-day clinical practice in China following extensive training of healthcare providers. Further prospective studies involving different healthcare providers and clinical settings are essential to validate this classification system and assess its utility.
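The qualitative labels used in this and other abstracts ("fair", "moderate", "substantial") follow the widely used Landis and Koch bands for interpreting kappa. A minimal reference sketch of that mapping, with the conventional cut-points rather than thresholds taken from any single study here:

```python
def interpret_kappa(kappa: float) -> str:
    """Map a kappa value to the conventional Landis & Koch (1977) agreement bands."""
    if kappa < 0.0:
        return "poor (less than chance)"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Example values drawn from the results reported in the abstracts above.
for k in (0.362, 0.552, 0.713, 0.897):
    print(f"kappa = {k:.3f}: {interpret_kappa(k)}")
```

These bands are a convention, not a statistical test; papers occasionally shift the boundaries slightly, which is worth remembering when comparing verbal labels across the studies listed here.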

13.
Reliability evaluation of the thoracolumbar fracture classification and severity score (total citations: 1; self-citations: 0; citations by others: 1)
Objective: To evaluate the reliability of the Thoracolumbar Injury Classification and Severity Score (TLICS) by comparison with the Denis classification. Methods: Forty-one patients with single-level thoracolumbar vertebral fractures without neurologic injury treated from January 2010 to December 2012 were included; all had TLICS scores of ≤3, but all involved two-column fractures according to the Denis classification and therefore met its surgical criteria. The patients were randomized into an operative group (group A) and a non-operative group (group B): the 21 patients in group A underwent posterior open reduction and internal fixation, and the 20 patients in group B were treated with traditional manual reduction. Results: Follow-up lasted 10–19 months (mean, 15 months). In both groups, the Cobb angle, anterior vertebral body height, and VAS scores improved significantly compared with pre-treatment values (P<0.05). The operative group recovered somewhat better than the non-operative group, but the difference was not statistically significant (P>0.05). Conclusion: For thoracolumbar burst fractures with a TLICS score of ≤3, non-operative treatment also achieves good results; compared with the Denis classification, the TLICS system is more reliable and more scientifically grounded.

14.
The impact of the Garden classification on proposed operative treatment (total citations: 2; self-citations: 0; citations by others: 2)
The current study evaluates the interobserver reliability and intraobserver reproducibility of the Garden classification of femoral neck fractures, assesses the influence of a lateral radiograph on a fracture's classification, and determines the classification's impact on the surgeon's choice of operative treatment. Forty radiographs of femoral neck fractures were evaluated independently by five orthopaedic surgeons. Kappa values were calculated for interobserver reliability and intraobserver variability with respect to the readers' ability to assess the fractures using the Garden classification and to determine fracture displacement with and without access to a lateral radiograph. In 69% of the instances in which a reader changed the classification of a fracture, the proposed treatment of the fracture did not change. The Garden classification has poor interobserver reliability but good intraobserver reproducibility. The addition of a lateral radiograph does not seem to improve the reliability of the current Garden classification system but may improve the reader's ability to determine fracture displacement. To improve the reliability and usefulness of the Garden classification, the authors suggest that the classification should be modified to have only two stages (Garden A, nondisplaced or valgus impacted, and Garden B, displaced) and to include the use of a lateral radiograph.

15.

Purpose

The aim of this multicentre study was to determine whether the recently introduced AOSpine Classification and Injury Severity System has better interrater and intrarater reliability than the already existing Thoracolumbar Injury Classification and Severity Score (TLICS) for thoracolumbar spine injuries.

Methods

Clinical and radiological data of 50 consecutive patients admitted at a single centre with a diagnosis of an acute traumatic thoracolumbar spine injury were distributed, in the form of a PowerPoint presentation, to eleven attending spine surgeons from six different institutions, who classified them according to both classifications. After a time span of 6 weeks, the cases were randomly rearranged and sent again to the same surgeons for re-classification. Interobserver and intraobserver reliability for each component of the TLICS and the new AOSpine classification were evaluated using the Fleiss kappa coefficient (k value) and Spearman rank order correlation.

Results

Moderate interrater and intrarater reliability was seen for grading fracture type and the integrity of the posterior ligamentous complex (fracture type: k = 0.43 ± 0.01 and 0.59 ± 0.16, respectively; PLC: k = 0.47 ± 0.01 and 0.55 ± 0.15, respectively), and fair to moderate reliability (k = 0.29 ± 0.01 interobserver and 0.44 ± 0.10 intraobserver) was seen for the total TLICS score. Moderate interrater (k = 0.59 ± 0.01) and substantial intrarater (k = 0.68 ± 0.13) reliability was seen for grading fracture type, regardless of subtype, according to the AOSpine classification. Near-perfect interrater and intrarater agreement was seen for neurological status with both classification systems.

Conclusions

The recently proposed AOSpine classification has better reliability for identifying fracture morphology than the existing TLICS. Additional studies are clearly necessary on the application of these classification systems by physicians at different levels of training and at different trauma centers, to evaluate not only their reliability and reproducibility but also their other attributes, especially the clinical significance of a good classification system.
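Agreement among many raters, as with the eleven surgeons in this study, is usually summarized with Fleiss' kappa rather than the pairwise Cohen statistic. A minimal sketch of computing it from a cases-by-raters matrix, assuming statsmodels is available; the ratings below are invented for illustration and are not data from any study cited here:

```python
# Hypothetical example: Fleiss' kappa for several raters classifying the same cases.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are cases, columns are raters; entries are the assigned AOSpine type
# encoded as 0 = A, 1 = B, 2 = C.
ratings = np.array([
    [0, 0, 0, 1, 0],
    [1, 1, 2, 1, 1],
    [2, 2, 2, 2, 2],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 2, 1],
    [0, 0, 0, 0, 0],
])

# aggregate_raters converts the cases-by-raters matrix into a
# cases-by-categories count table, which fleiss_kappa expects.
table, _categories = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")
```

Fleiss' kappa assumes each case is rated by the same number of raters; studies that instead average pairwise Cohen kappas (as several of the Chinese abstracts above appear to do) will generally report slightly different values for the same data.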

16.

Purpose

The aims of this study were (1) to demonstrate that the AOSpine thoracolumbar spine injury classification system can be reliably applied by an international group of surgeons and (2) to delineate those injury types which are difficult for spine surgeons to classify reliably.

Methods

A previously described classification system of thoracolumbar injuries, which consists of a morphologic classification of the fracture, a grading system for the neurologic status, and relevant patient-specific modifiers, was applied to 25 cases by 100 spinal surgeons from across the world twice independently, in grading sessions 1 month apart. The results were analyzed for classification reliability using the Kappa coefficient (κ).

Results

The overall Kappa coefficient for all cases was 0.56, which represents moderate reliability. Kappa values describing interobserver agreement were 0.80 for type A injuries, 0.68 for type B injuries and 0.72 for type C injuries, all representing substantial reliability. The lowest level of agreement for specific subtypes was for fracture subtype A4 (Kappa = 0.19). Intraobserver analysis demonstrated an overall average Kappa statistic of 0.68 for subtype grading, also representing substantial reproducibility.

Conclusion

In a worldwide sample of spinal surgeons without previous exposure to the recently described AOSpine Thoracolumbar Spine Injury Classification System, we demonstrated moderate interobserver and substantial intraobserver reliability. These results suggest that most spine surgeons can apply this system to spine trauma patients as reliably as, or more reliably than, previously described systems.

17.
The Letournel and Judet classification of acetabular fractures is widely used. The classification is based on the identification of fracture lines on plain radiographs. Three-dimensional CT has been claimed to give a better view of the fracture lines. Our study showed that intraobserver reproducibility and interobserver reliability were almost the same whether the classification was performed using plain radiographs or 3D CT scans, and 3D CT did not increase either the interobserver reliability or the intraobserver reproducibility of classifying these fractures.
