Similar Articles
20 similar articles found
1.
At many academic hospitals, radiology residents provide preliminary interpretations of CT studies performed outside regular working hours. We examined the rate of discrepancies between resident interpretations and the final reports issued by staff radiologists. We prospectively obtained 1,756 preliminary reports and corresponding final reports for computed tomography (CT) scans performed on call between November 2006 and March 2007. The overall rate of clinically significant discrepancies (those that would potentially alter the patient's clinical course prior to issue of the final report) was 2.0%. Major discrepancy rates for abdominal/pelvic, chest, cervical spine and head CT were 4.1%, 2.5%, 1.0% and 0.7%, respectively. Senior residents had fewer major discrepancies than their junior colleagues. Time of interpretation was also evaluated, but no statistically significant relationship was observed. In summary, this study demonstrates a low discrepancy rate between residents and staff radiologists and identifies areas where after-hours service may be further improved.

2.
Purpose: To assess the incidence and clinical significance of discrepancies in subspecialty interpretation of outside breast imaging examinations for newly diagnosed breast cancer patients presenting to a tertiary cancer center. Materials and Methods: This Institutional Review Board-approved retrospective study included patients presenting from July 2016 to March 2017 to a National Cancer Institute-designated comprehensive cancer center for a second opinion after breast cancer diagnosis. Outside and second-opinion radiology reports of 252 randomly selected patients were compared in consensus by two subspecialty breast radiologists. A peer review score was assigned, modeled after the ACR's RADPEER™ peer review metric: 1—agree; 2—minor discrepancy (unlikely clinically significant); 3—moderate discrepancy (may be clinically significant); 4—major discrepancy (likely clinically significant). Among cases with clinically significant discrepancies, rates of clinical management change (alterations including change in follow-up, neoadjuvant therapy use, and surgical management as a direct result of image review) and detection of additional malignancy were assessed through electronic medical record review. Results: A significant difference in interpretation (score 3 or 4) was seen in 41 of 252 cases (16%; 95% confidence interval [CI], 11.7%-20.8%). The difference led to additional workup in 38 of 252 cases (15%; 95% CI, 10.6%-19.5%) and change in clinical management in 18 of 252 cases (7.1%; 95% CI, 4.0%-10.2%), including change in surgical management in 15 of 252 (6.0%; 95% CI, 3.0%-8.9%). An additional malignancy or larger area of disease was identified in 11 of 252 cases (4.4%; 95% CI, 1.8%-6.9%). Conclusion: Discrepancy between outside and second-opinion breast imaging subspecialists frequently results in additional workup for breast cancer patients, changes in treatment plan, and identification of new malignancies.
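The 95% confidence intervals quoted above are consistent with the standard normal-approximation (Wald) interval for a proportion. A minimal Python sketch (the helper name is ours, not the paper's) that reproduces the 41/252 figure:

```python
from math import sqrt

def wald_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) confidence interval for a proportion k/n."""
    p = k / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Significant interpretation differences: 41 of 252 cases
lo, hi = wald_ci(41, 252)
print(f"{41 / 252:.1%} (95% CI, {lo:.1%}-{hi:.1%})")  # 16.3% (95% CI, 11.7%-20.8%)
```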

3.
BACKGROUND AND PURPOSE: Prior studies have revealed little difference in residents' abilities to interpret cranial CT scans. The purpose of this study was to assess the performance of radiology residents at different levels of training in the interpretation of emergency head CT images. METHODS: Radiology residents prospectively interpreted 1324 consecutive head CT scans ordered in the emergency department at the University of Arizona Health Science Center. The residents completed a preliminary interpretation form that included their interpretation and their confidence in that interpretation. One of five neuroradiologists with a Certificate of Added Qualification subsequently interpreted the images and classified their assessment of the residents' interpretations as follows: "agree," "disagree-insignificant," or "disagree-significant." The data were analyzed by using analysis-of-variance or chi-squared methods. RESULTS: Overall, the agreement rate was 91%; the insignificant disagreement rate, 7%; and the significant disagreement rate, 2%. The level of training had a significant (P = .032) effect on the rate of agreement; upper-level residents had higher rates of agreement than more junior residents. There were 62 false-negative findings. The most commonly missed findings were fractures (n = 18) and chronic ischemic foci (n = 12). The most common false-positive interpretations involved suspected intracranial hemorrhage (n = 10) and suspected fractures. CONCLUSION: The level of resident training has a significant effect on the rate of disagreement between the preliminary interpretations of emergency cranial CT scans by residents and the final interpretations by neuroradiologists. Efforts to reduce residents' errors should focus on the identification of fractures and signs of chronic ischemic change.

4.
BACKGROUND AND PURPOSE: Trainees' interpretations of neuroradiologic studies are finalized by faculty neuroradiologists. We aimed to identify the factors that determine the degree to which the preliminary reports are modified. MATERIALS AND METHODS: The character length of the preliminary and final reports and the percentage character change between the 2 reports were determined for neuroradiology reports composed during November 2012 to October 2013. Examination time, critical finding flag, missed critical finding flag, trainee level, faculty experience, imaging technique, and native-versus-non-native speaker status of the reader were collected. Multivariable linear regression models were used to evaluate the association between mean percentage character change and the various factors. RESULTS: Of 34,661 reports, 2322 (6.7%) were read by radiology residents year 1; 4429 (12.8%), by radiology residents year 2; 3663 (10.6%), by radiology residents year 3; 2249 (6.5%), by radiology residents year 4; and 21,998 (63.5%), by fellows. The overall mean percentage character change was 14.8% (range, 0%–701.8%; median, 6.6%). Mean percentage character change increased for a missed critical finding (+41.6%, P < .0001), critical finding flag (+1.8%, P < .001), MR imaging studies (+3.6%, P < .001), and non-native trainees (+4.2%, P = .018). Compared with radiology residents year 1, radiology residents year 2 (−5.4%, P = .002), radiology residents year 3 (−5.9%, P = .002), radiology residents year 4 (−8.2%, P < .001), and fellows (−8.7%, P < .001) had a decreased mean percentage character change. Senior faculty had a lower mean percentage character change (−6.88%, P < .001). Examination time and non-native faculty status did not affect mean percentage character change. CONCLUSIONS: A missed critical finding, critical finding flag, MR imaging technique, trainee level, faculty experience level, and non-native-trainee status are associated with a higher degree of modification of a preliminary report. Understanding the factors that influence the extent of report revisions could improve the quality of report generation and trainee education.

Understanding the prevalence, causes, and types of discrepancies and errors in examination interpretation is a critical step in improving the quality of radiology reports. In an academic setting, discrepancies and errors can result from nonuniform training levels of residents and fellows. However, even the "experts" err, and a prior study found a 2.0% clinically significant discrepancy rate among academic neuroradiologists.1 A number of factors can affect the accuracy of radiology reports. One variable of interest at teaching hospitals is the effect of the involvement of trainees on discrepancies in radiology reports. Researchers have found that compared with studies read by faculty alone, the rate of clinically significant detection or interpretation error was 26% higher when studies were initially reviewed by residents, and it was 8% lower when the studies were initially interpreted by fellows.2 These findings suggest that perhaps faculty placed too much trust in resident interpretations, which led to a higher rate of discrepancies, while on the other hand, having a second experienced neuroradiology fellow look at a case can help in reducing the error rate.2

In our academic setting, preliminary reports initially created by trainees are subsequently reviewed and finalized by faculty or staff. The changes made to preliminary reports are a valuable teaching tool for trainees because clear and accurate report writing is a critical skill for a radiologist.3 Recently, computer-based tools have been created to help trainees compare the changes between preliminary and final reports to improve their clinical skills and to facilitate their learning. Sharpe et al4 described the implementation of a Radiology Report Comparator, which allows trainees to view a merged preliminary/final report with all the insertions and deletions highlighted in "tracking" mode. Surrey et al5 proposed using the Levenshtein percentage, or percentage character change (PCC), between preliminary and final reports as a quantitative method of indirectly assessing the quality of preliminary reports and trainee performance. The Levenshtein percentage, a metric used in computer science, compares 2 texts by calculating the total number of single-character changes between the 2 documents, divided by the total character count in the final text.5

In this study, we analyzed preliminary neuroradiology reports dictated by trainees and the subsequent finalized reports revised by our faculty. We set out to identify the factors that determine the degree to which the preliminary reports are modified by faculty for residents and fellows, for daytime and nighttime shifts, and for CT and MR imaging examinations. We hypothesized that study complexity, lack of experience (for both trainee and faculty), and perhaps limited language skills (native-versus-non-native speaker) would result in a greater number of corrections.
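The percentage character change metric described above is straightforward to implement. Below is a minimal Python sketch of the computation as the text defines it (single-character insertions, deletions, and substitutions between the two reports, divided by the character count of the final text); the function names are ours, not from Surrey et al:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b (dynamic
    programming over a rolling row of the edit-distance matrix)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

def percentage_character_change(preliminary: str, final: str) -> float:
    """PCC: total single-character changes between the two reports,
    divided by the total character count of the final report."""
    return 100.0 * levenshtein(preliminary, final) / len(final)

# Toy example: a small edit by the attending yields a small PCC.
print(percentage_character_change(
    "No acute intracranial hemorrhage.",
    "No acute intracranial abnormality."))
```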

5.
AIM: To assess the impact on patient management of formal neuroradiology "second reading" of computed tomography (CT) and magnetic resonance imaging (MRI) images initially interpreted by general radiologists. MATERIALS AND METHODS: Second opinion reports issued during the calendar year 2004 were compared with the original reports and assessed for major or minor discrepancies. A discrepancy was classified as major rather than minor when the change in opinion significantly affected patient management. RESULTS: There were 506 second opinions during 2004 given by three consultant neuroradiologists. Incomplete data were found in 141. Forty-one percent were CT images and the remainder MRI. The majority of second opinions were requested by neurologists. Most of the remaining referrals were from neurosurgeons or the primary radiologist. There was a 13% major and a 21% minor discrepancy rate. The remaining 66% were in complete agreement. There was a mixture of overcalls, misinterpretations, and undercalls. There were similar rates of minor and major discrepancies in both CT and MRI. CONCLUSION: There is a significant major discrepancy rate between specialist neuroradiology second opinions and general radiologists. The benefit of a formal specialist second opinion service is clearly demonstrated; however, it is time-consuming.

6.
The purpose of this study was to determine the discrepancy rate between the preliminary interpretation of abdominal radiographs by emergency physicians and the final report rendered by gastrointestinal radiologists, and to assess the impact of such discrepancies on patient management. A retrospective analysis was performed on a sample of abdominal plain radiographs obtained in the emergency department of a private urban teaching hospital. Written preliminary interpretations by the emergency physician were compared to the final dictated reports of the gastrointestinal radiologist. An emergency physician determined whether availability of the final interpretation would have changed patient management. There were 387 abdominal plain film studies that satisfied the criteria for inclusion. Of these, 98 discordant interpretations were noted (an interpretive discrepancy rate of 25.3%). In 16 of the 98 cases (16%), the interpretive discrepancy was deemed to have resulted in a difference in patient management, i.e., a management-relevant discrepancy rate of 4.1% of the total study population. This analysis shows a higher interpretive discrepancy rate for emergency department interpretation of abdominal radiographs than has been reported with emergency department interpretations of other types of radiographs. The most common clinically relevant interpretive discrepancies were misinterpretation of intestinal obstruction and unrecognized urinary tract calculi. Presented at the 6th Annual Scientific Program, American Society of Emergency Radiology, Scottsdale, AZ, March 28, 1995.

7.
OBJECTIVE: This study was designed to assess the accuracy of general radiologists in the interpretation via teleradiology of emergency CT scans of the head. MATERIALS AND METHODS: We studied the interpretations of 716 consecutive emergency CT scans of the head by a group of 15 board-certified general radiologists practicing in the community (as opposed to an academic setting). The scans were sent via teleradiology, and the preliminary interpretations were made. Three of the general radiologists were functioning as nighthawks, and the remaining 12 were acting as on-call radiologists in addition to their normal daytime duties. Each CT examination was interpreted by one of five neuroradiologists the day after the initial interpretation had been performed. The findings of the final interpretation and the preliminary interpretation were categorized as showing agreement, insignificant disagreement, or significant disagreement. The reports in the two categories indicating disagreement were reviewed and reclassified by a consensus of three university-based neuroradiologists. RESULTS: Agreement between the initial interpretation by the general radiologist and the final interpretation by the neuroradiologist was found in 95% of the CT scans. The interpretations were judged to show insignificant disagreement in 3% (23/716) of the scans and to show significant disagreement in 2% (16/716). Of the 16 significant errors, five were false-positive findings and 11 were false-negative findings. Forty-seven CT scans depicted significant or active disease, and in 11 (23%) of these scans, the final report differed significantly from the preliminary interpretation. Three patients had pituitary masses, none of which had been described on the preliminary interpretation. CONCLUSION: The rate of significant discordance between board-certified on-call general radiologists and neuroradiologists in the interpretation of emergency CT scans was 2%, which was comparable to previously published reports of residents' performance. The pituitary gland may be a blind spot, and additional attention should be focused on this area.

8.
RATIONALE AND OBJECTIVES: The goal was to determine discordance rates between preliminary radiology reports provided by on-call radiology house staff and final reports from attending radiologists on cross-sectional imaging studies requested by emergency department staff after hours. MATERIALS AND METHODS: A triplicate carbon copy reporting form was developed to provide permanent records of preliminary radiology reports and to facilitate communication of discrepant results to the emergency department. Data were collected over 21 weeks to determine the number of discordant readings. Patients' medical records were reviewed to determine whether discrepancies were significant or insignificant and to assess their impact on subsequent management and patient outcome. RESULTS: The emergency department requested 2830 cross-sectional imaging studies after hours, and 2311 (82%) had a copy of the triplicate form stored in radiology archives. Discrepancies between the preliminary and final reports were recorded in 47 (2.0%), with 37 (1.6%) considered significant: 14 patients needed no change, 13 needed a minor change, and 10 needed a major change in subsequent management. Ten (0.43%) of the discordant scans were considered insignificant. A random sample of 104 (20%) of the 519 scans without a paper triplicate form was examined. Seventy-one (68%) did have a scanned copy of the triplicate form in the electronic record, with a discrepancy recorded in 3 (4.2%), which was not statistically different from the main cohort (P = .18). CONCLUSION: Our study suggests a high level of concordance between preliminary reports from on-call radiology house staff and final reports by attending subspecialty radiologists on cross-sectional imaging studies requested by the emergency department.

9.
RATIONALE AND OBJECTIVES: To determine the incidence of radiology resident preliminary interpretation errors for plain film, body computed tomography, and neuroradiology computed tomographic examinations read on call. MATERIALS AND METHODS: We retrospectively reviewed the data in a prospectively acquired resident quality assurance (QA) database spanning January 2000 to March 2007. The database comprises all imaging studies initially interpreted by an on-call resident and later reviewed by a board-certified attending radiologist, who determined the level of discrepancy between the two interpretations according to a graded scale from 0 (no discrepancy) to 3 (major discrepancy). We reviewed the data with respect to resident training level, imaging modality, and variance level. Statistical analysis was performed with the χ² test, α = 0.05. We compared our results with other published series studying resident and attending accuracy. RESULTS: A total of 141,381 cases were entered into the database during the review period. Of all examinations, 95.7% had zero variance; 3.3%, minor variance; and 1.0%, major variance. There was a slight, statistically significant increase in overall accuracy with increasing resident year, from 95.4% of examinations read by first-year residents (R1s) to 96.1% by fourth-year residents (R4s) (P < .0001). Overall percentages of exams with major discrepancies were 1.0% for R1s, 1.1% for second-year residents, 1.0% for third-year residents, and 0.98% for R4s. CONCLUSIONS: The majority of preliminary resident interpretations are highly accurate. The incidence of major discrepancies is extremely low and similar, even for R1s, to that of attending radiologists published in other studies. A slight, statistically significant decrease in the error rate is detectable as residents gain experience throughout the 4 years of residency.

10.
BACKGROUND AND PURPOSE: The repeatability of head CT interpretations may be studied in different contexts: in peer-review quality assurance interventions or in interobserver agreement studies. We assessed the agreement between double-blind reports of outpatient CT scans in a routine academic practice. MATERIALS AND METHODS: Outpatient head CT scans (119 patients) were randomly selected to be read twice, in a blinded fashion, by 8 neuroradiologists practicing in an academic institution during 1 year. Nonstandardized reports were analyzed according to a standardized data-extraction form to extract 4 items from each report (answer to the clinical question, major findings, incidental findings, recommendations for further investigations) and to identify agreement or discrepancies, classified as class 2 (finding mentioned in one report but not the other, or contradictions between reports), class 1 (mentioned in both reports but diverging in location or severity), 0 (concordant), or not applicable. Agreement regarding the presence or absence of clinically significant or incidental findings was studied with κ statistics. RESULTS: The interobserver agreement (κ) for head CT studies read as positive or negative for clinically pertinent findings was 0.86 (0.77–0.95), but concordance was only 75.6% (67.2%–82.5%). Class 2 discrepancy was found in 15.1% of cases; class 1 discrepancy, in 9.2%. The κ value for reporting incidental findings was 0.59 (0.45–0.74), with class 2 discrepancy in 29.4% of cases. Most discrepancies did not impact the clinical management of patients. CONCLUSIONS: Discrepancies in double-blind interpretations of head CT examinations were more common than reported in peer-review quality assurance programs.

The delivery of optimal radiology services may require continuous vigilance and perhaps quality assurance interventions.1–3 The content of these interventions may not be evident, however. In addition, the manner in which the error, discrepancy, and disagreement should be handled both in theory and in clinical practice is evolving.4

Discrepancies in peer-review approaches have been known for a long time.5–7 In 1959, Garland8 claimed that radiologists missed approximately 30% of tuberculosis cases in screening chest x-ray examinations.9 Garland's report launched a series of investigations that continue today. However, there is no consensus on a standard method or protocol for evaluating errors and discrepancies in imaging reports, and rates published in the literature differ widely.1–3,10–14 Multiple variations in study parameters, including sampling sources, methods, imaging modalities, specialties, categories, interpreter training levels, and degrees of blinding, may have contributed to this wide spectrum.2,3,9

Recently, CT and MR imaging reports of the head, neck, and spine were re-read by staff neuroradiologists, and a 2% clinically significant discrepancy rate was found, an excellent result compared with the 3%–6% radiologic error rates published in general radiology practices.3,15,16

To anyone who has studied reliability or precision of diagnostic imaging tests, such levels of disagreement between interpretations may appear unbelievably low. Peer-review quality assurance "errors and discrepancies" and disagreements in reliability studies of imaging test interpretations may not measure the same things. Discrepancies in the reporting of imaging studies can thus be approached from at least 2 different perspectives.

From a quality assurance point of view, optimal radiology services require continuous quality assurance interventions. One report is the true right one, and discrepancies are errors that must be minimized. Performance can be measured; deviations and outliers can be identified, and appropriate measures can then be taken to improve performance.1–3

A different vocabulary is used when discrepancies are examined from a scientific point of view. In the typical absence of a criterion standard of "truth," the uncertainty is a reality that must be admitted and taken into account when using imaging reports for clinical decisions. Reliability and agreement can be measured by using proper methods, including independent readings; and concordant or diverging verdicts can be tabulated and summarized, though imperfectly, by using marginal sums and appropriate statistical tools (such as κ statistics). No test and certainly no imaging study requiring an element of interpretation will ever be perfectly repeatable.

Reconciliation between these 2 perspectives is desirable. The credibility of quality assurance programs disconnected from scientific methods is shaky. If only errors could be defined, perhaps as discrepancies beyond "normal discrepancies." Unfortunately, attempts to define an acceptable level of radiologic discrepancy are probably futile. Multiple variables are at play, and distinctions, even between acceptable discrepancy and negligence, may remain blurry.17

To our knowledge, reliability and agreement in the independent interpretation of head CT scans by expert neuroradiologists in a routine academic clinical practice have not been reported.
In contrast to a peer-review approach, examining discrepancies after independent interpretations of clinical cases in everyday practice and looking for consensus on discrepant cases may provide a realistic and favorable framework for continuous quality improvement for all professionals, rather than the identification of specific deviant individuals. With this end in view, we studied the discrepancies in independent double readings of outpatient head CT scans in an academic practice. We hypothesized that our study would show a discrepancy rate of at least 5%.
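In a double-reading design like the one above, chance-corrected agreement on a binary positive/negative call reduces to Cohen's κ over a 2×2 table of paired verdicts. A minimal Python sketch with hypothetical counts (not the study's data):

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 table of paired readings:
    a = both readers positive, b = reader 1 positive / reader 2 negative,
    c = reader 1 negative / reader 2 positive, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n                    # raw concordance
    p1, p2 = (a + b) / n, (a + c) / n           # each reader's positive rate
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)    # agreement expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts for 119 double-read head CT scans (illustrative only)
print(round(cohens_kappa(a=40, b=5, c=4, d=70), 2))  # ~0.84
```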

11.
PURPOSE: To compare a reduced (three-sequence) magnetic resonance (MR) imaging protocol with a full (eight- to 10-sequence) MR imaging protocol in adults suspected of having stroke. MATERIALS AND METHODS: Six neuroradiologists interpreted a consecutive sample of 265 MR images in patients suspected of having stroke. Each read reduced-protocol images in a discrete series of 40 patients (one read images in only 15) and corresponding full-protocol images 1 month later (reduced/full protocol). Five of the readers each read images in 10 additional cases, five each as full/full and reduced/reduced protocol controls. kappa values between full and reduced protocols, reader assessment of protocol adequacy, confidence level, and need for additional sequences or examinations were evaluated. RESULTS: In the reduced/full protocol, the kappa value for detecting ischemia was 0.797; and that for detecting any clinically important abnormality, 0.635. Statistically similar kappa values were found with the full/full control design (kappa = 0.802 and 0.715, respectively). The full protocol was judged more adequate than the reduced protocol (2.0 of 5.0 points vs 1.6, P <.001) and generated greater diagnostic confidence (8.6 of 10.0 points vs 8.9, P =.01), less need for additional sequences (2.7 of 6.0 points vs 1.5, P <.001), and more requests for additional examinations (28.4% vs 36.3%). CONCLUSION: Disagreement between interpretations of reduced- and full-protocol images might be attributable to baseline-level intraobserver inconsistency, as demonstrated in control designs. A greater number of sequences did not lead to greater consistency.

12.
PURPOSE: To evaluate and compare conventional magnetic resonance (MR) imaging and MR arthrography in the diagnosis of the most common traumatic metacarpophalangeal (MCP) joint injuries, which were created surgically in cadavers. MATERIALS AND METHODS: Injuries to various MCP joint structures were surgically created randomly in 28 fingers of seven human cadaveric hands. Injuries to the main collateral ligaments (CLs) (n = 12), accessory CL (n = 15), sagittal band (n = 14), transverse fibers of the extensor hood (n = 5), first annular pulley (n = 16), deep transverse metacarpal ligament (DTML) (n = 5), and palmar plate (n = 10) were analyzed. Conventional MR images and MR arthrograms were evaluated, with differences in interpretation resolved in consensus. The sensitivities, specificities, and accuracies of both MR imaging methods were determined, and the differences were tested for significance by using the McNemar test. RESULTS: Sensitivity was 28.6%-93.8% with conventional MR imaging versus 50.0%-93.3% with MR arthrography. Specificity was 66.7%-100% with conventional MR imaging versus 83.3%-100% with MR arthrography. Although the MR arthrographic results usually were higher, the differences were not significant. The kappa values for interobserver agreement were 0.314-0.638 for conventional MR imaging versus 0.364-1.00 for MR arthrography. Sensitivity for the detection of lesions of the main and accessory CLs and the first annular pulley was slightly higher than that for the detection of lesions of the extensor hood, DTML, and palmar plate structures. CONCLUSION: MR imaging and MR arthrography enable the diagnosis of simulated MCP joint injuries. MR arthrography does not have a significant advantage over conventional MR imaging.
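Because both MR techniques were read on the same cadaveric specimens, the McNemar test named above compares paired results and draws its significance from the discordant pairs alone. A minimal Python sketch with illustrative counts (not the cadaver data), assuming the statsmodels implementation:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired detection results for one structure across the same specimens:
# rows = conventional MR (correct / incorrect),
# cols = MR arthrography (correct / incorrect). Counts are illustrative.
table = np.array([[18, 2],
                  [5, 3]])

# Exact binomial McNemar test on the 2 vs 5 discordant pairs
result = mcnemar(table, exact=True)
print(f"statistic={result.statistic}, p={result.pvalue:.3f}")
```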

13.
Purpose: The objective of this paper is to assess the volume, accuracy, and timeliness of radiology resident preliminary reports as part of an independent call system. This study seeks to understand the relationship between resident year in training, study modality, and discrepancy rate. Methods: Resident preliminary interpretations of radiographs, ultrasound, CT, and MRI from October 2009 through December 2013 were prospectively scored by faculty on a modified RADPEER scoring system. Discrepancy rates were evaluated based on postgraduate year of the resident and the study modality. Turnaround times for reports were also reviewed. Differences between groups were compared with a chi-square test with a significance level of 0.05. Institutional review board approval was waived as only deidentified data were used in the study. Results: A total of 416,413 studies were reported by 93 residents, yielding 135,902 resident scores. The rate of major resident-faculty assessment discrepancies was 1.7%. Discrepancy rates improved with increasing experience, both overall (PGY-3: 1.8%, PGY-4: 1.7%, PGY-5: 1.5%) and for each individual modality. Discrepancy rates were highest for MR (3.7%), followed by CT (2.4%), radiographs (1.4%), and ultrasound (0.6%). Emergency department report turnaround time averaged 31.7 min. The average graduating resident was scored on 2,746 ± 267 reports during residency. Conclusions: Resident preliminary reports have a low rate of major discrepancies, which improves over 3 years of call-taking experience. Although more complex cross-sectional studies have slightly higher discrepancy rates, discrepancies were still within the range of faculty report variation.
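The chi-square comparison described above amounts to testing a contingency table of major discrepancies versus non-discrepant reports per training year. A minimal Python sketch with hypothetical counts chosen only to roughly match the reported rates (uses scipy):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: PGY-3, PGY-4, PGY-5; columns: major discrepancy, no major discrepancy.
# Counts are hypothetical, scaled to roughly match the 1.8/1.7/1.5% rates.
counts = np.array([[900, 49_100],
                   [800, 46_200],
                   [600, 39_400]])

chi2, p, dof, _expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")  # compare p against alpha = 0.05
```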

14.
Purpose: This study was performed to determine whether significant changes to patient treatment plan or outcome result from discrepancies between on-call radiology residents and follow-up attending radiologists in their interpretation of examinations. Methods: For 70 days we recorded on-call radiology residents' readings of all computed tomography and ultrasound examinations performed in our institution and the follow-up attending radiologists' readings of these same examinations. A chart review was performed to determine whether interpretation discrepancies changed the treatment plan and clinical outcome. Results: Eight hundred thirty-four examinations met the study guidelines. The overall discrepancy rate was 5.16%. Of these discrepancies, 6.98% affected the treatment plan (0.36% of all 834 studies) and none affected the clinical outcome. Conclusion: Where there is a discrepancy between the interpretation of computed tomography and ultrasound after hours by on-call radiology residents and the follow-up readings by attending radiologists, this discrepancy has no significant effect on the immediate or long-term care of patients.

15.

Objective

To assess the discrepancy rate for the interpretation of abdominal and pelvic computed tomography (CT) examinations among experienced radiologists.

Methods

Ninety abdominal and pelvic CT examinations reported by three experienced radiologists who specialize in abdominal imaging were randomly selected from the radiological database. The same radiologists, blinded to previous interpretation, were asked to re-interpret 60 examinations: 30 of their previous interpretations and 30 interpreted by others. All reports were assessed for the degree of discrepancy between initial and repeat interpretations according to a three-level scoring system: no discrepancy, minor, or major discrepancy. Inter- and intrareader discrepancy rates and causes were evaluated.

Results

CT examinations included in the investigation were performed on 90 patients (43 men, mean age 59 years, SD 14, range 19–88) for the following indications: follow-up/evaluation of malignancy (69/90, 77%), pancreatitis (5/90, 6%), urinary tract stone (4/90, 4%), or other (12/90, 13%). Interobserver and intraobserver major discrepancy rates were 26% and 32%, respectively. Major discrepancies were due to missed findings, different opinions regarding interval change of clinically significant findings, and differences in the recommendations made.

Conclusions

Major discrepancy rates of between 26% and 32% were observed in the interpretation of abdominal and pelvic CT examinations.

16.
Radiography, 2019, 25(4):359-364
Introduction: We evaluated the reporting competency of radiographers providing preliminary clinical evaluations (PCE) for intraluminal pathology on computed tomography colonography (CTC). Method: Following validation of a suitable tool, an audit was undertaken to compare radiographer PCE against radiology reports. A database was designed to capture radiographer and radiologist report data. The radiographer's PCE of intraluminal pathology was given a score, the "pathology discrepancy and significance" (PDS) score, based on the pathology present, any discrepancy between the PCE and the final report, and the significance of that discrepancy for the management of the patient. Agreement was assessed using percentage agreement and the kappa coefficient. Significant discrepancies between findings were compared against endoscopy and pathology reports. Results: There was agreement or insignificant discrepancy between the radiographer PCE and the radiology report for 1736 patients, representing 97.0% of cases. There was a significant discrepancy between findings in 2.8% of cases and a major discrepancy recorded for 0.2% of cases. There was 98.4% agreement in the 229 cases where significant pathologies were present. Conclusion: From a database of 1815 studies acquired over three years and representing work done in a clinical environment, this study indicates a potential for trained radiographers to provide a PCE of intraluminal pathology.

17.
PURPOSE: To prospectively determine, for both digital subtraction angiography (DSA) and contrast material-enhanced magnetic resonance (MR) angiography, the accuracy of subjective visual impression (SVI) in the evaluation of internal carotid artery (ICA) stenosis, with objective caliper measurements serving as the reference standard. MATERIALS AND METHODS: Local ethics committee approval and written informed patient consent were obtained. A total of 142 symptomatic patients (41 women, 101 men; mean age, 70 years; age range, 44-89 years) suspected of having ICA stenosis on the basis of Doppler ultrasonographic findings underwent both DSA and contrast-enhanced MR angiography. With each modality, three independent neuroradiologists who were blinded to other test results first visually estimated and subsequently objectively measured stenoses. Diagnostic accuracy and percentage misclassification for correct categorization of 70%-99% stenosis were calculated for SVI, with objective measurements serving as the reference standard. Interobserver variability was determined with kappa statistics. RESULTS: After exclusion of arteries that were unsuitable for measurement, 180 vessels remained for analysis with DSA and 159 vessels remained for analysis with contrast-enhanced MR angiography. With respect to 70%-99% stenosis, SVI was associated with average misclassification of 8.9% for DSA (8.9%, 7.8%, and 10.0% for readers A, B, and C, respectively) and of 11.7% for contrast-enhanced MR angiography (11.3%, 8.8%, and 15.1% for readers A, B, and C, respectively). Negative predictive values were excellent (92.3%-100%). Interobserver variability was higher for SVI (DSA, kappa = 0.62-0.71; contrast-enhanced MR angiography, kappa = 0.57-0.69) than for objective measurements (DSA, kappa = 0.75-0.80; contrast-enhanced MR angiography, kappa = 0.66-0.72). CONCLUSION: SVI alone is not recommended for evaluation of ICA stenosis with both DSA and contrast-enhanced MR angiography. SVI may be acceptable as an initial screening tool to exclude the presence of 70%-99% stenosis, but caliper measurements are warranted to confirm the presence of such stenosis.

18.
The practical usefulness of a digital large image intensifier system was tested on 400 consecutive, routine chest examinations. Reading and reporting were carried out directly from the digital images on the TV monitors. For each patient, images were also read independently from 100-mm photofluorograms. When the reports for each examination were compared, there was total agreement in 40% of the cases and clinically insignificant differences in interpretation in 56%. In the remaining 16 patients (4%), the opinions of the observers differed as to whether significant disease was present. However, these disagreements could not be related to the imaging technique. We believe that a majority of chest examinations can be performed with digital technique and can be read directly from the monitor screen.

19.
BACKGROUND AND PURPOSE: Aside from basic Accreditation Council for Graduate Medical Education guidelines, few metrics are in place to monitor fellows' progress. The purpose of this study was to determine objective trends in neuroradiology fellows' on-call performance during an academic year. MATERIALS AND METHODS: We retrospectively reviewed the number of cross-sectional neuroimaging studies dictated with complete reports by neuroradiology fellows during independent call. Monthly trends in total call cases, report turnaround times, relationships between volume and report turnaround times, and words addended to preliminary reports by attending neuroradiologists were evaluated with regression models. Monthly variation in frequencies of call-discrepancy macros was assessed via χ² tests. Changes in frequencies of specific macro use between fellowship semesters were assessed via serial 2-sample tests of proportions. RESULTS: From 2012 to 2017, for 29 fellows, monthly median report turnaround times significantly decreased during the academic year: July (first month) = 79 minutes (95% CI, 71–86 minutes) and June (12th month) = 55 minutes (95% CI, 52–60 minutes; P value = .023). Monthly report turnaround times were inversely correlated with total volumes for CT (r = –0.70, F = 9.639, P value = .011) but not MR imaging. Words addended to preliminary reports, a surrogate measurement of report clarity, slightly improved, and discrepancy rates decreased during the last 6 months of fellowship. A nadir for report turnaround times, discrepancy errors, and words addended to reports was seen in December and January. CONCLUSIONS: Progress through fellowship correlates with a decline in report turnaround times and discrepancy rates for cross-sectional neuroimaging call studies and a slight improvement in an indirect quantitative measurement of report clarity. These metrics can be tracked throughout the academic year, and midyear would be a logical time point for programs to assess the objective progress of fellows and address any deficiencies.

A fellow's progress in an academic year is primarily assessed using qualitative, and thus subjective, criteria, including achievement of Accreditation Council for Graduate Medical Education-prescribed milestones and faculty evaluations. While the Accreditation Council for Graduate Medical Education provides requirements for total yearly cases read1 and individual programs may have internal metrics for fellows' progress, there are no concrete external objective measurements for documenting fellows' progress within the academic year. Often, fellows are unsure whether their efficiency in generating reports, report turnaround times (RTATs) for on-call examinations, or quality of on-call reports is satisfactory.

The total number of studies dictated by the fellow and the RTATs of on-call studies may be reviewed by the attendings and program director with the fellows, but more meaningful interpretation of these numbers is lacking because there are no comparison benchmarks or quantitative checkpoints within the fellowship year. Knowledge of these factors is critical in a fellowship program so that program directors and fellows are jointly aware of progress throughout the year and remediation or additional focused training can be implemented as necessary. More data on neuroradiology fellowship training are especially needed because a survey in 2016 demonstrated that 25% of practicing neuroradiologists in the United States believe that fellows' abilities have declined.2 Prior studies have analyzed various other factors related to radiology residency training, including total cases read, turnaround time, and on-call accuracy,3,4 but to our knowledge, no studies have analyzed the quantitative trends in fellowship training during an academic year.

We hypothesized that within an academic year, the RTAT for on-call studies dictated by fellows would decrease (ie, improve). Meanwhile, the discrepancy rates would decrease, and the clarity of reports would improve. We also hypothesized that participating in independent call would have residual short-term effects on increasing clinical productivity during a subsequent regular work week.
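The volume-turnaround relationship reported above (r = −0.70 for CT) is an ordinary Pearson correlation computed over monthly aggregates. A minimal Python sketch with made-up monthly figures (not the study's data; uses scipy):

```python
from scipy.stats import pearsonr

# Hypothetical monthly totals of on-call CT studies and the corresponding
# median report turnaround times (minutes) across one academic year.
monthly_ct_volume = [210, 230, 250, 260, 280, 300, 310, 320, 330, 350, 360, 380]
median_rtat_min = [82, 78, 75, 70, 68, 66, 62, 60, 58, 57, 55, 54]

r, p = pearsonr(monthly_ct_volume, median_rtat_min)
print(f"r={r:.2f}, p={p:.3f}")  # strongly negative: busier months, faster reports
```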

20.
Contrast-enhanced MR studies were compared with noncontrast MR and contrast-enhanced CT scans in the evaluation of intraparenchymal brain metastases. Fifty consecutive inpatients were studied with short and long repetition time (TR) sequences before and after the administration of gadopentetate dimeglumine. In addition, a delayed short TR sequence was performed. The contrast CT, noncontrast MR, immediate postcontrast short TR sequence, postcontrast long TR sequence, and delayed postcontrast short TR sequence were each read blindly and independently by two neuroradiologists. These results were then compared with a final interpretation, reached by all the neuroradiologists in the study, using all the clinical information and imaging findings. Postcontrast short TR scans proved to be superior to other sequences. They were particularly useful in the detection of metastases in the posterior fossa and cortex. The delayed postcontrast short TR scan held no definite advantage over the immediate postcontrast short TR scan, although metastases were sometimes seen slightly better after the delay. While long TR sequences were not always sensitive or specific, they often did provide ancillary information and were particularly useful in cases of hemorrhagic metastases. Because of these findings, we recommend that the evaluation of intraparenchymal metastases consist of a single postcontrast long TR scan followed by a single postcontrast short TR scan. While these sequences should be very accurate in the detection of metastases, we also generally perform a single precontrast short TR scan as well, since the question of hemorrhage or bone lesion may be clinically relevant.
