Similar Articles
20 similar articles retrieved.
1.
Purpose: To determine the initial digital breast tomosynthesis (DBT) performance of radiology trainees with varying degrees of breast imaging experience. Methods: To test trainee performance with DBT, we performed a reader study after obtaining IRB approval. Two medical students, 20 radiology residents, 4 nonbreast imaging fellows, 3 breast imaging fellows, and 3 fellowship-trained breast imagers reviewed 60 unilateral DBT studies (craniocaudal and mediolateral oblique views). Trainees had no DBT experience. Each reader recorded a final BI-RADS assessment for each case. The consensus interpretations from the fellowship-trained breast imagers were used to establish the ground truth. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated. For analysis, first- through third-year residents were classified as junior trainees, and fourth-year residents plus nonbreast imaging fellows were classified as senior trainees. Results: The AUCs were .569 for medical students, .721 for junior trainees, .701 for senior trainees, and .792 for breast imaging fellows. The junior and senior trainee AUCs were equivalent (P < .01) using a two one-sided test for equivalence with a significance threshold of 0.1. The sensitivities and specificities were highest for breast imaging fellows (.778 and .815, respectively) but similar for junior (.631 and .714, respectively) and senior trainees (.678 and .661, respectively). Conclusions: Initial performance with DBT among radiology residents and nonbreast imaging fellows is independent of years of training. Radiology educators should consider these findings when developing educational materials.
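The reported metrics (AUC against an expert-consensus ground truth, plus sensitivity and specificity at a recall threshold) can be illustrated with a few lines of Python. The sketch below uses synthetic readings, not the study's data, and assumes that a final BI-RADS assessment of 4 or 5 counts as a positive call.

```python
# Illustrative reader-study metrics against an expert-consensus ground truth.
# Synthetic data only -- not the cases or readings from the study above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_cases = 60
truth = rng.integers(0, 2, size=n_cases)            # 1 = cancer per expert consensus

# Hypothetical reader output: BI-RADS final assessments 1-5, loosely correlated with truth
birads = np.clip(np.round(2.5 + 1.5 * truth + rng.normal(0, 1, n_cases)), 1, 5)

auc = roc_auc_score(truth, birads)                   # ordinal BI-RADS used as the rating scale

positive_call = birads >= 4                          # assumption: BI-RADS 4-5 = positive call
tp = np.sum(positive_call & (truth == 1))
tn = np.sum(~positive_call & (truth == 0))
sensitivity = tp / np.sum(truth == 1)
specificity = tn / np.sum(truth == 0)

print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```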

2.
Purpose: To assess the prevalence and characteristics of medical malpractice litigation involving radiology trainees. Methods: Using a LexisNexis legal database keyword search, we identified all state and federal lawsuits between 2009 and 2018 yielding formal appellate and lower court opinions (precedent-setting “complex litigation”) potentially involving physician trainees. Available judicial records were systematically reviewed to identify malpractice matters with material trainee involvement. Cases were categorized by criteria including specialty and location. Incidence rates were calculated for all specialties. Radiology lawsuits were characterized further. Results: The initial LexisNexis Boolean database search yielded 8,935 potentially relevant cases, with 580 confirmed as malpractice materially involving physician trainees. Annual cases trended downward (high 70, low 37). Most originated in New York (195 of 580; 33.6%), Ohio (41; 7.1%), and Pennsylvania (34; 5.9%) and involved surgery (204; 35.2%), obstetrics and gynecology (114; 19.7%), and medicine (105; 18.1%). The case incidence rate for all trainees was 0.63 per 1,000 trainee-years. Of 309 cases with known outcomes, defendant physicians prevailed in 238 (77.0%). Radiology trainees represented only 23 cases (4.0%), corresponding to an incidence rate ratio of 0.79 (confidence interval, 0.52-1.20). Radiology litigation most frequently involved alleged missed diagnoses (14 of 23; 60.8%) and procedural complications (7; 30.4%). Defendant radiologists prevailed in 9 of the 13 cases with known outcomes (69.2%). Conclusion: Complex medical malpractice litigation involving physician trainees is infrequent and decreasing over time. Lawsuits involving radiology trainees are uncommon, less likely than for many nonradiology trainees, and typically involve alleged missed diagnoses or procedural complications. Defendant radiologists usually prevail.
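For readers unfamiliar with the rate statistics quoted above, the arithmetic behind an incidence rate per 1,000 trainee-years and an incidence rate ratio (IRR) with a log-normal confidence interval looks like the sketch below. The trainee-year denominators are invented for illustration; the abstract does not report them.

```python
# Illustrative incidence-rate arithmetic for trainee malpractice cases.
# The denominators below are made-up trainee-year totals chosen only so the rates are
# plausible in scale; they are NOT figures from the study.
import math

def rate_per_1000(cases, person_years):
    return 1000 * cases / person_years

def irr_with_ci(cases_a, py_a, cases_b, py_b, z=1.96):
    """Incidence rate ratio A vs B with a log-normal 95% CI (Poisson assumption)."""
    irr = (cases_a / py_a) / (cases_b / py_b)
    se_log = math.sqrt(1 / cases_a + 1 / cases_b)
    lo, hi = (irr * math.exp(s * z * se_log) for s in (-1, 1))
    return irr, lo, hi

radiology_cases, radiology_py = 23, 46_000       # hypothetical radiology trainee-years
other_cases, other_py = 557, 880_000             # hypothetical all-other trainee-years

print(f"radiology rate: {rate_per_1000(radiology_cases, radiology_py):.2f} per 1,000 trainee-years")
print("IRR (radiology vs other): %.2f (95%% CI %.2f-%.2f)"
      % irr_with_ci(radiology_cases, radiology_py, other_cases, other_py))
```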

3.
Purpose: Advances in artificial intelligence applied to diagnostic radiology are predicted to have a major impact on this medical specialty. With the goal of establishing a baseline upon which to build educational activities on this topic, a survey was conducted among trainees and attending radiologists at a single residency program. Methods: An anonymous questionnaire was distributed. Comparisons of categorical data between groups (trainees and attending radiologists) were made using Pearson χ2 analysis or an exact analysis when required. Comparisons were made using the Wilcoxon rank sum test when the data were not normally distributed. An α level of 0.05 was used. Results: The overall response rate was 66% (69 of 104). Thirty-six percent of participants (n = 25) reported not having read a scientific medical article on the topic of artificial intelligence during the past 12 months. Twenty-nine percent of respondents (n = 12) reported using artificial intelligence tools during their daily work. Trainees were more likely to express doubts on whether they would have pursued diagnostic radiology as a career had they known of the potential impact artificial intelligence is predicted to have on the specialty (P = .0254) and were also more likely to plan to learn about the topic (P = .0401). Conclusions: Radiologists lack exposure to current scientific medical articles on artificial intelligence. Trainees are concerned by the implications artificial intelligence may have on their jobs and desire to learn about the topic. There is a need to develop educational resources to help radiologists assume an active role in guiding and facilitating the development and implementation of artificial intelligence tools in diagnostic radiology.
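The two comparison methods named in the Methods (Pearson χ2 or an exact test for categorical answers, and the Wilcoxon rank-sum test for non-normal responses) can be run with SciPy as sketched below on synthetic survey answers; none of the counts are from the study.

```python
# Illustrative sketch of the two kinds of comparison named above, on synthetic survey data.
import numpy as np
from scipy import stats

# Hypothetical yes/no answers ("used AI tools in daily work") for trainees vs attendings
table = np.array([[5, 30],    # trainees:   yes, no
                  [7, 27]])   # attendings: yes, no
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds, p_exact = stats.fisher_exact(table)          # exact analysis when counts are small

# Hypothetical 1-5 Likert answers ("I plan to learn about AI"), not normally distributed
trainees   = [5, 4, 5, 4, 3, 5, 4, 5, 4, 4]
attendings = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
u, p_rank = stats.mannwhitneyu(trainees, attendings, alternative="two-sided")

print(f"chi-square p={p_chi2:.3f}, Fisher exact p={p_exact:.3f}, rank-sum p={p_rank:.4f}")
```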

4.
Purpose: The aim of this study was to evaluate the accuracy of visual mammographic breast density assessment and determine if training can improve this assessment, to compare the accuracy of qualitative density assessment before and after training with a quantitative assessment tool, and to evaluate agreement between qualitative and quantitative density assessment methods. Methods: Consecutive screening mammograms performed over a 4-month period were visually assessed by two study breast radiologists (the leads), who selected 200 cases equally distributed among the four BI-RADS density categories. These 200 cases were shown to 20 other breast radiologists (the readers) before and after viewing a training module on visual density assessment. Agreement between reader assessment and lead radiologist assessment was calculated for both reading sessions. Quantitative volumetric density of the 200 mammograms, determined using a commercially available tool, was compared with both sets of reader assessment and with lead radiologist assessment. Results: Compared with lead radiologist assessment, reader accuracy of breast density assessment increased from 65% before training to 72% after training (odds ratio, 1.41; P < .0001). Training specifically improved assignment to BI-RADS categories 1 (P < .0001) and 4 (P < .10). Compared with quantitative assessment, reader accuracy showed statistically nonsignificant improvement with training (odds ratio, 1.1; P = .26). Substantial agreement between qualitative and quantitative breast density assessment was demonstrated (κ = 0.78). Conclusions: Training may improve the accuracy of mammographic breast density assessment. Substantial agreement between qualitative and quantitative breast density assessment exists.
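Agreement statistics like the κ = 0.78 reported here are typically computed as Cohen's kappa on paired category assignments. The sketch below uses made-up BI-RADS density labels; the abstract does not state whether an unweighted or weighted kappa was used, so both variants are shown.

```python
# Illustrative agreement calculation between two sets of BI-RADS density categories
# (e.g., reader vs quantitative tool). Synthetic labels only -- not the study data.
from sklearn.metrics import cohen_kappa_score

reader_density = ["a", "b", "b", "c", "d", "c", "b", "a", "d", "c", "b", "c"]
tool_density   = ["a", "b", "c", "c", "d", "c", "b", "b", "d", "c", "b", "d"]

kappa = cohen_kappa_score(reader_density, tool_density)
# For ordered categories a weighted kappa is often preferred:
kappa_w = cohen_kappa_score(reader_density, tool_density, weights="quadratic")

print(f"unweighted kappa={kappa:.2f}, quadratically weighted kappa={kappa_w:.2f}")
```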

5.
Objective: Our purpose was to assess the calibration of resident, fellow, and attending radiologists on a simple image classification task (presence or absence of an anterior cruciate ligament [ACL] tear based on interpretation of sagittal proton density, fat-saturated MR images) and to assess whether teaching residents could improve their calibration. Methods: We created a test containing 30 randomized, sagittal, proton density, fat-saturated MR images of the ACL (15 normal, 15 torn). This test was administered in person to 20 trainees and 3 attendings at one medical center in one state. An online version of the test was given to 23 trainees and 14 attendings from 11 other medical centers in nine other states. Subjects were asked to give their confidence level (0%-100%) that each ACL was torn. Results: Cross-sectional data were collected from 60 radiologists (mean time after medical school = 9.3 years; minimum = 1 year, maximum = 36 years). These data demonstrated a statistically significant improvement in calibration as a function of increasing experience (P = .020). Longitudinal data were collected from 12 trainees at the start and end of their musculoskeletal radiology rotation, with an intervening review of the primary and secondary signs of ACL tear on MR. A statistically significant improvement in calibration was noted during the rotation (P = .028). Conclusions: Confidence calibration is a promising tool for quality improvement and radiologist self-assessment. Our study showed that calibration improves with experience in radiologists tested on a common and clinically important image classification task. We also demonstrated that calibration can be successfully taught to residents over a relatively short period (2-4 weeks).
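One generic way to quantify calibration of 0%-100% confidence ratings against ground truth is a reliability curve plus a Brier score, sketched below on synthetic ratings; the study's specific calibration metric may differ.

```python
# Illustrative calibration check for confidence ratings against ground truth.
# Synthetic ratings only; the study's exact calibration measure may differ.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=300)                       # 1 = ACL torn
# Hypothetical overconfident reader: stated probabilities pushed toward 0 or 1
p_torn = np.clip(0.5 + 0.6 * (truth - 0.5) + rng.normal(0, 0.25, 300), 0.01, 0.99)

frac_pos, mean_pred = calibration_curve(truth, p_torn, n_bins=5)
print("mean predicted vs observed fraction torn, per confidence bin:")
for m, f in zip(mean_pred, frac_pos):
    print(f"  predicted {m:.2f} -> observed {f:.2f}")

print(f"Brier score: {brier_score_loss(truth, p_torn):.3f}  (lower = better calibrated)")
```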

6.

Objectives

To develop a prediction model for breast cancer based on common mammographic findings on screening mammograms, aiming to reduce reader variability in assigning BI-RADS.

Methods

We retrospectively reviewed 352 positive screening mammograms of women participating in the Dutch screening programme (Nijmegen region, 2006–2008). The following mammographic findings were assessed by consensus reading of three expert radiologists: masses and mass density, calcifications, architectural distortion, focal asymmetry and mammographic density, and BI-RADS. Data on age, diagnostic workup and final diagnosis were collected from patient records. Multivariate logistic regression analyses were used to build a breast cancer prediction model, presented as a nomogram.

Results

Breast cancer was diagnosed in 108 cases (31%). The highest positive predictive value (PPV) was found for spiculated masses (96%) and the lowest for well-defined masses (10%). Characteristics included in the nomogram are age, mass, calcifications, architectural distortion and focal asymmetry.

Conclusion

With our nomogram we developed a tool assisting screening radiologists in determining the chance of malignancy based on mammographic findings. We propose cutoff values for assigning BI-RADS in the Dutch programme based on our nomogram, which will need to be validated in future research. These values can easily be adapted for use in other screening programmes.

Key points

• There is substantial reader variability in assigning BI-RADS in mammographic screening.
• There are no strict guidelines linking mammographic findings to BI-RADS categories.
• We developed a model (nomogram) predicting the presence of breast cancer.
• Our nomogram is based on common findings on positive screening mammograms.
• The nomogram aims to assist screening radiologists in assigning BI-RADS categories.
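For readers who want to see the mechanics behind such a nomogram, the sketch below fits a multivariable logistic regression to entirely synthetic data with an assumed predictor set (age plus binary findings) and converts the fit into a predicted probability of malignancy. It is not the published Nijmegen model.

```python
# Minimal sketch of the kind of multivariable logistic regression that underlies a nomogram:
# predicted probability of malignancy from age plus binary mammographic findings.
# Entirely synthetic data and coefficients -- not the model described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 352
age = rng.uniform(50, 75, n)
mass = rng.integers(0, 2, n)
calcifications = rng.integers(0, 2, n)
distortion = rng.integers(0, 2, n)
asymmetry = rng.integers(0, 2, n)

# Simulate an outcome from an assumed "true" model so the fit has some signal
logit = -6 + 0.05 * age + 1.5 * mass + 1.0 * calcifications + 2.0 * distortion + 0.5 * asymmetry
cancer = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, mass, calcifications, distortion, asymmetry]))
model = sm.Logit(cancer, X).fit(disp=0)

# Predicted chance of malignancy for one hypothetical screening finding
case = sm.add_constant(np.array([[60, 1, 0, 1, 0]]), has_constant="add")
print(f"predicted probability of malignancy: {model.predict(case)[0]:.2f}")
```

A graphical nomogram is essentially a re-scaling of these fitted coefficients into point scores, so the numeric model above is the part that would need external validation.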

7.
Sickles EA, Wolverton DE, Dee KE. Radiology 2002;224(3):861-869
PURPOSE: To evaluate performance parameters for radiologists in a practice of breast imaging specialists and general diagnostic radiologists who interpret a large series of consecutive screening and diagnostic mammographic studies. MATERIALS AND METHODS: Data (ie, patient age; family history of breast cancer; availability of previous mammograms for comparison; and abnormal interpretation, cancer detection, and stage 0-I cancer detection rates) were derived from review of mammographic studies obtained from January 1997 through August 2001. The breast imaging specialists have substantially more initial training in mammography and at least six times more continuing education in mammography, and they interpret 10 times more mammographic studies per year than the general radiologists. Differences between specialist and general radiologist performances at both screening and diagnostic examinations were assessed for significance by using Student t and χ2 tests. RESULTS: The study involved 47,798 screening and 13,286 diagnostic mammographic examinations. Abnormal interpretation rates for screening mammography (ie, recall rate) were 4.9% for specialists and 7.1% for generalists (P < .001); and for diagnostic mammography (ie, recommended biopsy rate), 15.8% and 9.9%, respectively (P < .001). Cancer detection rates at screening mammography were 6.0 cancer cases per 1,000 examinations for specialists and 3.4 per 1,000 for generalists (P = .007); and at diagnostic mammography, 59.0 per 1,000 and 36.6 per 1,000, respectively (P < .001). Stage 0-I cancer detection rates at screening mammography were 5.3 cancer cases per 1,000 examinations for specialists and 3.0 per 1,000 for generalists (P = .012); and at diagnostic mammography, 43.9 per 1,000 and 27.0 per 1,000, respectively (P < .001). CONCLUSION: Specialist radiologists detect more cancers and more early-stage cancers, recommend more biopsies, and have lower recall rates than general radiologists.
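A rate comparison like the specialist-versus-generalist recall rates above is typically tested with a χ2 test on a 2 × 2 table. The sketch below assumes a roughly even split of the 47,798 screening examinations between the two groups, which the abstract does not report, so the counts are illustrative only.

```python
# Illustrative chi-square comparison of screening recall rates for specialists vs generalists.
# The per-group volumes are assumed for the example; only percentages and the combined
# total are given in the abstract.
import numpy as np
from scipy.stats import chi2_contingency

spec_n, gen_n = 24_000, 23_798                 # hypothetical split of 47,798 screenings
spec_recalled = round(0.049 * spec_n)
gen_recalled = round(0.071 * gen_n)

table = np.array([[spec_recalled, spec_n - spec_recalled],
                  [gen_recalled,  gen_n - gen_recalled]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square={chi2:.1f}, dof={dof}, P={p:.2e}")
```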

8.
BACKGROUND AND PURPOSE: Trainees' interpretations of neuroradiologic studies are finalized by faculty neuroradiologists. We aimed to identify the factors that determine the degree to which the preliminary reports are modified. MATERIALS AND METHODS: The character length of the preliminary and final reports and the percentage character change between the 2 reports were determined for neuroradiology reports composed during November 2012 to October 2013. Examination time, critical finding flag, missed critical finding flag, trainee level, faculty experience, imaging technique, and native-versus-non-native speaker status of the reader were collected. Multivariable linear regression models were used to evaluate the association between mean percentage character change and the various factors. RESULTS: Of 34,661 reports, 2322 (6.7%) were read by radiology residents year 1; 4429 (12.8%), by radiology residents year 2; 3663 (10.6%), by radiology residents year 3; 2249 (6.5%), by radiology residents year 4; and 21,998 (63.5%), by fellows. The overall mean percentage character change was 14.8% (range, 0%–701.8%; median, 6.6%). Mean percentage character change increased for a missed critical finding (+41.6%, P < .0001), critical finding flag (+1.8%, P < .001), MR imaging studies (+3.6%, P < .001), and non-native trainees (+4.2%, P = .018). Compared with radiology residents year 1, radiology residents year 2 (−5.4%, P = .002), radiology residents year 3 (−5.9%, P = .002), radiology residents year 4 (−8.2%, P < .001), and fellows (−8.7%, P < .001) had a decreased mean percentage character change. Senior faculty had a lower mean percentage character change (−6.88%, P < .001). Examination time and non-native faculty did not affect mean percentage character change. CONCLUSIONS: A missed critical finding, critical finding flag, MR imaging technique, trainee level, faculty experience level, and non-native-trainee status are associated with a higher degree of modification of a preliminary report. Understanding the factors that influence the extent of report revisions could improve the quality of report generation and trainee education.

Understanding the prevalence, causes, and types of discrepancies and errors in examination interpretation is a critical step in improving the quality of radiology reports. In an academic setting, discrepancies and errors can result from nonuniform training levels of residents and fellows. However, even the “experts” err, and a prior study found a 2.0% clinically significant discrepancy rate among academic neuroradiologists.1 A number of factors can affect the accuracy of radiology reports. One variable of interest at teaching hospitals is the effect of the involvement of trainees on discrepancies in radiology reports. Researchers have found that compared with studies read by faculty alone, the rate of clinically significant detection or interpretation error was 26% higher when studies were initially reviewed by residents, and it was 8% lower when the studies were initially interpreted by fellows.2 These findings suggest that perhaps faculty placed too much trust in resident interpretations, which led to a higher rate of discrepancies, while on the other hand, having a second experienced neuroradiology fellow look at a case can help in reducing the error rate.2

In our academic setting, preliminary reports initially created by trainees are subsequently reviewed and finalized by faculty or staff. The changes made to preliminary reports are a valuable teaching tool for trainees because clear and accurate report writing is a critical skill for a radiologist.3 Recently, computer-based tools have been created to help trainees compare the changes between preliminary and final reports to improve their clinical skills and to facilitate their learning. Sharpe et al4 described the implementation of a Radiology Report Comparator, which allows trainees to view a merged preliminary/final report with all the insertions and deletions highlighted in “tracking” mode. Surrey et al5 proposed using the Levenshtein percentage or percentage character change (PCC) between preliminary and final reports as a quantitative method of indirectly assessing the quality of preliminary reports and trainee performance. The Levenshtein percentage, a metric used in computer science, compares 2 texts by calculating the total number of single-character changes between the 2 documents, divided by the total character count in the final text.5

In this study, we analyzed preliminary neuroradiology reports dictated by trainees and the subsequent finalized reports revised by our faculty. We set out to identify the factors that determine the degree to which the preliminary reports are modified by faculty for residents and fellows, for daytime and nighttime shifts, and for CT and MR imaging examinations. We hypothesized that study complexity, lack of experience (for both trainee and faculty), and perhaps limited language skills (native-versus-non-native speaker) would result in a greater number of corrections.
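The percentage character change described by Surrey et al can be implemented directly from its definition: the Levenshtein distance between the preliminary and final report, divided by the character count of the final report. The sketch below is a plain-Python version with invented report text; an optimized library would normally be used at the scale of 34,661 reports.

```python
# Percentage character change (PCC) as described above: the Levenshtein distance between
# preliminary and final reports divided by the final report's character count.
# Plain-Python sketch with invented example reports.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if characters match)
        prev = curr
    return prev[-1]

def percentage_character_change(preliminary: str, final: str) -> float:
    return 100.0 * levenshtein(preliminary, final) / len(final)

prelim = "No acute intracranial hemorrhage. Mild chronic microvascular changes."
final = "No acute intracranial hemorrhage or mass effect. Mild chronic microvascular ischemic changes."
print(f"PCC = {percentage_character_change(prelim, final):.1f}%")
```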

9.
Purpose: The aim of this study was to evaluate changes in diagnostic radiology resident and fellow workloads in recent years. Methods: Berenson-Eggers Type of Service categorization was applied to Medicare Part B Physician/Supplier Procedure Summary Master Files to identify total and resident-specific claims for radiologist imaging services between 1998 and 2010. Data were extracted and subgroup analytics performed by modality. Volumes were annually normalized for active diagnostic radiology trainees. Results: From 1998 to 2010, Medicare claims for imaging services rendered by radiologists increased from 78,901,255 to 105,252,599 (+33.4%). Service volumes increased across all modalities: for radiography from 55,661,683 to 59,654,659 (+7.2%), for mammography from 5,780,624 to 6,570,673 (+13.7%), for ultrasound from 5,851,864 to 9,853,459 (+68.4%), for CT from 9,351,780 to 22,527,488 (+140.9%), and for MR from 2,255,304 to 6,646,320 (+194.7%). Total trainee services nationally increased 3 times as rapidly. On an average per trainee basis, however, the average number of diagnostic services rendered annually to Medicare Part B beneficiaries increased from 499 to 629 (+26.1%). By modality, this represents an average change from 333 to 306 examinations (−8.1%) for radiography, from 20 to 18 (−7.4%) for mammography, from 37 to 56 (+49.7%) for ultrasound, from 88 to 202 (+129.1%) for CT, and from 20 to 47 (+132.0%) for MRI. Conclusions: Between 1998 and 2010, the number of imaging examinations interpreted by diagnostic radiology residents and fellows on Medicare beneficiaries increased on average by 26% per trainee, with growth largely accounted for by disproportionate increases in more complex services (CT and MRI).

10.
Objective: To conduct a simulation study to determine whether artificial intelligence (AI)-aided mammography reading can reduce unnecessary recalls while maintaining cancer detection ability in women recalled after mammography screening. Materials and Methods: A retrospective reader study was performed using the screening mammograms of 793 women (mean age ± standard deviation, 50 ± 9 years) recalled to obtain supplemental mammographic views for screening mammography-detected abnormalities between January 2016 and December 2019 at two screening centers. Initial screening mammography examinations were interpreted by three dedicated breast radiologists sequentially, case by case, with and without AI aid, in a single session. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and recall rate for breast cancer diagnosis were obtained and compared between the two reading modes. Results: Fifty-four mammograms with cancer (35 invasive cancers and 19 ductal carcinomas in situ) and 739 mammograms with benign or negative findings were included. The reader-averaged AUC improved with AI aid, from 0.79 (95% confidence interval [CI], 0.74–0.85) to 0.89 (95% CI, 0.85–0.94) (p < 0.001). The reader-averaged specificities before and after AI aid were 41.9% (95% CI, 39.3%–44.5%) and 53.9% (95% CI, 50.9%–56.9%), respectively (p < 0.001). The reader-averaged sensitivity was not statistically different between AI-unaided and AI-aided readings: 89.5% (95% CI, 83.1%–95.9%) vs. 92.6% (95% CI, 86.2%–99.0%) (p = 0.053), although the sensitivities of the least experienced radiologist before and after AI aid were 79.6% (43 of 54 [95% CI, 66.5%–89.4%]) and 90.7% (49 of 54 [95% CI, 79.7%–96.9%]), respectively (p = 0.031). With AI aid, the reader-averaged recall rate decreased from 60.4% (95% CI, 57.8%–62.9%) to 49.5% (95% CI, 46.5%–52.4%) (p < 0.001). Conclusion: AI-aided reading reduced the number of recalls and improved diagnostic performance in our simulation using women initially recalled for supplemental mammographic views after mammography screening.
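Comparing per-case detection by the same readers with and without AI aid is a paired problem; one common choice (not stated in the abstract, so shown here only as an assumption) is McNemar's test on a 2 × 2 table of detected/missed cancers. The counts below are hypothetical.

```python
# Illustrative paired comparison of per-case cancer detection with and without AI aid
# (same reader, same 54 cancer cases) using McNemar's test. The study's exact test is not
# stated in the abstract; this is shown as one common option, with made-up counts.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical 2x2 table for one reader's 54 cancer cases:
#   rows: detected without AI (yes/no); columns: detected with AI (yes/no)
table = np.array([[42, 1],
                  [7, 4]])
result = mcnemar(table, exact=True)
print(f"McNemar exact p-value: {result.pvalue:.3f}")
```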

11.
Background: The radiology trainee on-call experience has undergone many changes in the past decade. The development of numerous online information sources has changed the landscape of opportunities for trainees seeking information while on call. In this study, we sought to understand the current on-call information-seeking behaviors of radiology trainees. Methods: We surveyed radiology fellows and residents at three major metropolitan-area academic institutions. Survey topics included demographic information, on-call volumes, on-call resource-seeking behaviors, preferred first- and second-line on-call resources, and rationale for particular resource usage. Results: A total of 78 responses from trainees were recorded, representing 30.5% of the entire surveyed population. Of the trainees, 70.5% preferred Radiopaedia as their first-line resource and 26.9% preferred StatDx as their second-line resource. Most respondents (75.6%) preferred their first-line resource because it was the easiest and fastest to access, and 70.3% assigned a rating of 4 out of 5 when asked how often the information they look for is found while on call. There was a statistically significant difference according to gender (p = 0.002), with a higher percentage of males than females listing Radiopaedia as their first-line resource. Discussion: The radiology trainee on-call experience is influenced by various factors. Over the past decade, online resources, particularly the open-access resource Radiopaedia and the paid service StatDx, have overwhelmingly become the preferred first- and second-line options, as demonstrated by our study results.

12.
Objective: Legislation in 38 states requires patient notification of dense mammographic breast tissue because increased density is a marker of breast cancer risk and can limit mammographic sensitivity. Because radiologist density assessments vary widely, our objective was to implement and measure the impact of a deep learning (DL) model on mammographic breast density assessments in clinical practice. Methods: This institutional review board–approved prospective study identified consecutive screening mammograms performed across three clinical sites over two periods: the 2017 period (January 1, 2017, through September 30, 2017) and the 2019 period (January 1, 2019, through September 30, 2019). The DL model was implemented at sites A (academic practice) and B (community practice) in 2018 for all screening mammograms. Site C (community practice) was never exposed to the DL model. Prospective densities were evaluated, and multivariable logistic regression models evaluated the odds of a dense mammogram classification as a function of time and site. Results: We identified 85,124 consecutive screening mammograms across the three sites. Across time intervals, odds of a dense classification decreased at the sites exposed to the DL model, site A (adjusted odds ratio [aOR], 0.93; 95% confidence interval [CI], 0.86-0.99; P = .024) and site B (aOR, 0.81; 95% CI, 0.70-0.93; P = .003), and odds increased at the site unexposed to the model (site C) (aOR, 1.13; 95% CI, 1.01-1.27; P = .033). Discussion: A DL model reduces the odds of screening mammograms categorized as dense. Accurate density assessments could help health care systems more appropriately use limited supplemental screening resources and help better inform traditional clinical risk models.
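Adjusted odds ratios like those quoted above come from exponentiating the coefficients (and confidence limits) of a multivariable logistic regression. The sketch below does this on synthetic data with a simplified covariate set; the study adjusted for additional variables (race, prior comparison, cancer history, radiologist) that are omitted here.

```python
# Illustrative extraction of adjusted odds ratios (aOR) with 95% CIs from a multivariable
# logistic regression for a dense-vs-nondense outcome. Synthetic data and a reduced
# covariate set -- not the study's mammograms or full adjustment model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "period_2019": rng.integers(0, 2, n),       # post-DL-model period indicator
    "age": rng.normal(57, 10, n),
    "site": rng.choice(["A", "B", "C"], n),
})
true_logit = -0.2 - 0.15 * df["period_2019"] - 0.02 * (df["age"] - 57)
df["dense"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

fit = smf.logit("dense ~ period_2019 + age + C(site)", data=df).fit(disp=0)
ci = fit.conf_int()
or_table = pd.DataFrame({"aOR": np.exp(fit.params),
                         "2.5%": np.exp(ci[0]),
                         "97.5%": np.exp(ci[1])})
print(or_table.round(2))
```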

13.
Purpose: The aim of this study was to better understand the relationship between digital breast tomosynthesis (DBT) difficulty and radiology trainee performance. Methods: Twenty-seven radiology residents and fellows and three expert breast imagers reviewed 60 DBT studies consisting of unilateral craniocaudal and mediolateral oblique views. Trainees had no prior DBT experience. All readers provided difficulty ratings and final BI-RADS® scores. Expert breast imager consensus interpretations were used to determine the ground truth. Trainee sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for low- and high-difficulty subsets of cases as assessed by each trainee him or herself (self-assessed difficulty) and by consensus expert-assessed difficulty. Results: For self-assessed difficulty, the trainee AUC was 0.696 for high-difficulty and 0.704 for low-difficulty cases (P = .753). Trainee sensitivity was 0.776 for high-difficulty and 0.538 for low-difficulty cases (P < .001). Trainee specificity was 0.558 for high-difficulty and 0.810 for low-difficulty cases (P < .001). For expert-assessed difficulty, the trainee AUC was 0.645 for high-difficulty and 0.816 for low-difficulty cases (P < .001). Trainee sensitivity was 0.612 for high-difficulty and 0.784 for low-difficulty cases (P < .001). Trainee specificity was 0.654 for high-difficulty and 0.765 for low-difficulty cases (P = .021). Conclusions: Cases deemed difficult by experts were associated with decreases in trainee AUC, sensitivity, and specificity. In contrast, for self-assessed more difficult cases, the trainee AUC was unchanged because of increased sensitivity and compensatory decreased specificity. Educators should incorporate these findings when developing educational materials to teach interpretation of DBT.

14.
Purpose: The aim of this study was to evaluate radiologists’ experiences with patient interactions in the era of open access of patients to radiology reports. Methods: This prospective, nonrandom survey of staff and trainee radiologists (n = 128) at a single large academic institution was performed with approval from the institutional review board with a waiver of the requirement to obtain informed consent. A multiple-choice questionnaire with optional free-text comments was constructed with an online secure platform (REDCap) and distributed via departmental e-mail between June 1 and July 31, 2016. Participation in the survey was voluntary and anonymous, and responses were collected and aggregated via REDCap. Statistical analysis of categorical responses was performed with the χ2 test, with statistical significance defined as P < .05. Results: Almost three-quarters of surveys (73.4% [94 of 128]) were completed. Staff radiologists represented 54.3% of survey respondents (51 of 94) and trainees 45.7% (43 of 94). Most respondents (78.7% [74 of 94]) found interactions with patients to be a satisfying experience. More than half of radiologists (54.3% [51 of 94]) desired more opportunities for patient interaction, with no significant difference in the proportion of staff and trainee radiologists who desired more patient interaction (56.9% [29 of 51] versus 51.2% [22 of 43], P = .58). Staff radiologists who specialized in vascular and interventional radiology and mammography were significantly more likely to desire more patient interaction compared with other specialists (77.8% [14 of 18] versus 45.5% [15 of 33], P = .03). Only 4.2% of radiologists (4 of 94) found patient interactions to be detrimental to normal workflow, with 19.1% of radiologists (18 of 94) reporting having to spend more than 15 min per patient interaction. Conclusions: Most academic staff and trainee radiologists would like to have more opportunities for patient interaction and consider patient interaction rarely detrimental to workflow.

15.
Purpose: The aim of this study is to determine the impact of a simulation-based ultrasound-guided (USG) breast biopsy training session on radiology trainee procedural knowledge, comfort levels, and overall procedural confidence and anxiety. Methods: Twenty-one diagnostic radiology residents from a single academic institution were recruited to participate in a USG breast biopsy training session. The residents filled out a questionnaire before and after the training session. Ten multiple-choice questions tested general knowledge in diagnostic breast ultrasound and USG breast biopsy concepts. Subjective comfort levels with ultrasound machine and biopsy device functionality, patient positioning, proper biopsy technique, image documentation, and needle safety, as well as overall procedural confidence and anxiety levels, were reported on a 5-point Likert scale before and after training. Results: Participants demonstrated significant improvement in the number of correctly answered general knowledge questions after training (P < .0001). Significant improvement was seen in resident comfort level with ultrasound machine functionality, patient positioning, biopsy device functionality, biopsy technique, and image documentation, as well as in overall confidence level (all P < .05). Participants indicated a slight but not significant reduction in anxiety levels (P = .27). Conclusions: A simulation-based USG breast biopsy training session may improve radiology trainee procedural knowledge, comfort levels, and overall procedural confidence.

16.
Purpose: To quantitatively and qualitatively assess the impact of attending neuroradiology coverage on radiology resident perceptions of the on-call experience, referring physician satisfaction, and final report turnaround times. Materials and Methods: 24/7/365 attending neuroradiologist coverage began in October 2016 at our institution. In March 2017, an online survey of referring physicians (emergency medicine, neurosurgery, and stroke neurology) and radiology residents was administered at a large academic medical center. Referring physicians were queried regarding their perceptions of patient care, report accuracy, timeliness, and availability of attending radiologists before and after the implementation of overnight neuroradiology coverage. Radiology residents were asked about their level of independence, workload, and education while on call. Turnaround time (TAT) was measured over a 5-month period before and after the implementation of overnight neuroradiology coverage. Results: A total of 28 of 64 referring physicians surveyed responded, for a response rate of 67%. Specifically, 19 of 23 second-year (junior resident on-call) and third-year (senior resident on-call) radiology residents replied, 4 of 4 stroke neurology fellows replied, 8 of 21 neurosurgery residents replied, and 16 of 39 emergency medicine residents replied. Ninety-five percent of radiology residents stated they had adequate independence on call, 100% felt they had enough faculty support while on call, and 84% reported that overnight attending coverage has improved the educational value of their on-call experience. Residents who were present both before and after the implementation of TAT metrics thought their education and independence had been positively affected. After overnight neuroradiology coverage, 85% of emergency physicians perceived improved accuracy of reports, 69% noted improved timeliness, and 77% found that attending radiologists were more accessible for consultation. The surveyed stroke neurology fellows and neurosurgery residents reported positive perceptions of the TAT, report quality, and availability and accessibility of the attending radiologist. Conclusions: In concordance with prior results, overnight attending coverage significantly reduced turnaround time. As expected, referring physicians report increased satisfaction with overnight attending coverage, particularly with respect to patient care and report accuracy. In contrast to some prior studies, radiology residents reported both improved educational value of the on-call shifts and preserved independence. This may be due to tasking the overnight neuroradiology attending with the dual goals of optimized TAT and trainee growth. A unique implementation including subspecialty-trained attendings may facilitate radiology resident independence and educational experience with improved finalized report turnaround.

17.
Purpose: To prospectively validate electromagnetic hand motion tracking in interventional radiology to detect differences in operator experience using simulation. Methods: Sheath task: Six attending interventional radiologists (experts) and 6 radiology trainees (trainees) placed a wire through a sheath and performed a “pin-pull” maneuver, while an electromagnetic motion detection system recorded the hand motion. Radial task: Eight experts and 12 trainees performed a palpatory radial artery access task on a radial access simulator. The trainees repeated the task with the nondominant hand. The experts were classified by their most frequent radial artery access technique as having either palpatory, ultrasound, or overall limited experience. The time, path length, and number of movements were calculated. Mann-Whitney U tests were used to compare the groups, and P < .05 was considered significant. Results: Sheath task: The experts took less time, had shorter path lengths, and used fewer movements than the trainees (11.7 seconds ± 3.3 vs 19.7 seconds ± 6.5, P < .01; 1.1 m ± 0.3 vs 1.4 m ± 0.4, P < .01; and 19.5 movements ± 8.5 vs 31.0 movements ± 8.0, P < .01, respectively). Radial task: The experts took less time, had shorter path lengths, and used fewer movements than the trainees (24.2 seconds ± 10.6 vs 33.1 seconds ± 16.9, P < .01; 2.0 m ± 0.5 vs 3.0 m ± 1.9, P < .001; and 36.5 movements ± 15.0 vs 54.5 movements ± 28.0, P < .001, respectively). The trainees had a shorter path length for their dominant hand than their nondominant hand (3.0 m ± 1.9 vs 3.5 m ± 1.9, P < .05). The expert palpatory group had a shorter path length than the ultrasound and limited experience groups (1.8 m ± 0.4 vs 2.0 m ± 0.4 and 2.3 m ± 1.2, respectively, P < .05). Conclusions: Electromagnetic hand motion tracking can differentiate between expert and trainee operators for simulated interventional tasks.
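Path length and the group comparison can be computed from motion-tracking samples as sketched below on synthetic 3-D trajectories; the random-walk trajectories and group sizes are illustrative assumptions, and counting "number of movements" would additionally require a segmentation rule that the abstract does not specify.

```python
# Illustrative hand-motion path length from tracked 3-D positions, with a Mann-Whitney U
# comparison between two operator groups. Synthetic random-walk trajectories only -- not
# tracking data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

def path_length(xyz):
    """Total distance travelled by the hand (same units as the position samples, metres here)."""
    return np.sum(np.linalg.norm(np.diff(xyz, axis=0), axis=1))

rng = np.random.default_rng(4)

def fake_trajectory(step_sd, n_samples=1500):
    """Synthetic 3-D hand path: a random walk whose step size stands in for motion economy."""
    return np.cumsum(rng.normal(0, step_sd, size=(n_samples, 3)), axis=0)

experts = [path_length(fake_trajectory(0.0006)) for _ in range(8)]    # steadier hypothetical hands
trainees = [path_length(fake_trajectory(0.0011)) for _ in range(12)]  # more hypothetical excursion
u_stat, p_value = mannwhitneyu(experts, trainees, alternative="two-sided")

print(f"median path length: experts {np.median(experts):.2f} m vs trainees {np.median(trainees):.2f} m")
print(f"Mann-Whitney U = {u_stat:.0f}, P = {p_value:.4f}")
```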

18.
Objective: The training experience in interventional radiology (IR) residency programs varies widely across the country. The introduction of an IR training pathway has provided the impetus for the specialty to better define outstanding IR education and for programs to rethink how their curricula prepare IR trainees for real-world practice. Although ACGME competencies define several training components that are necessary for independent practice, few quantitative or qualitative studies have explored current perceptions on what constitutes optimal IR training. Our goal was to qualitatively explore program training features deemed most important to adequately prepare IR physicians for practice and assess whether there were differences in perception between academic and nonacademic practices. Methods: Semistructured interviews were conducted with 71 IR attending physicians, trainees, and support staff across the United States. All interviews were performed over the telephone by a single researcher for consistency and systematically coded by two independent coders for common themes. Frequency and prevalence of themes and facilitating features were analyzed. Results: The most frequently perceived facilitating features included longitudinal patient care experience, practice-building education, interspecialty collaboration exposure, broad case mix, clinical decision-making exposure, diagnostic radiology training, procedural skills training, and graduated autonomy. Comparing nonacademic versus academic practice settings, significantly more nonacademic IR attending physicians expressed practice-building education (prevalence 72% versus 42%, frequency 2.2 versus 0.7, P < .01) as an important training experience. Discussion: An understanding of perceived facilitating features for optimal IR trainee preparation, including potentially different needs between academic and nonacademic practices, can help programs prepare their trainees for a successful transition into practice.

19.
PURPOSE: To determine the preferences of radiologists among eight different image processing algorithms applied to digital mammograms obtained for screening and diagnostic imaging tasks. MATERIALS AND METHODS: Twenty-eight images representing histologically proved masses or calcifications were obtained by using three clinically available digital mammographic units. Images were processed and printed on film by using manual intensity windowing, histogram-based intensity windowing, mixture model intensity windowing, peripheral equalization, multiscale image contrast amplification (MUSICA), contrast-limited adaptive histogram equalization, Trex processing, and unsharp masking. Twelve radiologists compared the processed digital images with screen-film mammograms obtained in the same patient for breast cancer screening and breast lesion diagnosis. RESULTS: For the screening task, screen-film mammograms were preferred to all digital presentations, but the acceptability of images processed with Trex and MUSICA algorithms was not significantly different. All printed digital images were preferred to screen-film radiographs in the diagnosis of masses; mammograms processed with unsharp masking were significantly preferred. For the diagnosis of calcifications, no processed digital mammogram was preferred to screen-film mammograms. CONCLUSION: When digital mammograms were preferred to screen-film mammograms, radiologists selected different digital processing algorithms for each of three mammographic reading tasks and for different lesion types. Soft-copy display will eventually allow radiologists to select among these options more easily.
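Two of the algorithms compared above, unsharp masking and contrast-limited adaptive histogram equalization (CLAHE), are available in scikit-image; the sketch below applies them to a stand-in grayscale image with arbitrary parameters, purely to show what the processing steps look like in code.

```python
# Illustrative versions of two of the processing algorithms compared above -- unsharp masking
# and contrast-limited adaptive histogram equalization (CLAHE) -- applied to a stand-in
# grayscale image with scikit-image. Parameters are arbitrary examples, not clinical settings.
import numpy as np
from skimage import data, exposure, filters

image = data.camera() / 255.0                      # stand-in grayscale image scaled to [0, 1]

# Unsharp masking: add back a scaled high-frequency (detail) component
unsharp = filters.unsharp_mask(image, radius=5, amount=1.5)

# CLAHE: locally equalize contrast, with a clip limit to curb noise amplification
clahe = exposure.equalize_adapthist(image, clip_limit=0.02)

print("mean intensity -- original:", round(float(image.mean()), 3),
      "unsharp:", round(float(unsharp.mean()), 3),
      "CLAHE:", round(float(clahe.mean()), 3))
```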

20.
Objective: To evaluate a tomosynthesis screening mammography automated outcomes feedback application’s adoption and impact on performance. Methods: This prospective intervention study evaluated a feedback application that provided mammographers subsequent imaging and pathology results for patients that radiologists had personally recalled from screening. Deployed to 13 academic and 5 private practice attending radiologists, adoption was studied from March 29, 2018, to March 20, 2019. Radiologists indicated if reviewed feedback would influence future clinical decisions. For a subset of eight academic radiologists consistently interpreting screening mammograms during the study, performance metrics were compared pre-intervention (January 1, 2016, to September 30, 2017) and post-intervention (October 1, 2017, to June 30, 2018). Abnormal interpretation rate, positive predictive value of biopsies performed, sensitivity, specificity, and cancer detection rate were compared using Pearson’s χ2 test. Logistic regression models were fit, adjusting for age, race, breast density, prior comparison, breast cancer history, and radiologist. Results: The 18 radiologists reviewed 68.5% (1,398 of 2,042) of available feedback cases and indicated that 17.4% of cases (243 of 1,398) could influence future decisions. For the eight academic radiologist subset, after multivariable adjustment with comparison to pre-intervention, average abnormal interpretation rate decreased (from 7.5% to 6.7%, adjusted odds ratio [aOR] 0.86, P < .01), positive predictive value of biopsies performed increased (from 40.6% to 51.3%, aOR 1.48, P = .011), and specificity increased (from 93.0% to 93.9%, aOR 1.17, P < .01) post-intervention. There was no difference in cancer detection rate per 1,000 examinations (from 5.8 to 6.1, aOR 1.01, P = .91) or sensitivity (from 81.2% to 78.7%, aOR 0.84, P = .47). Conclusions: Radiologists used a screening mammography automated outcomes feedback application. Its use decreased false-positive examinations, without evidence of reduced cancer detection.
