Similar Articles
20 matching records found.
1.
Motivation: As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge, such as a surgical workflow model (SWM), to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations are often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. Objective: The objective of this research is to solve the knowledge-scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. Methods: A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A video quality analysis method based on topic analysis and sentiment analysis techniques selects high-quality videos from abundant but noisy web videos, and a statistical learning method then builds the workflow model from the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated on 4 web-retrieved videos and 4 operating-room-recorded videos. Results: The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) demonstrate the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. Conclusion: With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time at low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge in intelligent surgical systems.

2.
Objective: Effective time and resource management in the operating room requires process information concerning the surgical procedure being performed. A major parameter of the intraoperative process is the remaining intervention time. The work presented here describes an approach for predicting the remaining intervention time based on surgical low-level tasks. Materials and methods: A surgical process model optimized for time prediction was designed together with a prediction algorithm. Prediction accuracy was evaluated for two different neurosurgical interventions: discectomies and brain tumor resections. A repeated random sub-sampling validation study was conducted based on 20 recorded discectomies and 40 brain tumor resections. Results: The mean absolute error of the remaining-intervention-time predictions was 13 min 24 s for discectomies and 29 min 20 s for brain tumor removals. The error decreases as the intervention progresses. Discussion: The approach allows on-line prediction of the remaining intervention time based on intraoperative information, and it can handle demanding and variable surgical procedures such as brain tumor resections. A randomized study showed that prediction accuracies are reasonable for various clinical applications. Conclusion: The predictions can be used by the OR staff, the technical infrastructure of the OR, and centralized management. They also support intervention scheduling and resource management when resources are shared among different operating rooms, thereby reducing resource conflicts, and could contribute to improving surgical workflow and patient care.
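The core idea — estimate remaining intervention time from the low-level tasks still to come — can be sketched with a simple baseline that sums mean task durations learned from recorded surgeries. This is an illustrative baseline, not the authors' optimized process model; all names and data are hypothetical.

```python
from statistics import mean

def train_task_durations(recorded_logs):
    """Learn the mean duration (seconds) of each low-level task from training logs.

    recorded_logs: list of surgeries, each a list of (task_name, duration_seconds).
    """
    durations = {}
    for log in recorded_logs:
        for task, seconds in log:
            durations.setdefault(task, []).append(seconds)
    return {task: mean(vals) for task, vals in durations.items()}

def predict_remaining_time(mean_durations, task_sequence, completed_count):
    """Estimate remaining time as the summed mean durations of tasks not yet performed."""
    remaining = task_sequence[completed_count:]
    return sum(mean_durations.get(task, 0.0) for task in remaining)
```

As in the study, the prediction error of such an estimator naturally shrinks as the intervention progresses, because fewer uncertain task durations remain to be summed.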

3.
Motivation: The primarily economy-driven documentation of patient-specific information in clinical information systems hampers the use of these systems in daily clinical routine. The lack of meta-data about the underlying clinical workflows within the stored information is a critical gap for intelligent support systems, and electronic patient documentation driven by primary clinical needs is still missing. Hence, physicians and surgeons must search hundreds of documents to find necessary patient data rather than accessing relevant information directly from the current process step. In this work, a completely new approach has been developed to enrich the existing information in clinical information systems with additional meta-data, such as the actual treatment phase from which an information entity originates. Methods: Stochastic models based on Hidden Markov Models (HMMs) are used to create a mathematical representation of the underlying clinical workflow. These models are created from real-world anonymized patient data and are tailored to therapy processes for patients with head and neck cancer. Additionally, two methodologies that extend the models to improve workflow recognition rates are presented. Results: A leave-one-out cross-validation study achieved promising recognition rates of up to 90%, with a standard deviation of 6.4%. Conclusions: The method presented here demonstrates the feasibility of predicting clinical workflow steps from patient-specific information as a basis for clinical workflow support, as well as for the analysis and improvement of clinical pathways.
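A minimal sketch of how an HMM can recover the most likely sequence of treatment phases from documented information entities, using the standard Viterbi algorithm. The states, observations, and probabilities below are invented for illustration; the study's actual models are trained on anonymized head-and-neck cancer therapy data.

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state (phase) sequence for the observations."""
    # V[t][s] = (probability of best path ending in state s at step t, that path)
    V = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][obs],
                 V[-1][prev][1] + [s])
                for prev in states
            )
            layer[s] = (prob, path)
        V.append(layer)
    prob, path = max(V[-1].values())
    return path
```

Given a stream of document types (e.g. an imaging report followed by a surgery note), the decoder labels each entity with its most plausible originating phase — exactly the kind of meta-data enrichment the abstract describes.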

4.
Objective: Most preventable adverse drug events and medication errors occur during medication ordering. Medication order entry and clinical decision support are available on paper or as computerized systems. In this post-hoc analysis we investigated the frequency and clinical impact of blood glucose (BG) documentation and user calculation errors, as well as workflow deviations, in diabetes management. We aimed to compare a paper-based protocol to a computerized medication management system combined with clinical workflow and decision support. Methods: Seventy-nine hospitalized patients with type 2 diabetes mellitus were treated with an algorithm-driven basal-bolus insulin regimen. BG measurements, which were the basis for insulin dose calculations, were manually entered either into the paper-based workflow protocol (PaperG: 37 patients) or into GlucoTab®, a mobile tablet-PC-based system (CompG: 42 patients). BG values from the laboratory information system served as the reference. A workflow simulator was used to determine user calculation errors and workflow deviations and to estimate the effect of errors on insulin doses. The clinical impact of insulin dosing errors and workflow deviations on hypo- and hyperglycemia was investigated. Results: The BG documentation error rate was similar in the PaperG (4.9%) and CompG (4.0%) groups. In the PaperG group, 11.1% of manual insulin dose calculations were erroneous, and the odds ratio (OR) of a hypoglycemic event following an insulin dosing error was 3.1 (95% CI: 1.4–6.8); the number of BG values influenced by insulin dosing errors was eightfold higher than in the CompG group. In the CompG group, workflow deviations occurred in 5.0% of tasks, which led to an increased likelihood of hyperglycemia, OR 2.2 (95% CI: 1.1–4.6). Discussion: Manual insulin dose calculations were the major source of error and had a particularly strong influence on hypoglycemia. By using GlucoTab®, user calculation errors were entirely eliminated. The immediate availability and automated handling of BG values from medical devices directly at the point of care has high potential to reduce errors, and computerized systems facilitate the safe use of more complex insulin dosing algorithms without compromising usability. In the CompG group, missed or delayed tasks had a significant effect on hyperglycemia, while in the PaperG group insufficient precision of documentation times limited the analysis. The use of old BG measurements was clinically less relevant. Conclusion: Insulin dosing errors and workflow deviations led to measurable changes in clinical outcome. Diabetes management systems including decision support should address nurses as well as physicians in a computerized way. Our analysis shows that such systems reduce the frequency of errors and therefore decrease the probability of hypo- and hyperglycemia.

5.
Objective: Healthcare analytics research increasingly involves constructing predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that simplifies and expedites this process for health data. Methods: We developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. The platform is implemented using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment; different task scheduling preferences are also supported. Results: We assessed the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and from an anonymous longitudinal claims database, demonstrating significant gains in computational efficiency over a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 h in parallel, compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. The platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. It is only a first step, providing the foundation for our ultimate goal of analytic pipelines specialized for health data researchers.
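The scheduling step described above — topologically ordering a task dependency graph so that independent tasks can run in parallel — can be sketched with Kahn's algorithm, grouping tasks into "waves" with no mutual dependencies. This is a simplified single-machine illustration, not PARAMO's Map-Reduce implementation; the task names are hypothetical.

```python
from collections import defaultdict

def parallel_waves(tasks, deps):
    """Group pipeline tasks into waves; tasks within a wave share no
    dependencies and could be dispatched in parallel (e.g. to a cluster).

    deps: mapping task -> set of tasks it depends on.
    """
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    children = defaultdict(list)
    for task, parents in deps.items():
        for parent in parents:
            children[parent].append(task)
    wave = [t for t in tasks if indegree[t] == 0]
    waves = []
    while wave:
        waves.append(sorted(wave))
        nxt = []
        for task in wave:
            for child in children[task]:
                indegree[child] -= 1
                if indegree[child] == 0:  # all prerequisites finished
                    nxt.append(child)
        wave = nxt
    return waves
```

For two independent cohort pipelines, the first wave would contain both cohort-construction tasks, the second both feature-construction tasks, and so on — which is where the reported speedup over sequential execution comes from.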

6.
《The Knee》2020,27(2):384-396
Background: In ACL-reconstructed patients, postoperative knee biomechanics may differ from the intact knee's biomechanical behavior, which can alter knee kinematics and kinetics and thereby contribute to the progression of knee osteoarthritis. The aim of this study was to demonstrate the potential of finite element models to define optimal surgical parameters, namely graft positioning in combination with graft type, so as to restore the kinematic and kinetic behavior of the knee as closely as possible. Methods: A workflow based on cadaveric experiments was proposed to restore the injured knee to a near-normal physiological condition. Femoral and tibial graft insertion sites and graft fixation tension were optimized to obtain laxity similar to that of the intact knee, for three common single-bundle reconstructions and one double-bundle reconstruction. To verify the success of surgery with the variables calculated using the proposed workflow, a full walking cycle was simulated with the intact, ACL-ruptured, optimally ACL-reconstructed, and non-optimally reconstructed knees. Results: Our results suggest that for patellar tendon and hamstring tendon grafts, anatomical positioning (fixation force: 40 N), and for the quadriceps tendon graft, isometric positioning (fixation tension: 85 N), could recover the intact joint kinematics and kinetics. For the double-bundle reconstruction, with the numerically calculated optimal insertion sites, both bundles required a 50-N fixation force. Conclusions: With optimal graft positioning parameters, following the workflow proposed in this study, any of the single-bundle graft types and surgical techniques (single- vs. double-bundle) may be used to acceptably recover the intact knee joint's biomechanical behavior.

7.
8.
Objective: Infobuttons are decision support tools that offer links to information resources based on the context of the interaction between a clinician and an electronic medical record (EMR) system. The objective of this study was to explore machine learning and web usage mining methods to produce classification models for predicting the information resources that might be relevant in a particular infobutton context. Design: Classification models were developed and evaluated with an infobutton usage dataset, and their performance was measured and compared with a reference implementation in a series of experiments. Measurements: Level of agreement (κ) between the models and the resources that clinicians actually used in each infobutton session. Results: The classification models performed significantly better than the reference implementation (p < .0001). The performance of these models tended to decrease over time, probably due to a phenomenon known as concept drift; however, performance remained stable when concept drift handling techniques were used. Conclusions: The results suggest that classification models are a promising method for predicting the information resources a clinician would use to answer patient care questions.
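Concept drift handling, credited above with keeping model performance stable, can be approached in many ways; one simple heuristic is to track a sliding window of prediction accuracy and flag retraining when it drops well below a running baseline. The sketch below is a generic illustration of that heuristic, not the technique used in the study; all parameters are invented.

```python
from collections import deque

class DriftMonitor:
    """Flag suspected concept drift when windowed accuracy falls below baseline."""

    def __init__(self, window=50, tolerance=0.15):
        self.window = deque(maxlen=window)  # recent 1/0 correctness outcomes
        self.baseline = None                # best accuracy seen on a full window
        self.tolerance = tolerance          # allowed drop before flagging drift

    def update(self, correct):
        """Record one prediction outcome; return True if drift is suspected."""
        self.window.append(1.0 if correct else 0.0)
        if len(self.window) == self.window.maxlen:
            acc = sum(self.window) / len(self.window)
            if self.baseline is None:
                self.baseline = acc
            elif acc < self.baseline - self.tolerance:
                return True  # accuracy collapsed: retrain the classifier
            else:
                self.baseline = max(self.baseline, acc)
        return False
```

In an infobutton setting, each `update` call would follow one session, comparing the predicted resource against the one the clinician actually opened.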

9.
With the introduction of electronic medical record (EMR) systems into the primary care sector, it is argued that the data collected can be used for secondary purposes extending beyond individual patient care (e.g., chronic disease management, prevention, and clinical performance evaluation). However, EMR systems are primarily designed to support clinical tasks, and clinicians' data entry practices focus on the treatment of individual patients; hence, data collected through EMRs are not always useful for these ends. Purpose: In this paper we follow a community health centre (CHC) and document the changes in personnel work practices that were necessary to make EMR data useful for secondary purposes. Methods: This project followed an action research approach in which ethnographic data were collected mainly through participant observation by a researcher who also acted as an IT support person for the clinic's secondary usage of EMR data. Additionally, interviews were carried out with the clinical and administrative personnel of the CHC. Results: The case study demonstrates that meaningful secondary use of data occurs only after a long process aimed at creating its pre-conditions. Preconditions: Specific areas of focus have to be chosen for secondary data use, and initiatives have to be continuously evaluated and adapted to the workflow through a team approach. Collaboration between IT support and physicians is necessary to tailor the software to allow collection of clinically relevant data. Data entry procedures may have to be changed to encourage use of an agreed-upon coding scheme, which is required for meaningful secondary use. Finally, resources in the form of additional personnel or dedicated time are necessary to keep up with data collection and the other tasks required as pre-conditions to secondary use of data, communication of results to the clinic, and eventual re-evaluation. Consequences: The changes in work practices required to support secondary use of EMR data in this case included completion of additional tasks by clinical and administrative personnel related to the organization of follow-up. Among physicians, increased awareness of specific initiatives and of guideline compliance in chronic disease management and prevention was noticed. Finally, the clinic was able to evaluate its own practice and present the results to varied stakeholders. Conclusions: The case describes a clinic's secondary usage of data aimed at improving management of its patients. It illustrates that creating the pre-conditions for secondary use of EMR data is a complex process that can be seen as a paradigm shift from a focus on individual patient care to chronic disease management and performance measurement. More research is needed on how best to support clinics through the change management necessitated by emerging clinical management goals.

10.
11.
Objective: The objective of this study is to understand physicians' usage of inpatient notes by (i) ascertaining different clinical note-entry and reading/retrieval styles in two different and widely used Electronic Health Record (EHR) systems, (ii) extrapolating potential factors leading to the adoption of various note-entry and reading/retrieval styles, and (iii) determining the time on task associated with documenting different types of clinical notes. Methods: To answer the "what" and "why" questions about physicians' adoption of certain note-entry and reading/retrieval styles, an ethnographic study of Internal Medicine residents, using a mixed data-analysis approach, was performed. Participants were observed interacting with two different EHR systems in inpatient settings. Data were collected on the use and creation of History and Physical (H&P) notes, progress notes, and discharge summaries. Results: The highest variability in template styles was observed with progress notes and the least with discharge summaries, while note-writing styles were most consistent for H&P notes. The first sections to be read in an H&P note and a progress note were the Chief Complaint and the Assessment & Plan sections, respectively. The greatest note-retrieval variability, with respect to the order in which note sections were reviewed, was observed with H&P and progress notes. Physician preference for a certain reading/retrieval order appeared to be a function of what best fits their workflow while fulfilling the stimulus demands. The time spent entering H&P notes, discharge summaries, and progress notes was similar in both EHRs. Conclusion: This study unveils existing variability in clinical documentation processes and provides important information that could help in designing a next-generation EHR Graphical User Interface (GUI) that is more congruent with physicians' mental models, task performance needs, and workflow requirements.

12.
Objectives: Effective use of antibiotics is critical to controlling the global tuberculosis pandemic. High-dose isoniazid (INH) can be effective in the presence of low-level resistance. We performed a systematic literature review to improve our understanding of the differential impact of genomic Mycobacterium tuberculosis (Mtb) variants on the level of INH resistance. The following online databases were searched: PubMed, Web of Science, and Embase. Articles reporting on clinical Mtb isolates with linked genotypic and phenotypic data and reporting INH resistance levels were eligible for inclusion. Methods: All genomic regions reported in the eligible studies were included in the analysis: katG, inhA, ahpC, oxyR-ahpC, furA, fabG1, kasA, rv1592c, iniA, iniB, iniC, rv0340, rv2242, and nat. The level of INH resistance was determined by MIC: low-level resistance was defined as 0.1–0.4 μg/mL on liquid and 0.2–1.0 μg/mL on solid media, and high-level resistance as >0.4 μg/mL on liquid and >1.0 μg/mL on solid media. Results: A total of 1212 records were retrieved, of which 46 were included. These 46 studies reported 1697 isolates, of which 21% (n = 362) were INH susceptible, 17% (n = 287) had low-level, and 62% (n = 1048) high-level INH resistance. Overall, 24% (n = 402) of isolates were reported as wild type and 76% (n = 1295) had ≥1 relevant genetic variant. Among the 1295 isolates with ≥1 variant, 78% (n = 1011) had a mutation in the katG gene. Of the 867 isolates with a katG mutation in codon 315, 93% (n = 810) had high-level INH resistance. In contrast, only 50% (n = 72) of the 144 isolates with a katG variant outside position 315 had high-level resistance. Of the 284 isolates with ≥1 relevant genetic variant and a wild-type katG gene, 40% (n = 114) had high-level INH resistance. Conclusions: Presence of a variant in the katG gene is a good marker of high-level INH resistance only if it is located in codon 315.
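The MIC thresholds stated in the methods translate directly into a small classification helper. The function below simply encodes those published cut-offs; treating MICs below the low-level range as susceptible is our assumption, and the function name is ours, not the review's.

```python
def classify_inh_resistance(mic, medium):
    """Map an isoniazid MIC (μg/mL) to a resistance level.

    Thresholds follow the review: low-level resistance is 0.1-0.4 μg/mL on
    liquid and 0.2-1.0 μg/mL on solid media; high-level is anything above.
    """
    if medium not in ("liquid", "solid"):
        raise ValueError("medium must be 'liquid' or 'solid'")
    low_lo, low_hi = (0.1, 0.4) if medium == "liquid" else (0.2, 1.0)
    if mic > low_hi:
        return "high-level resistance"
    if mic >= low_lo:
        return "low-level resistance"
    return "susceptible"  # below the low-level range (our assumption)
```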

13.
Objective: Clinicians pose complex clinical questions when seeing patients, and identifying the answers to those questions in a timely manner helps improve the quality of patient care. We report here on two natural language processing models, automatic topic assignment and keyword identification, that together automatically and effectively extract information needs from ad hoc clinical questions. Our study is motivated by the development of the larger clinical question answering system AskHERMES (Help clinicians to Extract and aRticulate Multimedia information for answering clinical quEstionS). Design and measurements: We developed supervised machine-learning systems to automatically assign predefined general categories (e.g., etiology, procedure, and diagnosis) to a question. We also explored both supervised and unsupervised systems to automatically identify keywords that capture the main content of the question. Results: We evaluated our systems on 4654 annotated clinical questions collected in practice, achieving an F1 score of 76.0% for general topic classification and 58.0% for keyword extraction. Our systems have been implemented in the larger question answering system AskHERMES. Our error analyses suggest that inconsistent annotation in the training data has hurt both question analysis tasks. Conclusion: Our systems, available at http://www.askhermes.org, can automatically extract information needs from both short (<20 word tokens) and long (>20 word tokens) questions, and from both well-structured and ill-formed questions. We speculate that the performance of general topic classification and keyword extraction can be further improved if consistently annotated data are made available.

14.
Purpose: The purpose of this study is twofold: (1) to derive a workflow consensus from multiple clinical activity logs and (2) to detect workflow outliers automatically, without prior knowledge from experts. Methods: Workflow mining is used to derive a consensus workflow from multiple surgical activity logs using tree-guided multiple sequence alignment. To detect outliers, a global pairwise sequence alignment (Needleman–Wunsch) algorithm is used. The proposed method is validated for laparoscopic cholecystectomy (LAPCHOL). Results: An activity log was derived directly for each LAPCHOL surgery from laparoscopic video using a previously developed instrument tracking tool. We showed that a generic consensus can be derived from surgical activity logs using multiple alignment; in total, 26 surgery logs were used to derive the consensus for laparoscopic cholecystectomy. The derived consensus conforms to the main steps of laparoscopic cholecystectomy as described in best practices. Using global pairwise alignment, we showed that outliers can be detected from surgeries using the consensus and the surgical activity log. Conclusion: Alignment techniques can be used to derive a consensus and to detect outliers from clinical activity logs. Detecting outliers, particularly in surgery, is a key step toward automatically mining and analysing the underlying causes of these outliers and improving surgical practice.
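The outlier-detection step rests on global pairwise alignment; a compact, score-only Needleman–Wunsch sketch over activity sequences is shown below. The match/mismatch/gap weights are illustrative defaults (the study's actual scoring scheme is not specified here); a surgery whose log aligns poorly against the consensus, i.e. scores low, would be flagged as a potential outlier.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two activity sequences (score only)."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap  # align a[:i] against an empty prefix
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]
```

Identical activity sequences score the length of the sequence; every substitution or gap lowers the score, so thresholding the score against the consensus gives a simple outlier test.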

15.
16.
17.
18.
Thinking is biological work and involves the allocation of cognitive resources. The aim of this study was to investigate the impact of fluid intelligence on the allocation of cognitive resources while one is processing low-level and high-level cognitive tasks. Individuals with high versus average fluid intelligence performed low-level choice reaction time tasks and high-level geometric analogy tasks. We combined behavioral measures to examine speed and accuracy of processing with pupillary measures that indicate resource allocation. Individuals with high fluid intelligence processed the low-level choice reaction time tasks faster than normal controls. The task-evoked pupillary responses did not differ between groups. Furthermore, individuals with high fluid intelligence processed the high-level geometric analogies faster, more accurately, and showed greater pupil dilations than normal controls. This was only true, however, for the most difficult analogy tasks. In addition, individuals with high fluid intelligence showed greater preexperimental pupil baseline diameters than normal controls. These results indicate that individuals with high fluid intelligence have more resources available and thus can solve more demanding tasks. Moreover, high fluid intelligence appears to be accompanied by more task-free exploration.

19.
Purpose: To present and assess clinical protocols and an associated automated workflow for pre-surgical functional magnetic resonance imaging (fMRI) in brain tumor patients. Methods: Protocols were validated using a single-subject reliability approach based on 10 healthy control subjects. Results from the automated workflow were evaluated in 9 patients with brain tumors by comparing fMRI results to direct electrical stimulation (DES) of the cortex. Results: Using a new approach to compute single-subject fMRI reliability in controls, we show that not all tasks are suitable in the clinical context, even if they show meaningful results at the group level. Comparison of the patients' fMRI results to DES showed good correspondence between the techniques (odds ratio 36). Conclusion: Provided that validated and reliable fMRI protocols are used, fMRI can accurately delineate eloquent areas, thus aiding medical decision-making regarding brain tumor surgery.

20.
Objective: To explore the relationship between self-esteem and test anxiety among senior high school students in different examination contexts. Methods: A self-esteem questionnaire for senior high school students was used to screen 233 students into high- and low-self-esteem groups; each group was split evenly across two examination contexts, and test anxiety was measured before and during the examinations with the State Anxiety Inventory. Results: (1) Students' during-test anxiety differed significantly from pre-test anxiety and from the norm (t=3.970, -11.860; P<0.001). (2) Low-self-esteem students showed significant differences in test anxiety between the ordinary and important examination contexts (t=-2.780, -2.573; P<0.01). (3) For low-self-esteem students, during-test and pre-test anxiety differed significantly in both examination contexts (t=-11.674, -13.261; P<0.001), whereas for high-self-esteem students the difference was significant only in the important examination context (t=-3.169, P<0.01). (4) In both examination contexts, test anxiety differed significantly between high- and low-self-esteem students (t=-2.227, P<0.05; t=-4.672, P<0.001; t=-3.494, P<0.01; t=-5.655, P<0.001). Conclusion: During-test anxiety is a prominent problem among senior high school students; the test anxiety of low-self-esteem students is easily affected by the examination context; examination context affects students with different levels of self-esteem to some degree; and self-esteem has an important influence on students' test anxiety.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号