Similar articles
 20 similar articles found (search time: 31 ms)
1.

Objective

Applying the science of networks to quantify the discriminatory impact of the ICD-9-CM to ICD-10-CM transition between clinical specialties.

Materials and Methods

Datasets were the Centers for Medicare and Medicaid Services ICD-9-CM to ICD-10-CM mapping files, general equivalence mappings, and statewide Medicaid emergency department billing. Diagnoses were represented as nodes and their mappings as directional relationships. The complex network was synthesized as an aggregate of simpler motifs and tabulated per clinical specialty.

Results

We identified five mapping motif categories: identity, class-to-subclass, subclass-to-class, convoluted, and no mapping. Convoluted mappings indicate that multiple ICD-9-CM and ICD-10-CM codes share complex, entangled, and non-reciprocal mappings. The proportions of convoluted diagnosis mappings (36% overall) range from 5% (hematology) to 60% (obstetrics and injuries). In a case study of 24 008 patient visits in 217 emergency departments, 27% of the costs are associated with convoluted diagnoses, with ‘abdominal pain’ and ‘gastroenteritis’ accounting for approximately 3.5%.
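
The motif categories lend themselves to a simple rule-based check. Below is a minimal Python sketch, assuming the general equivalence mappings have already been parsed into forward (ICD-9-CM to ICD-10-CM) and backward dictionaries of code sets; it is a simplified reading of the five categories above, not the authors' network analysis, and the example codes are purely illustrative.

    # Minimal sketch (not the authors' algorithm): classify one ICD-9-CM code
    # into a mapping motif, given forward (ICD-9 -> set of ICD-10 codes) and
    # backward (ICD-10 -> set of ICD-9 codes) dictionaries parsed from the GEMs.
    def classify_motif(icd9, fwd, bwd):
        targets = fwd.get(icd9, set())
        if not targets:
            return "no mapping"
        # "clean" motifs require every target to map back only to this ICD-9 code
        reciprocal = all(bwd.get(t, set()) == {icd9} for t in targets)
        if reciprocal:
            return "identity" if len(targets) == 1 else "class-to-subclass"
        if len(targets) == 1 and len(bwd.get(next(iter(targets)), set())) > 1:
            return "subclass-to-class"
        return "convoluted"  # entangled, non-reciprocal mappings

    # Illustrative toy mappings (not real GEM content):
    fwd = {"650": {"O80"}, "648.93": {"O99.891", "O99.894"}}
    bwd = {"O80": {"650"}, "O99.891": {"648.93"}, "O99.894": {"648.93", "648.91"}}
    print(classify_motif("650", fwd, bwd))     # identity
    print(classify_motif("648.93", fwd, bwd))  # convoluted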

Discussion

Previous qualitative studies report that administrators and clinicians are likely to be challenged in understanding and managing their practice because of the ICD-10-CM transition. We substantiate the complexity of this transition with a thorough quantitative summary per clinical specialty, a case study, and the tools to apply this methodology easily to any clinical practice in the form of a web portal and analytic tables.

Conclusions

Post-transition, successful management of frequent diseases with convoluted mapping network patterns is critical. The http://lussierlab.org/transition-to-ICD10CM web portal provides insight into linking onerous diseases to the ICD-10 transition.

2.
3.

Background and objective

There is little evidence that electronic medical record (EMR) use is associated with better compliance with clinical guidelines on initiation of antiretroviral therapy (ART) among ART-eligible HIV patients. We assessed the effect of transitioning from a paper-based to an EMR-based system on appropriate placement on ART among eligible patients.

Methods

We conducted a retrospective, pre-post EMR study among patients enrolled in HIV care and eligible for ART at 17 rural Kenyan clinics and compared: (1) the proportion of patients eligible for ART based on CD4 count or WHO staging who initiate therapy; (2) the time from eligibility for ART to ART initiation; (3) the time from ART initiation to first CD4 test.

Results

7298 patients were eligible for ART; 54.8% (n=3998) were enrolled in HIV care using a paper-based system while 45.2% (n=3300) were enrolled after the implementation of the EMR. EMR was independently associated with a 22% increase in the odds of initiating ART among eligible patients (adjusted OR (aOR) 1.22, 95% CI 1.12 to 1.33). The proportion of ART-eligible patients not receiving ART was 20.3% and 15.1% for paper and EMR, respectively (χ2=33.5, p<0.01). Median time from ART eligibility to ART initiation was 29.1 days (IQR: 14.1–62.1) for paper compared to 27 days (IQR: 12.9–50.1) for EMR.

Conclusions

EMRs can improve quality of HIV care through appropriate placement of ART-eligible patients on treatment in resource-limited settings. However, other non-EMR factors influence timely initiation of ART.

4.

Objective

Quality indicators for the treatment of type 2 diabetes are often retrieved from a chronic disease registry (CDR). This study investigates the quality of recording in a general practitioner's (GP) electronic medical record (EMR) compared to a simple, web-based CDR.

Methods

The GPs entered data directly in the CDR and in their own EMR during the study period (2011). We extracted data from 58 general practices (8235 patients) with type 2 diabetes and compared the occurrence and value of seven process indicators and 12 outcome indicators in both systems. The CDR, specifically designed for monitoring type 2 diabetes and reporting to health insurers, was used as the reference standard. For process indicators we examined the presence or absence of recordings at the patient level in both systems; for outcome indicators we examined the number of compliant or non-compliant values of recordings present in both systems. The diagnostic OR (DOR) was calculated for all indicators.
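
The diagnostic odds ratio is a standard 2×2 summary; a minimal sketch of how it could be computed from a per-indicator concordance table, with the CDR as the reference standard, is shown below. The counts and the continuity correction for zero cells are assumptions for illustration, since the abstract does not describe them.

    # Diagnostic odds ratio (DOR) from a 2x2 concordance table with the CDR as
    # the reference standard. The 0.5 correction for zero cells is an assumption.
    def diagnostic_odds_ratio(tp, fp, fn, tn):
        if 0 in (tp, fp, fn, tn):
            tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
        return (tp * tn) / (fp * fn)

    # Hypothetical counts for one indicator:
    # tp = recorded in both systems, fp = EMR only, fn = CDR only, tn = neither.
    print(round(diagnostic_odds_ratio(tp=700, fp=40, fn=60, tn=300), 1))  # 87.5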

Results

We found less concordance for process indicators than for outcome indicators. HbA1c testing was the process indicator with the highest DOR. Blood pressure measurement, urine albumin test, BMI recording, and eye assessment showed low DOR. For outcome indicators, the highest DOR was creatinine clearance <30 mL/min or mL/min/1.73 m² and the lowest DOR was systolic blood pressure <140 mm Hg.

Conclusions

Clinical items are not always adequately recorded in an EMR for retrieving indicators, but there is good concordance for the values of these items. If the quality of recording improves, indicators can be reported from the EMR, which will reduce the workload of GPs and enable GPs to maintain a good patient overview.

5.

Objective

To create a computable MEDication Indication resource (MEDI) to support primary and secondary use of electronic medical records (EMRs).

Materials and methods

We processed four public medication resources, RxNorm, Side Effect Resource (SIDER) 2, MedlinePlus, and Wikipedia, to create MEDI. We applied natural language processing and ontology relationships to extract indications for prescribable, single-ingredient medication concepts and all ingredient concepts as defined by RxNorm. Indications were coded as Unified Medical Language System (UMLS) concepts and International Classification of Diseases, 9th edition (ICD9) codes. A total of 689 extracted indications were randomly selected for manual review for accuracy using dual-physician review. We identified a subset of medication–indication pairs that optimizes recall while maintaining high precision.

Results

MEDI contains 3112 medications and 63 343 medication–indication pairs. Wikipedia was the largest resource, with 2608 medications and 34 911 pairs. For each resource, estimated precision and recall, respectively, were 94% and 20% for RxNorm, 75% and 33% for MedlinePlus, 67% and 31% for SIDER 2, and 56% and 51% for Wikipedia. The MEDI high-precision subset (MEDI-HPS) includes indications found within either RxNorm or at least two of the three other resources. MEDI-HPS contains 13 304 unique indication pairs regarding 2136 medications. The mean±SD number of indications for each medication in MEDI-HPS is 6.22±6.09. The estimated precision of MEDI-HPS is 92%.
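
The high-precision subset rule stated above (a pair is kept if it appears in RxNorm, or in at least two of the other three resources) can be expressed as a short filter. The input layout below is hypothetical, not the published MEDI file format.

    # Sketch of the MEDI-HPS filter. Input maps each medication-indication pair
    # to the set of source resources it was extracted from (layout hypothetical).
    OTHER_SOURCES = {"MedlinePlus", "SIDER2", "Wikipedia"}

    def medi_hps(pair_sources):
        return {
            pair
            for pair, sources in pair_sources.items()
            if "RxNorm" in sources or len(sources & OTHER_SOURCES) >= 2
        }

    pairs = {
        ("metformin", "ICD9 250.00"): {"RxNorm", "Wikipedia"},       # kept (RxNorm)
        ("lisinopril", "ICD9 401.9"): {"MedlinePlus", "Wikipedia"},  # kept (2 others)
        ("aspirin", "ICD9 780.60"): {"SIDER2"},                      # dropped
    }
    print(medi_hps(pairs))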

Conclusions

MEDI is a publicly available, computable resource that links medications with their indications as represented by concepts and billing codes. MEDI may benefit clinical EMR applications and reuse of EMR data for research.

6.

Objective

Increasing use of electronic health records (EHRs) provides new opportunities for public health surveillance. During the 2009 influenza A (H1N1) virus pandemic, we developed a new EHR-based influenza-like illness (ILI) surveillance system designed to be resource-sparing, rapidly scalable, and flexible. Four weeks after the first pandemic case, ILI data from Indian Health Service (IHS) facilities were being analyzed.

Materials and methods

The system defines ILI as a patient visit containing either an influenza-specific International Classification of Diseases, Ninth Revision (ICD-9) code or one or more of 24 ILI-related ICD-9 codes plus a documented temperature ≥100°F. EHR-based data are uploaded nightly. To validate results, ILI visits identified by the new system were compared to ILI visits found by medical record review, and the new system's results were compared with those of the traditional US ILI Surveillance Network.
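
The case definition can be written as a small predicate. The abstract does not list the 24 ILI-related ICD-9 codes, so the code sets below are placeholders passed in as parameters rather than the surveillance system's actual configuration.

    # Sketch of the ILI case definition described above. The influenza-specific
    # and ILI-related code sets are placeholders (the abstract does not list the
    # 24 ILI-related codes), so both are passed in as parameters.
    def is_ili_visit(visit_codes, temp_f, flu_specific, ili_related):
        codes = set(visit_codes)
        if codes & flu_specific:
            return True
        return bool(codes & ili_related) and temp_f is not None and temp_f >= 100.0

    FLU_SPECIFIC = {"487.0", "487.1", "487.8"}   # influenza codes (illustrative)
    ILI_RELATED = {"780.60", "786.2"}            # placeholder subset of the 24 codes
    print(is_ili_visit(["786.2"], 100.4, FLU_SPECIFIC, ILI_RELATED))  # True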

Results

The system monitored ILI activity at an average of 60% of the 269 IHS electronic health databases. EHR-based surveillance detected ILI visits with a sensitivity of 96.4% and a specificity of 97.8% based on chart review (N=2375) of visits at two facilities in September 2009. At the peak of the pandemic (week 41, October 17, 2009), the median time from an ILI visit to data transmission was 6 days, with a mode of 1 day.

Discussion

EHR-based ILI surveillance was accurate, timely, occurred at the majority of IHS facilities nationwide, and provided useful information for decision makers. EHRs thus offer the opportunity to transform public health surveillance.

7.

Objective

To compare the use of structured reporting software and the standard electronic medical records (EMR) in the management of patients with bladder cancer. The use of a human factors laboratory to study management of disease using simulated clinical scenarios was also assessed.

Design

eCancerCareBladder and the EMR were used to retrieve data and produce clinical reports. Twelve participants (four attending staff, four fellows, and four residents) used either eCancerCareBladder or the EMR in two clinical scenarios simulating cystoscopy surveillance visits for bladder cancer follow-up.

Measurements

Time to retrieve and quality of review of the patient history; time to produce and completeness of a cystoscopy report. Finally, participants provided a global assessment of their computer literacy, familiarity with the two systems, and system preference.

Results

eCancerCareBladder was faster for data retrieval (scenario 1: 146 s vs 245 s, p=0.019; scenario 2: 306 s vs 415 s, NS), but non-significantly slower to generate a clinical report. The quality of the report was better in the eCancerCareBladder system (scenario 1: p<0.001; scenario 2: p=0.11). User satisfaction was higher with the eCancerCareBladder system, and 11/12 participants preferred to use this system.

Limitations

The small sample size affected the power of our study to detect differences.

Conclusions

Use of a specific data management tool does not appear to significantly reduce user time, but the results suggest improvements in the quality of documentation and care, and users preferred the structured system. Also, the use of simulated scenarios in a laboratory setting appears to be a valid method for comparing the usability of clinical software.

8.

Objectives

To characterize patterns of electronic medical record (EMR) use at pediatric primary care acute visits.

Design

Direct observational study of 529 acute visits with 27 experienced pediatric clinician users.

Measurements

For each 20 s interval and at each stage of the visit according to the Davis Observation Code, we recorded whether the physician was communicating with the family only, using the computer while communicating, or using the computer without communication. Regression models assessed the impact of clinician, patient and visit characteristics on overall visit length, time spent interacting with families, and time spent using the computer while interacting.

Results

The mean overall visit length was 11:30 (min:sec) with 9:06 spent in the exam room. Clinicians used the EMR during 27% of exam room time and at all stages of the visit (interacting, chatting, and building rapport; history taking; formulation of the diagnosis and treatment plan; and discussing prevention) except the physical exam. Communication with the family accompanied 70% of EMR use. In regression models, computer documentation outside the exam room was associated with visits that were 11% longer (p=0.001), and female clinicians spent more time using the computer while communicating (p=0.003).

Limitations

The 12 study practices shared one EMR.

Conclusions

Among pediatric clinicians with EMR experience, conversation accompanies most EMR use. Our results suggest that efforts to improve EMR usability and clinician EMR training should focus on use in the context of doctor–patient communication. Further study of the impact of documentation inside versus outside the exam room on productivity is warranted.

9.

Objective

To determine how well statistical text mining (STM) models can identify falls within clinical text associated with an ambulatory encounter.

Materials and Methods

2241 patients were selected with a fall-related ICD-9-CM E-code or matched injury diagnosis code while being treated as an outpatient at one of four sites within the Veterans Health Administration. All clinical documents within a 48-h window of the recorded E-code or injury diagnosis code for each patient were obtained (n=26 010; 611 distinct document titles) and annotated for falls. Logistic regression, support vector machine, and cost-sensitive support vector machine (SVM-cost) models were trained on a stratified sample of 70% of documents from one location (dataset A-train) and then applied to the remaining unseen documents (datasets A-test through D).
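
A minimal sketch of a cost-sensitive linear SVM text classifier built from off-the-shelf scikit-learn parts is shown below. TF-IDF features and the particular class-weight scheme are assumptions for illustration; the study's actual feature engineering and cost settings are not described in the abstract.

    # Minimal sketch of a cost-sensitive linear SVM for fall detection in notes,
    # using off-the-shelf scikit-learn components. TF-IDF features and the
    # class_weight setting are assumptions, not the authors' actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline

    def train_fall_classifier(train_texts, train_labels, fn_cost=5):
        # Penalize missed falls (positive class) more heavily than false alarms.
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LinearSVC(class_weight={0: 1, 1: fn_cost}),
        )
        return model.fit(train_texts, train_labels)

    def evaluate(model, test_texts, test_labels):
        scores = model.decision_function(test_texts)  # margin used as ranking score
        return roc_auc_score(test_labels, scores)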

Results

All three STM models obtained area under the receiver operating characteristic curve (AUC) scores above 0.950 on the four test datasets (A-test through D). The SVM-cost model obtained the highest AUC scores, ranging from 0.953 to 0.978. The SVM-cost model also achieved F-measure values ranging from 0.745 to 0.853, sensitivity from 0.890 to 0.931, and specificity from 0.877 to 0.944.

Discussion

The STM models performed well across a large heterogeneous collection of document titles. In addition, the models also generalized across other sites, including a traditionally bilingual site that had distinctly different grammatical patterns.

Conclusions

The results of this study suggest STM-based models have the potential to improve surveillance of falls. Furthermore, the encouraging evidence shown here that STM is a robust technique for mining clinical documents bodes well for other surveillance-related topics.

10.

Background

The electronic medical record (EMR)/electronic health record (EHR) is becoming an integral component of many primary-care outpatient practices. Before implementing an EMR/EHR system, primary-care practices should have an understanding of the potential benefits and limitations.

Objective

The objective of this study was to systematically review the recent literature around the impact of the EMR/EHR within primary-care outpatient practices.

Materials and methods

Searches of Medline, EMBASE, CINAHL, ABI Inform, and Cochrane Library were conducted to identify articles published between January 1998 and January 2010. The gray literature and reference lists of included articles were also searched. 30 studies met inclusion criteria.

Results and discussion

The EMR/EHR appears to have structural and process benefits, but the impact on clinical outcomes is less clear. Using Donabedian's framework, five articles focused on the impact on healthcare structure, 21 explored healthcare process issues, and four focused on health-related outcomes.

11.

Objective

To assess intensive care unit (ICU) nurses' acceptance of electronic health records (EHR) technology and examine the relationship between EHR design, implementation factors, and nurse acceptance.

Design

The authors analyzed data from two cross-sectional survey questionnaires distributed to nurses working in four ICUs at a northeastern US regional medical center, 3 months and 12 months after EHR implementation.

Measurements

Survey items were drawn from established instruments used to measure EHR acceptance and usability, and the usefulness of three EHR functionalities, specifically computerized provider order entry (CPOE), the electronic medication administration record (eMAR), and a nursing documentation flowsheet.

Results

On average, ICU nurses were more accepting of the EHR at 12 months as compared to 3 months. They also perceived the EHR as being more usable and both CPOE and eMAR as being more useful. Multivariate hierarchical modeling indicated that EHR usability and CPOE usefulness predicted EHR acceptance at both 3 and 12 months. At 3 months postimplementation, eMAR usefulness predicted EHR acceptance, but its effect disappeared at 12 months. Nursing flowsheet usefulness predicted EHR acceptance but only at 12 months.

Conclusion

As the push toward implementation of EHR technology continues, more hospitals will face issues related to acceptance of EHR technology by staff caring for critically ill patients. This research suggests that factors related to technology design have strong effects on acceptance, even 1 year following the EHR implementation.

12.

Objective

To develop a generalizable method for identifying patient cohorts from electronic health record (EHR) data—in this case, patients having dialysis—that uses simple information retrieval (IR) tools.

Methods

We used the coded data and clinical notes from the 24 506 adult patients in the Multiparameter Intelligent Monitoring in Intensive Care database to identify patients who had dialysis. We used SQL queries to search the procedure, diagnosis, and coded nursing observations tables based on ICD-9 and local codes. We used a domain-specific search engine to find clinical notes containing terms related to dialysis. We manually validated the available records for a 10% random sample of patients who potentially had dialysis and a random sample of 200 patients who were not identified as having dialysis based on any of the sources.
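
The combination of coded queries and a note search can be sketched as follows. Table and column names are assumptions (not the actual MIMIC schema), the ICD-9 codes shown are illustrative, and a plain substring match stands in for the domain-specific search engine used in the study.

    # Sketch of combining coded data with a note search for cohort building.
    # Table/column names are assumptions, the codes are illustrative, and a
    # substring match stands in for the domain-specific search engine.
    import sqlite3

    DIALYSIS_TERMS = ("dialysis", "hemodialysis", "CRRT", "CVVH")  # illustrative

    def coded_dialysis_patients(conn):
        sql = """
            SELECT DISTINCT patient_id FROM procedures WHERE icd9_code IN ('39.95', '54.98')
            UNION
            SELECT DISTINCT patient_id FROM diagnoses WHERE icd9_code LIKE 'V56%'
        """
        return {row[0] for row in conn.execute(sql)}

    def note_dialysis_patients(conn):
        hits = set()
        for pid, text in conn.execute("SELECT patient_id, text FROM notes"):
            if any(term.lower() in text.lower() for term in DIALYSIS_TERMS):
                hits.add(pid)
        return hits

    def dialysis_cohort(conn):
        # Union increases recall; patients found by both sources are more likely true cases.
        return coded_dialysis_patients(conn) | note_dialysis_patients(conn)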

Results

We identified 1844 patients who potentially had dialysis: 1481 from the three coded sources and 1624 from the clinical notes. Precision for identifying dialysis patients based on available data was estimated to be 78.4% (95% CI 71.9% to 84.2%) and recall was 100% (95% CI 86% to 100%).

Conclusions

Combining structured EHR data with information from clinical notes using simple queries increases the utility of both types of data for cohort identification. Patients identified by more than one source are more likely to meet the inclusion criteria; however, including patients found in any of the sources increases recall. This method is attractive because it is available to researchers with access to EHR data and off-the-shelf IR tools.

13.

Objective

To quantify and compare the time doctors and nurses spent on direct patient care, medication-related tasks, and interactions before and after electronic medication management system (eMMS) introduction.

Methods

Controlled pre–post, time and motion study of 129 doctors and nurses for 633.2 h on four wards in a 400-bed hospital in Sydney, Australia. We measured changes in proportions of time on tasks and interactions by period, intervention/control group, and profession.

Results

eMMS was associated with no significant change in proportions of time spent on direct care or medication-related tasks relative to control wards. In the post-period control ward, doctors spent 19.7% (2 h/10 h shift) of their time on direct care and 7.4% (44.4 min/10 h shift) on medication tasks, compared to intervention ward doctors (25.7% (2.6 h/shift; p=0.08) and 8.5% (51 min/shift; p=0.40), respectively). Control ward nurses in the post-period spent 22.1% (1.9 h/8.5 h shift) of their time on direct care and 23.7% on medication tasks compared to intervention ward nurses (26.1% (2.2 h/shift; p=0.23) and 22.6% (1.9 h/shift; p=0.28), respectively). We found intervention ward doctors spent less time alone (p=0.0003) and more time with other doctors (p=0.003) and patients (p=0.009). Nurses on the intervention wards spent less time with doctors following eMMS introduction (p=0.0001).

Conclusions

eMMS introduction did not result in redistribution of time away from direct care or towards medication tasks. Work patterns observed on these intervention wards were associated with previously reported significant reductions in prescribing error rates relative to the control wards.

14.

Objective

This report provides updated estimates on use of electronic medical records (EMRs) in US home health and hospice (HHH) agencies, describes utilization of EMR functionalities, and presents novel data on telemedicine and point of care documentation (PoCD) in this setting.

Design

Nationally representative, cross-sectional survey of US HHH agencies conducted in 2007.

Measurements

Data on agency characteristics, current use of EMR systems as well as use of telemedicine and PoCD were collected.

Results

In 2007, 43% of US HHH agencies reported use of an EMR system. Patient demographics (40%) and clinical notes (34%) were the most commonly used EMR functions among US HHH agencies. Only 20% of agencies with EMR systems had health information sharing functionality and about half of them used it. Telemedicine was used by 21% of all HHH agencies, with most (87%) of these offering home health services. Among home health agencies using telemedicine, greater than 90% used telephone monitoring and about two-thirds used non-video monitoring. Nearly 29% of HHH agencies reported using electronic PoCD systems, most often for Outcome and Assessment Information Set (OASIS) data capture (79%). Relative to for-profit HHH agencies, non-profit agencies used considerably more EMR (70% vs 28%, p<0.001) and PoCD (63% vs 9%, p<0.001).

Conclusions

Between 2000 and 2007, there was a 33% increase in use of EMR among HHH agencies in the US. In 2007, use of EMR and PoCD technologies in non-profit agencies was significantly higher than for-profit ones. Finally, HHH agencies generally tended to use available EMR functionalities, including health information sharing.

15.

Objective

Predicting patient outcomes from genome-wide measurements holds significant promise for improving clinical care. The large number of measurements (eg, single nucleotide polymorphisms (SNPs)), however, makes this task computationally challenging. This paper evaluates the performance of an algorithm that predicts patient outcomes from genome-wide data by efficiently model averaging over an exponential number of naive Bayes (NB) models.

Design

This model-averaged naive Bayes (MANB) method was applied to predict late-onset Alzheimer's disease in 1411 individuals who each had 312 318 SNP measurements available as genome-wide predictive features. Its performance was compared to that of a naive Bayes algorithm without feature selection (NB) and with feature selection (FSNB).
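
The averaging idea can be sketched for binary features and a binary class: each feature contributes a mixture of its class-conditional predictive distribution (feature relevant) and its marginal predictive distribution (feature irrelevant), weighted by the posterior probability that the feature is relevant. The Beta(1,1) priors, the 0.5 relevance prior, and the binary coding are illustrative assumptions; SNP data are typically three-valued and the published MANB algorithm is more general.

    # Simplified model-averaged naive Bayes for binary features and a binary
    # class, illustrating averaging over feature-inclusion models. Beta(1,1)
    # priors and a 0.5 relevance prior are assumptions; real SNPs are 3-valued.
    import numpy as np
    from scipy.special import betaln

    def relevance_weight(col, y, prior_rel=0.5):
        # Posterior probability that this feature is relevant, via Beta-Bernoulli
        # marginal likelihoods of the class-conditional vs pooled models.
        def log_ml(v):
            n1 = v.sum()
            return betaln(1 + n1, 1 + len(v) - n1) - betaln(1, 1)
        log_rel = np.log(prior_rel) + log_ml(col[y == 0]) + log_ml(col[y == 1])
        log_irr = np.log(1 - prior_rel) + log_ml(col)
        return np.exp(log_rel - np.logaddexp(log_rel, log_irr))

    def manb_predict_proba(X, y, x_new):
        log_post = np.log(np.array([np.mean(y == 0), np.mean(y == 1)]))
        for j in range(X.shape[1]):
            w = relevance_weight(X[:, j], y)
            p_marg = (X[:, j].sum() + 1) / (len(y) + 2)        # P(x_j = 1)
            for c in (0, 1):
                col_c = X[y == c, j]
                p_cond = (col_c.sum() + 1) / (len(col_c) + 2)  # P(x_j = 1 | c)
                p1 = p_cond if x_new[j] == 1 else 1 - p_cond
                p0 = p_marg if x_new[j] == 1 else 1 - p_marg
                log_post[c] += np.log(w * p1 + (1 - w) * p0)   # average include/exclude
        post = np.exp(log_post - log_post.max())
        return post / post.sum()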

Measurement

Performance of each algorithm was measured in terms of area under the ROC curve (AUC), calibration, and run time.

Results

The training time of MANB (16.1 s) was comparable to that of NB (15.6 s), while FSNB (1684.2 s) was considerably slower. Each of the three algorithms required less than 0.1 s to predict the outcome of a test case. MANB had an AUC of 0.72, which is significantly better than the AUC of 0.59 by NB (p<0.00001), but not significantly different from the AUC of 0.71 by FSNB. MANB was better calibrated than NB, and FSNB was even better in calibration. A limitation was that only one dataset and two comparison algorithms were included in this study.

Conclusion

MANB performed comparatively well in predicting a clinical outcome from a high-dimensional genome-wide dataset. These results provide support for including MANB in the methods used to predict outcomes from large, genome-wide datasets.

16.

Objective

To evaluate the validity of, characterize the usage of, and propose potential research applications for International Classification of Diseases, Ninth Revision (ICD-9) tobacco codes in clinical populations.

Materials and methods

Using data on cancer cases and cancer-free controls from Vanderbilt's biorepository, BioVU, we evaluated the utility of ICD-9 tobacco use codes to identify ever-smokers in general and high-smoking-prevalence (lung cancer) clinic populations. We assessed potential biases in documentation, and performed temporal analysis relating transitions between smoking codes to smoking cessation attempts. We also examined the suitability of these codes for use in genetic association analyses.
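
Assigning ever-smoker status from code histories can be sketched as below. The two codes shown (305.1 for tobacco use disorder and V15.82 for personal history of tobacco use) are the ICD-9 tobacco codes commonly used for this purpose; the abstract does not enumerate the exact code set or assignment rules used in the study, so treat this as illustrative only.

    # Illustrative sketch of ever-smoker assignment from ICD-9 code histories.
    # 305.1x (tobacco use disorder) and V15.82 (personal history of tobacco use)
    # are commonly used tobacco codes; the study's exact code set and rules are
    # not given in the abstract, so this is an assumption.
    def smoking_status(code_history):
        """code_history: chronologically ordered ICD-9 codes for one patient."""
        tobacco = [c for c in code_history
                   if c.startswith("305.1") or c == "V15.82"]
        if not tobacco:
            return "never/unknown"  # absence of a code is weak evidence of never-smoking
        return "former" if tobacco[-1] == "V15.82" else "current"

    print(smoking_status(["401.9", "305.1", "V15.82"]))  # -> former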

Results

ICD-9 tobacco use codes can identify smokers in a general clinic population (specificity of 1.0, sensitivity of 0.32), and there is little evidence of documentation bias. Frequency of code transitions between ‘current’ and ‘former’ tobacco use was significantly correlated with initial success at smoking cessation (p<0.0001). Finally, code-based smoking status assignment is a comparable covariate to text-based smoking status for genetic association studies.

Discussion

Our results support the use of ICD-9 tobacco use codes for identifying smokers in a clinical population. Furthermore, with some limitations, these codes are suitable for adjustment of smoking status in genetic studies utilizing electronic health records.

Conclusions

Researchers should not be deterred by the unavailability of full-text records to determine smoking status if they have ICD-9 code histories.

17.

Objective

To improve identification of pertussis cases by developing a decision model that incorporates recent, local, population-level disease incidence.

Design

Retrospective cohort analysis of 443 infants tested for pertussis (2003–7).

Measurements

Three models (based on clinical data only, local disease incidence only, and a combination of clinical data and local disease incidence) to predict pertussis positivity were created with demographic, historical, physical exam, and state-wide pertussis data. Models were compared using sensitivity, specificity, area under the receiver-operating characteristics (ROC) curve (AUC), and related metrics.

Results

The model using only clinical data included cyanosis, cough for 1 week, and absence of fever, and was 89% sensitive (95% CI 79 to 99), 27% specific (95% CI 22 to 32) with an area under the ROC curve of 0.80. The model using only local incidence data performed best when the proportion positive of pertussis cultures in the region exceeded 10% in the 8–14 days prior to the infant's associated visit, achieving 13% sensitivity, 53% specificity, and AUC 0.65. The combined model, built with patient-derived variables and local incidence data, included cyanosis, cough for 1 week, and the variable indicating that the proportion positive of pertussis cultures in the region exceeded 10% 8–14 days prior to the infant's associated visit. This model was 100% sensitive (p<0.04, 95% CI 92 to 100), 38% specific (p<0.001, 95% CI 33 to 43), with AUC 0.82.
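
The local-incidence variable used in the combined model (regional proportion of positive pertussis cultures exceeding 10% in the 8–14 days before the visit) can be derived as in the sketch below; the record layout is hypothetical.

    # Sketch of the local-incidence predictor: the proportion of regional
    # pertussis cultures that were positive in the 8-14 days before the visit,
    # thresholded at 10%. The record layout (date, result) is hypothetical.
    from datetime import date, timedelta

    def high_local_incidence(visit_date, cultures, threshold=0.10):
        """cultures: iterable of (collection_date, is_positive) for the region."""
        start, end = visit_date - timedelta(days=14), visit_date - timedelta(days=8)
        window = [pos for d, pos in cultures if start <= d <= end]
        if not window:
            return False
        return sum(window) / len(window) > threshold

    cultures = [(date(2006, 1, 2), True), (date(2006, 1, 3), False), (date(2006, 1, 5), False)]
    print(high_local_incidence(date(2006, 1, 15), cultures))  # 1/3 positive -> True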

Conclusions

Incorporating recent, local population-level disease incidence improved the ability of a decision model to correctly identify infants with pertussis. Our findings support fostering bidirectional exchange between public health and clinical practice, and validate a method for integrating large-scale public health datasets with rich clinical data to improve decision-making and public health.

18.

Objective

To demonstrate the potential of de-identified clinical data from multiple healthcare systems using different electronic health records (EHR) to be efficiently used for very large retrospective cohort studies.

Materials and methods

Data from 959 030 patients, pooled from multiple healthcare systems with distinct EHRs, were obtained. Data were standardized and normalized using common ontologies, searchable through a HIPAA-compliant, patient de-identified web application (Explore; Explorys Inc). Patients were 26 years or older, were seen in multiple healthcare systems from 1999 to 2011, and had data recorded in the EHR.

Results

Comparing obese, tall subjects with short subjects of normal body mass index, the odds ratio (OR) for venous thromboembolic events (VTE) was 1.83 (95% CI 1.76 to 1.91) for women and 1.21 (1.10 to 1.32) for men. Weight had more effect than height on VTE. Compared with Caucasians, Hispanic/Latino subjects had a much lower risk of VTE (female OR 0.47, 0.41 to 0.55; male OR 0.24, 0.20 to 0.28) and African-Americans a substantially higher risk (female OR 1.83, 1.76 to 1.91; male OR 1.58, 1.50 to 1.66). This 13-year retrospective study of almost one million patients was performed over approximately 125 h in 11 weeks, part time, by the five authors.

Discussion

As research informatics tools develop and more clinical data become available in EHR, it is important to study and understand unique opportunities for clinical research informatics to transform the scale and resources needed to perform certain types of clinical research.

Conclusions

With the right clinical research informatics tools and EHR data, some types of very large cohort studies can be completed with minimal resources.

19.

Objective

To evaluate the safety of shilajit after 91 days of repeated administration at different dose levels in rats.

Methods

In this study, albino rats were divided into four groups. Group I received vehicle, and groups II, III, and IV received 500, 2 500, and 5 000 mg/kg of shilajit, respectively. Finally, animals were sacrificed and subjected to histopathological examination, and tissue iron was estimated by flame atomic absorption spectroscopy and graphite furnace.

Results

The results showed no significant changes in the iron levels of treated groups compared with controls, except in the liver at 5 000 mg/kg. Histological slides of all organs appeared normal, except for negligible changes in the liver and intestine at the highest dose of shilajit. The weights of all organs were normal compared with controls.

Conclusions

The results suggest that black shilajit, an Ayurvedic formulation, is safe for long-term use as a dietary supplement for a number of disorders such as iron deficiency anaemia.

20.

Background

Patient portals are becoming increasingly common, but the safety of patient messages and eVisits has not been well studied. Unlike patient-to-nurse telephonic communication, patient messages and eVisits involve an asynchronous process that could be hazardous if patients were using it for time-sensitive symptoms such as chest pain or dyspnea.

Methods

We retrospectively analyzed 7322 messages (6430 secure messages and 892 eVisits). To assess the overall risk associated with the messages, we looked for deaths within 30 days of the message and hospitalizations and emergency department (ED) visits within 7 days following the message. We also examined message content for symptoms of chest pain, breathing concerns, and other symptoms associated with high risk.
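
A keyword screen of subject lines, of the kind evaluated here, might look like the sketch below. The term list is drawn from the symptoms named in this abstract, not from the study's actual search strategy, and, as the results show, such a screen misses most high-risk messages whose subject lines are non-specific.

    # Sketch of a subject-line screen for high-risk symptoms. The keyword list
    # is taken from the symptoms named in this abstract; the study's actual
    # search strategy may have differed, and (as reported) such a screen can
    # miss most high-risk messages whose subject lines are non-specific.
    HIGH_RISK_TERMS = ("chest pain", "shortness of breath", "breathing",
                       "abdominal pain", "palpitations", "lightheaded", "vomiting")

    def flag_high_risk(subject_line):
        s = subject_line.lower()
        return any(term in s for term in HIGH_RISK_TERMS)

    print(flag_high_risk("Follow-up question"))            # False (could still be high risk)
    print(flag_high_risk("Chest pain since this morning"))  # True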

Results

Two deaths occurred within 30 days of a patient-generated message, but were not related to the message. There were six hospitalizations related to a previous secure message (0.09% of secure messages), and two hospitalizations related to a previous eVisit (0.22% of eVisits). High-risk symptoms were present in 3.5% of messages but a subject line search to identify these high-risk messages had a sensitivity of only 15% and a positive predictive value of 29%.

Conclusions

Patients use portal messages 3.5% of the time for potentially high-risk symptoms of chest pain, breathing concerns, abdominal pain, palpitations, lightheadedness, and vomiting. Death, hospitalization, or an ED visit was an infrequent outcome following a secure message or eVisit. Screening the message subject line for high-risk symptoms was not successful in identifying high-risk message content.
