Similar Articles
1.

Background and objective

There is little evidence that electronic medical record (EMR) use is associated with better compliance with clinical guidelines on initiation of antiretroviral therapy (ART) among ART-eligible HIV patients. We assessed the effect of transitioning from paper-based to an EMR-based system on appropriate placement on ART among eligible patients.

Methods

We conducted a retrospective, pre-post EMR study among patients enrolled in HIV care and eligible for ART at 17 rural Kenyan clinics and compared the: (1) proportion of patients eligible for ART based on CD4 count or WHO staging who initiate therapy; (2) time from eligibility for ART to ART initiation; (3) time from ART initiation to first CD4 test.

Results

7298 patients were eligible for ART; 54.8% (n=3998) were enrolled in HIV care using a paper-based system while 45.2% (n=3300) were enrolled after the implementation of the EMR. EMR was independently associated with a 22% increase in the odds of initiating ART among eligible patients (adjusted OR (aOR) 1.22, 95% CI 1.12 to 1.33). The proportion of ART-eligible patients not receiving ART was 20.3% and 15.1% for paper and EMR, respectively (χ2=33.5, p<0.01). Median time from ART eligibility to ART initiation was 29.1 days (IQR: 14.1–62.1) for paper compared to 27 days (IQR: 12.9–50.1) for EMR.
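As an editorial illustration, the crude (unadjusted) odds ratio can be reconstructed from the proportions above; a minimal Python sketch, with counts rounded from the reported percentages (the study's aOR of 1.22 additionally adjusts for covariates):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low, high = (math.exp(math.log(or_) + s * se) for s in (-z, z))
    return or_, low, high

# Counts rounded from the abstract: EMR era, 3300 patients with 15.1%
# not receiving ART; paper era, 3998 patients with 20.3% not receiving ART.
emr_on, emr_off = 3300 - 498, 498
paper_on, paper_off = 3998 - 812, 812
crude_or, low, high = odds_ratio_ci(emr_on, emr_off, paper_on, paper_off)
```

The crude OR here (about 1.43) is larger than the reported adjusted OR of 1.22, suggesting the adjusted model attenuates the crude association.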

Conclusions

EMRs can improve the quality of HIV care through appropriate placement of ART-eligible patients on treatment in resource-limited settings. However, other non-EMR factors influence timely initiation of ART.

2.
3.

Objective

Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval.

Materials and methods

A ‘learn by example’ approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance.

Results

Top F-measures by task were 0.83 for medical problems, 0.82 for treatments, and 0.83 for tests. Recall lagged precision in all experiments; precision was near or above 0.90 in all tasks.
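For reference, F-measure is the harmonic mean of precision and recall; a short sketch (standard formulas, values taken from the abstract):

```python
def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def recall_from_f(f, precision):
    """Invert F = 2PR/(P+R) for recall, given precision."""
    return f * precision / (2 * precision - f)

# With precision near 0.90 and F-measure 0.83, recall is roughly 0.77,
# matching the observation that recall lagged precision.
implied_recall = recall_from_f(0.83, 0.90)
```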

Discussion

With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation.

Conclusion

Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation are available for download.

4.

Background

The electronic medical record (EMR)/electronic health record (EHR) is becoming an integral component of many primary-care outpatient practices. Before implementing an EMR/EHR system, primary-care practices should have an understanding of the potential benefits and limitations.

Objective

The objective of this study was to systematically review the recent literature around the impact of the EMR/EHR within primary-care outpatient practices.

Materials and methods

Searches of Medline, EMBASE, CINAHL, ABI Inform, and Cochrane Library were conducted to identify articles published between January 1998 and January 2010. The gray literature and reference lists of included articles were also searched. 30 studies met inclusion criteria.

Results and discussion

The EMR/EHR appears to have structural and process benefits, but the impact on clinical outcomes is less clear. Using Donabedian's framework, five articles focused on the impact on healthcare structure, 21 explored healthcare process issues, and four focused on health-related outcomes.

5.

Objective

Quality indicators for the treatment of type 2 diabetes are often retrieved from a chronic disease registry (CDR). This study investigates the quality of recording in a general practitioner's (GP) electronic medical record (EMR) compared to a simple, web-based CDR.

Methods

The GPs entered data directly in the CDR and in their own EMR during the study period (2011). We extracted data from 58 general practices (8235 patients with type 2 diabetes) and compared the occurrence and value of seven process indicators and 12 outcome indicators in both systems. The CDR, specifically designed for monitoring type 2 diabetes and reporting to health insurers, was used as the reference standard. For process indicators we examined the presence or absence of recordings at the patient level in both systems; for outcome indicators we examined the number of compliant or non-compliant values present in both systems. The diagnostic OR (DOR) was calculated for all indicators.

Results

We found less concordance for process indicators than for outcome indicators. HbA1c testing was the process indicator with the highest DOR; blood pressure measurement, urine albumin testing, BMI recording, and eye assessment showed low DORs. For outcome indicators, the highest DOR was for creatinine clearance <30 mL/min (or mL/min/1.73 m²) and the lowest for systolic blood pressure <140 mm Hg.
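The diagnostic OR used here is the standard 2x2 measure of agreement between the EMR and the reference CDR; a minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN): odds that the EMR records an item when the
    reference CDR has it, relative to when it does not."""
    return (tp * tn) / (fp * fn)

# Hypothetical concordance counts for a single indicator:
dor = diagnostic_odds_ratio(tp=700, fp=50, fn=100, tn=400)
```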

Conclusions

Clinical items are not always adequately recorded in an EMR for retrieving indicators, but there is good concordance for the values of these items. If the quality of recording improves, indicators can be reported from the EMR, which would reduce the workload of GPs and enable them to maintain a good patient overview.

6.

Objective

Recruitment of patients into time-sensitive clinical trials in intensive care units (ICUs) poses a significant challenge. Enrollment is limited by delayed recognition and late notification of research personnel. The objective of the present study was to evaluate the effect of implementing electronic screening (a 'septic shock sniffer') on enrollment into a time-sensitive (within 24 h of onset) clinical study of echocardiography in severe sepsis and septic shock.

Design

We developed and tested a near-real-time computerized alert system, the septic shock sniffer, based on established severe sepsis/septic shock diagnostic criteria. The sniffer scanned patients' data in the electronic medical records and notified the research coordinator on call, through an institutional paging system, of potentially eligible patients.
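A rule-based sniffer of this kind can be sketched as follows; the criteria and thresholds below are simplified illustrations (the abstract does not give the exact rules used):

```python
def septic_shock_alert(patient):
    """Flag a potentially eligible patient when simplified severe
    sepsis/septic shock criteria co-occur (illustrative thresholds)."""
    sirs_count = sum([
        patient["temp_c"] > 38.3 or patient["temp_c"] < 36.0,
        patient["heart_rate"] > 90,
        patient["resp_rate"] > 20,
        patient["wbc"] > 12.0 or patient["wbc"] < 4.0,
    ])
    hypoperfusion = patient["sys_bp"] < 90 or patient["lactate"] > 4.0
    return bool(patient["suspected_infection"] and sirs_count >= 2
                and hypoperfusion)

candidate = {"temp_c": 39.1, "heart_rate": 118, "resp_rate": 26,
             "wbc": 15.2, "sys_bp": 82, "lactate": 5.1,
             "suspected_infection": True}
```

In the study, a positive alert triggered a page to the on-call research coordinator rather than acting on the chart directly.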

Measurement

The performance of the septic shock sniffer was assessed.

Results

The septic shock sniffer performed well, with a positive predictive value of 34%. Electronic screening doubled enrollment: 68 of 4460 ICU admissions were enrolled during the 9 months after implementation versus 37 of 4149 ICU admissions before (p<0.05). Efficiency was limited by study coordinator availability (no coverage at night or on weekends).

Conclusions

Automated electronic medical record screening improves the efficiency of enrollment and should be a routine tool for the recruitment of patients into time-sensitive clinical trials in the ICU setting.

7.

Objective

Predicting patient outcomes from genome-wide measurements holds significant promise for improving clinical care. The large number of measurements (eg, single nucleotide polymorphisms (SNPs)), however, makes this task computationally challenging. This paper evaluates the performance of an algorithm that predicts patient outcomes from genome-wide data by efficiently model averaging over an exponential number of naive Bayes (NB) models.

Design

This model-averaged naive Bayes (MANB) method was applied to predict late-onset Alzheimer's disease in 1411 individuals who each had 312 318 SNP measurements available as genome-wide predictive features. Its performance was compared to that of a naive Bayes algorithm without feature selection (NB) and with feature selection (FSNB).

Measurement

Performance of each algorithm was measured in terms of area under the ROC curve (AUC), calibration, and run time.

Results

The training time of MANB (16.1 s) was nearly as fast as NB (15.6 s), while FSNB (1684.2 s) was considerably slower. Each of the three algorithms required less than 0.1 s to predict the outcome of a test case. MANB had an AUC of 0.72, significantly better than the AUC of 0.59 for NB (p<0.00001) but not significantly different from the AUC of 0.71 for FSNB. MANB was better calibrated than NB, and FSNB was better calibrated still. A limitation is that only one dataset and two comparison algorithms were included in this study.
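The AUC figures above can be computed from predicted scores by the rank-sum identity; a minimal sketch with toy data (not the study's predictions):

```python
def auc(labels, scores):
    """AUC as the probability that a random positive case scores higher
    than a random negative case; ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

toy_labels = [1, 1, 1, 0, 0, 0]
toy_scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
```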

Conclusion

MANB performed comparatively well in predicting a clinical outcome from a high-dimensional genome-wide dataset. These results provide support for including MANB in the methods used to predict outcomes from large, genome-wide datasets.

8.

Background

There is significant interest in leveraging the electronic medical record (EMR) to conduct genome-wide association studies (GWAS).

Methods

A biorepository of DNA and plasma was created by recruiting patients referred for non-invasive lower extremity arterial evaluation or stress ECG. Peripheral arterial disease (PAD) was defined as a resting/post-exercise ankle-brachial index (ABI) less than or equal to 0.9, a history of lower extremity revascularization, or having poorly compressible leg arteries. Controls were patients without evidence of PAD. Demographic data and laboratory values were extracted from the EMR. Medication use and smoking status were established by natural language processing of clinical notes. Other risk factors and comorbidities were ascertained based on ICD-9-CM codes, medication use and laboratory data.

Results

Of 1802 patients with an abnormal ABI, 115 had non-atherosclerotic vascular disease such as vasculitis, Buerger's disease, trauma and embolism (phenocopies) based on ICD-9-CM diagnosis codes and were excluded. The PAD cases (66±11 years, 64% men) were older than controls (61±8 years, 60% men) but had similar geographical distribution and ethnic composition. Among PAD cases, 1444 (85.6%) had an abnormal ABI, 233 (13.8%) had poorly compressible arteries and 10 (0.6%) had a history of lower extremity revascularization. In a random sample of 95 cases and 100 controls, risk factors and comorbidities ascertained from EMR-based algorithms had good concordance compared with manual record review; the precision ranged from 67% to 100% and recall from 84% to 100%.

Conclusion

This study demonstrates use of the EMR to ascertain phenocopies, phenotype heterogeneity, and relevant covariates to enable a GWAS of PAD. Biorepositories linked to EMRs may provide a relatively efficient means of conducting GWAS.

9.

Objective

To demonstrate the potential of de-identified clinical data from multiple healthcare systems using different electronic health records (EHR) to be efficiently used for very large retrospective cohort studies.

Materials and methods

Data from 959 030 patients, pooled from multiple healthcare systems with distinct EHRs, were obtained. Data were standardized and normalized using common ontologies and made searchable through a HIPAA-compliant, patient-de-identified web application (Explore; Explorys Inc). Patients were 26 years or older and were seen in multiple healthcare systems from 1999 to 2011, with data drawn from the EHR.

Results

Comparing obese, tall subjects with short subjects of normal body mass index, the venous thromboembolic event (VTE) OR was 1.83 (95% CI 1.76 to 1.91) for women and 1.21 (1.10 to 1.32) for men. Weight had more effect than height on VTE risk. Compared with Caucasian subjects, Hispanic/Latino subjects had a much lower risk of VTE (female OR 0.47, 0.41 to 0.55; male OR 0.24, 0.20 to 0.28) and African-Americans a substantially higher risk (female OR 1.83, 1.76 to 1.91; male OR 1.58, 1.50 to 1.66). This 13-year retrospective study of almost one million patients was performed in approximately 125 h over 11 weeks, part time, by the five authors.

Discussion

As research informatics tools develop and more clinical data become available in EHR, it is important to study and understand unique opportunities for clinical research informatics to transform the scale and resources needed to perform certain types of clinical research.

Conclusions

With the right clinical research informatics tools and EHR data, some types of very large cohort studies can be completed with minimal resources.

10.

Objective

To formulate gentamicin lipospheres by a solvent-melting method using lipids and polyethylene glycol 4 000 (PEG-4 000) for oral administration.

Methods

Gentamicin lipospheres were prepared by melt-emulsification using 30% w/w Phospholipon® 90H in beeswax as the lipid matrix containing PEG-4 000. The lipospheres were characterized by encapsulation efficiency, loading capacity, change in pH, and release profile. Antimicrobial activities were evaluated against Escherichia coli, Pseudomonas aeruginosa, Salmonella paratyphi and Staphylococcus aureus using the agar diffusion method.

Results

Photomicrographs revealed spherical particles within the micrometer range, with minimal growth after 1 month. The release of gentamicin in vitro varied widely with the PEG-4 000 content. Moreover, a significant (P<0.05) amount of gentamicin was released in vivo from the formulation. The encapsulation efficiency and loading capacity were both high, indicating the ability of the lipids to take up the drug. Antimicrobial activity was particularly high against P. aeruginosa compared with the other test organisms, strongly suggesting that the formulation retains its bioactive characteristics.

Conclusions

This study strongly suggests that the issues of gentamicin instability and poor absorption in oral formulations could be adequately addressed by tactical engineering of lipid drug delivery systems such as lipospheres.

11.

Context

Computerized drug alerts for psychotropic drugs are expected to reduce fall-related injuries in older adults. However, physicians override most alerts because they believe the benefit of the drugs exceeds the risk.

Objective

To determine whether computerized prescribing decision support with patient-specific risk estimates would increase physician response to psychotropic drug alerts and reduce injury risk in older people.

Design

Cluster randomized controlled trial of 81 family physicians and 5628 of their patients aged 65 and older who were prescribed psychotropic medication.

Intervention

Intervention physicians received information about patient-specific risk of injury computed at the time of each visit using statistical models of non-modifiable risk factors and psychotropic drug doses. Risk thermometers presented changes in absolute and relative risk with each change in drug treatment. Control physicians received commercial drug alerts.

Main outcome measures

Injury risk at the end of follow-up based on psychotropic drug doses and non-modifiable risk factors. Electronic health records and provincial insurance administrative data were used to measure outcomes.

Results

Mean patient age was 75.2 years. Baseline risk of injury was 3.94 per 100 patients per year. Intermediate-acting benzodiazepines (56.2%) were the most commonly prescribed psychotropic drugs. Intervention physicians reviewed therapy in 83.3% of visits and modified therapy in 24.6%. The intervention reduced the risk of injury by 1.7 injuries per 1000 patients (95% CI 0.2/1000 to 3.2/1000; p=0.02). The effect of the intervention was greater for patients with higher baseline risks of injury (p<0.03).

Conclusion

Patient-specific risk estimates provide an effective method of reducing the risk of injury for high-risk older people.

Trial registration number

clinicaltrials.gov Identifier: NCT00818285.

12.

Objective

To improve identification of pertussis cases by developing a decision model that incorporates recent, local, population-level disease incidence.

Design

Retrospective cohort analysis of 443 infants tested for pertussis (2003–7).

Measurements

Three models (based on clinical data only, local disease incidence only, and a combination of clinical data and local disease incidence) to predict pertussis positivity were created with demographic, historical, physical exam, and state-wide pertussis data. Models were compared using sensitivity, specificity, area under the receiver-operating characteristics (ROC) curve (AUC), and related metrics.

Results

The model using only clinical data included cyanosis, cough for 1 week, and absence of fever, and was 89% sensitive (95% CI 79 to 99) and 27% specific (95% CI 22 to 32), with an area under the ROC curve (AUC) of 0.80. The model using only local incidence data performed best when the proportion of positive pertussis cultures in the region exceeded 10% in the 8–14 days prior to the infant's associated visit, achieving 13% sensitivity, 53% specificity, and an AUC of 0.65. The combined model, built with patient-derived variables and local incidence data, included cyanosis, cough for 1 week, and an indicator that the regional proportion of positive pertussis cultures exceeded 10% in the 8–14 days prior to the visit. This model was 100% sensitive (p<0.04, 95% CI 92 to 100) and 38% specific (p<0.001, 95% CI 33 to 43), with an AUC of 0.82.
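The reported metrics follow directly from a 2x2 confusion table, and the combined model's incidence feature is a simple threshold; a short sketch under stated assumptions (the threshold indicator is from the abstract, the example labels and predictions are invented):

```python
def regional_signal(positive_fraction):
    """Indicator from the combined model: regional proportion of
    positive pertussis cultures above 10% in the prior 8-14 days."""
    return positive_fraction > 0.10

def sensitivity_specificity(labels, preds):
    """Sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y and p)
    tn = sum(1 for y, p in zip(labels, preds) if not y and not p)
    n_pos = sum(labels)
    return tp / n_pos, tn / (len(labels) - n_pos)

# Toy evaluation: two true cases both caught, one false positive.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 0], [1, 1, 1, 0, 0])
```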

Conclusions

Incorporating recent, local population-level disease incidence improved the ability of a decision model to correctly identify infants with pertussis. Our findings support fostering bidirectional exchange between public health and clinical practice, and validate a method for integrating large-scale public health datasets with rich clinical data to improve decision-making and public health.

13.

Objective

To determine whether a diabetes case management telemedicine intervention reduced healthcare expenditures, as measured by Medicare claims, and to assess the costs of developing and implementing the telemedicine intervention.

Design

We studied 1665 participants in the Informatics for Diabetes Education and Telemedicine (IDEATel), a randomized controlled trial comparing telemedicine case management of diabetes to usual care. Participants were aged 55 years or older, and resided in federally designated medically underserved areas of New York State.

Measurements

We analyzed Medicare claims payments for each participant for up to 60 study months from the date of randomization until death or December 31, 2006 (whichever came first). We also analyzed study expenditures for the telemedicine intervention over six budget years (February 28, 2000 to February 27, 2006).

Results

Mean annual Medicare payments (SE) were similar in the usual care and telemedicine groups: $9040 ($386) and $9669 ($443) per participant, respectively (p>0.05). Sensitivity analyses, including stratification by censored status, adjustment by enrollment site, and semi-parametric weighting by probability of dropping out, yielded similar results. Over six budget years, 28 821 participant-months of telemedicine intervention were delivered, at an estimated cost of $622 per participant-month.

Conclusion

Telemedicine case management was not associated with a reduction in Medicare claims in this medically underserved population. The cost of implementing the telemedicine intervention was high, largely reflecting the special-purpose hardware and software required at the time. Implementation costs will need to fall, through lower-cost technology, for telemedicine case management to be more widely used.

14.

Objective

To create a computable MEDication Indication resource (MEDI) to support primary and secondary use of electronic medical records (EMRs).

Materials and methods

We processed four public medication resources, RxNorm, Side Effect Resource (SIDER) 2, MedlinePlus, and Wikipedia, to create MEDI. We applied natural language processing and ontology relationships to extract indications for prescribable, single-ingredient medication concepts and all ingredient concepts as defined by RxNorm. Indications were coded as Unified Medical Language System (UMLS) concepts and International Classification of Diseases, 9th edition (ICD9) codes. A total of 689 extracted indications were randomly selected for manual review for accuracy using dual-physician review. We identified a subset of medication–indication pairs that optimizes recall while maintaining high precision.

Results

MEDI contains 3112 medications and 63 343 medication–indication pairs. Wikipedia was the largest resource, with 2608 medications and 34 911 pairs. Estimated precision and recall, respectively, were 94% and 20% for RxNorm, 75% and 33% for MedlinePlus, 67% and 31% for SIDER 2, and 56% and 51% for Wikipedia. The MEDI high-precision subset (MEDI-HPS) includes indications found in either RxNorm or at least two of the three other resources. MEDI-HPS contains 13 304 unique medication–indication pairs covering 2136 medications. The mean±SD number of indications per medication in MEDI-HPS is 6.22±6.09. The estimated precision of MEDI-HPS is 92%.
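The MEDI-HPS inclusion rule stated above is easy to express directly; a small sketch (source names from the abstract, the example pair hypothetical):

```python
def in_high_precision_subset(sources):
    """Keep a medication-indication pair if RxNorm asserts it, or at
    least two of the three other resources agree."""
    others = {"SIDER 2", "MedlinePlus", "Wikipedia"}
    return "RxNorm" in sources or len(sources & others) >= 2

# Hypothetical pair supported by two non-RxNorm resources:
kept = in_high_precision_subset({"MedlinePlus", "Wikipedia"})
```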

Conclusions

MEDI is a publicly available, computable resource that links medications with their indications as represented by concepts and billing codes. MEDI may benefit clinical EMR applications and reuse of EMR data for research.

15.

Objectives

To characterize patterns of electronic medical record (EMR) use at pediatric primary care acute visits.

Design

Direct observational study of 529 acute visits with 27 experienced pediatric clinician users.

Measurements

For each 20 s interval and at each stage of the visit according to the Davis Observation Code, we recorded whether the physician was communicating with the family only, using the computer while communicating, or using the computer without communication. Regression models assessed the impact of clinician, patient and visit characteristics on overall visit length, time spent interacting with families, and time spent using the computer while interacting.

Results

The mean overall visit length was 11:30 (min:sec) with 9:06 spent in the exam room. Clinicians used the EMR during 27% of exam room time and at all stages of the visit (interacting, chatting, and building rapport; history taking; formulation of the diagnosis and treatment plan; and discussing prevention) except the physical exam. Communication with the family accompanied 70% of EMR use. In regression models, computer documentation outside the exam room was associated with visits that were 11% longer (p=0.001), and female clinicians spent more time using the computer while communicating (p=0.003).

Limitations

The 12 study practices shared one EMR.

Conclusions

Among pediatric clinicians with EMR experience, conversation accompanies most EMR use. Our results suggest that efforts to improve EMR usability and clinician EMR training should focus on use in the context of doctor–patient communication. Further study of the impact of documentation inside versus outside the exam room on productivity is warranted.

16.

Background

Studies of the effects of electronic health records (EHRs) have had mixed findings, which may be attributable to unmeasured confounders such as individual variability in use of EHR features.

Objective

To capture physician-level variations in use of EHR features, associations with other predictors, and usage intensity over time.

Methods

Retrospective cohort study of primary care providers eligible for meaningful use at a network of federally qualified health centers, using commercial EHR data from January 2010 through June 2013, a period during which the organization was preparing for and in the early stages of meaningful use.

Results

Data were analyzed for 112 physicians and nurse practitioners, consisting of 430 803 encounters with 99 649 patients. EHR usage metrics were developed to capture how providers accessed and added to patient data (eg, problem list updates), used clinical decision support (eg, responses to alerts), communicated (eg, printing after-visit summaries), and used panel management options (eg, viewed panel reports). Provider-level variability was high: for example, the annual average proportion of encounters with problem lists updated ranged from 5% to 60% per provider. Some metrics were associated with provider, patient, or encounter characteristics. For example, problem list updates were more likely for new patients than established ones, and alert acceptance was negatively correlated with alert frequency.

Conclusions

Providers using the same EHR developed personalized patterns of use of EHR features. We conclude that physician-level usage of EHR features may be a valuable additional predictor in research on the effects of EHRs on healthcare quality and costs.

17.

Objective

To evaluate non-response rates to follow-up online surveys using a prospective cohort of parents raising at least one child with an autism spectrum disorder. A secondary objective was to investigate predictors of non-response over time.

Materials and Methods

Data were collected from a US-based online research database, the Interactive Autism Network (IAN). A total of 19 497 youths, aged 1.9–19 years (mean 9 years, SD 3.94), were included in the present study. Response to three follow-up surveys, solicited from parents after baseline enrollment, served as the outcome measures. Multivariate binary logistic regression models were then used to examine predictors of non-response.

Results

31 216 survey instances were examined, of which 8772 (28.1%) were partly or completely answered. The multivariate model found that non-response to the baseline survey (OR 28.0), years since enrollment in the online protocol (OR 2.06), and numerous sociodemographic characteristics were associated with non-response to follow-up surveys (all p<0.05).

Discussion

Consistent with the current literature, response rates to online surveys were somewhat low. While many demographic characteristics were associated with non-response, time since registration and participation at baseline played the greatest role in predicting follow-up survey non-response.

Conclusion

An important hazard to the generalizability of findings from research is non-response bias; however, little is known about this problem in longitudinal internet-mediated research (IMR). This study sheds new light on important predictors of longitudinal response rates that should be considered before launching a prospective IMR study.

18.

Objective

Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials.

Design

The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations.

Results

Machine translation plus postediting took 15–53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors.

Conclusion

The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations.

19.

Objective

To model how individual violations in routine clinical processes cumulatively contribute to the risk of adverse events in hospital using an agent-based simulation framework.

Design

An agent-based simulation was designed to model the cascade of common violations that contribute to the risk of adverse events in routine clinical processes. Clinicians and the information systems that support them were represented as a group of interacting agents using data from direct observations. The model was calibrated using data from 101 patient transfers observed in a hospital and results were validated for one of two scenarios (a misidentification scenario and an infection control scenario). Repeated simulations using the calibrated model were undertaken to create a distribution of possible process outcomes. The likelihood of end-of-chain risk is the main outcome measure, reported for each of the two scenarios.

Results

The simulations demonstrate end-of-chain risks of 8% and 24% for the misidentification and infection control scenarios, respectively. Over 95% of the simulations in both scenarios are unique, indicating that the in-patient transfer process diverges from prescribed work practices in a variety of ways.
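The cascade mechanism can be illustrated with a toy Monte Carlo model; the step count, violation probabilities, and threshold below are invented for illustration, not the study's calibrated values:

```python
import random

def end_of_chain_risk(violation_probs, threshold=2, runs=20_000, seed=42):
    """Fraction of simulated transfers in which at least `threshold`
    independent step violations co-occur."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        violations = sum(rng.random() < p for p in violation_probs)
        hits += violations >= threshold
    return hits / runs

# Five process steps with illustrative violation probabilities:
risk = end_of_chain_risk([0.3, 0.2, 0.1, 0.25, 0.15])
```

The study's calibrated model is richer, with interacting agents rather than independent steps, but the end-of-chain framing is the same.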

Conclusions

The simulation allowed us to model the risk of adverse events in a clinical process by generating the variety of possible work subject to violations, a novel method of prospective risk analysis. The in-patient transfer process has a high proportion of unique trajectories, implying that risk mitigation may benefit from focusing on reducing complexity rather than augmenting the process with further rule-based protocols.

20.

Objective

To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI).

Materials and methods

Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered.

Results

The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer–Lemeshow χ2 value (seven cases) and the mean cross-entropy error (eight cases).

Conclusions

The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains.
