Similar Articles
20 similar articles found (search time: 21 ms)
1.
2.
Background: Wrong patient selection errors may be tracked by retract–reorder (RAR) events. The aim of this quality improvement study was to assess the impact of reducing the number of concurrently open electronic health records from 4 to 2 on RAR errors generated by a tele-critical care service.
Methods: The study encompassed 32 months before and 21 months after the restriction. A chi-square test of proportions and a T statistical process control chart for rare events were used.
Results: There were 156 318 orders with 57 RAR errors (36.5/100 000 orders) before the restriction, and 122 587 orders with 34 errors (27.7/100 000 orders) after. The rates were not statistically different (P = .20), but the analysis was underpowered. When plotted on a T control chart, only random variation was detected between RAR errors.
Conclusion: We found no significant difference in RAR errors in the tele-critical care setting after the open-record limitation. Other strategies should be studied to reduce wrong patient selection errors.
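The before/after comparison of RAR error rates is a standard two-proportion z-test. A stdlib-only Python sketch (illustrative, not the authors' code) reproduces the reported P ≈ .20 from the order and error counts:

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test comparing two independent proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))        # two-sided P from the normal tail
    return z, p_value

# RAR error counts from the study: 57/156 318 before vs 34/122 587 after
z, p = two_proportion_ztest(57, 156_318, 34, 122_587)
print(round(z, 2), round(p, 2))
```

The non-significant result (P ≈ .20 despite a one-third lower rate) is what the abstract means by "underpowered": at rates this rare, far more orders would be needed to detect the difference.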

3.
Objective: The electronic health record (EHR) data deluge makes data retrieval more difficult, escalating cognitive load and exacerbating clinician burnout. New auto-summarization techniques are needed. The study goal was to determine whether problem-oriented view (POV) auto-summaries improve data retrieval workflows. We hypothesized that POV users would perform tasks faster, make fewer errors, be more satisfied with EHR use, and experience less cognitive load than users of the standard view (SV).
Methods: Simple data retrieval tasks were performed in an EHR simulation environment using a randomized block design. In the control group (SV), subjects retrieved lab results and medications by navigating to the corresponding sections of the electronic record. In the intervention group (POV), subjects clicked on the name of a problem and immediately saw the lab results and medications relevant to that problem.
Results: With POV, mean completion time was faster (173 seconds for POV vs 205 seconds for SV; P < .0001), the error rate was lower (3.4% for POV vs 7.7% for SV; P = .0010), user satisfaction was greater (System Usability Scale score 58.5 for POV vs 41.3 for SV; P < .0001), and cognitive task load was lower (NASA Task Load Index score 0.72 for POV vs 0.99 for SV; P < .0001).
Discussion: The study demonstrates that a problem-based auto-summary has a positive impact on 4 aspects of EHR data retrieval, including cognitive load.
Conclusion: EHRs have brought on a data deluge, with increased cognitive load and physician burnout. To mitigate these increases, further development and implementation of auto-summarization functionality and the requisite knowledge base are needed.

4.
5.
Objective: To derive 7 proposed core electronic health record (EHR) use metrics across 2 healthcare systems with different EHR vendor product installations and to examine factors associated with EHR time.
Materials and Methods: A cross-sectional analysis of ambulatory physicians' EHR use across the Yale-New Haven and MedStar Health systems was performed for August 2019 using 7 proposed core EHR use metrics normalized to 8 hours of scheduled patient time.
Results: Five of the 7 proposed metrics could be measured in a population of nonteaching, exclusively ambulatory physicians. Among the 573 physicians (Yale-New Haven N = 290, MedStar N = 283) in the analysis, median EHR-Time8 was 5.23 hours. Gender, additional clinical hours scheduled, and certain medical specialties were associated with EHR-Time8 after adjusting for age and health system on multivariable analysis. For every 8 hours of scheduled patient time, the model predicted these differences in EHR time (P < .001 unless otherwise indicated): female physicians +0.58 hours; each additional clinical hour scheduled per month −0.01 hours; practicing cardiology −1.30 hours; medical subspecialties −0.89 hours (except gastroenterology, P = .002); neurology/psychiatry −2.60 hours; obstetrics/gynecology −1.88 hours; pediatrics −1.05 hours (P = .001); sports/physical medicine and rehabilitation −3.25 hours; and surgical specialties −3.65 hours.
Conclusions: For every 8 hours of scheduled patient time, ambulatory physicians spend more than 5 hours on the EHR. Physician gender, specialty, and number of clinical hours practiced are associated with differences in EHR time. While audit logs remain a powerful tool for understanding physician EHR use, additional transparency, granularity, and standardization of vendor-derived EHR use data definitions are still necessary to standardize EHR use measurement.
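The EHR-Time8 metric described above is proportional scaling of total EHR time to an 8-hour block of scheduled patient time. The exact vendor-derived formula is not given in the abstract, so the function below is an assumed minimal sketch of that normalization:

```python
def ehr_time8(ehr_hours: float, scheduled_patient_hours: float) -> float:
    """Normalize total EHR time to an 8-hour block of scheduled patient time.

    Hypothetical implementation of the normalization idea only; the study's
    audit-log-derived definition may differ in detail.
    """
    return ehr_hours / scheduled_patient_hours * 8

# e.g. 26.15 hours of EHR time over 40 scheduled hours -> 5.23 hours per 8
print(round(ehr_time8(26.15, 40), 2))
```

Normalizing per 8 scheduled hours lets physicians with different clinical loads be compared on the same scale, which is why the coefficients in the Results are all expressed "for every 8 hours of scheduled patient time."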

6.
Objective: To examine the effectiveness of event notification service (ENS) alerts on health care delivery processes and outcomes for older adults.
Materials and Methods: We deployed ENS alerts in 2 Veterans Affairs (VA) medical centers using regional health information exchange (HIE) networks from March 2016 to December 2019. Alerts targeted VA-based primary care teams when older patients (aged 65+ years) were hospitalized or attended emergency departments (EDs) outside the VA system. We employed a concurrent cohort study to compare postdischarge outcomes between patients whose providers received ENS alerts and those who did not (usual care). Outcome measures included timely follow-up postdischarge (an actual phone call within 7 days or an in-person primary care visit within 30 days) and all-cause inpatient or ED readmission within 30 days. Generalized linear mixed models, accounting for clustering by primary care team, were used to compare outcomes between groups.
Results: Compared to usual care, veterans whose primary care team received notification of non-VA acute care encounters were 4 times more likely to have phone contact within 7 days (AOR = 4.10, P < .001) and 2 times more likely to have an in-person visit within 30 days (AOR = 1.98, P = .007). There were no significant differences between groups in hospital or ED utilization within 30 days of the index discharge (P = .057).
Discussion: ENS was associated with increased timely follow-up after non-VA acute care events, but there was no associated change in 30-day readmission rates. Optimization of ENS processes may be required to scale use and impact across health systems.
Conclusion: Given the importance of ENS to the VA and other health systems, this study provides guidance for future research on ENS for improving care coordination and population outcomes.
Trial Registration: ClinicalTrials.gov NCT02689076, "Regional Data Exchange to Improve Care for Veterans After Non-VA Hospitalization." Registered February 23, 2016.

7.
Objective: To develop and evaluate an automated case detection and response triggering system that monitors patients every 5 minutes and identifies early signs of physiologic deterioration.
Materials and Methods: A 2-year prospective, observational study at a large level 1 trauma center. All patients admitted to a 33-bed medical and oncology floor (A) and a 33-bed non-intensive care unit (ICU) surgical trauma floor (B) were monitored. During the intervention year, pager alerts of early physiologic deterioration were automatically sent to charge nurses, along with access to a graphical point-of-care web page to facilitate patient evaluation.
Results: Nurses reported that the positive predictive value of alerts was 91–100%, depending on the presence of erroneous data. Unit A patients were significantly older and had significantly more comorbidities than unit B patients. During the intervention year, compared with the pre-intervention year, unit A patients had a significant increase in length of stay, more transfers to the ICU (P = 0.23), significantly more medical emergency team (MET) calls (P = 0.0008), and significantly fewer deaths (P = 0.044). No significant differences were found on unit B.
Conclusions: We monitored patients every 5 minutes and provided automated pages for early physiologic deterioration. This before–after study found a significant increase in MET calls and a significant decrease in mortality only in the unit with older patients with multiple comorbidities, so further study is warranted to detect potential confounding. Moreover, nurses reported that the graphical alerts provided the information needed to quickly evaluate patients, and they felt more confident about their assessments and more comfortable requesting help.

8.
9.

Background

A meta-analysis was conducted to assess the safety and efficacy of biodegradable polymer drug-eluting stents (BP-DESs).

Methods

PubMed, Science Direct, China National Knowledge Infrastructure, and Chongqing VIP databases were searched for randomized controlled trials comparing the safety and efficacy of BP-DESs versus durable polymer drug-eluting stents (DP-DESs). Efficacy outcomes included the prevalence of target lesion revascularization (TLR), target vessel revascularization (TVR), and late lumen loss (LLL); the safety of these stents at the end of follow-up was also compared across the selected studies.

Results

A total of 16 qualified original studies addressing 22,211 patients were included in this meta-analysis. With regard to efficacy, no statistically significant difference in TLR (odds ratio (OR) = 0.94, P = 0.30) or TVR (OR = 1.01, P = 0.86) was observed between patients treated with BP-DESs and those treated with DP-DESs. However, there were significant differences in in-stent LLL (weighted mean difference [WMD] = −0.07, P = 0.005) and in-segment LLL (WMD = −0.03, P = 0.05) between the two groups. In terms of safety, there was no significant difference in overall mortality (OR = 0.97, P = 0.67), cardiac death (OR = 0.99, P = 0.90), early or late stent thrombosis (ST) (OR = 0.94, P = 0.76; OR = 0.96, P = 0.73), or myocardial infarction (MI) (OR = 0.99, P = 0.88). However, there was a statistically significant difference in very late ST (OR = 0.69, P = 0.007) between the two groups. In addition, the rates of TVR and TLR trended lower in the BP-DES groups than in the DP-DES groups after 1 year of follow-up.

Conclusion

BP-DESs are safe and effective, and exhibit superior performance to DP-DESs with respect to reducing the occurrence of very late ST and LLL. The rates of TVR and TLR trended lower in the BP-DES groups than in the DP-DES groups after 1 year of follow-up.
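The abstract does not state which pooling model the authors used, so the sketch below shows generic inverse-variance fixed-effect pooling of log-odds ratios — one standard way such meta-analytic ORs are combined. The study inputs are hypothetical, not the trial data from this meta-analysis:

```python
from math import log, exp, sqrt

def pooled_or_fixed(studies):
    """Inverse-variance fixed-effect pooling of per-study odds ratios.

    `studies` is a list of (OR, ci_lower, ci_upper) tuples; the SE of each
    log-OR is recovered from the 95% CI width (CI = logOR +/- 1.96*SE).
    """
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (log(hi) - log(lo)) / (2 * 1.96)   # half-width of CI / 1.96
        w = 1 / se**2                            # inverse-variance weight
        num += w * log(or_)
        den += w
    log_pooled = num / den
    se_pooled = sqrt(1 / den)
    return (exp(log_pooled),
            exp(log_pooled - 1.96 * se_pooled),
            exp(log_pooled + 1.96 * se_pooled))

# Hypothetical per-study ORs and CIs (illustrative only)
or_, lo, hi = pooled_or_fixed([(0.90, 0.70, 1.16),
                               (1.05, 0.80, 1.38),
                               (0.85, 0.60, 1.20)])
```

Pooling on the log scale keeps the estimate symmetric (an OR of 2 and of 0.5 are weighted evenly), and larger, more precise studies get proportionally larger weights.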

10.

Background

Myelodysplastic syndrome (MDS) eventually transforms into acute leukemia (AL) in about 30% of patients. Hypermethylation of the inhibitor of DNA binding 4 (ID4) gene may play an important role in the initiation and development of MDS and AL. The aim of this study was to quantitatively assess ID4 gene methylation in MDS and to establish if it could be an effective method of evaluating MDS disease progression.

Methods

We examined 142 bone marrow samples from MDS patients, healthy donors and MDS-AL patients using bisulfite sequencing PCR and quantitative real-time methylation-specific PCR. The ID4 methylation rates and levels were assessed.

Results

ID4 methylation occurred in 27 of 100 MDS patients. ID4 gene methylation was more frequent, and at higher levels, in patients with advanced disease stages and in high-risk subgroups according to the WHO (P < 0.001 and P < 0.001, respectively) and International Prognostic Scoring System (IPSS) (P = 0.002 and P = 0.007, respectively) classifications. ID4 methylation levels changed during disease progression. Both methylation rates and methylation levels differed significantly between healthy donors, MDS patients, and patients with MDS-AL (P < 0.001 and P < 0.001, respectively). Multivariate analysis indicated that the level of ID4 methylation was an independent factor influencing overall survival. Patients with MDS showed decreased survival time with increased ID4 methylation levels (P = 0.011, hazard ratio (HR) = 2.371), and patients with ID4 methylation had shorter survival times than those without (P = 0.008).

Conclusions

Our findings suggest that ID4 gene methylation might be a new biomarker for MDS monitoring and the detection of minimal residual disease.
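Survival comparisons of the kind reported here are typically based on Kaplan–Meier estimates (the abstract does not name the estimator, so this is an assumption). A pure-Python sketch of the estimator on toy data, not the study's patients:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  follow-up time for each patient
    events: 1 if the event (e.g. death) was observed at that time, 0 if censored
    Returns (event_time, survival_probability) pairs at each event time.
    """
    at_risk = len(times)
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            s *= 1 - deaths / at_risk      # multiply in this step's survival
            curve.append((t, s))
        at_risk -= sum(1 for ti in times if ti == t)  # drop events + censored
    return curve

# Toy follow-up data (hypothetical): 7 patients, 0 marks censoring
curve = kaplan_meier([3, 5, 5, 8, 12, 12, 15], [1, 1, 0, 1, 0, 1, 0])
```

Censored patients (lost to follow-up or still alive) contribute to the at-risk denominator up to their last observed time but never trigger a drop in the curve, which is what distinguishes this from a naive death-rate calculation.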

11.
Objective: To quantify the integrity, measured as completeness and concordance with a thoracic radiologist, of documentation of pulmonary nodule characteristics in CT reports, and to assess its impact on making follow-up recommendations.
Materials and Methods: This Institutional Review Board-approved, retrospective cohort study was performed at an academic medical center. Natural language processing was performed on radiology reports of CT scans of the chest, abdomen, or spine completed in 2016 to assess the presence of pulmonary nodules, excluding patients with lung cancer; 300 reports were randomly sampled to form the study cohort. Documentation of nodule characteristics was manually extracted from reports by 2 authors with 20% overlap. CT images corresponding to 60 randomly selected reports were further reviewed by a thoracic radiologist to record nodule characteristics. Documentation completeness for each characteristic was reported as a percentage and compared using χ2 analysis. Concordance with the thoracic radiologist was reported as percentage agreement; impact on making follow-up recommendations was assessed using kappa.
Results: Documentation completeness for pulmonary nodule characteristics differed across variables (range = 2%–90%, P < .001). Concordance with the thoracic radiologist was 75% for documenting nodule laterality and 29% for size. Follow-up recommendations were in agreement in 67% and 49% of reports when there was a lack of completeness and concordance in documenting nodule size, respectively.
Discussion: Essential pulmonary nodule characteristics were under-reported, potentially impacting recommendations for pulmonary nodule follow-up.
Conclusion: Lack of documentation of pulmonary nodule characteristics in radiology reports is common, with the potential to compromise patient care and clinical decision support tools.
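Agreement on follow-up recommendations was assessed with kappa; Cohen's kappa corrects raw percentage agreement for agreement expected by chance. A minimal sketch from a 2×2 confusion matrix (the counts below are hypothetical, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for inter-rater agreement from a square confusion matrix.

    confusion[i][j] = number of items rater A put in class i and rater B in j.
    """
    n = sum(sum(row) for row in confusion)
    # observed agreement: proportion on the diagonal
    po = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement: product of each rater's marginal proportions
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical agree/disagree counts between report-based and image-based calls
kappa = cohens_kappa([[20, 5], [10, 15]])
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, so kappa can be low even when raw agreement (as in the 67%/49% figures above) looks moderate.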

12.
Objective: The purpose of the study was to explore the theoretical underpinnings of effective clinical decision support (CDS) factors using comparative effectiveness results.
Materials and Methods: We leveraged search results from a previous systematic literature review and updated the search to screen articles published from January 2017 to January 2020. We included randomized controlled trials and cluster randomized controlled trials that compared a CDS intervention with and without specific factors. We used random-effects meta-regression procedures to analyze clinician behavior for the aggregate effects. The theoretical model was the Unified Theory of Acceptance and Use of Technology (UTAUT) model with motivational control.
Results: Thirty-four studies were included. The meta-regression models identified the importance of effort expectancy (estimated coefficient = −0.162; P = .0003), facilitating conditions (estimated coefficient = 0.094; P = .013), and performance expectancy with motivational control (estimated coefficient = 1.029; P = .022). Each of these factors had a significant impact on clinician behavior. The meta-regression model with the multivariate analysis explained a large amount of the heterogeneity across studies (R2 = 88.32%).
Discussion: Three positive factors were identified: low effort to use, low controllability, and providing more infrastructure and implementation strategies to support the CDS. The multivariate analysis suggests that passive CDS could be effective if users believe the CDS is useful and/or social expectations to use the CDS intervention exist.
Conclusions: Overall, a modified UTAUT model that includes motivational control is an appropriate model for understanding the psychological factors associated with CDS effectiveness and for guiding CDS design, implementation, and optimization.

13.
Objectives: Electronic health record systems are increasingly used to send messages to physicians, but research on physicians' inbox use patterns is limited. This study's aims were to (1) quantify the time primary care physicians (PCPs) spend managing inboxes; (2) describe daily patterns of inbox use; (3) investigate which types of messages consume the most time; and (4) identify factors associated with inbox work duration.
Materials and Methods: We analyzed 1 month of electronic inbox data for 1275 PCPs in a large medical group and linked these data with the physicians' demographic data.
Results: PCPs spent an average of 52 minutes on inbox management on workdays, including 19 minutes (37%) outside work hours. Temporal patterns of electronic inbox use differed from those of other EHR functions such as charting. Patient-initiated messages (28%) and results (29%) accounted for the most inbox work time. PCPs with higher inbox work duration were more likely to be female (P < .001), have more patient encounters (P < .001), have older patients (P < .001), spend proportionally more time on patient messages (P < .001), and spend more time per message (P < .001). Compared with PCPs with the lowest duration of inbox work, PCPs with the highest duration had more message views per workday (200 vs 109; P < .001) and spent more time on the inbox outside work hours (30 minutes vs 9.7 minutes; P < .001).
Conclusions: Electronic inbox work by PCPs requires roughly an hour per workday, much of which occurs outside scheduled work hours. Interventions to assist PCPs in handling patient-initiated messages and results may help alleviate inbox workload.

14.

Background

Tumors of the pancreatic head often involve the superior mesenteric and portal veins. The purpose of this study was to assess perioperative outcomes after pancreaticoduodenectomy (PD) with concomitant vascular resection using the inferior mesenteric vein (IMV) as a guide for transection of the pancreatic body (Whipple at IMV, WATIMV).

Methods

One hundred thirty-seven patients underwent segmental vein resection during PD between January 2006 and June 2013. Depending on whether the standard approach of creating a tunnel anterior to the mesenterico-portal vein (MPV) axis could be achieved for pancreatic transection, patients underwent either a standard PD with vein resection (s-PD + VR, n = 75) or a modified procedure (m-PD + VR, n = 62). Within the m-PD + VR group, 28 patients underwent the WATIMV procedure, while 34 underwent the usual transection procedure, or 'central pancreatectomy' (c-PD + VR).

Results

The volume of intraoperative blood loss and the blood transfusion requirements were significantly greater, and the frequencies of venous wall invasion and neural invasion significantly higher, in the m-PD + VR group than in the s-PD + VR group. There were no significant differences in length of hospitalization, postoperative morbidity, or grades of complications between the two groups. Multivariate logistic regression identified intraoperative blood transfusion (P = 0.004) and vascular invasion (P = 0.008) as predictors of postoperative morbidity. Further stratification of the 62 (45%) patients who underwent m-PD + VR showed a higher rate of negative resection margins in the WATIMV group (96.4%) than in the c-PD + VR group (76.5%) (P = 0.06). The volume of intraoperative blood loss was significantly greater (P = 0.013), and intraoperative blood transfusion requirements tended to be greater (P = 0.07), in the c-PD + VR group than in the WATIMV group. Furthermore, high intraoperative blood loss and tumor stage were predictive of a positive resection margin.

Conclusions

'Whipple at the IMV' (WATIMV) has postoperative morbidity comparable with that of standard PD + VR. If the IMV drains into the splenic vein, it can serve as an alternative guide for transection of the pancreatic body during PD + VR.

15.
Background: There have been few studies reporting real-world data on autologous hematopoietic stem cell transplantation (auto-HSCT) or allogeneic HSCT (allo-HSCT) in peripheral T-cell lymphoma (PTCL). This study aimed to investigate the clinical outcomes of patients who received auto-HSCT or allo-HSCT in China.
Methods: From July 2007 to June 2017, a total of 128 patients who received auto-HSCT (n = 72) or allo-HSCT (n = 56) at eight medical centers across China were included in this study. We retrospectively collected their demographic and clinical data and compared clinical outcomes between the groups.
Results: Patients receiving allo-HSCT were more likely than those receiving auto-HSCT to have stage III or IV disease (95% vs. 82%, P = 0.027), bone marrow involvement (42% vs. 15%, P = 0.001), chemotherapy-resistant disease (41% vs. 8%, P = 0.001), and progressive disease (32% vs. 4%, P < 0.001) at transplantation. With a median follow-up of 30 (2–143) months, 3-year overall survival (OS) and progression-free survival (PFS) in the auto-HSCT group were 70% (48/63) and 59% (42/63), respectively. Three-year OS and PFS for allo-HSCT recipients were 46% (27/54) and 44% (29/54), respectively. There was no difference in relapse rate (34% [17/63] with auto-HSCT vs. 29% [15/54] with allo-HSCT, P = 0.840). The 3-year non-relapse mortality rate in auto-HSCT recipients was 6% (4/63) compared with 27% (14/54) in allo-HSCT recipients (P = 0.004). Subanalyses showed that patients with lower prognostic index scores for PTCL (PIT) who received auto-HSCT in an upfront setting had better outcomes than patients with higher PIT scores (3-year OS: 85% vs. 40%, P = 0.003). Patients in complete remission (CR) undergoing auto-HSCT had better survival (3-year OS: 88% vs. 48% with allo-HSCT, P = 0.008). For patients beyond CR, the outcome of those who received allo-HSCT was similar to that in the auto-HSCT group (3-year OS: 51% vs. 46%, P = 0.300).
Conclusions: Our study provides real-world data about auto-HSCT and allo-HSCT in China. Auto-HSCT appeared to be associated with better survival for patients in good condition (lower PIT score and/or better disease control). For patients with unfavorable characteristics, the survival of patients receiving allo-HSCT was similar to that of the auto-HSCT group.

16.

Background

Although many epidemiologic studies have investigated CYP1A1 MspI gene polymorphisms and their associations with esophageal cancer (EC), definite conclusions cannot be drawn. To clarify the effects of CYP1A1 MspI polymorphisms on the risk of EC, a meta-analysis was performed in the Chinese population.

Methods

Related studies were identified from PubMed, Springer Link, Ovid, the Chinese Wanfang Data Knowledge Service Platform, the Chinese National Knowledge Infrastructure (CNKI), and Chinese Biology Medicine (CBM) through October 2014. Pooled ORs and 95% CIs were used to assess the strength of the associations.

Results

A total of 13 studies including 1,519 EC cases and 1,962 controls were involved in this meta-analysis. Overall, a significant association was found between the CYP1A1 MspI polymorphism and EC risk when all studies in the Chinese population were pooled into this meta-analysis (C vs. T: OR = 1.25, 95% CI = 1.04 to 1.51; CC + CT vs. TT: OR = 1.35, 95% CI = 1.06 to 1.72; CC vs. TT + CT: OR = 1.35, 95% CI = 1.03 to 1.76). In stratified analyses by geographical location, histopathology type, and source of controls, significantly increased risks were found in North China (C vs. T: OR = 1.38, 95% CI = 1.12 to 1.70; CC vs. TT: OR = 1.72, 95% CI = 1.16 to 2.56; CC + CT vs. TT: OR = 1.52, 95% CI = 1.14 to 2.02; CC vs. TT + CT: OR = 1.55, 95% CI = 1.17 to 2.06), in population-based studies (C vs. T: OR = 1.22, 95% CI = 1.05 to 1.42; CC vs. TT: OR = 1.38, 95% CI = 1.02 to 1.88; CC + CT vs. TT: OR = 1.36, 95% CI = 1.10 to 1.69; CC vs. TT + CT: OR = 1.43, 95% CI = 1.13 to 1.81), and in ESCC (C vs. T: OR = 1.17, 95% CI = 1.04 to 1.32; CC + CT vs. TT: OR = 1.28, 95% CI = 1.08 to 1.52).

Conclusions

This meta-analysis provides evidence that the CYP1A1 MspI polymorphism may contribute to EC development in the Chinese population.
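Each per-study OR and 95% CI of the kind pooled above can be computed from a 2×2 genotype-by-disease table. A sketch with Wald confidence limits follows; the counts are hypothetical, not data from this meta-analysis:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
       a = exposed cases,   b = exposed controls,
       c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # SE of log(OR) is sqrt of the sum of reciprocal cell counts
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical genotype counts (e.g. C-allele carriers vs non-carriers)
or_, lo, hi = odds_ratio_ci(60, 40, 45, 55)
```

When the CI excludes 1.0, as in the pooled comparisons above, the association is statistically significant at the 5% level.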

17.
Objective: To understand medical school faculty members' views on the current state of medical ethics education and on professional ethics in medicine, as well as the problems present in medical ethics teaching, so as to provide a reference for further strengthening medical ethics education. Methods: An internationally used visual analog rating scale was employed to score views on the current state of medical ethics education, on professional ethics in medicine, and on the problems in medical ethics teaching. Results: 57.4% of faculty (68.9% of clinical faculty) believed that society's view of physicians involves more criticism than praise; 71.3% believed that physicians should not accept gifts or money from patients, and 63.7% believed that doing so seriously damages the public image of medical personnel; 78% believed that the occasional instances of poor professional ethics are related to hospital management systems and inadequate hospital compensation mechanisms; and 52.5% of faculty (66.7% of clinical faculty) believed that the material interests of medical workers should be given more attention. Clinical faculty rated the current professional ethics of medical personnel significantly higher than nonclinical faculty did, and associate professors and above gave significantly higher ratings than lecturers and below; there were no statistically significant differences in ratings by gender or ethnicity. 67.7% of faculty believed that some medical personnel engage in unethical conduct, most commonly accepting kickbacks on drugs and displaying a poor attitude in medical service; 88.8% believed that medical ethics education is very necessary. Most faculty considered the most prominent problems in medical ethics teaching to be insufficient teacher-student interaction, rigid and monotonous teaching methods, and outdated teaching content; the teaching format of greatest interest was discussion-based teaching, and the most effective medical ethics education activities were clinical teachers leading by example and social practice. Conclusion: Strengthening medical ethics education among the teaching faculty is essential; diverse and engaging forms and methods of medical ethics education should be adopted, with attention to its effectiveness.

18.
Objective: Artificial intelligence (AI)- and machine learning (ML)-enabled healthcare is now feasible for many health systems, yet little is known about effective strategies for the system architecture and governance mechanisms needed for implementation. Our objective was to identify the different computational and organizational setups that early-adopter health systems have used to integrate AI/ML clinical decision support (AI-CDS) and to scrutinize their trade-offs.
Materials and Methods: We conducted structured interviews with health systems experienced in AI deployment about their organizational and computational setups for deploying AI-CDS at the point of care.
Results: We contacted 34 health systems and interviewed 20 healthcare sites (58% response rate). Twelve (60%) sites used the native electronic health record vendor configuration for model development and deployment, making it the most common shared infrastructure. Nine (45%) sites used alternative computational configurations, which varied significantly. Organizational configurations for managing AI-CDS were distinguished by how they identified model needs and built and implemented models, and were separable into 3 major types: decentralized translation (n = 10, 50%), IT department led (n = 2, 10%), and AI in Healthcare (AIHC) team (n = 8, 40%).
Discussion: No single computational configuration enables all current use cases for AI-CDS. Health systems need to consider their desired applications for AI-CDS and whether investment in extending the off-the-shelf infrastructure is needed. Each organizational setup confers trade-offs for health systems planning strategies to implement AI-CDS.
Conclusion: Health systems will be able to use this framework to understand the strengths and weaknesses of alternative organizational and computational setups when designing their artificial intelligence strategy.

19.

Background

The a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) family of metalloproteinases degrades the extracellular matrix. However, the relevance of the ADAMTS family to cardiovascular diseases remains largely unknown. This study aimed to examine plasma ADAMTS-7 levels in patients with acute myocardial infarction (AMI) and the relationship between plasma ADAMTS-7 levels and heart function.

Methods

This was a prospective study performed in 84 patients with ST-elevation myocardial infarction (STEMI), 70 patients with non-STEMI (NSTEMI), and 38 controls. Enzyme-linked immunosorbent assay (ELISA) was used to measure plasma ADAMTS-7 levels. Cardiac structure and function were assessed using two-dimensional transthoracic echocardiography. Patients were stratified according to left ventricular ejection fraction (LVEF) ≤35% or >35%.

Results

Plasma ADAMTS-7 levels were higher in patients with LVEF ≤35% than in those with LVEF >35% (6.73 ± 2.47 vs. 3.22 ± 2.05 ng/ml, P < 0.05). Plasma ADAMTS-7 levels were positively correlated with brain natriuretic peptide (BNP), left ventricular mass index (LVMI), left ventricular end-diastolic diameter (LVEDD), and left ventricular end-systolic diameter (LVESD), and negatively correlated with the 6-min walk test (P < 0.05). On receiver operating characteristic (ROC) curve analysis, a plasma ADAMTS-7 cutoff of 5.69 ng/ml gave a specificity of 61.0% and a sensitivity of 87.6% for the diagnosis of heart failure after AMI. Logistic regression analysis indicated that the association between ADAMTS-7 and heart failure after AMI was independent of traditional cardiovascular risk factors and other biomarkers (odds ratio = 1.236, 95% confidence interval: 1.023 to 1.378, P = 0.021).

Conclusions

Elevated ADAMTS-7 level may be involved in ventricular remodeling after AMI.
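The reported cutoff performance is just sensitivity and specificity evaluated at one point on the ROC curve. A minimal sketch follows; the scores and labels are hypothetical toy values, and only the 5.69 ng/ml cutoff comes from the study:

```python
def sens_spec_at_cutoff(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff is called positive.

    labels: 1 = has the condition (e.g. heart failure), 0 = does not.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Toy ADAMTS-7-like values in ng/ml (hypothetical patients)
scores = [7.1, 6.0, 5.7, 4.2, 6.5, 3.1, 5.0, 2.8]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sens, spec = sens_spec_at_cutoff(scores, labels, 5.69)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity produces the ROC curve from which such an operating point is chosen.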

20.

Background

Bosentan is a dual endothelin receptor antagonist initially introduced for the treatment of pulmonary arterial hypertension and recently approved for the treatment of digital ulcers in patients with systemic sclerosis (SSc). Our clinical observations indicate that bosentan therapy may be associated with an increased frequency of centrofacial telangiectasia (TAE). Here, we sought to analyze the frequency of TAE in patients with SSc who were treated with either bosentan or the prostacyclin analog iloprost.

Methods

We conducted a retrospective analysis of 27 patients with SSc undergoing therapy with either bosentan (n = 11) or iloprost (n = 16). Standardized photodocumentation of all patients (n = 27) was obtained ten months after therapy initiation and analyzed. A subgroup of patients (bosentan: n = 6; iloprost: n = 6) was additionally photodocumented prior to therapy initiation, enabling an intraindividual analysis over the course of therapy.

Results

After ten months of therapy, patients with SSc receiving bosentan showed a significantly (P = 0.0028) higher frequency of centrofacial TAE (41.6 ± 27.8) than patients with SSc receiving iloprost (14.3 ± 13.1). Detailed subgroup analysis revealed that the frequency of TAE in the bosentan group (n = 6 patients) increased markedly and significantly (P = 0.027), by 44.4, after ten months of therapy (TAE at therapy initiation: 10.8 ± 5.1; after ten months: 55.2 ± 29.8), whereas only a minor increase of 1.9 was observed in the iloprost group (n = 6 patients; TAE at therapy initiation: 18.3 ± 14.5; after ten months: 20.2 ± 15.5), without reaching statistical significance (P = 0.420).

Conclusions

The use of bosentan may be associated with an increased frequency of TAE in patients with SSc. Patients should be informed about this potential adverse effect prior to therapy. Treatment options may include camouflage or laser therapy.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号