Similar Articles

20 similar articles found
1.

Background/Objective:

Medication reconciliation at transitions of care decreases medication errors, hospitalizations, and adverse drug events. We compared inpatient medication histories and reconciliation across disciplines and evaluated the nature of discrepancies.

Methods:

We conducted a prospective cohort study of patients admitted from the emergency department at our 760-bed hospital. Eligible patients had medication histories obtained and reconciled, in sequence, by the admitting nurse (RN), certified pharmacy technician (CPhT), and pharmacist (RPh). Discharge medication reconciliation was not altered. Admission and discharge discrepancies were categorized by discipline, error type, and drug class and were assigned a criticality index score; a discrepancy rating system was used to measure discrepancies systematically.

Results:

Of 175 consented patients, 153 were evaluated. Total admission and discharge discrepancies numbered 1,461 and 369, respectively. The average number of medications per participant was 8.59 (1,314 total) at admission and 9.41 (1,374 total) at discharge. Most discrepancies were committed by RNs: 53.2% (777) at admission and 56.1% (207) at discharge. The majority were omitted or incorrect medications. RNs had a significantly higher admission discrepancy rate per medication (0.59) compared with CPhTs (0.36) and RPhs (0.16) (P < .001). RPhs corrected significantly more discrepancies per participant than RNs (6.39 vs 0.48; P < .001); the average criticality index reduction was 79.0%. Estimated cost savings from prevented adverse drug events (pADEs) were $589,744.
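The per-medication rates above follow directly from the reported counts. A minimal arithmetic check, using only figures stated in this abstract (the CPhT and RPh raw counts are not reported, so only the RN rate is reproduced):

```python
# Reproduce the admission figures reported above from the raw counts.
participants = 153
admission_medications = 1_314          # total medications at admission
rn_admission_discrepancies = 777       # 53.2% of the 1,461 total discrepancies

avg_meds_per_participant = admission_medications / participants
rn_rate_per_medication = rn_admission_discrepancies / admission_medications

print(round(avg_meds_per_participant, 2))  # 8.59
print(round(rn_rate_per_medication, 2))    # 0.59
```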

Conclusions:

RPhs committed the fewest discrepancies compared with RNs and CPhTs, resulting in more accurate medication histories and reconciliation. RPh involvement also prevented the greatest number of medication errors, contributing to considerable pADE-related cost savings.

Key Words: admission, evaluation study, discharge, medication reconciliation

Obtaining medication histories and conducting medication reconciliation are challenging tasks with the advent of new molecular entities and orphan drugs.1 As Franklin reported, “Patients who once came into the [physician] office carrying their medications in a purse, or pocket, now need a shopping bag.”2 The importance of accurate medication histories cannot be overemphasized; nearly 27% of all hospital prescribing errors originate from incorrect admission medication histories, over 70% of drug-related problems are only discovered through patient interview, and more than 50% of discharge discrepancies are associated with admission discrepancies.3-6

In 2010, an Institute of Medicine report estimated that if hospitals prevented adverse drug events (pADEs) and redundant tests, the associated cost savings would be nearly $25 billion annually.7 One organization decreased inpatient care costs by 30% when no medication reconciliation errors were reported over 24 months.7 Multiple organizations have supported medication reconciliation to improve quality of care, reduce preventable hospital admissions and readmissions, and decrease the incidence of adverse health care-associated conditions.8-11 Although The Joint Commission does not indicate the discipline to perform this role, evidence supports the role of registered pharmacists (RPhs), pharmacy students, and pharmacy technicians in collecting accurate medication histories. RPhs should be involved when high-risk medications are identified, more than 5 medications are reported, or patients are elderly.6,8,11-40 Therefore, our primary study objective was to compare inpatient medication histories and reconciliation processes across disciplines and to evaluate the nature of discrepancies using a novel method.

2.

Background:

It is unknown whether coagulation properties differ between renal transplant and nontransplant patients.

Objective:

To assess whether renal transplant patients on intravenous (IV) heparin, titrated to therapeutic activated partial thromboplastin times (aPTT; 56-93 seconds), experienced a higher rate of bleeding compared to nontransplant patients.

Methods:

Twenty-nine renal transplant and 29 nontransplant patients receiving IV heparin for a deep vein thrombosis, pulmonary embolism, atrial fibrillation, or acute coronary syndrome were randomly identified through a retrospective chart review.

Results:

Renal transplant patients had higher bleeding rates on IV heparin therapy compared to nontransplant patients (31% vs 6.9%, respectively; P = .041). Renal transplant patients experienced a drop in hemoglobin of at least 1 g/dL or the need for a transfusion more often than nontransplant patients (69% vs 45%, respectively; P = .111), although the difference was not statistically significant.

Conclusions:

Further research is necessary to identify the factors contributing to increased rates of bleeding in renal transplant patients on IV heparin and to determine the ideal aPTT to appropriately balance anticoagulation in renal transplant patients.

In patients who need anticoagulation, it is a challenge to provide the optimal balance between enough anticoagulant to prevent the formation of a thrombus and too much, which may cause a bleeding event.1 As many as 10% of adult patients experience thrombotic events following renal transplantation.2 Most thrombotic events occur in the initial 48 hours after surgery, but they can occur up to 14 days after renal transplantation.2 It is especially important in this population to achieve that balance in anticoagulation therapy, because immediate graft loss may occur if patients experience thrombosis of the renal artery or vein.2 Heparin may be used in the perioperative phase in an attempt to prevent thrombotic events, especially in patients with hypercoagulable states.3-5

In the general population, major bleeding occurs in up to 7% of patients who receive therapeutic intravenous (IV) heparin.1,6 Because one of the risk factors for heparin-induced bleeding is recent surgery, an increased bleeding risk would be expected in the early postoperative transplantation period.6

Patients with chronic renal failure may have impaired hemostasis. Platelet production may be disturbed due to the accumulation of protein biodegradation products. Bleeding tendencies may be further increased due to clotting factor deficiencies and vascular defects. Conversely, in uremic patients, clotting factors VII and XIII and fibrinogen may be increased, leading to an increased thrombosis risk. The activity of the clotting inhibitors protein C and protein S, antithrombin III, and heparin cofactor II may also be impaired. Unfortunately, complete improvement in hemostasis does not occur after successful renal transplantation.7

A previous study by Mathis et al2 evaluated bleeding events due to therapeutic IV heparin in renal transplant patients to prevent perioperative thrombosis. They found no link between the immunosuppressive agents used in the study (primary agents: cyclosporine, mycophenolate, prednisone; alternatives: tacrolimus and rapamycin) and risk of bleeding. However, there was a trend toward increased rates of bleeding in patients who received antibiotic prophylaxis for surgery for longer periods of time (P = .053); cefotetan was used more frequently in patients who experienced bleeding (P = .091).

A literature search regarding bleeding rates in renal transplant patients found trials in the early postoperative transplantation period, with bleeding occurring in 60% to 64.3% of patients.2,5,8 No literature was found regarding bleeding rates in renal transplant patients who were receiving therapeutic IV heparin at any time beyond the early transplantation period. The perceived increase in susceptibility to bleeding in renal transplant patients receiving IV heparin (any time after transplantation) led to our assessment of renal transplant patients’ bleeding rates on IV heparin, titrated to a therapeutic activated partial thromboplastin time (aPTT; 56-93 seconds; 1.5 to 2 times normal, institution specific), compared to nontransplant patients.

3.

Purpose:

Few studies have explored the impact of using different methods for obtaining accurate medication histories on medication safety. This study was conducted to compare the accuracy and clinical impact of pharmacist medication histories obtained by electronic medical record review (EMRR) alone with those obtained by direct interviews combined with EMRR.

Methods:

This 18-week prospective study included patients who were admitted to the Inpatient Medicine Service at the study institution and who had a pharmacist-conducted medication reconciliation EMRR within 48 hours of hospital admission. A chart review was performed to collect data to determine whether differences existed in the number of discrepancies, recommendations, and medication errors between the EMRR alone group compared to the EMRR combined with the patient interview group.

Results:

Five hundred thirteen discrepancies were identified in the EMRR alone group compared to 986 in the combined EMRR and patient interview group (P < .001). Significantly more recommendations were made in the combined interview group than in the EMRR alone group (260 vs 97; P < .001). Fewer medication errors were identified in the EMRR alone group than in the combined interview group (55 vs 134; P < .001). The most common errors were omitted medications, followed by extra dose/failure to discontinue therapy and wrong dose/frequency errors.

Conclusion:

Pharmacist-conducted admission medication interviews combined with EMRR can potentially identify harmful medication discrepancies and prevent medication errors.

Key Words: medication reconciliation, pharmacist medication interviews

Patient safety is a national priority for The Joint Commission and the Institute of Medicine.1-3 It has been estimated that 25% of medication-related injuries are related to preventable medication errors.4-6 Many of these medication errors are related to unintentional medication discrepancies that occur during transitional points of health care, including hospital admissions, transfers, and discharge.1,3,6-8 According to The Joint Commission, medication reconciliation is defined as “…the process of identifying the medications currently being taken by an individual.”3 These medications are compared to newly ordered medications, and discrepancies are identified and resolved. Medication reconciliation is an essential process that health care systems need to implement to avoid unnecessary harm to patients related to medication errors. Approximately 46% of all medication errors and 20% of adverse drug events (ADEs) have been attributed to a lack of medication reconciliation.2,7 As a result, The Joint Commission mandated that institutions comply with National Patient Safety Goal 8 to “accurately and completely reconcile medications across the continuum of care” to prevent drug omissions, duplications, and drug interactions.3 Recently, The Joint Commission revised its Hospital National Patient Safety Goals related to medication reconciliation and currently requires hospitals to “maintain and communicate accurate patient medication information.”3 To accomplish this standard, a current list of the patient’s outpatient medications is obtained upon admission and then compared with the patient’s hospital medication orders in an effort to identify and resolve discrepancies.1,3 At discharge, The Joint Commission recommends that patients receive “written information on the medications” that they will be taking following discharge from the hospital and receive patient education on the “importance of managing” their medication information.3

Although The Joint Commission recommends that medication reconciliation should be performed at admission, the agency does not provide guidance for how health care institutions should effectively conduct this process. One strategy is to follow the Institute of Medicine’s recommendations to implement information technologies, including the use of electronic medical records and computerized physician order entry systems.9,10 Ideally, the use of these technologies would facilitate the effectiveness and efficiency of performing chart reviews and, thereby, the medication reconciliation process. Another strategy is to obtain a medication history by directly interviewing the patient and/or the patient’s caregiver.9

Studies have revealed that obtaining an accurate and complete medication history is an important step for initiating the medication reconciliation process.6-21 Results from a review of 22 studies demonstrated that 27% to 54% of patients had at least one medication error on hospital admission.6 In particular, several studies have described the value of pharmacist-obtained medication histories.6,13,15,16 These studies have demonstrated that pharmacists identified a higher number of medications or medication discrepancies compared to physicians and other nonphysician providers when obtaining medication histories. Other studies also demonstrate that pharmacist-initiated histories resulted in fewer medication errors15,18 and ADEs.11-13 Despite these benefits, many health care institutions do not require that pharmacists routinely perform medication interviews as part of the medication reconciliation process because of workload concerns and lack of pharmacy manpower.6,15 Moreover, with the use of information technologies, pharmacist-conducted interviews may not be necessary if pharmacists can obtain a complete and accurate medication list through electronic medical chart review. Few studies have explored the impact of using different methods for obtaining accurate medication histories on medication safety.9,22,23 This study was conducted to compare the accuracy and clinical impact of pharmacist medication histories obtained by electronic medical record review (EMRR) alone with those obtained by direct interviews combined with EMRR.

4.

Background:

For beta-lactams, the parameter that best predicts bacterial killing is the length of time the antibiotic concentration exceeds the minimum inhibitory concentration (MIC). Studies have demonstrated improved outcomes with extended infusion (4-hour) piperacillin-tazobactam (P-TZ) compared with traditional immediate infusions.
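The time-above-MIC rationale can be illustrated with a simple single-dose, one-compartment simulation. The parameters below (dose, volume of distribution, half-life, MIC) are illustrative assumptions, not values from the study; the point is only that giving the same dose over a longer infusion increases the fraction of the dosing interval spent above the MIC:

```python
import math

def conc(t, dose_mg, vd_l, k, tinf_h):
    """Single-dose one-compartment IV infusion concentration (mg/L) at time t (h)."""
    r = dose_mg / tinf_h                              # infusion rate, mg/h
    c_plateau = r / (k * vd_l)                        # concentration if infused forever
    if t <= tinf_h:
        return c_plateau * (1 - math.exp(-k * t))     # rising during the infusion
    c_end = c_plateau * (1 - math.exp(-k * tinf_h))
    return c_end * math.exp(-k * (t - tinf_h))        # first-order decay afterward

def pct_time_above_mic(mic, tinf_h, dose_mg=4000, vd_l=20.0, thalf_h=1.0,
                       interval_h=8.0, step=0.01):
    """Percent of the dosing interval with concentration above the MIC (numeric)."""
    k = math.log(2) / thalf_h
    times = [i * step for i in range(int(interval_h / step))]
    above = sum(1 for t in times if conc(t, dose_mg, vd_l, k, tinf_h) > mic)
    return 100 * above / len(times)

# Same hypothetical 4 g dose against an MIC of 16 mg/L over an 8-hour interval:
print(pct_time_above_mic(mic=16, tinf_h=0.5))  # 30-minute infusion
print(pct_time_above_mic(mic=16, tinf_h=4.0))  # 4-hour extended infusion (higher %T>MIC)
```

With these assumed parameters the extended infusion keeps the concentration above the MIC for a substantially larger share of the interval, which is the pharmacodynamic argument behind the conversion described here.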

Objectives:

To describe how one institution made the conversion from immediate infusion of P-TZ to a 4-hour extended infusion utilizing an approved automatic therapeutic substitution, staff education, and smart pump technology, and to examine the impact of this conversion on patient length of stay and pharmacy costs.

Methods:

With approval from the Pharmacy and Therapeutics (P&T), Antimicrobial Stewardship, and Medical Executive Committees, the decision was made to automatically convert all P-TZ orders to a standardized 4-hour infusion given every 8 to 12 hours depending on renal function. The medical records of all adult patients receiving P-TZ during the 12 months before and the 24 months after implementation of the 4-hour extended infusion were retrospectively analyzed for length of stay and mortality. The cost of P-TZ was also assessed during these time periods.

Results:

With the help of smart pump technology, our institution successfully completed a conversion to 4-hour extended infusion P-TZ. Through this conversion, pharmacy expenditure on P-TZ was reduced by 38%; the total cost savings was $387,980.62 for the 24-month postintervention phase. Extended infusion P-TZ reduced hospital length of stay by 0.6 days (P < .05), resulting in an additional cost savings of $1,689,480 for the 24-month postintervention phase. A conservative estimate of total cost savings to the hospital in the first 24 months, including the reduction in P-TZ expenditures, was $2,077,460.

Key Words: antibiotics, length of stay, piperacillin-tazobactam, smart pump technology

Infections caused by antibiotic-resistant bacteria are a great concern to public health, as growing resistance among both gram-positive and gram-negative pathogens is observed.1 Data from the Centers for Disease Control and Prevention (CDC) show rapidly rising rates of infection due to methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus faecium (VRE), and fluoroquinolone-resistant Pseudomonas aeruginosa.2 Even more concerning is the fact that the number of new antibacterial drugs approved for marketing in the United States is dramatically decreasing.3 The increase in resistance of organisms and the availability of fewer effective antibiotics have led infectious disease specialists to reevaluate optimal methods of administering intravenous (IV) antibiotics.4,5

The relationship between the pharmacokinetics and pharmacodynamic activity of beta-lactam antibiotics influences their dose and effectiveness in eradicating pathogens, and the time above the minimum inhibitory concentration (MIC) is an important marker for efficacy.6-10 Studies have shown that bactericidal effects of penicillins are maximized when drug concentrations are higher than the MIC for greater than 50% of the dosing interval.11
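The conservative total reported above is the sum of the two component savings; a quick check of the arithmetic, using only the figures stated in this abstract:

```python
# Component savings over the 24-month postintervention phase, as reported.
drug_cost_savings = 387_980.62   # 38% reduction in P-TZ pharmacy expenditure
los_cost_savings = 1_689_480.00  # attributed to the 0.6-day reduction in length of stay

total = drug_cost_savings + los_cost_savings
print(f"${total:,.2f}")  # matches the ~$2,077,460 conservative estimate
```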

5.
Background:

Opioid utilization for acute pain has been associated with numerous adverse events, potentially resulting in longer inpatient stays and increased costs.

Objective:

To examine the effect of intravenous (IV) acetaminophen administered intraoperatively on postoperative opioid consumption in adult subjects who underwent hip or knee replacement.

Methods:

This retrospective cohort study evaluated postoperative opioid consumption in 176 randomly selected adult subjects who underwent hip or knee replacement at Duke University Hospital (DUH). Eighty-eight subjects received a single, intraoperative, 1 g dose of IV acetaminophen. The other subjects did not receive any IV acetaminophen. This study evaluated mean opioid consumption (in oral morphine equivalents) during the 24-hour postoperative period in the 2 groups. Other endpoints included length of stay in the postanesthesia care unit (PACU), incidence of oversedation, need for acute opioid reversal, and adjunctive analgesic utilization.

Results:

Subjects who were given a single dose of intraoperative acetaminophen received an average of 149.3 mg of oral morphine equivalents during the 24 hours following surgery compared to 147.2 mg in participants who were not exposed to IV acetaminophen (P = .904). The difference in average length of PACU stay between the IV acetaminophen group (163 minutes) and those subjects not exposed to IV acetaminophen (169 minutes) was not statistically significant (P = .588). No subjects in the study experienced oversedation or required acute opioid reversal.

Conclusion:

There was not a statistically significant difference in postoperative opioid consumption between patients receiving and not receiving IV acetaminophen intraoperatively.

Key Words: analgesia, intravenous acetaminophen, orthopedic surgery

Acute postoperative pain management continues to be a difficult issue for health care providers and their patients.
It has previously been estimated that up to 80% of patients experience acute pain after surgery, with 86% of those patients reporting moderate, severe, or extreme pain.1 Unrelieved or inadequately managed postoperative pain can adversely impact both patients and health care institutions. Patients with inadequately managed acute postoperative pain are at increased risk of developing chronic pain.2 Additionally, immunosuppression from unrelieved pain delays wound healing, slows recovery, and increases risk of infection.3 Another patient-related consequence of intense postoperative pain is delayed ambulation, which increases risk of thromboembolism and delays discharge. Traditionally, the consequences of undertreating acute postoperative pain for hospitals have been extended length of stay, increased risk of readmission, and increased cost of care.3

Although postoperative pain management has always been a challenge for health care institutions, the need to meet this challenge has been further amplified in recent years. In 2001, The Joint Commission (TJC) developed standards requiring hospitals to focus on appropriate pain management, monitoring, and education.4 More recently, patient satisfaction surveys related to inpatient stays have been reported via the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS).5 This 27-question survey includes 2 questions specifically inquiring about pain management during the inpatient stay. Under the Affordable Care Act of 2010, the Centers for Medicare and Medicaid Services (CMS) have established hospital reimbursement based on HCAHPS scores.
This policy took effect at the start of fiscal year 2013 (all patient discharges beginning October 1, 2012).6 Although there have always been negative repercussions for hospitals for undermanaging postoperative pain, these recent measures from TJC and CMS further incentivize hospitals to appropriately treat patients who experience acute postoperative pain.

Traditionally, opioid analgesics have served as the foundation for the management of moderate-to-severe acute postoperative pain.7 However, their utilization is associated with a myriad of adverse effects, including pruritus, gastrointestinal effects (nausea, vomiting, constipation), central nervous system effects (somnolence, dizziness, oversedation), and respiratory depression.8 Multiple reports have linked increased opioid utilization to a greater likelihood of suffering opioid-related adverse events.9-11 Previous literature has associated opioid-related adverse events with increased length of stay and hospitalization costs in a postoperative patient population.9 The potential consequences of over-reliance on opioids are among the reasons why recent guidelines published by the American Society of Anesthesiologists (ASA) on acute perioperative pain management advocate for a multimodal approach to acute analgesia (targeting different mechanisms of postoperative pain) whenever possible.12 The rationale for utilization of nonopioid adjunctive analgesics in conjunction with opioids for acute surgical pain is to maximize patient pain control while avoiding excessive opioid consumption in the postoperative period. Potential nonopioid agents to be utilized in this setting include nonsteroidal anti-inflammatory drugs (NSAIDs), cyclooxygenase-2 (COX-2)–selective medications, regional blockade with local anesthetics, pregabalin, gabapentin, or acetaminophen. Furthermore, ASA members recommend that patients receive an around-the-clock regimen of NSAIDs, COX-2 medications, or acetaminophen.
In addition to the ASA guidelines, other sources have advocated for a multimodal approach to postoperative analgesia.13,14

Intravenous (IV) acetaminophen is one of the medications utilized in a multimodal analgesia regimen.15 Acetaminophen has long been available in the United States as an oral or rectal formulation. In November 2010, the US Food and Drug Administration (FDA) approved an IV formulation for the reduction of fever, management of mild-to-moderate pain, and management of moderate-to-severe pain with adjunctive opioid analgesics. IV acetaminophen reaches a 70% higher maximum concentration compared to the same dose of oral acetaminophen.16 In addition to having a higher maximum concentration (Cmax) than oral acetaminophen, the IV dosage form reaches its Cmax more quickly than its oral counterpart. However, overall exposure as measured by total area under the concentration-time curve (AUC) is very similar to that of the oral formulation. A single-dose study published by Singla and colleagues17 in 2012 showed that 1 g of IV acetaminophen achieved earlier and higher plasma and cerebrospinal fluid levels than equivalent doses of both oral and rectal acetaminophen.

Two pivotal studies were conducted to assess the impact of IV acetaminophen on postoperative pain.18,19 Sinatra and colleagues18 compared IV acetaminophen 1 g every 6 hours to placebo in adults who underwent total hip or knee replacement. In this study, subjects who received IV acetaminophen reported decreased pain intensity at 6 (P < .05) and 24 hours (P < .01). Additionally, subjects in the IV acetaminophen group consumed significantly less IV morphine at 6 (-46%) and 24 (-33%) hours after randomization. Wininger and colleagues19 evaluated the use of IV acetaminophen (1 g every 6 hours or 650 mg every 4 hours for 24 hours) versus placebo in adults who underwent abdominal laparoscopic surgery.
Patients receiving either of the IV acetaminophen regimens experienced decreased pain intensity (P < .007 for 1 g every 6 hours; P < .019 for 650 mg every 4 hours). However, no differences were noted in opioid consumption during 0 to 12 hours or 12 to 24 hours after randomization. These are the 2 procedure-related pain studies cited in the prescribing information for IV acetaminophen, but it should be noted that these studies did not randomize subjects until the morning following procedural completion and do not address the intraoperative or immediate postoperative periods.

Duke University Hospital (DUH) added IV acetaminophen to formulary in March 2011, with restrictions associated with its use. These restrictions included limitation of the agent to patients unable to tolerate an oral diet and placement of an automatic stop time of 24 hours on each order. A medication use evaluation (MUE) conducted after IV acetaminophen was added to formulary revealed that it was most frequently utilized in the orthopedic surgical population. Furthermore, the MUE showed that many patients were receiving a single dose intraoperatively before transitioning to oral nonopioid adjunctive analgesics. The purpose of intraoperative IV acetaminophen administration at DUH was to maximize the pain-reducing potential of the medication based on its route of administration, with the patient experiencing the benefit upon arrival in the postanesthesia care unit (PACU). It is policy at DUH that the patient must have a pain score of 4/10 or less before leaving the PACU, so the use of IV acetaminophen prior to surgical completion may allow patients to become eligible to leave the PACU more quickly. Because clinical trials investigated the use of IV acetaminophen postoperatively, its intraoperative use was investigated based on historical DUH records.
The main objectives of this study were to assess changes in postoperative opioid consumption, length of PACU stay, incidence of oversedation, and the need for acute opioid reversal in adults who underwent hip or knee replacement and received IV acetaminophen intraoperatively.

6.

Background:

Alcohol withdrawal symptoms can be difficult to manage and may lead to an intensive care unit (ICU) admission. Patients experiencing severe alcohol withdrawal often require high doses of sedatives, which can lead to respiratory depression and the need for endotracheal intubation. Dexmedetomidine, an alpha-2 adrenoreceptor agonist, provides adequate sedation with little effect on respiratory function when compared to other sedatives.

Objective:

To evaluate sedation with a continuous infusion of dexmedetomidine versus propofol and/or lorazepam in critically ill patients experiencing alcohol withdrawal.

Methods:

A retrospective chart review was conducted on ICU admissions between March 2002 and April 2009 for alcohol withdrawal patients who necessitated treatment with a continuous infusion of dexmedetomidine, propofol, and/or lorazepam. Primary outcomes included the incidence of mechanical ventilation, length of mechanical ventilation (if applicable), and ICU and hospital length of stay.

Results:

Fifteen patients were treated with a continuous infusion of dexmedetomidine, and 17 were treated with an infusion of propofol and/or lorazepam. Two patients (13.3%) required intubation and mechanical ventilation in the dexmedetomidine group versus 10 (58.8%) in the propofol and/or lorazepam group (P = .006). Length of stay in the ICU was 53 hours for patients treated with dexmedetomidine versus 114.9 hours in the propofol and/or lorazepam group (P = .016). Hospital length of stay was less for the dexmedetomidine group, 135.8 hours versus 241.1 hours in the propofol and/or lorazepam group (P = .008).

Conclusions:

Dexmedetomidine use was associated with a decrease in the incidence of endotracheal intubation when used to sedate patients experiencing alcohol withdrawal. Patients treated with dexmedetomidine were transferred to a lower level of care faster and were discharged from the hospital sooner.

Key Words: alcohol, dexmedetomidine, lorazepam, propofol, withdrawal

There are approximately 18.3 million people in the United States dependent on or abusing alcohol and 2.9 million people requiring treatment for problems related to alcohol use.1 The impact of alcohol withdrawal syndrome can be devastating, both physically and neurologically. The syndrome can include headache, anxiety, hallucinations, nausea and vomiting, sweating, seizures, irritability, and the most severe form of alcohol withdrawal, delirium tremens. Patients experiencing delirium tremens have a mortality rate of up to 5%.2 The American Society of Addiction Medicine guidelines for the management of alcohol withdrawal delirium recommend sedative-hypnotic drugs, such as benzodiazepines, as the primary agents for managing alcohol withdrawal syndrome.3

The goal of alcohol withdrawal treatment is to relieve the patient’s agitation and prevent the further development of more severe symptoms. Some patients may experience symptoms such as increased levels of anxiety, hallucinations, and delirium tremens. In these severe cases, escalating benzodiazepine doses (to include initiation of a continuous infusion) or initiation of another sedative, such as propofol or phenobarbital, becomes necessary to control agitation. The use of sedatives can cause a decrease in respiratory drive, which can lead to patients requiring transfer to a higher level of care with the potential for intubation and mechanical ventilator support.

At North Colorado Medical Center (NCMC), patients undergoing alcohol withdrawal are initially treated with benzodiazepines.
If escalating doses of benzodiazepines are unable to control agitation and other alcohol withdrawal symptoms, patients are evaluated by the physician for transfer to the intensive care unit (ICU). In the past, the standard of care in the NCMC ICU for patients experiencing severe alcohol withdrawal not controlled by intermittent benzodiazepines was the initiation of a benzodiazepine and/or propofol infusion based on the physician’s assessment and preference. Often these patients required intubation and mechanical ventilation. Recently, however, the sedation of patients experiencing severe alcohol withdrawal is increasingly being managed with dexmedetomidine in the ICU at NCMC.

Clonidine has historically been used for treatment and prophylaxis of the symptoms of alcohol withdrawal.3-9 Dexmedetomidine is a centrally acting, relatively selective, alpha2-adrenergic agonist similar to clonidine with sedative and analgesic properties. Dexmedetomidine reduces the stress response, decreases norepinephrine and epinephrine levels, and attenuates increases in heart rate and blood pressure without depressing the respiratory drive.10,11

The use of dexmedetomidine has been noted in multiple case reports, case series, and one small randomized controlled trial as a possibly effective agent for the management of alcohol withdrawal.12-19 The case reports and case series primarily reported on safety, reduced benzodiazepine doses, and reduced delirium scores with the use of dexmedetomidine in alcohol withdrawal patients. The one randomized, blinded, placebo-controlled trial published to date, by Mueller et al, compared dexmedetomidine to placebo in patients with severe alcohol withdrawal. The primary endpoint was benzodiazepine requirements in the first 24 hours and cumulative dose over the first 7 days of hospitalization.
They reported a reduced 24-hour benzodiazepine dose in the dexmedetomidine group and no difference in the 7-day cumulative dose between groups.12

One of the main advantages of dexmedetomidine is that it does not cause respiratory depression.11 This is especially important in patients admitted to the ICU for severe alcohol withdrawal. Studies have demonstrated that patients admitted to the ICU with severe alcohol withdrawal have a high rate of intubation, reportedly 22% to 65%.20 Ventilator-associated pneumonia (VAP) can occur in 10% to 20% of patients receiving greater than 48 hours of mechanical ventilation. Patients who contract VAP have increased hospital costs of more than $10,000 per day, increased ICU length of stay by 5 to 7 days, and, in some reports, increased mortality.21 Furthermore, intubation and mechanical ventilation on ICU day 1 has been recognized as a predictor of a longer length of hospital stay.22

Assessment and documentation of the effectiveness of dexmedetomidine for treatment of alcohol withdrawal, while growing rapidly, is still lacking in the medical literature. The purpose of this retrospective observational study was to evaluate the incidence and duration of mechanical ventilation and the length of ICU and hospital stay in alcohol withdrawal patients treated with dexmedetomidine, propofol, and/or lorazepam continuous infusions.

7.
Delirium is highly prevalent in the critically ill population and has been associated with numerous negative outcomes including increased mortality. The presentation of a delirious patient in the intensive care unit (ICU) is characterized by a fluctuating cognitive status and inattention that varies dramatically among patients. Delirium can present in 3 different motoric subtypes: hyperactive, hypoactive, and mixed. Two tools, the Intensive Care Delirium Screening Checklist and the Confusion Assessment Method for the ICU, are validated and recommended for the detection of delirium in critically ill patients. The identification of delirium in a critically ill patient should be facilitated using one of these tools. An intermediate form of delirium known as subsyndromal delirium also exists, although the significance of this syndrome is largely unknown. Another phenomenon known as sedation-related delirium has been recently described, although more research is needed to understand its significance. Patients in the ICU are exposed to many risk factors for developing delirium; controlling these risk factors is essential for preventing delirium development in critically ill patients. Nonpharmacologic interventions have been shown to prevent patients from developing delirium. Prevention is crucial because once delirium develops, pharmacologic therapy is limited.

Delirium is highly prevalent in critically ill patients and has been reported to occur in over 80% of mechanically ventilated patients.1–3 A host of negative outcomes have been associated with delirium including increased intensive care unit (ICU) mortality, increased inpatient mortality, increased ICU length of stay, increased inpatient length of stay, and long-term cognitive impairment.2,4–8 Unfortunately, the pathophysiology of this syndrome is not well understood. 
Proposed mechanisms for pathogenesis include neuroinflammation and neurotransmitter imbalances.9 The limited knowledge of delirium pathogenesis contributes to the difficulties encountered in managing this common, burdensome syndrome.

8.

Background:

Preventing intravenous (IV) preparation errors improves patient safety and reduces costs, but the magnitude of those savings has not been quantified.

Objective:

To estimate the financial benefit of robotic preparation of sterile medication doses compared to traditional manual preparation techniques.

Methods:

A probability pathway model based on published rates of errors in the preparation of sterile doses of medications was developed. Literature reports of adverse events were used to project the array of medical outcomes that might result from these errors. These parameters were used as inputs to a customized simulation model that generated a distribution of possible outcomes, their probability, and associated costs.

Results:

By varying the important parameters across ranges found in published studies, the simulation model produced a range of outcomes for all likely possibilities. Thus it provided a reliable projection of the errors avoided and the cost savings of an automated sterile preparation technology. The average of 1,000 simulations resulted in the prevention of 5,420 medication errors and associated savings of $288,350 per year. The simulation results can be narrowed to specific scenarios by fixing model parameters that are known and allowing the unknown parameters to range across values found in previously published studies.
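The probability pathway simulation described above can be sketched as a simple Monte Carlo loop: sample each uncertain parameter from a plausible range, compute the errors prevented and the associated savings, and average over many runs. All parameter ranges below are illustrative placeholders for the published values the study drew on, not the study's actual inputs.

```python
import random

def simulate_once(rng):
    # Every range here is an illustrative assumption, not a figure from the study.
    n_doses = rng.randint(80_000, 120_000)       # sterile doses prepared per year
    manual_error_rate = rng.uniform(0.03, 0.09)  # manual preparation error rate
    robot_error_rate = rng.uniform(0.001, 0.01)  # assumed robotic error rate
    p_harm = rng.uniform(0.005, 0.02)            # errors that reach and harm a patient
    cost_per_ade = rng.uniform(3_000, 9_000)     # cost of one preventable ADE ($)

    errors_prevented = n_doses * (manual_error_rate - robot_error_rate)
    savings = errors_prevented * p_harm * cost_per_ade
    return errors_prevented, savings

rng = random.Random(0)  # fixed seed so the sketch is reproducible
runs = [simulate_once(rng) for _ in range(1_000)]
mean_errors = sum(e for e, _ in runs) / len(runs)
mean_savings = sum(s for _, s in runs) / len(runs)
```

Fixing the parameters that are known for a given pharmacy while letting the unknown ones range, as the Results paragraph describes, corresponds to replacing the relevant `rng.uniform(...)` calls with constants.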

Conclusions:

The use of a robotic device can reduce health care costs by preventing errors that can cause adverse drug events.

Key Words: drug contamination, medication errors, patient safety

Adverse events associated with errors in compounding sterile medications have again become an issue of public concern.1 Pharmacists must renew their vigilance in ensuring the accuracy and sterility of compounded sterile medications if they are to maintain the public trust and avoid onerous regulation. Pharmacists are also focusing more on the clinical use of medications by delegating technical tasks, including drug preparation, to qualified technicians or by using technology.2 This transformation of the pharmacy profession will require a careful transition in critical tasks, such as compounding sterile preparations, for quality to be ensured, if not improved.

This is not the first time that errors associated with compounding sterile preparations have been reported. Previous incidents prompted the US Food and Drug Administration (FDA) to consider regulating compounding as manufacturing. 
Concerns within the profession of pharmacy about impending regulations prompted the publication of the Draft Guidelines on Quality Assurance for Pharmacy-Prepared Sterile Products and the American Society of Health-System Pharmacists (ASHP) Technical Assistance Bulletin on Quality Assurance for Pharmacy-Prepared Sterile Products in 1993.3,4 In 2002, the United States Pharmacopeia (USP) published an enforceable standard for pharmaceutical compounding of sterile preparations in Chapter <797>.5 Most recently, increasing concerns about the quality of compounded sterile preparations resulted in legislation that authorized the FDA to register “compounding outsourcing facilities” under Section 503B of the Federal Food, Drug, and Cosmetic Act.6

Compounding sterile preparations in a traditional way is an error-prone method, because it relies on the performance of humans.7–10 Robotics have the potential to improve accuracy and quality, as they are efficient in performing redundant tasks, such as preparing large numbers of sterile doses of medication. This study used a predictive modeling approach to estimate the potential cost savings based on reduced errors resulting from the use of an automated robotic technology. The robotic device evaluated in this project was RIVA (Intelligent Hospital Systems, Winnipeg, Manitoba, Canada).

The RIVA system prepares medications for syringes and intravenous (IV) bags in an aseptic environment using high-efficiency particle absorption (HEPA)–filtered air flow, positive air pressure containment, and continuous air quality monitoring within the compounding chamber of the robot situated in hospital central pharmacies. Negative air pressure configurations are used in the preparation of chemotherapy in order to prevent staff exposure to potentially dangerous cytotoxic medications. The system can be configured for general hospitals, cancer hospitals, infusion centers, and pediatric hospitals. 
Technicians load drug and diluent vials from inventory into the compounding chamber; the drugs are transferred within the chamber via sterile syringes that are pulsed with high-energy ultraviolet (UV) light to ensure disinfection of vial puncture sites. Drug identity is monitored with barcoding, and volumetric weight checks are performed with electronic scales throughout the process. Final preparations are labeled in human- and machine-readable form to eliminate any re-labeling errors and to create a complete electronic audit trail.11,12

9.
10.

Objective:

To evaluate the appropriate dose of enoxaparin for venous thromboembolism (VTE) prophylaxis in patients with extreme obesity.

Methods:

A literature search was performed using MEDLINE (1950-April 2013) to analyze all English-language articles that evaluated incidence of VTE and/or anti-Xa levels with enoxaparin for thromboprophylaxis in patients with extreme obesity.

Results:

Eight studies were included in the analysis. Six of the studies were done in patients undergoing bariatric surgery. Mean body mass index ranged from 44.9 to 63.4 kg/m2 across studies. Studies done with bariatric surgery patients utilized doses of enoxaparin that ranged from the standard dose of 30 mg subcutaneous (SQ) every 12 hours to 60 mg SQ every 12 hours. Other studies evaluated doses ranging from 40 mg SQ every 24 hours to 0.5 mg/kg/day. Only 3 studies evaluated the incidence of VTE as the primary endpoint; the other studies evaluated anti-Xa levels. The studies showed that appropriate anti-Xa levels were achieved more often with higher than standard doses of enoxaparin. One study showed that enoxaparin 40 mg SQ every 12 hours decreased the incidence of VTE in patients undergoing bariatric surgery compared to standard doses. Overall risk of bleeding was similar between study groups.

Conclusions:

Higher than standard doses of enoxaparin may be needed for patients with extreme obesity. Patients undergoing bariatric surgery may benefit from enoxaparin 40 mg SQ every 12 hours. Additional large randomized, controlled trials are needed to determine the efficacy and safety of higher than standard doses of enoxaparin for VTE prophylaxis in patients with extreme obesity.

Key Words: enoxaparin, extreme obesity, prophylaxis, venous thromboembolism

Prophylaxis of venous thromboembolism (VTE), including deep vein thrombosis (DVT) and its extension pulmonary embolism (PE), is a mainstay of modern hospital care for many patients. In the United States, VTE is estimated to occur in up to 1 million people and accounts for over 200,000 deaths annually; more than half of VTE cases occur in the hospital setting.1

When enoxaparin, one of the most commonly used low-molecular-weight heparins (LMWH), is chosen over viable alternatives for VTE prophylaxis, it is most often dosed at 40 mg subcutaneous (SQ) every 24 hours or 30 mg SQ every 12 hours.2 For most patients, this has been proven to be a safe and effective dose.2 However, these fixed doses do not take into consideration that the distribution of LMWH is weight based, and the efficacy of standard doses in obese and extreme obese patients may be decreased, putting these patients at a higher risk of thromboembolism.3–5

The Centers for Disease Control and Prevention (CDC) defines extreme obesity as a body mass index (BMI) of greater than or equal to 40.0 kg/m2. With the CDC citing obesity and extreme obesity rates of 35.7% and 6.3% in the United States, an appropriate dosing strategy for enoxaparin thromboprophylaxis in these patients is needed.6

The most common laboratory test for monitoring the anticoagulation effect of enoxaparin is plasma anti-Xa level. This is an expensive test that is rarely performed due to the predictable pharmacodynamic profile of enoxaparin. Some patients may benefit from measuring anti-Xa levels. 
These include obese patients, patients with renal dysfunction, pediatric patients, and pregnant women. However, this recommendation is limited to patients receiving treatment doses of enoxaparin.2 Measuring anti-Xa levels in patients receiving enoxaparin for thromboprophylaxis is not a proven predictor of outcomes, and there is no consensus about an appropriate therapeutic range. Sanofi-aventis reports that mean peak anti-factor Xa activity was found to be 0.16 IU/mL and 0.38 IU/mL after the 20 mg and 40 mg SQ doses, respectively, were clinically tested.7 It has been suggested in the literature that peak anti-Xa levels between 0.2 and 0.5 U/mL obtained 3 to 5 hours following SQ injections of enoxaparin will be appropriate for thromboprophylaxis.8,9

Some studies have found a strong negative correlation between body weight and anti-Xa activity after a 40 mg SQ injection of enoxaparin.4,10 In addition, other studies have shown that obese and extremely obese patients may not achieve suggested anti-Xa levels when standard doses of enoxaparin are used for VTE prophylaxis.3,5,11

The 2012 American College of Chest Physicians guidelines recommend the use of an LMWH such as enoxaparin or low-dose unfractionated heparin (LDUH) in 3 distinct categories of patients for VTE prophylaxis.2 These include acutely ill patients with an increased risk of thrombosis and a low risk of bleeding, critically ill patients with a low risk of bleeding, and postsurgical patients with a moderate to high risk of VTE and a low risk of bleeding. The guidelines state, “It may be prudent to consult with a pharmacist regarding dosing in bariatric surgery patients and other patients who are obese who may require higher doses of LDUH or LMWH.”2(pp270-271)

Some institutions, including Temple University Hospital, are currently using higher than standard doses of enoxaparin for thromboprophylaxis in hospitalized patients with extreme obesity. 
Given the rise in the prevalence of extreme obesity, the purpose of this article is to provide an overview of the literature to better inform clinicians on the appropriate thromboprophylactic dose of enoxaparin in patients with extreme obesity.
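The interpretation rule cited above (a peak anti-Xa level of 0.2 to 0.5 IU/mL, drawn 3 to 5 hours after an SQ prophylactic dose) can be expressed as a small helper. This is an illustrative sketch only, not clinical guidance; the function name and return labels are invented for the example.

```python
def classify_prophylactic_anti_xa(peak_level, hours_post_dose):
    """Classify a peak anti-Xa level (IU/mL) against the suggested
    prophylactic range of 0.2-0.5 IU/mL cited in the text.

    Returns None if the sample was drawn outside the 3-5 hour
    post-dose window, since it then does not represent a true peak."""
    if not 3 <= hours_post_dose <= 5:
        return None
    if peak_level < 0.2:
        return "below range"   # possible underdosing, e.g., in extreme obesity
    if peak_level > 0.5:
        return "above range"
    return "within range"
```

Using the package-insert means quoted in the text, a 20 mg dose (0.16 IU/mL) falls below the suggested range while a 40 mg dose (0.38 IU/mL) falls within it.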

11.
Cardiovascular disease is a leading cause of morbidity and mortality. Individuals with underlying cardiovascular disease are at high risk for adverse outcomes from influenza infections. Although additional studies are needed, current evidence suggests the influenza vaccine may reduce the risk of cardiovascular death and coronary events. In addition to their overall efforts to encourage influenza vaccination for all eligible persons, pharmacists should pay special attention to these high-risk individuals.

Cardiovascular disease (CVD) continues to be a major public health problem in the United States affecting an estimated 1 in 3 adults.1 Approximately 17 million individuals have coronary heart disease (CHD), resulting in over 1.5 million acute myocardial infarctions (AMI) annually. The prevalence increases with advancing age such that 25% of men and 16% of women aged 60 years or older have CHD.2 Each year an estimated 380,000 individuals die in the United States from CHD, which continues to be the number one cause of death for both men and women.

Influenza results in more than 35,000 deaths and 225,000 hospitalizations in the United States annually.3 For persons with underlying CVD or diabetes, the risk of death and serious complications from influenza is especially high.4,5 Additionally, cardiovascular-related death is the leading cause of mortality during influenza season.6-8 Numerous studies have suggested a link between influenza and increased risk of cardiovascular events. For example, a systematic review of 39 studies found evidence that influenza may serve as a trigger for AMI.9 These studies have led to greater efforts to promote influenza vaccine for persons with known CVD. This article will discuss recent studies examining the possible cardioprotective effect of the influenza vaccine and the role of the pharmacist.

12.
Omega-3 fatty acids play an important role in cardiovascular health. Although it is suggested that individuals obtain these nutrients through diet, many prefer to rely on supplements. Fish oil supplements are widely used, yet large capsule sizes and tolerability make them less than ideal. Recently, krill oil has emerged as a potential alternative for omega-3 supplementation. This article will discuss what is known about krill oil and its potential use in cardiovascular risk prevention.

Consumption of the omega-3 fatty acids, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), has generally demonstrated numerous health benefits including lower rates of cardiovascular disease (CVD).1 These findings have been observed in studies using both increased dietary omega-3 fatty acids and supplementation.1 It should be noted, however, that not all studies support the use of omega-3 fatty acids. Controversy has been generated by recent studies indicating neutral effects on cardiovascular events among individuals receiving optimal drug therapy.2,3 In an effort to explain these discordant findings, it has been suggested that the ability to demonstrate cardiovascular benefit with omega-3 fatty acids may now be more challenging as the standard of care for CVD prevention has improved. Experts speculate that higher doses of EPA+DHA and longer study durations may be needed to provide benefit in an era of optimal medical therapy.4

Despite this ongoing debate, major professional organizations continue to support the use of omega-3 fatty acids from fish or fish oil supplementation to prevent CVD. Common dietary sources of these essential fatty acids include “oily” fish such as salmon, sardines, herring, and albacore tuna.1 As an alternative, EPA+DHA can also be obtained in various preparations of fish oil and krill oil. 
Fish oil has been extensively studied and remains one of the most commonly utilized supplements in the United States.5 However, despite the paucity of well-designed studies examining krill oil, the market share of this omega-3 source continues to increase. Recent data indicate that krill oil presently represents 14% of total omega-3 sales.6 This finding is likely the result of consumers seeking the potential health benefits of omega-3 fatty acids, while searching for an alternative to fish oil. Krill oil has been promoted to the public through media advertisements and consumer publications suggesting that one small krill oil pill a day provides cardioprotection similar to that of fish oil. In this article, we take a look at the current evidence and provide a perspective for pharmacists having to differentiate and recommend omega-3 products to patients for CVD prevention.

13.

Background:

Tardive dyskinesia (TD) is a potentially irreversible, chronic syndrome related to antipsychotic medication use characterized by hyperkinetic abnormal involuntary movements. Various studies have shown that development of TD is possible with both first- and second-generation antipsychotics. Regular monitoring for emergence of TD symptoms is recommended in clinical practice for early recognition and intervention.

Methods:

This is a retrospective, single-center, observational study of the effectiveness of a pharmacist-driven monitoring database for TD assessment. Subjects were adult inpatients at a state psychiatric hospital who received antipsychotic treatment for at least 3 or 6 months between January 2006 and December 2011. The primary objective was to assess compliance rates with TD monitoring based on facility policy before and after implementation of the database at 3 and 6 months following initiation of antipsychotic therapy.

Results:

A significant improvement in compliance with TD monitoring policy was seen after implementation of the database (2.9% vs 66.7%; P < .001). Compliance with TD monitoring did not differ between classes of antipsychotic medication, hospital units, or age groups.

Conclusion:

The results of this study demonstrate that pharmacists can help improve compliance with TD assessment and that monitoring databases may be useful for similar extended or long-term care settings to ensure timely assessment of patients for the development or progression of TD.

Key Words: antipsychotics, monitoring, tardive dyskinesia

Tardive dyskinesia (TD) is a potentially irreversible, chronic syndrome related to certain medications such as antipsychotics and phenothiazine antiemetics and is characterized by hyperkinetic abnormal involuntary movements.1,2 Most commonly, symptoms of TD are seen within the oral-facial region, but they can involve neuromuscular function in any body region. The Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.) research criteria for neuroleptic-induced TD describe involuntary movements of the tongue, jaw, trunk, or extremities that have developed after the use of a neuroleptic medication.3 These choreiform (ie, rapid, jerky, nonrepetitive), athetoid (ie, slow, sinuous, continual), or rhythmic (ie, stereotypies) movements must be present for at least 4 weeks. The involuntary movements can occur after exposure to a new neuroleptic medication (at least 3 months for most patients, but within 1 month for patients age 60 years or older), within 4 weeks of withdrawal from an oral neuroleptic medication, or within 8 weeks of withdrawal from a depot neuroleptic medication. The majority of patients who develop TD will experience mild symptoms and may be unaware of the movements.1 Approximately 10% of patients will develop moderate to severe symptoms that can lead to significant functional impairment.2

TD can be a severe and debilitating side effect of antipsychotic medications. It was anticipated that second-generation antipsychotics (SGA) would be associated with a lower risk of TD development than first-generation antipsychotics (FGA). 
Various studies have been conducted in the past 2 decades evaluating the incidence of extrapyramidal side effects, comparing SGAs with FGAs and placebo.4–7 In all studies, development of TD was seen in both classes of antipsychotics, leading to the recommendation that patients should be continually monitored while receiving all types of antipsychotic agents. A prospective cohort study evaluating the incidence of TD in an outpatient population with no previous history of TD and prior antipsychotic exposure demonstrated that new onset TD occurred at a rate of 0.056 versus 0.059 per year in those receiving an FGA or SGA, respectively.4 Further, it was found that those patients receiving both an FGA and an SGA had a rate of 0.096 per year. The study concluded that the incidence of TD associated with SGA was similar to FGA and was relatively unchanged since the 1980s. A recent survey of psychiatrists (n = 124) conducted in the United Kingdom described the current knowledge and practice relating to monitoring of TD.8 The study found a disparity among psychiatrists in terms of monitoring patients for TD, with 89% (n = 110) of respondents fully agreeing that psychiatrists should monitor for abnormal movements in patients on antipsychotics but only 66% (n = 82) reporting that they routinely complete the monitoring.

The debilitation of TD can be significant, with some cases leading to decreased social and occupational functionality; therefore, it is important that clinicians monitor for the appearance of symptoms.1 Two widely used rating scales that monitor for the emergence or progression of TD are the Abnormal Involuntary Movement Scale (AIMS) and the Dyskinesia Identification System: Condensed User Scale (DISCUS).9,10 The AIMS was developed by the National Institute of Mental Health in the 1970s to provide assessment of abnormal movements within a clinical and research setting.9,11 The exam is composed of 12 items that are divided into facial and oral movements, extremity 
movements, trunk movements, global judgments, and dental status categories.9,12 The DISCUS was developed and validated through a series of studies in the 1980s.10 The exam contains 15 items categorized by different sections based on body location: face, eyes, oral, lingual, head/neck/trunk, upper limbs, and lower limbs.

Consensus statements from the American Psychiatric Association (APA) for monitoring of TD state that patients should be evaluated for extrapyramidal side effects and TD before initiation of any antipsychotic medication with regular follow-up monitoring after starting an antipsychotic medication.13 Guidelines do not specify which rating scale is preferred by the organization, but the APA recommends that patients be evaluated for TD every 6 months while receiving an FGA and every 12 months while receiving an SGA. Those patients at high risk of developing TD, including the elderly and those having significant extrapyramidal side effects such as acute dystonic reactions or akathisia, should be examined every 3 months while receiving an FGA and every 6 months while taking an SGA.

Less than optimal monitoring practices have been observed in various practice settings. Therefore, in October 2009, a pharmacist-driven antipsychotic monitoring database was created at Center for Behavioral Medicine (CBM, formerly known as Western Missouri Mental Health Center), a 65-bed state psychiatric inpatient facility. The database was set up to provide reminders for monitoring of various parameters such as fasting lipid profiles, fasting glucose, waist circumference, and TD assessment. The electronic database was created using Microsoft Access and requires manual input of patients along with due dates for their next exam. Each month, a pharmacist generates a report listing patients due for TD assessment follow-up. The database prompts for reassessment when patients are receiving any FGA or SGA. At CBM, TD assessment is conducted by trained physicians and pharmacists. 
After assessments are completed, the patient profiles and due dates are updated within the monitoring database. The DISCUS was utilized for TD assessment until July 2010 when the facility switched to using the AIMS.

CBM has a policy for TD monitoring that follows the APA recommendations for high-risk patients. All patients must be screened for TD within 48 to 72 hours of admission to the facility regardless of current medication regimen in order to record a baseline assessment.14 CBM requires that patients receiving a scheduled FGA or with a diagnosis or history of TD regardless of type of antipsychotic drug be monitored every 3 months and those taking a scheduled SGA be evaluated every 6 months. The facility policy also states that any patient receiving metoclopramide or prochlorperazine on a routine or scheduled basis be monitored every 12 months. This study was designed to evaluate the effectiveness of a pharmacist-driven TD monitoring database to meet facility policy goals.
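The monthly due-date report described above can be approximated in a few lines of code: store each patient's last exam date, apply the policy interval for the antipsychotic class, and list anyone whose next exam is due. The record layout and interval table below are assumptions for illustration; they are not CBM's actual Access schema.

```python
from datetime import date, timedelta

# Hypothetical patient records; field names are invented for this sketch.
patients = [
    {"name": "A", "regimen": "FGA", "last_exam": date(2011, 1, 10)},
    {"name": "B", "regimen": "SGA", "last_exam": date(2011, 5, 1)},
    {"name": "C", "regimen": "SGA", "last_exam": date(2010, 11, 20)},
]

# Intervals from the facility policy described in the text:
# scheduled FGA (or any history of TD) every 3 months, scheduled SGA every 6.
INTERVAL_DAYS = {"FGA": 90, "SGA": 180}

def due_for_assessment(patients, today):
    """Return (name, due_date) pairs for patients whose next TD exam
    is due on or before `today`, soonest first."""
    report = []
    for p in patients:
        due = p["last_exam"] + timedelta(days=INTERVAL_DAYS[p["regimen"]])
        if due <= today:
            report.append((p["name"], due))
    return sorted(report, key=lambda r: r[1])
```

A real implementation would also cover the metoclopramide/prochlorperazine 12-month rule and the 48-to-72-hour baseline screen, which are omitted here for brevity.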

14.
An integral part of providing effective feedback to pharmacy residents occurs during the evaluation process. Residency evaluation involves measuring and documenting performance as it relates to standardized residency outcomes, goals, and learning objectives. Evaluations may be formative or summative and include the preceptor’s evaluation of the resident’s performance, the resident’s self-assessments, and the resident’s evaluation of the preceptor and learning experience. Evaluations are more structured than feedback, and they involve documentation of the verbal feedback that was provided throughout the learning experience. This article will focus on the preceptor’s role in providing effective resident evaluations based on specific learning activities.

Key Words: evaluation, feedback, formative, learning activities, residency precepting, self-assessment, summative

Many factors impact preceptors’ ability to provide quality feedback.1 In verbal feedback and written evaluation, communication is critical. Effective communication is a key component to a successful pharmacy practice,2 and much of the residency year involves training residents to develop communication proficiency in multiple practice situations. Because the intention in residency training is for learners to model the preceptors’ skills, attitudes, and behaviors, the way that preceptors communicate in evaluations is important in the residents’ overall development.

Along with communication, professionalism, role-modeling, coaching, and evaluation skills have been identified as key elements of effective pharmacy precepting.3,4 Colleagues in medicine and nursing have also identified preceptor characteristics associated with proficiency in providing feedback.5–7 These include completing preceptor training, showing empathy for learners, establishing mutually agreed upon learning goals, challenging learners, managing conflict, and documenting and reviewing professional goals annually. 
Imparting these skills to residents, including the skills needed for effective evaluation, is not only important for individual resident growth, but is also important for the profession’s growth as new and future residency graduates will need to be confident and competent preceptors to advance residency training.8

15.
Advanced experiential education represents the culmination of a pharmacy student’s training, where students can apply the knowledge they have learned in the classroom to real patients. Unfortunately, opportunities for students to provide the direct patient care recommended by pharmacy organizations and accrediting bodies are lacking. Additionally, academic health systems that can provide these experiences for students are experiencing hardships that have stalled the expansion of postgraduate training programs and services. Formal cooperation between unaffiliated colleges of pharmacy and academic health systems has the potential to increase the number of experiential students completing rotations in an academic environment, expand postgraduate education training programs, enhance the development of resident educators, increase research and scholarly opportunities, and expand clinical pharmacy services. This article describes the formation of a unique joint initiative between a private academic health system without a college of pharmacy and a private college of pharmacy without a hospital. 
The successful cultivation of the relationship has resulted in professional growth at both institutions and can be implemented at other sites around the country to synergize the efforts of academic health systems and colleges of pharmacy.

Key Words: academic medical center, clinical pharmacy services, experiential education, scholarship

The Accreditation Council for Pharmacy Education (ACPE) Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree Guidelines 2.0 emphasize the need for experiential education that provides students with direct patient care experience in a variety of settings.1 Similarly, the American College of Clinical Pharmacy (ACCP) issued a white paper offering recommendations for quality experiential education for students,2 which includes ensuring that students are not simply observing pharmacy practice but, instead, are directly impacting patient care.3 These educational opportunities are often best met at hospitals with a strong history of providing clinical pharmacy services.

Colleges of pharmacy struggle to secure opportunities for their students to provide quality direct patient care in the inpatient setting. One solution to this problem is an affiliation with a large academic health system. At the time of writing, there are 129 colleges and schools accredited by ACPE.4 Of these, approximately one-third are located at an academic health center with a teaching hospital.5 Most of those are public universities with a comprehensive medical center campus. In contrast, the majority of private colleges of pharmacy, as well as some public universities, operate outside of an academic health center. 
In these situations, colleges of pharmacy must rely heavily on outside institutions to fulfill the program’s experiential education requirements.

To provide pharmacy students with experiences at large teaching facilities, some colleges of pharmacy have developed formal relationships with otherwise unaffiliated hospitals and health systems.6–10 The partnership between an academic health system and college of pharmacy can be mutually beneficial.11 We describe our experience with a joint initiative between a private academic health system and a private college of pharmacy.

16.
Nearly 2 centuries have passed since the use of intravenous fluid became a foundational component of clinical practice. Despite a steady stream of published investigations on the topic, questions surrounding the choice, dose, timing, targets, and cost-effectiveness of various fluid options remain insufficiently answered. In recent years, 2 of the most debated topics have been the role of albumin in acute care and the safety of normal saline. Although albumin has a place in therapy for specific patient populations, its high cost relative to other fluids makes it a less desirable option for hospitals and health systems with escalating formulary scrutiny. Pharmacists bear responsibility for reconciling this disparity and supporting the rational use of albumin in acute care through a careful evaluation of recently published literature. In parallel, it has become clear that crystalloids should no longer be considered a homogenous class of fluids. The past reliance on normal saline has been questioned due to recent findings of renal dysfunction attributable to the solution’s supraphysiologic chloride concentration. These safety concerns with 0.9% sodium chloride may result in a practice shift toward more routine use of “balanced crystalloids,” such as lactated Ringer’s or Plasma-Lyte, that mimic the composition of extracellular fluid. 
The purpose of this review is to summarize the evidence regarding these 2 important fluid controversies that are likely to affect hospital pharmacists in the coming decades: the evidence-based use of human albumin and the rising role of balanced salt solutions in clinical practice.

Over 30 million patients receive intravenous (IV) fluid each year in the United States.1,2 This practice is utilized most commonly in the intensive care unit (ICU), where more than one-third of all critically ill patients are resuscitated with IV fluid.3 The ubiquitous use of fluids in acute care is perpetuated by the need to replace volume loss, maintain organ perfusion, and achieve hemodynamic goals. Furthermore, early and aggressive IV fluid administration improves patient outcomes in many syndromes and is emphasized in consensus recommendations for sepsis, cirrhosis, hypovolemia, burns, and hemorrhage.4-11 However, after the initial resuscitation phase, a balancing act between “adequate” hydration and “over” hydration ensues to avoid the harmful sequelae associated with volume overload such as respiratory failure, peripheral edema, increased cardiac demand, and acute kidney injury (AKI).12

Nearly 2 centuries have passed since IV fluid became a foundational component of the hemodynamic resuscitation strategy.13 Despite a steady stream of literature on the subject, questions surrounding the choice of fluid, dose, timing, targets, and cost-effectiveness remain insufficiently answered. Ambiguity in the literature is due to a lack of rigorous head-to-head trials, questionable selection of primary outcomes, and faint signals of efficacy or safety in study subgroups.

As the controversial topic of IV fluid utilization is expansive,13-15 we sought to summarize 2 key issues with recent updates to the literature. 
The 2 topics discussed in this review are of great importance to pharmacists now and will continue to be in the coming years: the evidence-based use of human albumin and the rising role of balanced salt solutions in clinical practice.

17.

Objective:

To report a case of amlodipine overdose successfully treated with intravenous lipid emulsion (ILE).

Case Summary:

A 47-year-old, 110 kg female ingested at least 350 mg of amlodipine with an unknown amount of ethanol. Initial blood pressure was 103/57 mm Hg, mean arterial pressure (MAP) 72 mm Hg, and heart rate 113 beats per minute. In the early clinical course, activated charcoal, intravenous fluid, and calcium boluses were administered. Worsening hypotension prompted a 100 mL bolus of 20% ILE. Stable hemodynamics were maintained for 2 hours. Subsequently, profound hypotension and shock developed (MAP 38 mm Hg), which failed to fully respond to 3 vasopressor agents, calcium, and glucagon. With continuing shock despite optimized vasopressors, an infusion of 2,300 mL 20% ILE was administered over 4.5 hours (20.9 mL/kg infusion total). By completion of the infusion, 2 vasopressors were tapered off and MAP remained above 70 mm Hg; within 12 hours, no further interventions were required. Possible adverse events of ILE, lipemia and hypoxia, were experienced but quickly resolved. The patient survived to hospital discharge within 8 days.
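The weight-based arithmetic reported in this case summary can be double-checked with a short sketch. The numbers (2,300 mL of 20% ILE over 4.5 hours in a 110 kg patient) come from the case; the helper function name is illustrative and not part of any clinical tool:

```python
def ml_per_kg(total_ml: float, weight_kg: float) -> float:
    """Normalize a total infusion volume to body weight (mL/kg)."""
    return total_ml / weight_kg

# Values from the case: 2,300 mL of 20% ILE infused over 4.5 hours
# in a 110 kg patient
total_weight_based_dose = ml_per_kg(2300, 110)  # ~20.9 mL/kg, as reported
average_rate_ml_per_h = 2300 / 4.5              # ~511 mL/h average infusion rate
```

The total confirms the 20.9 mL/kg figure stated in the case summary.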

Discussion:

Amlodipine toxicity presents similarly to distributive shock, as both are due to marked peripheral vasodilation. Numerous interventions are available for the management of amlodipine overdose, yet many patients continue to suffer life-threatening shock, as was observed in this patient. ILE has been used with promising preliminary results as salvage therapy in case reports of overdoses of other lipophilic molecules. This is the first report of lone amlodipine overdose treated with ILE.

Conclusion:

ILE is a novel antidote for overdoses of lipophilic substances and demonstrated efficacy in this case of amlodipine overdose without the use of hyperinsulinemic euglycemia.

Cardiovascular medications were the second leading cause of death from medication overdose in 2010, with amlodipine contributing to 24 of those fatalities.1 Calcium channel blocker (CCB) toxicity presents a particular clinical challenge due to its negative effects on cardiac conduction and contractility, along with its potent peripheral vasodilatory effects. All CCBs act through blockade of voltage-gated L-type calcium channels.2-4 At therapeutic doses, dihydropyridine CCBs (amlodipine, nifedipine, felodipine, nimodipine, and nicardipine) act by relaxing the smooth muscle surrounding blood vessels in the periphery and heart.3 Non-dihydropyridine CCBs (diltiazem and verapamil) work in the cardiac nodal tissue to slow conduction and demonstrate negative chronotropic, dromotropic, and inotropic effects.3 However, at toxic levels this difference in site of effect between classes is less pronounced;2,5 thus, all dihydropyridine and non-dihydropyridine CCBs may cause hypotension, bradycardia, heart block, and shock.

A number of therapeutic modalities are used in the management of CCB overdose, including fluid resuscitation, intravenous calcium, glucagon, atropine, hyperinsulinemic euglycemia, and vasopressor agents. Despite these interventions, CCB toxicity continues to cause significant mortality. We report the successful use of an emerging treatment strategy, intravenous lipid emulsion (ILE), in the management of amlodipine toxicity in the absence of hyperinsulinemic euglycemia.

18.

Purpose:

The polymerase chain reaction (PCR) test has higher sensitivity and a faster turnaround time than the enzyme immunoassay (EIA) for identification of Clostridium difficile, although the clinical implications of these differences are not well described.

Methods:

Inpatients with a negative EIA (n = 79) or PCR (n = 87) test were retrospectively evaluated. Patients were excluded if they had a positive EIA or PCR test during the same hospitalization or if they were currently receiving treatment for C. difficile infection (CDI) prior to admission. The primary outcome was empiric CDI antibiotic duration of therapy associated with each test method.

Results:

Empiric CDI antibiotic duration of therapy was 2.31 (95% confidence interval [CI], 1.48-3.15) days for the EIA group and 0.88 (0.45-1.33) days for the PCR group (P = .007). The number of diagnostic laboratory tests performed per patient was 2.73 (2.64-2.83) for the EIA group and 1.16 (1.04-1.28) for the PCR group (P < .001).
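The between-group differences implied by these results are simple arithmetic; a minimal sketch, using the group means reported above (variable names are illustrative):

```python
# Mean empiric CDI antibiotic duration (days) and diagnostic tests per
# patient, as reported for each test method
eia = {"therapy_days": 2.31, "tests_per_patient": 2.73}
pcr = {"therapy_days": 0.88, "tests_per_patient": 1.16}

days_saved = eia["therapy_days"] - pcr["therapy_days"]               # ~1.43 days
tests_avoided = eia["tests_per_patient"] - pcr["tests_per_patient"]  # ~1.57 tests
```

These per-patient differences are what drive the drug-cost and laboratory-cost offsets discussed in the conclusion.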

Conclusion:

Use of the PCR test to rule out CDI was associated with reduced duration of empiric CDI antibiotic therapy and fewer diagnostic laboratory tests performed per patient. When combined with fewer diagnostic laboratory tests performed per patient and shorter duration of contact isolation, the higher acquisition cost of the PCR test was offset, resulting in cost neutrality. These findings provide additional data to support the routine use of the PCR test.

Key Words: antibiotic, Clostridium difficile, enzyme immunoassay, polymerase chain reaction, stewardship

Clostridium difficile infection (CDI) is one of the most important health care-associated infectious diseases in the United States. It is defined by the presence of diarrhea (the passage of 3 or more unformed stools in 24 or fewer consecutive hours) and either a stool test positive for toxigenic C. difficile or colonoscopic or histopathologic findings revealing pseudomembranous colitis.1 It is an increasingly common disease; the national rate of C. difficile hospitalizations per 1,000 adult discharges increased from 5.6 in 2001 to 11.5 in 2010.2,3 Additionally, CDI is responsible for significant morbidity, which can range from mild gastrointestinal symptoms to severe complications, including pseudomembranous colitis, toxic megacolon, colonic perforation, and even death.4,5 CDI has been estimated to carry an economic burden of $1.1 to $3.2 billion per year in the United States.6-8 Timely, accurate diagnosis and appropriate treatment are essential to combat this disease effectively.

Several tests are available for the detection of C. difficile.1 Because of its high sensitivity, toxigenic culture has traditionally been regarded as the gold standard for the diagnosis of CDI, but it is not clinically practical due to slow turnaround. 
The toxin A/B enzyme immunoassay (EIA) is largely limited by its lack of sensitivity, rendering it a suboptimal alternative for diagnosis, and it is therefore no longer recommended for use alone.1 In an attempt to improve diagnostic sensitivity, the EIA is often carried out in combination with glutamate dehydrogenase detection or performed daily for 3 consecutive tests, yielding final results in 3 days.1,9 The polymerase chain reaction (PCR) test to detect toxin A/B genes is rapid, sensitive, and specific, yielding results in as little as 1 hour.1 To date, most research has focused on the epidemiology, utility, specificity, and sensitivity of the different tests; they have not been evaluated with regard to their impact on hospital resources, such as drug costs, isolation costs, and laboratory costs.

We hypothesized that a change from the EIA to the PCR test would be associated with a reduced duration of empiric CDI antibiotic therapy for patients with negative tests. Our primary outcome was empiric CDI antibiotic duration of therapy. Secondary outcomes included the number of CDI diagnostic laboratory tests (EIA or PCR) performed per patient, the duration of contact isolation, and the estimated total cost of treatment per patient.

19.

Purpose:

To evaluate the efficacy and economic impact of a maximum of 2 doses of intraluminal volume (1 mg/1 mL) alteplase for the clearance of occluded peripherally inserted central catheter (PICC) lines at a long-term acute care hospital (LTACH).

Methods:

This was an open-label, nonrandomized, quasi-experimental trial conducted over a 3-month period from December 2013 to March 2014. Patients had a standing order of either standard (2 mg/2 mL) or intraluminal volume (1 mg/1 mL) dose alteplase entered for any potential occlusions. The primary efficacy outcome was restored line patency after a maximum of 2 doses of alteplase. Secondary efficacy outcomes included restored patency after 1 dose of alteplase, reocclusion rate, mean time to reocclusion, and mean number of occlusions per patient.

Results:

A total of 168 patients were enrolled in the study (intraluminal volume, n = 54; standard, n = 114), and a total of 270 occlusions were recorded; 90 received intraluminal volume dose alteplase and 180 received the standard dose. The primary efficacy endpoint was met in 93.3% of occlusions in the intraluminal volume dose group and 94.4% in the standard dose group. Secondary outcomes were similar between groups. The average cost per dose was $123.77 for the standard dose group and $60.62 for the intraluminal volume dose group.

Conclusion:

For the clearance of occluded PICC lines at our LTACH, there was no statistical difference in the efficacy of a maximum of 2 doses of intraluminal volume dose alteplase versus the standard dose. Use of intraluminal volume dose alteplase was found to be significantly more cost-effective.

Key Words: alteplase, intraluminal volume, peripherally inserted central catheter line, long-term acute care hospital

Peripherally inserted central catheter (PICC) lines, a distinct type of central venous access device (CVAD), have become an essential component of intravenous therapy. Insertion of PICC lines is less invasive, poses fewer risks, and has a lower likelihood of complications than insertion of traditional central venous catheters (CVCs).1 Despite the advances that have been made in vascular access technology, all intravenous lines continue to be associated with a risk of thrombosis and occlusion.2 Replacing dysfunctional CVADs, including PICC lines, is expensive and uncomfortable for patients.3 As such, prevention or resolution of thrombotic occlusions and avoidance of line replacement are of critical importance with regard to these devices.

Alteplase, also known as tissue plasminogen activator (tPA), is a fibrinolytic agent indicated for the restoration of function to thrombotically occluded CVADs, as assessed by the ability to successfully withdraw blood from the line.4 Alteplase’s mechanism of action involves prolonged contact with occlusions via line dwell. 
It is commercially available as a single-dose vial of lyophilized powder for reconstitution to a concentration of 2 mg/2 mL, or it can be aliquoted from a larger vial into unit dose syringes that can be frozen for up to 45 days and thawed immediately prior to use.5 The adult dose is 2 mg/2 mL, whereas the package insert states that patients weighing 30 kg or less should receive a dose of 110% of the occluded line’s intraluminal volume (at the same concentration as the adult dose).

The safety and efficacy of the approved 2 mg/2 mL dose of alteplase in CVCs has been well studied,6 and there are some data regarding its use in PICC lines specifically.1 Additionally, some studies have examined the efficacy of the intraluminal volume dose used for patients weighing 30 kg or less.7,8 Because alteplase exerts its mechanism of action via prolonged line dwell, it stands to reason that administering any volume of drug sufficient to completely fill the occluded line should be noninferior to the standard approved dose. To this point, there have been very few studies investigating this idea of “intraluminal volume” alteplase dosing, and the limited data available come almost exclusively from studies involving hemodialysis catheters.9,10 The intraluminal volume of most commercially available PICC lines is less than 0.8 mL.11-13 As a result, approximately half of each standard alteplase dose is wasted as it enters the patient’s systemic circulation, at significant cost to institutions.8

Although there are some data regarding the efficacy of standard (2 mg/2 mL) dose alteplase for the clearance of occluded PICC lines, very little research has been done regarding the efficacy of an intraluminal volume 1 mg/1 mL dose. 
The purpose of this study was to determine the efficacy and cost-effectiveness of a maximum of 2 doses of 1 mg/1 mL intraluminal volume alteplase as compared to a maximum of 2 doses of standard 2 mg/2 mL dose alteplase for the restoration of patency of occluded PICC lines at our long-term acute care hospital (LTACH; HealthEast Bethesda Hospital, St. Paul, MN). It was hypothesized that dosing alteplase intraluminally at 1 mg/1 mL, as opposed to the standard 2 mg/2 mL dose, would provide equal efficacy in restoring PICC line patency at a decreased cost.
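The per-dose cost figures reported in the results of this study lend themselves to a quick savings estimate; a minimal sketch (the dose counts and per-dose costs come from the study, but the projection is a hypothetical "what if every standard dose had been intraluminal" calculation, which the study itself does not report):

```python
STANDARD_COST_PER_DOSE = 123.77     # $ per 2 mg/2 mL dose (reported)
INTRALUMINAL_COST_PER_DOSE = 60.62  # $ per 1 mg/1 mL dose (reported)
STANDARD_DOSES_GIVEN = 180          # occlusions treated with the standard dose

savings_per_dose = STANDARD_COST_PER_DOSE - INTRALUMINAL_COST_PER_DOSE  # ~$63.15
# Hypothetical projection: savings if every standard dose in the study
# had instead used intraluminal volume dosing
projected_savings = STANDARD_DOSES_GIVEN * savings_per_dose             # ~$11,367
```

Even under this rough assumption, the per-dose difference of roughly $63 explains why the intraluminal strategy was judged significantly more cost-effective.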

20.

Purpose:

Hazardous drug residue on the exterior surface of drug vials poses a potential risk for exposure of health care workers involved in handling these products. The purpose of this article is to heighten the awareness of this serious issue and to evaluate a commercial manufacturing process for removing and containing hazardous drug (HD) residue on exterior vial surfaces. Additionally, findings from this study are interpreted, incorporated into the current body of evidence, and discussed by experts in this field.

Methods:

This study includes separate evaluations for the presence or absence of surface drug contamination on the vials of 3 HD products: 5-fluorouracil, cisplatin, and methotrexate. The drug products were packaged in vials using a patented prewashing/decontamination method, application of a polyvinyl chloride (PVC) base, and use of clear glass vials. An additional step of encasing the vial in a shrink-wrapped sheath was used for 5-fluorouracil and cisplatin.

Results:

Of all the 5-fluorouracil (110 vials), methotrexate (60 vials), and cisplatin (60 vials) vials tested, only 2 had detectable amounts of surface residue. One 5-fluorouracil vial was found to have approximately 4 mg of 5-fluorouracil on the surface of the vial. The second contaminated vial was cisplatin, which was discovered to have 131 ng of platinum, equal to 200 ng of cisplatin or 0.2 μL of cisplatin solution, on the vial sheath.
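The platinum-to-cisplatin conversion in the result above follows from the molar masses of platinum and cisplatin; a minimal check (the molar mass values below are standard reference figures, not stated in the article):

```python
# Molar masses (g/mol): elemental platinum and cisplatin, Pt(NH3)2Cl2
M_PT = 195.08
M_CISPLATIN = 300.05

platinum_ng = 131  # platinum detected on the vial sheath (reported)
cisplatin_ng = platinum_ng * M_CISPLATIN / M_PT  # ~201 ng, consistent with
                                                 # the reported ~200 ng
```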

Conclusion:

Using validated extraction and analytic methods, all but 2 of the 230 tested vials were found to be free of surface drug contamination. Pharmacy leaders need to take an active role in promoting the need for clean HD vials. Manufacturers should be required to provide their clients with data derived from externally validated analytic studies reporting the level of HD contamination on the exterior of their vial products.

Key Words: clean vials, hazardous drugs, safe handling, vial contamination

The term hazardous drugs was first introduced more than 20 years ago by the American Society of Health-System Pharmacists (ASHP) as a way to more accurately depict therapeutic drugs with adverse effects that could endanger health care workers.1 This term and the proposed criteria used to evaluate drugs were adopted by the Occupational Safety and Health Administration (OSHA) in 1995.2 The National Institute for Occupational Safety and Health (NIOSH) modified the criteria in 2004 to reflect a hierarchy of concerns that would encompass future drug modalities.3 The United States Pharmacopeia (USP) also adopted the term in its 2007 revision to USP General Chapter <797>.4

A hazardous drug (HD) is generally defined as any agent for which “studies in animals or humans indicate that exposures to them have a potential for causing cancer, developmental or reproductive toxicity, or harm to organs.”3 Drugs are characterized as hazardous due to their inherent toxicity.

Concern with occupational exposure to HDs has been expressed after reports and studies of adverse effects in health care workers. 
Acute symptoms, such as rashes, have been reported primarily due to inadvertent skin contact.5,6 A systematic review and meta-analysis conducted in 2005 examined reports of increased risks of cancer, reproductive complications, and acute toxic events in health care workers who were exposed to HDs and identified an association between exposure to chemotherapy and spontaneous abortions.7 Reports of liver damage, bladder cancer, and breast cancer were not found to be suitable for statistical pooling in this 2005 study, which thereby limited the study of cancer risks.7 A 2010 study, however, described evidence of drug uptake and chromosomal changes in oncology workers.8 The damaged chromosomes in which changes were discovered are the same as those that are associated with therapy-related myelodysplastic syndrome (t-MDS) and therapy-related acute myeloid leukemia (t-AML); this points to a relationship between HD exposure in workers and an increased possibility of cancer.8 Similar to the 2005 meta-analysis, a 2012 study reported adverse reproductive events in nurses exposed to HDs in the workplace and noted a 2-fold increased risk of spontaneous abortion.7,9
