Similar Articles
1.
In a recent issue of Critical Care, Poole and colleagues found that plasma citrulline concentration and glucose absorption were reduced in 20 critically ill patients compared with 15 controls; however, the authors found no correlation between these two variables. This study highlights the question of the accuracy of plasma citrulline for assessing small bowel function in critically ill patients. Future studies should take into account the type of intestinal failure considered, the particular metabolism of citrulline, the time of plasma citrulline measurement, and the range of citrullinemia considered.

In a recent issue of Critical Care, Poole and colleagues [1] evaluated the link between plasma citrulline concentration and glucose absorption in critically ill patients. This is an important contribution to the study of acute intestinal failure in the ICU.

Evaluating small bowel function and integrity is challenging in the critically ill [2]. Small bowel damage might be central in the development of systemic inflammatory response syndrome, bacterial translocation, and multiple organ failure [3]. In addition, small bowel damage might make it difficult or impossible to use the enteral route for feeding or medication intake. Citrulline is an amino acid not incorporated into proteins that is mainly synthesized by small bowel enterocytes [4]. Fifteen years ago, Crenn and colleagues [5,6] demonstrated that plasma citrulline concentration was a precise biomarker of small bowel function in patients with short bowel syndrome and villous atrophy-associated small bowel diseases, reflecting small bowel length and villus size, respectively. To date, numerous studies have confirmed these results in non-critically ill patients.

The question is whether plasma citrulline is also a valid biomarker in critically ill patients [7]. The paper by Poole and colleagues [1] is therefore welcome. They confirmed that plasma citrulline concentration and glucose absorption were lower in 20 critically ill patients than in 15 controls. Despite a trend toward a positive correlation (R = 0.28, P = 0.12), the link between plasma citrulline concentration and glucose absorption was not statistically significant. The authors concluded that plasma citrulline concentration was not a biomarker of glucose absorption in critically ill patients. As the authors note, however, the study was possibly underpowered to explore the link between plasma citrulline concentration and glucose absorption.

All in all, this apparently disappointing study highlights the need to clarify the value and limits of plasma citrulline concentration in the critically ill. On the one hand, it is reasonable to assume that plasma citrulline concentration is related to small bowel function in the critically ill. First, the plasma citrulline concentration is low in more than half of critically ill patients [1,8,9]. Second, low plasma citrulline concentrations are associated with a poor prognosis [8,9]. Third, plasma citrulline and C-reactive protein concentrations are inversely correlated [8-10], and low plasma citrulline is associated with bacterial translocation [10,11]. Fourth, low plasma citrulline concentrations were found to be associated with elevated intestinal fatty acid binding protein concentration, a specific biomarker of enterocyte damage [11,12]. Fifth, plasma citrulline concentration was found to be lower among critically ill patients presenting with signs of intestinal dysfunction, such as feeding intolerance, ileus, diarrhea, or gastrointestinal bleeding [13]. On the other hand, plasma citrulline concentration might lack accuracy in critically ill patients. First, the metabolism of citrulline is complex. It depends on the availability of glutamine, its main precursor; on kidney function, for the transformation of citrulline into arginine; on the level of systemic inflammation, because nitric oxide synthase converts arginine into nitric oxide plus citrulline; and on arginine bioavailability, because citrulline might be overmetabolized as a source of arginine [4,14]. Second, the time of plasma citrulline measurement might be crucial. Indeed, plasma citrulline concentration follows a U-shaped curve during the ICU stay [8]. Even if plasma citrulline concentration is often low at the time of ICU admission, it is even lower 1 or 2 days later, whereas it tends to increase in survivors after 1 week. Because of this intra-individual variability, the time of measurement should be taken into account. Third, the present study suggests that plasma citrulline does not correctly reflect absorptive function in the critically ill. Only a few preliminary studies have evaluated the links between plasma citrulline concentration, absorptive function, and barrier function in the ICU. Fourth, the normal range for plasma citrulline concentration, which is between 20 and 40 μmol/L in the stable patient, might not be appropriate in the critically ill. Whereas a very low plasma citrulline concentration, ≤10 μmol/L, is likely to indicate altered small bowel function, a plasma citrulline concentration between 10 and 20 μmol/L might fall into a grey zone for interpretation [15].

In conclusion, we need additional data in this field. Because of its metabolism and particular kinetics in the critically ill, plasma citrulline concentration might not be as accurate as in stable patients for determining small bowel function. On the other hand, plasma citrulline is probably an indicator of small bowel function that should be interpreted within the clinical context.
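To make the point about statistical power concrete, the short Python sketch below estimates how many patients would be needed to detect a correlation of R = 0.28 using the standard Fisher z-transformation. The two-sided alpha of 0.05 and 80% power are assumptions chosen for illustration; they are not stated in the commentary or the original study.

```python
"""Rough sample-size check for the correlation discussed above.

Assumptions (not from the paper): two-sided alpha = 0.05, power = 0.80,
and that a Pearson correlation of R = 0.28 is the true effect size.
Uses the standard Fisher z-transformation approximation.
"""
from math import ceil, log
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size needed to detect a correlation of magnitude r."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    c = 0.5 * log((1 + r) / (1 - r))                # Fisher z of r
    return ceil(((z_alpha + z_beta) / c) ** 2 + 3)

if __name__ == "__main__":
    print(n_for_correlation(0.28))  # roughly 98 patients
```

Under these assumptions roughly 100 patients would be required, several times the 20 actually studied, which is consistent with the authors' caveat about power.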

2.
3.
Zhang and colleagues have recently published a systematic review and meta-analysis of six studies and conclude that ‘gastric tonometry guided therapy can reduce total mortality in critically ill patients’. So why did gastric tonometry come and go, and what can we learn from this piece of modern history?

Gastric tonometry measures the balance between alveolar ventilation, gastric blood flow, and metabolism [1,2]. In the 1990s, gastric tonometry was a fashionable clinical monitor and was incorporated into numerous laboratory and clinical trials [3-6]. Then, soon after a small randomised controlled trial (RCT) of just over 200 patients reported no impact on ICU mortality when gastric tonometry was used to guide therapy, it seemed to disappear as a clinical tool [7]. However, Zhang and colleagues have recently published a systematic review and meta-analysis of these six studies and conclude that ‘gastric tonometry guided therapy can reduce total mortality in critically ill patients’ [1]. So why did gastric tonometry come and go, and what can we learn from this piece of modern history?

Hollow viscus tonometry is a long-established technique. Lavaging a hollow viscus such as the gall bladder or gastrointestinal tract allows the gas tensions in the wall of the viscus to be estimated by analysis of the lavage fluid. It was deployed in the stomach over decades, evolving from sampling of gastric juice to the use of condoms attached to nasogastric tubes and eventually bespoke modified nasogastric tubes that incorporated a silicone balloon and sampling line. Manual saline tonometry required the balloon to be filled with 2.5 mL of saline and, following a dwell time of up to 90 minutes, sampling and analysis using a blood-gas analyser [3-6]. Attention was initially focused on the calculation of ‘gastric intra-mucosal pH’ (or ‘pHi’) by using the gut lumen carbon dioxide (CO2) measured by tonometry and the arterial bicarbonate concentration calculated from an arterial sample drawn at the same time. The theory was that, during periods of reduced gastric blood flow, a critical level would be reached below which anaerobic metabolism would become the dominant metabolic pathway for the generation of energy. Anaerobic metabolism generates lactic acid and causes the accumulation of CO2.

The first bespoke gastric tonometer was probably launched prematurely, as a number of technical glitches, such as the impact of poor sampling technique and temperature on CO2 tension, needed to be resolved post-launch. Despite these glitches, ‘pHi’ measurement became popular in clinical observational studies and was demonstrated in major surgery, trauma, and the ICU to be a highly sensitive but less specific predictor of a poor outcome [3-6]. Doubt was cast on the utility of ‘pHi’ because it incorporated both global acid–base balance and the regional partial pressure of CO2 (PCO2) [1,8]. Thus, a metabolic acidosis without an excess accumulation of gastric CO2 could result in a low ‘pHi’ that was simply a repackaging of base excess [2,8]. Finally, automated air tonometry was launched [9]. The bespoke tonometer tube was unchanged, but air rather than saline was now used to fill the balloon. This facilitated quicker full equilibration and automated sampling and measurement using a modified end-tidal CO2 infra-red analyser [9]. The calculation of ‘pHi’ was abandoned and interest turned to the rise in the gastric partial pressure of CO2 compared with either the arterial or the end-tidal partial pressure of CO2, referred to as the PCO2 ‘gap’ or ‘gradient’. This again proved to be highly predictive of a poor outcome, particularly in major surgery [9]. So now, at last, we thought we had a user-friendly, automated, robust surrogate measure of ‘end-organ perfusion’ and a growing understanding of the technique and of the separation between global haemodynamic variables and splanchnic blood flow. It was demonstrated, for example, that haemorrhage in adult volunteers could be detected by gastric tonometry when commonly measured haemodynamic variables remained unchanged [10] and that critically ill patients with an abnormal PCO2 ‘gap’ failed to produce gastric acid following pentagastrin stimulation [11]. Furthermore, gut-directed therapy could maintain or correct the PCO2 ‘gap’ [4,12]. So where did it all go wrong?

I think there were a number of factors. Gastric tonometry was made commercially available before all of the methodological issues had been resolved, and this resulted in negative press. Furthermore, evidence-based medicine and the demand for ‘proof’ of safety and efficacy from large RCTs were just emerging. How one should apply these standards to monitors of physiological variables was not, and has probably still not been, completely resolved. Where should the burden of proof lie? With manufacturers or with the clinical community? What would be the cost implications of demanding the equivalent of phase III evidence for monitors? Gastric tonometry was caught up in this emerging debate and came off second best. Perhaps the burden lies with the clinical trials which, although noble efforts in their day, would now be regarded as inadequately designed to answer the question ‘does gastric tonometry guided resuscitation improve ICU survival?’ [1]. The largest of the six studies randomly assigned just 260 patients – some 10- to 20-fold fewer than the numbers one might expect to have to recruit today to answer the same question [1,4]. The recent meta-analysis by Zhang and colleagues concludes (among other things) that ‘in critical care patients, gastric tonometry guided therapy can reduce total mortality’ [1]. On reviewing the results, one can see that six small RCTs were conducted in a diverse range of populations (surgery, trauma, and the ICU). All of the trials were grossly underpowered to determine a possible impact on mortality. However, the point estimates for the impact on mortality (Figure 3 of [1]) all favour the intervention, although the confidence intervals are large and cross the line of unity.

I suggest that if we were starting from this point today, we would conclude that there is equipoise, significant uncertainty, and enough evidence to justify asking the question ‘does gastric tonometry-guided therapy reduce total mortality in critically ill patients?’ This question could be answered by a pragmatic, high-quality RCT with patient-centred outcomes, but I doubt it will be.
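As an illustration of the two derived variables discussed above, the following sketch computes ‘pHi’ from the tonometrically measured gastric luminal PCO2 and the arterial bicarbonate using the Henderson–Hasselbalch equation, and the PCO2 ‘gap’ as the gastric-minus-arterial difference. The patient values in the example are hypothetical, not taken from any of the cited studies.

```python
"""Illustrative calculations behind gastric tonometry.

'pHi' applies the Henderson-Hasselbalch equation to the gastric luminal PCO2
and the arterial bicarbonate; the PCO2 'gap' is the gastric-minus-arterial
difference. Input values below are hypothetical, for illustration only.
"""
from math import log10

def gastric_phi(gastric_pco2_mmhg: float, arterial_hco3_mmol_l: float) -> float:
    """Gastric intramucosal pH; 0.03 is the CO2 solubility coefficient (mmol/L per mmHg)."""
    return 6.1 + log10(arterial_hco3_mmol_l / (0.03 * gastric_pco2_mmhg))

def pco2_gap(gastric_pco2_mmhg: float, arterial_pco2_mmhg: float) -> float:
    """The PCO2 'gap' or 'gradient' between gastric lumen and arterial blood."""
    return gastric_pco2_mmhg - arterial_pco2_mmhg

if __name__ == "__main__":
    # Hypothetical patient: gastric PCO2 55 mmHg, arterial PCO2 40 mmHg, HCO3 24 mmol/L
    print(round(gastric_phi(55.0, 24.0), 2))  # ~7.26
    print(pco2_gap(55.0, 40.0))               # 15 mmHg
```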

4.
The sepsis-induced intramyocardial inflammatory response results in decreased ventricular function and myocardial damage. Chemokines such as monocyte chemoattractant protein-1 causally contribute to retention of intramyocardial mononuclear leukocytes and subsequent ventricular dysfunction during endotoxemic shock in mice and, importantly, this effect is age dependent. It is therefore useful to consider where monocyte chemoattractant protein-1 fits in the complex pathway leading to ventricular dysfunction during sepsis, why this might be an age-dependent effect, and what this implies for the care of older sepsis patients.

Slimani and colleagues report that monocyte chemoattractant protein-1 (MCP-1) causally contributes to ventricular dysfunction during endotoxemic shock in mice and, importantly, this effect is age dependent [1]. It is therefore useful to consider where MCP-1 fits in the complex pathway leading to ventricular dysfunction during sepsis, why this might be an age-dependent effect, and what this implies for the care of older sepsis patients.

Sepsis results in an intramyocardial inflammatory response that leads to decreased left ventricular contractility and diastolic dysfunction [2]. If the degree of ventricular dysfunction is minimal, the heart is able to increase cardiac output in response to the sepsis-induced decrease in systemic vascular resistance and increase in venous return, resulting in the familiar hyperdynamic septic circulation. Worsening septic shock is characterized by a progressive decrease in cardiac output despite aggressive volume resuscitation and decreased left ventricular afterload. In this circumstance – hypodynamic septic shock – it follows that ventricular dysfunction is a major causal contributor to adverse outcomes. A greater understanding of the intramyocardial inflammatory response is important when considering mitigating strategies.

Pathogen-associated molecular patterns, such as lipopolysaccharide (endotoxin) from Gram-negative organisms and peptidoglycan from Gram-positive organisms, bind innate immune receptors including Toll-like receptors (TLRs) [3]. TLRs are expressed on many cell types but particularly on cell lines involved in the earliest response to infecting pathogens, such as monocytes/macrophages. Subsequent intracellular signaling via nuclear factor-κB results in expression of an array of inflammatory cytokines [3].

Interestingly, cardiomyocytes express TLRs [4] (notably TLR2 and TLR4) and receptors for a variety of inflammatory cytokines, so that circulating pathogen-associated molecular patterns and inflammatory cytokines [5] result in an intramyocardial inflammatory response. Cardiomyocytes respond with their own internal nuclear factor-κB signaling [4,6], production of cytokines such as interleukin-6 and of chemotactic cytokines (chemokines) such as MCP-1 [7] and keratinocyte chemoattractant (KC), production of nitric oxide [8], and upregulation of cell surface adhesion molecules such as intercellular adhesion molecule-1 [9]. Adhesion molecules are also expressed on activated coronary endothelial cells and contribute to retention of circulating leukocytes in the coronary circulation [10]. Leukocytes are then drawn into the myocardium [11] down chemokine gradients. Ligation of cardiomyocyte intercellular adhesion molecule-1 by leukocytes [9,12], fibrinogen [13], and other inflammatory molecules adversely impacts cardiomyocyte calcium handling, which results in decreased contractility [9]. This intramyocardial inflammatory response also triggers apoptotic pathways [14] that, even in the absence of significant end-stage cardiomyocyte apoptosis [15], can result in mitochondrial damage and dysfunction [16]. Thus, all aspects of the septic intramyocardial inflammatory response contribute to ventricular dysfunction. Whether chemokines such as MCP-1 and KC play a causal role or whether they are simply upregulated bystanders has not previously been fully elucidated.

A key observation by Slimani and colleagues is that increased myocardial MCP-1 expression resulted in increased numbers of intramyocardial mononuclear leukocytes and a greater decrease in ventricular contractility [1]. Fundamentally important is the further observation that this pathway causally contributes to decreased ventricular contractility, because neutralization of MCP-1 abrogated these effects. The observation that similar neutralization of KC did not have the same effect provides an important negative control. These results suggest that MCP-1 attracts intramyocardial mononuclear leukocytes that then decrease contractility, as steps in a causal pathway. These investigators present left ventricular pressure and volume data, so we can surmise that end-systolic elastance, as a load-independent measure of ventricular contractility, decreased with endotoxemia and was lowest in old mice.

Nuclear factor-κB signaling leading to cytokine and chemokine production increases with aging [17]. The consequences of this age-related exaggerated septic inflammatory response have not been fully explored, yet initial observations suggest that age-dependent increases in signaling via TLR4 induce greater ventricular dysfunction [18]. Slimani and colleagues have confirmed and extended these observations by considering cytokines and chemokines previously found to be increased in the myocardium after a septic inflammatory stimulus (tumor necrosis factor alpha, interleukin-1β, interleukin-6, MCP-1, KC, macrophage inflammatory protein-1α), and they nicely demonstrate that increased MCP-1 does decrease contractility [1]. Neutralization of MCP-1 was approximately twice as effective in improving left ventricular ejection fraction in old mice as in young adult mice. Age-dependent changes in inflammatory cytokine and chemokine production thus have important downstream functional effects.

These observations arise from a murine model, so a number of limitations must be recognized. The peripheral blood leukocyte differential is quite different in mice, in which mononuclear leukocytes make up most of the circulating leukocyte count and neutrophils are under-represented compared with humans. Thus, the intramyocardial mononuclear infiltration, driven by the monocyte chemokine MCP-1, makes sense in this murine setting, while the lack of effect of KC, a granulocyte/neutrophil chemokine, also fits with the murine model. Nevertheless, the current results emphasize the functional importance of intramyocardial mononuclear leukocytes, which are indeed observed in humans [19]. This murine model used endotoxin administration, which is not a realistic model of human sepsis but is an excellent tool to isolate and study specific pathways induced by pathogen-associated molecular patterns. We should thus be circumspect in extrapolating and interpreting the importance of this degree of ventricular dysfunction, but we can be confident in concluding that intramyocardial chemokines attracting leukocytes contribute to ventricular dysfunction and that this response is exaggerated with aging.

In sum, these novel findings further support the notion that the inflammatory response and its cardiovascular consequences change importantly with age. Early intervention and resuscitation aimed at reducing the proinflammatory stimulus must continue to be emphasized in patients of all ages.
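For readers unfamiliar with end-systolic elastance, the sketch below shows how it is conventionally derived: as the slope of a line fitted through end-systolic pressure–volume points collected over a range of loading conditions. The numeric values are invented, mouse-scale numbers for illustration only and are not data from Slimani and colleagues.

```python
"""Sketch of how end-systolic elastance (Ees) is derived from pressure-volume data.

Ees is the slope of the end-systolic pressure-volume relationship (ESPVR),
obtained by fitting a line through end-systolic (volume, pressure) points
collected over a range of loading conditions. The numbers below are made-up
illustrative values, not data from the study discussed above.
"""
import numpy as np

def end_systolic_elastance(es_volume_ul, es_pressure_mmhg):
    """Return (Ees, V0): slope and volume-axis intercept of the fitted ESPVR."""
    slope, intercept = np.polyfit(es_volume_ul, es_pressure_mmhg, 1)
    v0 = -intercept / slope  # volume at which the extrapolated pressure is zero
    return slope, v0

if __name__ == "__main__":
    # Hypothetical end-systolic points collected during a preload manoeuvre
    volumes = np.array([18.0, 22.0, 26.0, 30.0])    # microlitres
    pressures = np.array([40.0, 60.0, 80.0, 100.0])  # mmHg
    ees, v0 = end_systolic_elastance(volumes, pressures)
    print(round(ees, 2), round(v0, 1))  # slope in mmHg per microlitre, V0 in microlitres
```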

5.
Endothelial glycocalyx degradation induced by fluid overload adds to the concern about a detrimental effect of uncontrolled fluid resuscitation and about the risk of unnecessary fluid infusion. As a consequence, the use of new tools for monitoring the response to fluids appears promising. From that perspective, monitoring the plasma concentration of glycocalyx degradation markers could be useful.

Fluid resuscitation is common practice for patients with hypovolemia and is recommended as first-line treatment for septic shock. However, accumulating evidence suggests that uncontrolled fluid loading may be detrimental. In that respect, the study by Chappell and colleagues [1] in the previous issue of Critical Care adds an important piece of information to the debate over the need for careful monitoring of patients in order to avoid unnecessary fluids.

In the setting of peri-operative management, Chappell and colleagues prospectively explored the consequences of acute volume loading on endothelial function and hypothesized that high-volume infusion could alter the endothelial glycocalyx. The glycocalyx consists of a variety of endothelial membrane-bound molecules, including glycoproteins and proteoglycans, that form a negatively charged barrier to circulating cells and macromolecules [2]. Numerous animal studies have demonstrated that destruction of the glycocalyx (using enzymatic approaches, for example) leads to increased capillary permeability [3]. In vivo, degradation of the glycocalyx with heparinase favors tight contacts between circulating leukocytes and the endothelium through denudation of the protective barrier that normally prevents this effect [4]. In addition, the glycocalyx participates in the coagulation process, as heparan sulfate and dermatan sulfate, two glycocalyx glycosaminoglycans, potentiate the activity of two anticoagulant enzymes: antithrombin III (by a factor of 100) and heparin cofactor II, respectively (reviewed in [5]).

Chappell and colleagues reported that volume loading (20 mL/kg of hydroxyethyl starch (HES) 130/0.4) in elective surgery induced a release of atrial natriuretic peptide (ANP) associated with an increase in serum glycocalyx components (syndecan-1 and hyaluronan), but that there was no effect of acute normovolemic hemodilution. The increase in these plasma components suggests degradation of the endothelial glycocalyx that could promote vascular leakage, leukocyte adhesion, and a procoagulant state. However, this conclusion remains speculative and pathophysiological, and the authors did not provide any direct evidence of functional glycocalyx alteration. The twofold increase in serum syndecan-1 concentration after volume loading was statistically significant but remained very far from the 10-fold increase reported during septic shock [6] and the 40-fold increase after aortic surgery [7].

The mechanisms of glycocalyx shedding were not directly investigated, but the authors propose that ANP could be implicated. This hypothesis is based on a previous experimental study by the same group [8] and on the observation of a simultaneous plasma increase in ANP and glycocalyx components during volume loading. Other mechanisms of shedding have been proposed [5], such as the activation of circulating enzymes (heparinase, neuraminidase, and pronase), the release of pro-inflammatory cytokines (tumor necrosis factor-alpha), or the direct effect of increased shear stress on the arterial endoluminal wall. Finally, reactive oxygen species could also induce glycosaminoglycan release. As albumin has anti-oxidant properties [9], one can speculate that volume resuscitation using human albumin could be less harmful than HES. A recent trial suggests that albumin could reduce mortality in the most severe cases of septic shock [10]. The mechanisms of the potential beneficial effect of albumin remain unclear. An interaction of albumin with the glycocalyx leading to a reduction of capillary leak is an attractive hypothesis but should be investigated further [11].

This translational study gives interesting insights into the deleterious effects of fluid infusion in critically ill patients. Indeed, accumulating clinical studies have pointed out that a positive fluid balance and weight gain are associated with a worse prognosis [12], but the mechanisms of this deleterious effect remain unknown. Fluid accumulation within the tissues could alter the diffusion of oxygen, promoting hypoxic cellular injury. Fluid resuscitation is recommended as a first-line treatment, particularly during the first hour [13], but the volume and type of fluid that should be used remain controversial. The choice of fluid should be integrated into a dynamic process (rescue, optimization, stabilization, and de-escalation) [14]. Recent trials have added to the confusion regarding the usefulness of an algorithm guiding hemodynamic management of shock [15] and the mean arterial pressure that should be targeted [16]. Three criteria should be present before fluid is given: evidence of fluid responsiveness, such as pulse pressure variation; the presence of signs of tissue hypoperfusion; and the absence of fluid overload. Stopping rules are the absence of fluid responsiveness, the absence of signs of tissue hypoperfusion, or the presence of fluid overload. We need to set clinically relevant goals. From that perspective, an integrative approach looking at simple indices of tissue perfusion and microcirculation could be useful at the bedside [17,18], although so far no interventional study has proven that such a strategy improves outcome. The protection or restoration of the glycocalyx might be considered an important goal [19]. The place of glycocalyx degradation components remains to be assessed, but they could represent a surrogate marker allowing physicians to identify vascular endothelial injury and ultimately to limit fluid overload. A glycocalyx-sparing strategy may decrease capillary leak and improve tissue oxygenation simply by giving the right amount of fluid, and perhaps by choosing fluids with no impact on glycocalyx degradation.
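The three criteria and stopping rules described above can be written out explicitly, which makes the decision logic easy to audit. The sketch below does this in Python and includes pulse pressure variation (PPV) as one possible fluid-responsiveness index; the 12% PPV cut-off is a commonly quoted threshold and is an assumption here, not a value taken from the commentary.

```python
"""Sketch of the fluid-administration logic described above.

The three 'give fluid' criteria and the stopping rules are encoded as booleans;
PPV is shown as one possible fluid-responsiveness index. The 12% cut-off and
the example pulse pressures are assumptions for illustration.
"""

def pulse_pressure_variation(pp_max_mmhg: float, pp_min_mmhg: float) -> float:
    """PPV (%) = 100 * (PPmax - PPmin) / mean(PPmax, PPmin) over a respiratory cycle."""
    return 100.0 * (pp_max_mmhg - pp_min_mmhg) / ((pp_max_mmhg + pp_min_mmhg) / 2.0)

def give_fluids(fluid_responsive: bool, tissue_hypoperfusion: bool, fluid_overload: bool) -> bool:
    """All three criteria must hold: responsive, hypoperfused, and not overloaded."""
    return fluid_responsive and tissue_hypoperfusion and not fluid_overload

if __name__ == "__main__":
    ppv = pulse_pressure_variation(pp_max_mmhg=52.0, pp_min_mmhg=40.0)  # hypothetical values
    responsive = ppv > 12.0  # assumed threshold
    print(round(ppv, 1), give_fluids(responsive, tissue_hypoperfusion=True, fluid_overload=False))
```

Any single stopping condition (not responsive, no hypoperfusion, or overload) makes the function return False, mirroring the stopping rules stated in the text.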

6.
Pain is experienced by many critically ill patients. Although the patient’s self-report represents the gold-standard measure of pain, many ICU patients are unable to communicate. In this commentary, we discuss the study findings comparing three objective scales for the assessment of pain in non-verbal patients and the importance of the tool selection process.

In the previous issue of Critical Care, Chanques and colleagues [1] evaluate the psychometric properties of three behavioral pain scales validated for use in non-communicative critically ill patients. The authors compare two scales recommended in the practice guidelines for pain management of adult ICU patients by the Society of Critical Care Medicine [2] - that is, the Behavioral Pain Scale (BPS) [3] and the Critical-Care Pain Observation Tool (CPOT) [4] - and a scale in routine use at the host institution, the Non-Verbal Pain Scale (NVPS) [5].

Assessing pain in non-communicative adult patients in the ICU must rely on the observation of behavioral indicators of pain. The selection of pain assessment tools for clinical practice must therefore be done with rigor. Indeed, an assessment tool can be shown to be valid only for a specific purpose, a given group of respondents, and a given context of care. All steps of scale development are important. The first step, selection of items and scale scoring, can be done using a combination of strategies, including an in-depth literature review, consultation of end users (for example, ICU clinicians and patients), direct clinical observation, and other sources. Content validation is a method of examining the content and relevance of the items and is useful for selecting or revising them. Once a scale has been developed, the reliability and validity of its use must be tested in the targeted patient group [6]. Reliability refers to the overall reproducibility of scale scores. The examination of inter-rater reliability is crucial to determine whether two or more trained raters reach similar scores using the same scale for the same patient at the same time. Validity refers to the interpretation of the pain scale scores and to their ability to indicate that the individual is actually in pain. Because behavioral pain scales aim to detect the presence of significant pain, the examination of criterion and discriminant validation is necessary. Criterion validation allows comparison between behavioral scores and the gold standard (that is, the patient’s self-report of pain). Discriminant validation refers to the ability of the pain scale to discriminate between conditions or procedures known to be painful or not, and to its ability to detect significant changes over time (responsiveness). Because validation is an ongoing process, it is imperative that a scale’s use be evaluated by independent groups of caregivers who were not involved in its development, in various ICU patient groups, and with translated versions of the scale. Finally, the ease of implementing such scales in ICU settings and the impact of their use on pain management practices and patient outcomes must be evaluated.

Evaluation of the psychometric properties of behavioral pain scales in ICU patients unable to self-report has recently been performed [7,8]. Of the eight pain scales developed for adult ICU patients, the BPS and the CPOT were found to be the most valid and reliable. The present study [1] is the first to compare the psychometric properties of these two pain scales, in addition to the NVPS, at rest and during noxious (for example, turning and endotracheal suctioning) and non-noxious (for example, simple repositioning) procedures. Both the BPS and the CPOT showed stronger psychometric properties than the NVPS in both intubated and non-intubated patients. These findings add arguments to the recommendations for the use of these two pain scales [2].

What are the next steps in relation to pain assessment in the ICU? First, there is a clear need to better evaluate the impact of pain assessment and management on patient outcomes. A few studies have shown that evaluating pain was associated with positive outcomes such as a shorter duration of mechanical ventilation, a shorter ICU length of stay, and fewer adverse events [9-11]. Whether better pain management in the ICU may reduce long-term negative consequences such as chronic pain and symptoms of post-traumatic stress disorder remains largely unknown. Second, there is a need for valid physiologic measures of pain, especially in ICU patients who are too sedated or are paralyzed and in whom behavioral responses cannot be observed. The use of pupillary reflex dilation has shown some promising findings [12-14]. Meanwhile, the best alternative measure for assessing pain in non-verbal patients remains the use of behavioral scales.

Assessing pain in non-communicative ICU patients is challenging. The BPS and the CPOT have shown the strongest psychometric properties for this purpose. These scales should be incorporated into pain management protocols to target the desired levels of analgesia, to optimize inter-professional practices, and to achieve better patient outcomes.
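To illustrate what an inter-rater reliability check involves, the sketch below computes Cohen's kappa for two raters scoring the same patients at the same time; weighted kappa or intraclass correlation coefficients are typically used for ordinal scale scores, but the simple dichotomized version conveys the idea. The ratings are hypothetical and are not data from Chanques and colleagues.

```python
"""Minimal example of an inter-rater reliability check for a behavioral pain scale.

Cohen's kappa quantifies chance-corrected agreement between two trained raters
scoring the same cases. The ratings below are hypothetical dichotomized scores
('pain' / 'no'), for illustration only.
"""
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    a = ["pain", "pain", "no", "no", "pain", "no", "no", "pain", "no", "no"]
    b = ["pain", "pain", "no", "pain", "pain", "no", "no", "pain", "no", "no"]
    print(round(cohens_kappa(a, b), 2))  # 0.8 for this toy example
```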

7.
While avoiding hypoxemia has long been a goal of critical care practitioners, less attention has been paid to the potential for excessive oxygenation. Interest has mounted recently in understanding the clinical effects of hyperoxemia during critical illness, in particular its impact following cardiac arrest. In this issue of Critical Care, Dell’Anna and colleagues review the available animal and human data evaluating the impact of hyperoxemia after cardiac arrest. They conclude that while hyperoxemia during cardiopulmonary resuscitation is probably desirable, it should probably be avoided during post-resuscitation care. These conclusions are in line with two broader themes in contemporary critical care: that less may be more, and that it is time to look beyond simply preventing short-term mortality towards longer-term outcomes.

Interest has mounted recently in understanding the clinical effects of hyperoxemia during critical illness, particularly its impact following cardiac arrest. In this issue of Critical Care, Dell’Anna and colleagues review the available animal and human data evaluating the impact of hyperoxemia after cardiac arrest [1].

When patients are acutely critically ill, initial interventions must be directed towards immediately life-threatening issues. Significant hypoxemia can quickly lead to cardiac arrest, so early aggressive supplemental oxygen is frequently provided either in response to or for prevention of dangerous reductions in the arterial partial pressure of oxygen. Once a patient is stabilized and the focus appropriately turns to urgent diagnosis and treatment, however, little effort is often made to minimize the amount of supplemental oxygen delivered. In fact, the majority of mechanically ventilated patients continue to receive excess supplemental oxygen throughout their ICU stay [2,3]. The adverse effects of excess oxygen are best understood in the brain and the systemic circulation. Hyperoxemia can induce cerebral vasoconstriction [4], neuronal cell death [5], and seizures [6,7]. In addition, hyperoxemia reduces the cardiac index and heart rate while increasing peripheral vascular resistance [8,9]. Given the major neurologic and hemodynamic challenges faced by many critically ill patients, hyperoxemia may be especially concerning in this population.

In their article, Dell’Anna and colleagues provide an excellent perspective on hyperoxemia following cardiac arrest [1]. After exploring the pathophysiology of hyperoxemia in the setting of ischemia–reperfusion brain injury, they detail animal and human studies of hyperoxemia following cardiac arrest. They conclude that while hyperoxemia is probably prudent during resuscitation, avoiding hyperoxemia is probably desirable in the post-resuscitation phase. Most importantly, however, they highlight the limits of our current knowledge – for example, is there a safe upper limit for the arterial partial pressure of oxygen? Is even a single episode of hyperoxemia detrimental? What is the role of carbon dioxide? – and wisely call for further study. Of note, a recent meta-analysis based on many of the same studies reviewed by Dell’Anna and colleagues found that while hyperoxemia following cardiac arrest was associated with an increased risk of in-hospital mortality (odds ratio: 1.40, 95% confidence interval: 1.02 to 1.93), the heterogeneity among studies precluded firm conclusions about the practice [10].

In focusing on the potential downsides of post-arrest hyperoxemia, Dell’Anna and colleagues hit on two important themes of current critical care research and practice. First, they implore us to consider the idea that doing more is not necessarily in our patients’ best interest – a concept that has taken hold recently in the critical care community. In the United States, the Critical Care Societies Collaborative’s contribution to the Choosing Wisely Campaign suggests consideration of the merits of doing less [11]. For example, influential studies have suggested benefits associated with less aggressive transfusion practices [12] or with less sedation during mechanical ventilation [13]. While blood transfusion and sedation are clearly needed for some cases of anemia and agitation, respectively, too much of either can cause harm. Critical care has evolved to include doing less, in many cases, as a thoughtful alternative to doing more – not only when the goals of care are palliative, but also when the goal is to increase the chances of survival or to improve other clinical outcomes.

A second theme addressed by this perspective is the focus on long-term rather than only short-term goals. In evaluating the impact of hyperoxemia after initial resuscitation, Dell’Anna and colleagues shift the focus beyond return of spontaneous circulation to include the impact on post-arrest outcomes. More broadly, this shift in focus can be seen across the critical care community as interest moves from solely evaluating short-term (in-hospital or 30-day) survival to longer-term survival (months rather than weeks or days) and alternative patient-centered outcomes such as quality of life and functional recovery [14,15]. Having improved our ability to achieve the traditional primary mission in critical care – keeping people alive – we now turn our knowledge, insights, and attention to optimizing what it means to be a survivor.

Hyperoxemia in the post-resuscitation phase following cardiac arrest is probably detrimental, yet the nuances of this association are as yet unknown. As Dell’Anna and colleagues state, further study is certainly needed to fine-tune our understanding. With such insight we will hopefully learn at what point ‘enough’ oxygen becomes ‘too much’ and what impact ‘too much’ has on short-term survival, long-term survival, and quality of life.
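The pooled estimate quoted above can be probed a little further: from an odds ratio and its 95% confidence interval one can recover the standard error of the log odds ratio and an approximate two-sided p-value. The sketch below does this for the OR of 1.40 (1.02 to 1.93) reported in the cited meta-analysis [10]; the normal approximation is an assumption of the sketch, not a statement about how that analysis was performed.

```python
"""Back-calculation of an approximate p-value from an odds ratio and its 95% CI."""
from math import erfc, log, sqrt

def p_from_or_ci(or_point: float, ci_low: float, ci_high: float) -> float:
    """Two-sided p-value from an OR and its 95% CI, assuming a normal log-OR."""
    se_log_or = (log(ci_high) - log(ci_low)) / (2 * 1.96)  # SE of ln(OR)
    z = log(or_point) / se_log_or                          # Wald statistic
    return erfc(abs(z) / sqrt(2))                          # two-sided tail probability

if __name__ == "__main__":
    print(round(p_from_or_ci(1.40, 1.02, 1.93), 3))  # ~0.039
```

The result, roughly p = 0.04, fits the picture painted above: an association that is nominally significant but, given the heterogeneity among studies, not strong enough to settle the question.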

8.
The study by Hincker and colleagues indicated that the perioperative use of rotational thrombelastometry (ROTEM™) could predict thromboembolic events in 90% of cases in non-cardiac surgery. Viscoelastic tests (VETs) - ROTEM™ and thrombelastography (TEG™) - are used mainly to predict bleeding complications. Most conventional coagulation tests, such as prothrombin time and activated partial thromboplastin time, can identify a disturbance in plasmatic hemostasis. However, the relevance of these assays is limited to the initiation phase of coagulation, whereas VETs are designed to assess the whole clotting kinetics and the strength of the whole blood clot, and thus better reflect the interaction between procoagulants, anticoagulants, and platelets. The first reports about VETs and the hypercoagulable state were published more than 25 years ago. Since then, several studies of varying quality and sample size have been published, sometimes with conflicting results. A systematic review of the hypercoagulable state and TEG™ indicated that further studies are needed before VETs can be recommended as a screening tool to predict postoperative thrombosis.

In a previous issue of Critical Care, Hincker and colleagues [1] used preoperative rotational thrombelastometry (ROTEM™, TEM International, München, Germany) to identify patients at high risk for postoperative thromboembolic events. Viscoelastic tests (VETs) were developed primarily to detect coagulopathy rather than thrombosis. Hartert [2] established thrombelastography (TEG™, Haemonetics, Braintree, MA, US) in 1948. Since that time, TEG™ has had different periods of popularity but has never been routinely used for perioperative coagulation management, except for liver transplantation in Pittsburgh in the 1980s [3].

ROTEM™ is a computerized point-of-care system similar to TEG™ technologies, but its measurements are more robust than those of TEG™, which makes ROTEM™ suitable for mobile bedside testing (for example, in the operating theatre or intensive care unit).

Bleeding and blood transfusion are associated with increased mortality and morbidity [4]. VETs are able to predict bleeding complications and to guide goal-directed coagulation treatment with fibrinogen concentrate, cryoprecipitate, prothrombin complex concentrate, platelets, and antifibrinolytic therapy instead of blind transfusion of fresh frozen plasma (FFP); such treatment avoids the negative side effects of FFP, such as transfusion-related acute lung injury, transfusion-associated circulatory overload, and infections [5]. Another benefit of ROTEM™ is the shorter turnaround time (10 to 15 minutes [5]) compared with conventional coagulation tests (45 to 90 minutes [6,7]).

Akay and colleagues [8] evaluated the efficacy of ROTEM™ for detecting hypercoagulability in cancer patients compared with healthy controls. The authors reported that in all four tests - extrinsic thrombelastometry, intrinsic thrombelastometry, fibrinogen thrombelastometry, and aprotinin thrombelastometry - the clot formation time was significantly shorter and the maximum clot firmness significantly higher than in healthy controls, indicating a risk for thrombosis. However, there are some problems in putting these findings into a clinical context; for example, no data on the incidence of thromboembolic events were provided.

In a cohort study, McCrath and colleagues [9] investigated 240 consecutive patients scheduled for non-cardiac surgery to identify, with TEG™, patients at increased risk of thrombosis. The patients were stratified into two groups: those with a maximum amplitude (MA) of greater than 68 mm were classified as hypercoagulable, and those with an MA of 68 mm or less as normal. Thromboembolic complications were significantly more frequent in patients with an MA of greater than 68 mm than in those with an MA of 68 mm or less (8.4% versus 1.4%, P = 0.0157). Myocardial infarction occurred only in patients with an increased MA of greater than 68 mm.

Cerutti and colleagues [10] described a TEG™-detected hypercoagulable state in adult living donors, despite a decreased platelet count, an increased international normalized ratio, and a normal activated partial thromboplastin time.

However, some other reports did not find a correlation between hypercoagulability identified by TEG™ and postoperative thrombotic complications [11,12]. Dai and colleagues [13] conducted a meta-analysis of several studies performed with TEG™ and suggested that TEG™ may be useful for predicting postoperative thromboembolic events. However, because TEG™ technologies have changed over the last 30 years, there is wide variability in TEG™ results across the different studies. In contrast to TEG™, ROTEM™ measurements are more robust and the device includes an automated pipette, resulting in more reproducible and precise results [14].
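The MA cut-off used by McCrath and colleagues, and the roughly sixfold difference in event rates they reported, can be expressed compactly as below. Because the commentary does not give the raw group sizes, only the ratio of the reported percentages is computed; the example MA value is hypothetical.

```python
"""Illustration of the TEG maximum-amplitude (MA) cut-off described above.

A patient is labelled hypercoagulable when MA exceeds 68 mm; the relative risk
is computed from the event rates quoted in the commentary (8.4% versus 1.4%).
Raw group sizes are not given, so only the ratio of percentages is shown.
"""

def is_hypercoagulable(max_amplitude_mm: float, cutoff_mm: float = 68.0) -> bool:
    """MA > 68 mm was classified as hypercoagulable in the cohort described above."""
    return max_amplitude_mm > cutoff_mm

def relative_risk(rate_exposed: float, rate_unexposed: float) -> float:
    """Ratio of thromboembolic event rates, expressed as proportions."""
    return rate_exposed / rate_unexposed

if __name__ == "__main__":
    print(is_hypercoagulable(71.5))               # True (hypothetical MA value)
    print(round(relative_risk(0.084, 0.014), 1))  # ~6.0
```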

9.
In the previous issue of Critical Care, Yu and colleagues report increased morbidity and mortality in patients after myocardial infarction undergoing prophylactic intra-aortic balloon pump support before coronary artery bypass graft surgery. The impact of prophylactic intra-aortic balloon pump implantation before coronary artery bypass graft surgery remains controversial. However, Yu and colleagues' findings prompt further discussion and substantiate the need for a prospective randomized controlled trial on this subject.

In their well-conducted retrospective analysis, Yu and colleagues investigate the impact of preoperative insertion of an intra-aortic balloon pump (IABP) on postoperative outcome in patients undergoing coronary artery bypass graft (CABG) surgery after acute myocardial infarction [1]. The rationale for prophylactic IABP use is to decrease left ventricular afterload and to improve coronary perfusion, and thus to reduce the risk of perioperative low cardiac output syndrome. In this study, preoperative IABP was associated with increased morbidity, higher transfusion requirements, and a longer postoperative ICU stay.

The use of prophylactic IABP in patients undergoing CABG is based on data generated by Christenson and colleagues in the late 1990s and early 2000s showing positive effects of prophylactic IABP insertion on postoperative short-term and long-term survival [2-5]. However, these positive results have been challenged in the past few years by a number of retrospective analyses showing conflicting results [6-9]. A recent prospective randomized study by Ranucci and colleagues showed no positive effect of prophylactic IABP in patients with severely reduced left ventricular function undergoing CABG [10]. However, this study had several limitations, which we have addressed elsewhere [11].

The current study by Yu and colleagues revealed an association between preoperative IABP and a prolonged ICU stay. This result is not surprising, as postoperative weaning from an IABP, which usually takes 24 to 48 hours, has to be performed under intensive care monitoring. The composite morbidity endpoint was reached more often in the IABP group. The authors also observed increased transfusion requirements in the IABP group, which might, as the authors state, be related to mechanical thrombocyte consumption and hemolysis rather than to bleeding complications from IABP insertion. Increased transfusion of erythrocytes has been associated with poor short-term and long-term outcome after cardiac surgery in previous studies [12,13]. Interestingly, neither increased transfusion nor the other parameters investigated in this study influenced the overall length of hospital stay or in-hospital mortality. In fact, the excellent short-term mortality rates in both groups (control group, 1.0%; IABP group, 2.5%) suggest that this study population did not represent a truly high-risk patient collective.

On the one hand, Yu and colleagues' study adds some interesting results; on the other, numerous questions about prophylactic IABP use remain unanswered. The current uncertainty about prophylactic IABP use in patients undergoing CABG arises from several aspects. In all studies conducted so far, the criteria for prophylactic IABP insertion were not well defined and were based on subjective decisions by the treating physicians. Accordingly, the term high-risk patient was based on very individual criteria rather than on established tools for perioperative risk estimation (for example, the EuroSCORE or the STS Risk Score). The optimal timing of IABP insertion is still not known, although some authors have shown benefit from early insertion [3,9]. However, the ideal length of temporization of patients with myocardial infarction before CABG is also subject to individual perception rather than to clinical evidence [14-16]. Finally, the value of prophylactic IABP has to be judged against the best available alternative therapy.

From a pathophysiological point of view, we suppose that the effects of IABP (increased coronary perfusion, reduction of left ventricular afterload) might have the greatest benefit in patients with critically reduced coronary perfusion and (temporarily) severely reduced left ventricular function; that is, in patients with acute myocardial infarction who need to be temporized prior to CABG. In some institutions, including our clinic, prophylactic IABP in these patients is initiated in the catheter laboratory or, at the latest, after admission to the ICU. The patients, if hemodynamically stable and without ongoing symptoms, are then temporized until cardiac enzymes are declining, which usually takes 2 to 3 days. However, given the paucity of good and contemporary data on the value of prophylactic IABP, this clinical routine lacks evidence.

The best way to obtain evidence and to generate reliable guidelines for the prophylactic use of IABP in patients undergoing CABG is an adequately powered prospective randomized controlled trial with a well-defined study population, a standardized protocol for perioperative care and IABP handling, and a clinically relevant and appropriate primary endpoint (for example, 30-day mortality).
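To give a sense of what 'adequately powered' would mean for such a trial, the sketch below applies the standard two-proportion sample-size formula. Plugging in the short-term mortality rates quoted above (1.0% and 2.5%) purely for illustration, with an assumed two-sided alpha of 0.05 and 80% power, yields on the order of 1,200 patients per arm, which illustrates why the authors call for an adequately powered, prospectively designed trial.

```python
"""Rough sample-size estimate for a 30-day mortality endpoint.

Uses the normal-approximation formula for comparing two proportions. The event
rates below (1.0% and 2.5%) are borrowed from the short-term mortality figures
quoted in the commentary purely for illustration; alpha = 0.05 (two-sided) and
power = 0.80 are assumptions.
"""
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients needed per arm to detect a difference between proportions p1 and p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

if __name__ == "__main__":
    print(n_per_group(0.010, 0.025))  # roughly 1,200 patients per arm
```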

10.
Acute kidney injury (AKI) is a common problem, especially in critically ill patients. In Critical Care, Kolhe and colleagues report that 6.3% of 276,731 patients in 170 intensive care units (ICUs) in the UK had evidence of severe AKI within the first 24 hours of admission to the ICU. ICU and hospital mortality as well as length of stay in hospital were significantly increased. In light of this serious burden on individuals and on the health system in general, the following commentary discusses the current state of knowledge of AKI in the ICU and calls for more attention to preventive strategies.

Acute kidney injury (AKI) has been the focus of numerous publications and research projects in the past 5 years [1-4], including the study by Kolhe and colleagues [1] in Critical Care. Interestingly, as facts about AKI and its impact on prognosis emerged, areas of uncertainty and controversy became apparent [5,6]. It is now well known that AKI affects a large number of patients (although the exact incidence is variable), that AKI per se is associated with an increased risk of death, and that patients who need renal replacement therapy (RRT) have a higher risk of dying [2-4,7,8]. There is also evidence that AKI is a dynamic process, with many patients progressing through different stages of severity, and that early AKI appears to have a better prognosis than late AKI [7]. Numerous studies have identified factors that influence the prognosis of patients with AKI, including inherent patient characteristics as well as modifiable factors (for example, nephrotoxic drugs, fluid status, and haemodynamics) and non-patient-related aspects such as the size of the ICU and the type of hospital [2-4].

Despite this progress, several areas in the field of AKI remain uncertain, the issue of RRT being a particularly controversial one [5]. There is wide variation in clinical practice regarding the mode, indication, timing, dose, and provision of RRT [9]. Despite a widely held perception that a continuous mode may be better for critically ill patients with AKI, especially those with haemodynamic instability, clinical studies have failed to show a consistent survival advantage for patients on continuous RRT compared with intermittent haemodialysis [10]. The Hemodiafe study (a randomized controlled trial comparing intermittent haemodialysis with continuous haemodiafiltration in 21 centres in France) not only showed similar mortality rates in both groups but also confirmed that nearly all patients with AKI as part of multiple-organ dysfunction syndrome could be treated with intermittent haemodialysis provided strict guidelines were used to achieve tolerance and metabolic control [11].

In a landmark study, Ronco and colleagues [12] made a strong case for dosing RRT (the more the better). However, when challenged in subsequent studies, this conclusion could not always be confirmed. Most recently, the Acute Renal Failure Trial Network study demonstrated, in a randomized controlled multicentre fashion, that intensive renal support in critically ill patients with AKI did not decrease mortality, improve recovery of kidney function, or reduce the rate of non-renal organ failure compared with less intensive therapy [13].

In view of these uncertainties about 'best clinical practice', it is not surprising that the mortality associated with AKI in critically ill patients has not changed substantially during the past few decades, despite increasing international efforts and advances in medical knowledge [14]. The lack of a uniform definition of AKI and the lack of evidence-based guidelines have been blamed for some of the inconsistencies and poor progress. The formation of the international AKI Network group, the design of the RIFLE criteria and later the AKI classification, and plans for streamlined, focussed research are major steps in the right direction to tackle the problems associated with established AKI [6].

The study by Kolhe and colleagues in Critical Care illustrates that we may also need to focus our attention on the time before AKI has developed. Kolhe and colleagues show that 6.3% of 276,731 patients admitted to 170 ICUs in the UK during a 10-year period had evidence of severe AKI (serum creatinine ≥ 300 μmol/L and/or urea ≥ 40 mmol/L) during the first 24 hours in the ICU [1]. Their ICU and hospital mortality as well as their hospital length of stay were significantly increased. Moreover, among survivors, the requirement for in-hospital care was even higher. The study also showed that a perfect mortality prediction model is still missing. As addressed by the authors, the study has some weaknesses (an arbitrary definition of severe AKI, the potential risk that some patients classified as having AKI in fact had advanced chronic kidney disease, and no information on the number of patients treated with RRT). However, there are important messages: 6.3% of all ICU patients were admitted with severe derangement of renal function. The exact reasons for renal dysfunction are not given and may not be known, but the question remains whether AKI could have been prevented prior to transfer to the ICU. Chertow and colleagues [15] previously showed that even small increases in serum creatinine of ≥ 0.3 mg/dL (≥ 26 μmol/L) whilst in hospital were independently associated with an increased risk of dying. Given the serious implications of any degree of AKI for the individual and the health system, and the lack of curative therapies for AKI, it may be necessary to shift our attention more to the actual way we look after patients at risk of AKI, that is, how we recognise high-risk patients and prevent AKI. This call for 'attention to basics' includes general measures such as education and training of nursing and medical staff, emphasis on the importance of the clinical examination, attention to drugs, drug dosing and nutrition, and early consultation with specialists in the field. The success of these simple non-technical steps depends on combined efforts by everybody looking after patients in hospital. The overall action plan to reduce the burden of AKI needs to incorporate these preventive strategies as well as regular review of clinical practice, in parallel with international collaboration and focussed research into drug therapies and technologies.
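The admission definition of severe AKI used by Kolhe and colleagues, and the small-change threshold highlighted by Chertow and colleagues, are simple enough to encode directly, as the sketch below does. The creatinine unit conversion (1 mg/dL = 88.4 μmol/L) is a standard factor and is the only number not taken from the commentary.

```python
"""Sketch of the 'severe AKI on admission' cut-offs quoted above, plus the
small-creatinine-change threshold, with a standard unit conversion.
"""
CREATININE_UMOL_PER_MGDL = 88.4  # standard conversion factor for creatinine

def severe_aki_on_admission(creatinine_umol_l: float, urea_mmol_l: float) -> bool:
    """Severe AKI within the first 24 h: creatinine >= 300 umol/L and/or urea >= 40 mmol/L."""
    return creatinine_umol_l >= 300.0 or urea_mmol_l >= 40.0

def small_creatinine_rise(baseline_mg_dl: float, current_mg_dl: float) -> bool:
    """A rise of >= 0.3 mg/dL (~26 umol/L) was independently associated with mortality."""
    return (current_mg_dl - baseline_mg_dl) >= 0.3

if __name__ == "__main__":
    print(round(0.3 * CREATININE_UMOL_PER_MGDL, 1))                            # 26.5 umol/L
    print(severe_aki_on_admission(creatinine_umol_l=320.0, urea_mmol_l=22.0))  # True
    print(small_creatinine_rise(baseline_mg_dl=1.0, current_mg_dl=1.4))        # True
```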

11.
12.
Kao et al. have reported in Critical Care the histological findings of 101 patients with acute respiratory distress syndrome (ARDS) undergoing open lung biopsy. Diffuse alveolar damage (DAD), the histological hallmark of ARDS, was present in only 56.4% of cases. The presence of DAD was associated with higher mortality. Evidence from this and other studies indicates that the clinical criteria for the diagnosis of ARDS identify DAD in only about half of cases. Moreover, there is evidence that the clinical course and outcome of ARDS differ between patients with DAD and patients without DAD. The discovery of biomarkers for the physiological (increased alveolocapillary permeability) or histological (DAD) hallmarks of ARDS is thus of paramount importance.

Kao et al. [1] have made an important contribution to our knowledge of the histological changes associated with acute respiratory distress syndrome (ARDS) and of the potential role of open lung biopsy (OLB) in the diagnosis and management of ARDS. The authors studied the histological findings in OLB specimens from 101 patients with a diagnosis of ARDS over 15 years. Indications for OLB included suspicion of a noninfectious cause that could benefit from corticosteroid treatment. Notwithstanding the obvious selection bias, histological information from OLB or autopsy tissue samples is of cardinal importance for a better understanding of the pathogenesis and management of ARDS. Kao et al.'s [1] main findings are that diffuse alveolar damage (DAD) was present in only 56.4% of patients with ARDS, and that the presence of DAD was associated with a worse outcome. The results of OLB, in accordance with other studies [2-5], changed management in a substantial proportion of patients.

ARDS is a syndrome of acute respiratory failure due to pulmonary inflammation developing after a known risk factor, leading to increased endothelial and epithelial permeability, pulmonary edema, hypoxemia, loss of aerated tissue, decreased lung compliance, and bilateral opacities on the chest X-ray. The histological correlate of ARDS is DAD, characterized by lung edema, inflammation, hemorrhage, hyaline membranes, and alveolar epithelial cell injury [6-8].

The agreement between the clinical diagnosis of ARDS according to commonly accepted criteria [7, 8] and the presence of DAD at histological examination is poor, ranging from 13 to 58% in studies using OLB [2-5, 9, 10] and from 45 to 88% in autopsy studies [11-16]. In two autopsy studies using the Berlin definition of ARDS [8], DAD was present in only 45% of patients diagnosed with ARDS [11, 12].

Conditions identified in patients without DAD include, among others, organizing pneumonia, eosinophilic pneumonia, pulmonary embolism, drug-induced pneumonitis, alveolar hemorrhage, lymphangitis, malignancy, and vasculitis. Of note, in one study [12], 14% of patients with ARDS did not have pathological changes, probably representing cases of diffuse atelectasis that appear clinically as ARDS but resolve as the lungs are inflated at high pressure prior to fixation.

Many of the conditions identified at histological examination do not share the same pathogenesis, treatment, and biomarkers as DAD. The failure of previous studies of treatments for ARDS has thus been attributed in part to the lack of a reliable definition of ARDS designating a homogeneous phenotype.

The importance of identifying a homogeneous phenotype in ARDS is highlighted by the finding in the study by Kao et al. [1] of different mortality rates in patients with DAD and in patients without DAD (71.9% versus 41.5%). In a recent study of 83 patients with ARDS undergoing OLB, Guerin et al. [9] reported higher airway plateau pressure, worse oxygenation, and (without reaching statistical significance) higher mortality in patients with DAD than in patients without DAD. Together, these findings [1, 9] suggest that the histological finding of DAD defines a specific population of patients within the syndrome of ARDS.

The heterogeneity of conditions designated by the same clinical diagnostic criteria, as well as the data suggesting that the clinical course differs in patients with DAD and in patients without DAD, thus underline the importance of identifying patients with specific clinicopathological phenotypes among those diagnosed with ARDS.

Another challenge for our understanding of ARDS is the identification of the mechanisms explaining why some patients with a clinical risk factor go on to develop ARDS whereas others do not. Ideally, these at-risk patients should be recognized early in their course, before ventilatory support is required. Biomarkers should thus be determined in the blood rather than in bronchoalveolar lavage fluid.

In conclusion, after almost five decades of research [6], ARDS continues to pose challenges for physicians and scientists. The discovery of markers for the physiological (e.g., alveolocapillary hyperpermeability) or histological (hyaline membrane) hallmarks of ARDS is of great importance for the identification of a specific phenotype within ARDS.

13.
Iron as an element is a double-edged sword, essential for living but also potentially toxic through the generation of oxidative stress. The recent study by Chen and colleagues in Critical Care reminds us of this elegantly. In a mouse model of acute lung injury, they showed that silencing hepcidin (the master regulator of iron metabolism) locally in airway epithelial cells aggravates lung injury by increasing the release of iron from alveolar macrophages, which in turn enhances pulmonary bacterial growth and reduces the macrophages' killing capacity. This work underscores that hepcidin acts not only systemically (as a hormone) but also locally to regulate iron metabolism. This opens areas of research not only for sepsis treatment but also for the treatment of iron deficiency or anaemia, since local and systemic iron regulation appear to be independent. Iron is essential for living - this is the first lesson from the interesting work by Chen and co-workers published in Critical Care [1]. The second and most important lesson is that hepcidin, the master regulator of iron metabolism at the scale of the whole organism [2,3], also plays an important role locally, at the organ level - in this study, more specifically in the lung. Iron is crucial for every kind of living organism, including plants, bacteria, animals and humans, to transport oxygen (through haemoglobin in animals and humans) and to produce energy (through electron transfer in the mitochondrial respiratory chain). In addition, iron is essential for many metabolic processes, including DNA repair and replication, regulation of gene expression and so on. Many pathogens are thus highly dependent on iron supply and use different pathways to acquire or even steal iron from the environment or from their host [4,5]. These functions of iron are mainly based on its ability to donate electrons, which also allows the production of free radicals that are potentially toxic for the host or the invaders. Iron is thus both essential for living and potentially toxic. This is why its metabolism is finely tuned by the hormone hepcidin. Hepcidin was discovered in the early 2000s during the search for new antimicrobial peptides [6,7]. It is now clear that hepcidin plays a key role in iron homeostasis: it is synthesized by the liver and acts as a hormone at different sites (mainly the duodenal enterocytes, where it prevents dietary iron absorption, and the macrophages of the reticulo-endothelial system, where it prevents iron release from stores) [2,3]. Hepcidin acts through binding to ferroportin, the only known iron exporter [8]. Besides its role in systemic iron metabolism regulation, hepcidin may also act locally, as pointed out by Chen and colleagues in their work [1]. They used an elegant mouse model to silence hepcidin expression in airway epithelial cells. This knockdown reduced hepcidin expression (at the RNA and protein levels) in these cells and resulted in higher ferroportin expression and lower iron content in alveolar macrophages. This indicates that hepcidin produced by airway epithelial cells acts locally on alveolar macrophages as a paracrine hormone. Indeed, no difference was observed in serum iron or in spleen iron stores, indicating no modification of systemic iron. 
Using a model of acute lung injury secondary to peritonitis, Chen and colleagues show that the knockdown of hepcidin aggravates the acute lung injury (with increased mortality), probably through two main mechanisms: an exacerbation of lung sepsis, owing to impaired phagocytic activity of alveolar macrophages (their reduced iron content may impair their ability to generate oxidative stress) and possibly to increased iron availability for pathogens, leading to a higher bacterial load in the lungs [9]; and an increase in alveolar damage due to iron toxicity secondary to its release from macrophages (although this remains hypothetical). The increase in pulmonary bacterial colonization in the hepcidin-knockdown mice may also indicate that hepcidin has a direct antimicrobial effect. Indeed, hepcidin is a member of the β-defensin peptide family. Defensins are small, cysteine-rich peptides active in host defence against bacteria but also against some viruses and fungi. They are produced by cells of the immune system and act by binding to the pathogen membrane to form pores that kill the pathogen. They play an important role in immunity in the lung [10]. This work underscores the role of hepcidin in lung protection against pathogens and may also help us understand that iron metabolism regulation is compartmentalized. Locally, in this case in the lung, the organism prevents iron release and produces hepcidin to help fight invaders [11]. But iron is needed for erythropoiesis, and systemic iron infusion may be useful rather than deleterious, even in the presence of sepsis, thanks to the regulation of hepcidin synthesis at the systemic level [12]. This separate regulation could allow iron treatment of critically ill patients, even in the presence of inflammation and/or sepsis [13]. In the future, regulation of iron metabolism through hepcidin manipulation could be carried out at two levels: locally, to enhance host defence (through induction of hepcidin synthesis), and systemically, to prevent anaemia of inflammation (through hepcidin inhibition). Iron and hepcidin are really essential for living!  相似文献

14.
The clinical presentation of severe infection with generalized inflammation is similar, if not identical, to systemic inflammation induced by sterile tissue injury. Novel models and unbiased technologies are urgently needed for biomarker identification and disease profiling in sepsis. Here we briefly review the article by Kamisoglu and colleagues in this issue of Critical Care, which compares metabolomics data from different studies to assess whether responses elicited by endotoxin recapitulate, at least in part, those seen in clinical sepsis. Our inability to differentiate sepsis from non-infectious inflammatory states has negatively impacted research developments in the diagnosis, prognosis and treatment of sepsis. Compounding the problem, biomarker identification and disease profiling are hampered by our reliance on a theoretical construct that assumes disseminated infection stimulates pattern recognition receptors, such as Toll-like receptor (TLR)4 in response to lipopolysaccharide, to generate clinically recognizable, biochemically defined common pathways of response in the host. The same mediators that cause generalized inflammation and harm in sepsis are required for host defence, numerous pathways are highly redundant, and receptors that distinguish self from non-self are also needed for the recognition of danger signals. The result is that many of the biomarkers used in the ICU are neither sensitive nor specific enough to inform on specific pathophysiologic processes. Recent advances in ‘omic’ technologies have opened new opportunities for sepsis research. In a recent article published in Critical Care, Kamisoglu and colleagues [1] used metabolomics to assess whether responses elicited by endotoxin recapitulate, at least in part, those seen in clinical sepsis [2]. The study is primarily a retrospective in silico analysis of metabolomes obtained from subjects who participated in an experimental endotoxemia study [3] and from patients enrolled in the Community Acquired Pneumonia and Sepsis Outcome and Diagnostics (CAPSOD) study who, after independent audit, fulfilled criteria for sepsis and outcomes [4]. Patients in the CAPSOD cohort were classified as having uncomplicated sepsis, severe sepsis, septic shock, or non-infected systemic inflammatory response syndrome (‘ill’ controls with non-infectious SIRS). Metabolic profiling of plasma in both studies was performed using non-targeted mass spectrometry by the same commercial provider. In contrast to targeted approaches that profile a small number of known metabolites, untargeted approaches (without restriction to particular compounds) have the advantage of allowing identification of metabolic fingerprints (that is, multiple biomarkers that form a biopattern) associated with particular endophenotypes [5, 6]. The study offers two important insights: the clinical relevance of endotoxemia in sepsis, and the applicability of metabolomics as an analytical tool in sepsis. Because lipopolysaccharide acts through TLR4, endotoxin challenge is a model of TLR4 agonist-induced SIRS [2]. The contribution of TLR signaling to sepsis will be difficult to unravel, as these receptors are likely to be activated by both primary (pathogen-related) and secondary (host-related) events. 
A discussion of the merits and limitations of comparing high-throughput data from different studies is fundamental to the use of genomic technologies in critical illness. Metabolomics relies heavily on mass spectrometry (MS) and nuclear magnetic resonance (NMR) as parallel technologies that provide an overview of the complete set of small-molecule chemicals found within a biological sample (the metabolome) [4]. The main advantage of MS is sensitivity - it can routinely detect analytes in the femtomolar to attomolar range. Coupling MS with separation techniques - liquid chromatography (LC) or gas chromatography (GC) - further enhances its detection ability. The major weakness of MS is quantification. In contrast, in NMR the peak area of a compound is directly related to the concentration of specific nuclei, making quantification very precise. NMR is, however, much less sensitive. Therefore, a single analytical tool is unlikely to detect all possible metabolites, suggesting that a combination of techniques will be required to assign metabolites and patients to specific classes [3]. Moreover, of all the systems biology disciplines, metabolomics is closest to the phenotype and is profoundly affected by environmental factors; its dynamic changes suggest that selection of appropriate time points for biomarker identification will be critical [4, 7]. In the study by Kamisoglu and colleagues, biochemical profiles obtained by GC-MS and LC-tandem MS provided information on a total of 366 metabolites. Since no significant differences in plasma metabolites were identified between subgroups of sepsis survivors, the different groups were collapsed into a single group. While the loss of resolution (small number of metabolites) and discordant time points make it difficult to classify clinically relevant endophenotypes, the approach selected by Kamisoglu and colleagues of pooling ‘similar’ patients into a single group is extensively used in systems biology to increase the statistical power to detect differences between groups [8, 9]. Cluster analysis, pooling metabolite groups, also enhances the likelihood of finding clinically relevant class-specific signatures [10, 11]. Metabolic data from sepsis survivors and non-survivors were also pooled to compare patients with sepsis and non-infected SIRS. Despite individual variability, the metabolic responses to endotoxin were similar to those seen in sepsis survivors. The authors rationalized that similar metabolomes may reflect TLR4 agonist-induced SIRS or common processes of recovery. They were also able to identify specific features that differentiate patients with SIRS from both endotoxin and sepsis patients. While one of the strengths of this study is the combination of clinical data, severity assignment and metabolomics, an important limitation is the assumption that the absence of detectable differences indicates groups are comparable. Also, restricting the intra-study comparative analyses to metabolites that change significantly between conditions maximizes the likelihood of detecting overall correlations between studies [9]. After stratification of sepsis patients based on 28-day survival, the direction of change of 21 of 23 metabolites was the same in endotoxemia and sepsis survivors (compared with non-survivors). As in other studies, the metabolite group that differentiated surviving from non-surviving CAPSOD patients was the acylcarnitines [12]. 
In the study by Kamisoglu and colleagues, comparison between studies was possible because the same proprietary extraction protocol was used for sample preparation in both studies, minimizing the associated variability. Standard procedures consistent across laboratories will be required if we want to deposit and compare raw (meta)data across studies. This is the goal of the human metabolome project, which aims to identify, quantify, and catalogue all metabolites in human tissues and biofluids [13]. In the future, integration networks using different types of ‘omic’ data will be combined to allow more thorough and comprehensive modeling of complex traits [14]. Overall, studies such as the one conducted by Kamisoglu and colleagues are pioneering in that they set the precedent for how we will integrate, compare, analyze and generalize results from high-throughput technologies.  相似文献
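To make the cross-study comparison described above concrete, here is a minimal sketch of a direction-of-change (sign-concordance) check between two metabolite fold-change tables, the kind of comparison behind the "21 of 23 metabolites" observation. All metabolite names and values are invented for illustration; this is not the CAPSOD or endotoxemia data, nor the authors' analysis pipeline.

```python
import numpy as np

# Hypothetical log2 fold-changes (survivors vs non-survivors) for the same
# metabolites measured in two independent studies; values are invented.
metabolites = ["acetylcarnitine", "octanoylcarnitine", "phenylalanine", "citrulline"]
study_a = np.array([-0.9, -1.2, 0.6, -0.4])   # e.g. an endotoxemia cohort
study_b = np.array([-0.7, -1.5, 0.8, 0.1])    # e.g. a sepsis cohort

# Direction-of-change concordance: metabolites whose fold-change has the same
# sign in both studies.
concordant = np.sign(study_a) == np.sign(study_b)
print(f"concordant direction: {concordant.sum()}/{len(metabolites)}")

# Cross-study correlation of effect sizes as a simple overall similarity check.
print(f"Pearson r = {np.corrcoef(study_a, study_b)[0, 1]:.2f}")
```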

15.
In trauma patients, TEG® and ROTEM® allow prediction of massive transfusion requirement and mortality, and the creation of goal-directed, individualized coagulation algorithms that may improve patient outcome. This outcome benefit has been shown for cardiac surgery in prospective randomized trials. For trauma, only non-randomized studies have been performed. Nevertheless, TEG® and ROTEM® are highly promising monitoring techniques to guide coagulation management in all types of major bleeding, including trauma. The review by Da Luz and colleagues in this issue of Critical Care highlights the progress provided by thrombelastography (TEG®) and thrombelastometry (ROTEM®) in diagnosing and monitoring the coagulation system in trauma patients [1]. Their systematic review includes 55 studies and over 12,000 patients, and they find that early abnormalities in TEG®/ROTEM® predict the need for massive transfusion and mortality. However, they conclude that 'Effects on blood product transfusion, mortality and other patient-important outcomes remain unproven in randomized trials' [1]. Formally this is of course correct; however, which monitoring system or laboratory value in medicine has ever itself improved relevant patient outcomes such as length of ICU or hospital stay, complications, treatment costs or even mortality? The answer is none, simply because a monitoring system or a laboratory value provides information allowing risk assessment but lacks any therapeutic potential. TEG®/ROTEM® information, however, allows the creation of goal-directed, individualized treatment algorithms that may improve patient outcome. This has been shown for TEG®/ROTEM®-based algorithms in prospective randomized studies in cardiac surgery [2,3]. In liver transplantation such algorithms have even become standard. And the benefits are impressive: reduced transfusion needs, fewer complications, shorter ICU and hospital stays, better survival and reduced treatment costs [2,3]. TEG®/ROTEM®-based algorithms have also been successful in improving patient outcome in trauma, although these studies were not prospective and randomized [4-6]. Nevertheless, the recommendation to use TEG®/ROTEM® in the treatment of severely injured trauma patients was upgraded from 2C to 1C in the 2013 European Trauma Treatment Guidelines, with a plea to implement goal-directed, individualized treatment algorithms and to monitor treatment adherence [4]. TEG®, and even more so ROTEM®, point to the most critical element of coagulation within approximately 5 to 10 minutes [7] (compared with traditional laboratory analyses, with turnaround times of consistently 60 minutes or more [1]), and allow diagnosis of even mild forms of hyperfibrinolysis that are not detectable by standard laboratory tests but are associated with increased mortality [8]. In the modern emergency room, the first blood sample from a severely injured patient thus goes immediately into a ROTEM® device, the blood gas analyser, the central laboratory and the blood bank, and 1 gram of tranexamic acid is administered immediately thereafter. This provides within 10 minutes a baseline analysis of the coagulation situation and allows goal-directed, specific treatment of the most critical deficit with coagulation factor concentrates [4-6]. The ROTEM® analysis is repeated soon after to assess treatment success and to capture the dynamic evolution of the coagulation situation. 
This concept allows the patient to be treated sufficiently with the lowest dose of coagulation factors, aiming at low-normal coagulability and avoiding hypercoagulability. This is important both for limiting treatment costs and for avoiding thrombotic complications. The success of the above concept has been shown in many studies [1-6,9], although a 'perfectly' designed prospective randomized double-blind multicentre study has not yet been performed to formally 'prove' its superiority. Is this a problem? My personal answer is: maybe. Sure, it would be nice to have such a study that would satisfy experts on a theoretical level. However, having served on numerous study-design committees for such studies, I have to admit that the ideal study design is extremely difficult to find. One crucial question is the definition of the control group. Is the control group also treated according to an individualized goal-directed algorithm, or simply by a 1:1(:1) (red blood cell:plasma(:platelet)) transfusion regimen? Does the control group receive only labile blood products, or also factor concentrates to treat coagulation abnormalities if present? The choice between a simple 1:1:1 transfusion regimen and an individualized goal-directed algorithm has become particularly difficult after a recent prospective randomized study showed a more than two-fold increased mortality (32%) with the 1:1:1 transfusion regimen compared with a traditional laboratory-based, individualized goal-directed treatment algorithm (14%) [9]. In addition, even if such a 'perfect' study were to be performed, its interpretation would be extremely difficult. In the case of a lack of a significant outcome difference, the discussion would be that too few patients were included and the study was thus underpowered, or that the two treatment regimens were not sufficiently different from one another. In the case of a significant difference, the interpretation would be even more difficult and controversial: is the difference due to a different delay in the specific treatment of coagulopathy, to the fact that in one of the arms coagulation factors were used whereas in the other more blood products were used, or, very generally, because one of the algorithms was not good enough to provide a good outcome? This is by no means to say that we should stop doing outcomes research on coagulation management in severely injured patients, but rather that we should not dismiss the existing evidence in favour of TEG®/ROTEM®-based goal-directed individualized coagulation algorithms on the basis that we lack the ultimate 'perfect' study. As a matter of fact, today all hospitals should have an individualized and goal-directed coagulation algorithm [4]: don't wait - act now!  相似文献
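To illustrate only the structure of such a goal-directed, individualized algorithm, here is a minimal sketch. The ROTEM® assay names used (EXTEM clotting time, FIBTEM A10, maximum lysis) are standard terminology, but every threshold and suggested intervention below is an invented placeholder; this is not the published trauma algorithms [4-6] and not clinical guidance.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RotemResult:
    extem_ct_s: float     # EXTEM clotting time (s)
    fibtem_a10_mm: float  # FIBTEM clot amplitude at 10 min (mm)
    extem_ml_pct: float   # EXTEM maximum lysis (%)

def goal_directed_suggestions(r: RotemResult) -> List[str]:
    """Map a viscoelastic result to candidate interventions (placeholder thresholds)."""
    suggestions = []
    if r.extem_ml_pct > 15:    # placeholder: marked lysis suggests hyperfibrinolysis
        suggestions.append("consider antifibrinolytic therapy")
    if r.fibtem_a10_mm < 10:   # placeholder: low clot firmness suggests fibrinogen deficit
        suggestions.append("consider fibrinogen substitution")
    if r.extem_ct_s > 90:      # placeholder: prolonged initiation suggests factor deficit
        suggestions.append("consider clotting factor substitution")
    return suggestions or ["no specific deficit flagged; reassess after intervention"]

# Example: low FIBTEM amplitude with prolonged EXTEM CT.
print(goal_directed_suggestions(RotemResult(extem_ct_s=110, fibtem_a10_mm=7, extem_ml_pct=4)))
```

The point of the sketch is the treat-and-reassess loop described in the text, not the specific numbers.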

16.
Brain injuries caused by stroke are common and costly in human and resource terms. The result of stroke is a cascade of molecular and physiological derangement, cell death, damage and inflammation in the brain. This, together with infection if present, commonly results in patients having an increased temperature, which is associated with worse outcome. The usual clinical goal in stroke is therefore to reduce temperature to normal, or below normal (hypothermia) to reduce swelling if brain pressure is increased. However, research evidence does not yet conclusively show whether or not cooling patients after stroke improves their longer-term outcome (reduces death and disability). It is possible that the complications of cooling outweigh the benefits. Cooling therapy may reduce damage and potentially improve outcome, and head cooling targets the site of injury and may have fewer side effects than systemic cooling, but the evidence base is unclear. The recent study by Poli and colleagues [1] is part of a suite of iCOOL studies in ischaemic and haemorrhagic stroke, conducted at Heidelberg and linked to EuroHYP-1 (European multicentre, randomised, phase III clinical trial of therapeutic hypothermia plus best medical treatment versus best medical treatment alone for acute ischaemic stroke [2]). Altogether the studies tested four different methods of inducing hypothermia for speed of brain cooling, feasibility and safety: Rhinochill and the Sovika head and neck cooling device [1]; cold infusions compared with Rhinochill (Rhinochill, Wallisellen, Switzerland, EU) (iCOOL 1, NCT01573117 [1]); EMCOOLS Flex.Pads (EMCOOLS, Brucknerstrasse 6/7a, 1040 Vienna, Austria) (iCOOL 2, NCT01584167); and the EMCOOLS Brain.Pad (EMCOOLS, Vienna, Austria) (iCOOL 3, NCT01584180). The method of inducing hypothermia now included in the protocol for EuroHYP-1 is cold infusion (20 ml/kg of 4°C isotonic sodium chloride or Ringer’s lactate over 30 to 60 minutes), with optional use of the EMCOOLS Brain.Pad [3]. There has been an ongoing quest for methods of therapeutic cooling that reduce temperature rapidly, are portable and easily initiated, and/or have the fewest side effects. Cold infusion as a method of inducing cooling has most often been studied in cardiac arrest but is currently being used in clinical trials of therapeutic hypothermia in stroke (EuroHYP-1 [3]) and traumatic brain injury (Eurotherm3235Trial [4] and POLAR (NCT00987688)). The attraction is that it is a relatively low-tech, readily available, portable method, requiring only a means of keeping the infusion fluid at 4°C. Direct brain cooling has a long history, but there are few randomised controlled trials and most are of low quality [5]. One of its attractions has been the assumption that direct brain cooling has fewer side effects than systemic cooling, but this has not been established [5]. Various methods of nasopharyngeal cooling have been reported in the literature [5], including Rhinochill and nasal [6] and pharyngeal balloons [7]. Rhinochill has been studied most (for example, [8]), and there are very limited human data on the pharyngeal balloon device [7] or the pharyngeal cooling system of Takeda. Springborg and colleagues [6] report the use of QuickCool nasal balloons in a mixed group of hyperthermic brain-injured patients. Temperature was measured in the oesophagus, bladder and (in some patients) intracranially. The goal of normothermia was not reliably achieved. 
As with Poli and colleagues [1], this research raises questions about bladder temperature as a proxy for intracranial temperature. Perfluorocarbons are costly and their use in Rhinochill has been questioned on environmental grounds [9]. Although in the overall context of medical interventions Rhinochill may not have a major environmental impact, this nevertheless warrants consideration. Use of Rhinochill requires patients to be intubated and is contraindicated with base of skull fracture, which limits its use in traumatic brain injury. Recently, another method of nasopharyngeal cooling has been reported by Fontaine and colleagues [10], with experimental evidence and a human case report. This method uses adiabatic expansion of compressed gas: 1 L of compressed carbon dioxide delivered via a nasal cannula. Temperature is reduced because, as the gas expands, the pressure reduction transfers energy as work (very rapidly) and not as heat, although in practice there is some heat transfer because insulation is not perfect. Carbon dioxide is of course also a greenhouse gas, and using it in this way requires patients to be intubated. Compressed air and oxygen were tested experimentally as alternatives and found to remove considerably less heat than carbon dioxide, although the in vivo temperature differences were not significant. In their study, Poli and colleagues [1] show very nicely how different sites of temperature measurement reflect temperature change differently during intravenous and nasopharyngeal cooling. Where intracranial temperature is the key temperature of interest, as it arguably is in stroke and traumatic brain injury, their data show the importance of measuring it. As yet, however, there does not seem to be much appetite for targeting therapeutic cooling to intracranial temperature - invasive measurement is not clinically warranted in less severe stroke and traumatic brain injury - and core body temperature is the usual feedback parameter. In this case, Poli and colleagues’ data [1] strongly suggest that oesophageal temperature is the best proxy for intracranial temperature. One difficulty with this site of measurement is that, with longer-term cooling, if drugs are given nasogastrically this will affect the temperature readings [11]. Another is having to site both a nasogastric tube and an oesophageal temperature probe, with an attendant increased risk of sinusitis and abrasions. There have been moves to produce a nasogastric tube suitable for feeding/drug instillation, aspiration and temperature measurement but, to our knowledge, such a potentially useful device is not yet in commercial production. The authors are to be congratulated on their comprehensive measurement of temperature-reduction efficacy and the clear presentation of data measured at multiple sites. The challenge is to move from evidence of efficacy (temperature reduction) to evidence of effectiveness and improved patient outcomes.  相似文献
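The work-not-heat argument above can be made concrete with the textbook relation for reversible adiabatic expansion of an ideal gas. The numbers below (a 10 bar supply pressure, a heat-capacity ratio of about 1.3 for carbon dioxide, room-temperature gas) are assumed values for illustration only and do not model the actual Fontaine device, which, as noted, loses part of this cooling to imperfect insulation.

```python
# Idealized temperature after reversible adiabatic expansion of an ideal gas:
# T2 = T1 * (p2 / p1) ** ((gamma - 1) / gamma). Assumed, illustrative values only.
T1 = 293.0           # gas temperature before expansion (K), about room temperature
p1, p2 = 10.0, 1.0   # assumed supply and ambient pressures (bar)
gamma = 1.3          # approximate heat-capacity ratio of CO2

T2 = T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)
print(f"idealized post-expansion temperature: {T2:.0f} K")  # roughly 170 K
# Real delivery is far warmer because of heat transfer from tubing and tissue,
# but the calculation shows why expanding compressed gas can remove heat rapidly.
```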

17.
The use of vitamin C against different diseases has been discussed controversially and emotionally since Linus Pauling published his cancer studies. In vitro and animal studies showed promising results and explained the impact of vitamin C, particularly in cases with endothelial dysfunction. Indeed, studies (reviewed in this issue of Critical Care by Oudemans-van Straaten and colleagues) using high-dose vitamin C and the parenteral route of administration seem to be more successful than oral vitamin C delivery. Endothelial dysfunction (ED) – as a pathology emerging, for example, out of surgery-elicited ischemia–reperfusion injury or arising during sepsis – contributes to tissue injury, thereby promoting the development of multiple organ failure [1] and, as a consequence, increasing the length of hospital stay and costs. To deal with this challenging situation, antioxidant therapy – in particular, the use of vitamin C – has frequently been recommended and remains under controversial discussion. In this issue of Critical Care, Oudemans-van Straaten and colleagues give an overview of the current experimental and clinical data for vitamin C in this context [2]. The pathophysiological situation of ED – including impaired regulation of vascular tone, compromised endothelial barrier function and loss of the endothelium’s antithrombotic and antiatherogenic properties [3] – is mainly caused by two highly connected mechanisms: the loss of nitric oxide (NO) availability and function; and the highly increased production of reactive oxygen species, especially superoxide and peroxynitrite. Both of these mechanisms result from an uncoupling of endothelial nitric oxide synthase (eNOS) that occurs when tetrahydrobiopterin, a cofactor of eNOS, is not sufficiently available. In this uncoupled state, eNOS produces superoxide instead of NO (for a comprehensive review, see [4]). Furthermore, the accompanying inflammation leads to the activation of NADPH oxidase and inducible NO synthase, producing high amounts of superoxide and NO, respectively, thereby further promoting the formation of peroxynitrite. Vitamin C can ameliorate this situation through several mechanisms of action (reviewed in detail in [2]). First, vitamin C inhibits the activation of NADPH oxidase and inducible NO synthase, as shown both in vitro and in animal models, thereby preventing the formation of reactive oxygen species. Furthermore, as vitamin C is necessary for the reductive recycling of tetrahydrobiopterin, it counteracts the uncoupling of eNOS, thereby contributing to the recovery of endothelial function. Interestingly, although there is consensus concerning the approach of reversing uncoupled eNOS via provision of tetrahydrobiopterin to combat ED [4,5], vitamin C seems not to be considered a therapeutic option. This might be due to some contradictory results concerning the impact of vitamin C in the setting of vascular oxidative stress, ranging from beneficial effects in small clinical studies (reviewed in [2]) to no effect in a large-scale randomized clinical trial [6]. This latter result might have dealt a temporary deathblow to vitamin C for ED therapy. However, high-dose vitamin C administration – a phrase frequently occurring in the review [2] – cannot be achieved with oral application. Indeed, as Padayatty and coworkers showed in their investigation of vitamin C kinetics, it is most notably the route of vitamin C administration that has to be taken into account [7]. 
As the intestinal absorption of vitamin C is rapidly saturated, high dosages of vitamin C (>500 mg) have to be given intravenously and not via the oral route, as was practiced in that trial [6]. The failure of that study to show an effect is simply explained by the choice of the wrong route of administration. In fact, there are two reasons to administer vitamin C parenterally: prevention or treatment of ED (as discussed in [2]), and compensation of a vitamin C deficit following surgery. Indeed, the results of several clinical investigations in patients undergoing cardiac surgery with use of cardiopulmonary bypass – the latter necessarily evoking ischemia–reperfusion injury – showed a strong and long-lasting decrease in vitamin C plasma concentrations following surgery [8,9]. Furthermore, Cowley and coworkers demonstrated a positive relationship between plasma antioxidant potential and survival rate in patients with severe sepsis [10]. Supplementation of vitamin C in patients undergoing any surgery eliciting ischemia–reperfusion injury, or suffering from sepsis, therefore seems highly advisable. For reconstitution of adequate vitamin C plasma levels in critically ill patients, high doses of vitamin C (3 g/day) given intravenously for 3 days or more are needed [11]. An argument against high-dose application was the risk of increased formation of vitamin C radicals. However, in a study with healthy volunteers we showed that parenteral vitamin C (750 or 7,500 mg for 6 consecutive days) does not lead to any pro-oxidative effects [12]. We need to consider that intravenous application of vitamin C has nothing to do with the known physiology of vitamin C from nutrition or oral supplements. Circumventing the physiological control of vitamin C absorption that follows oral intake may lead to effects quite distinct from those we know from nutrition studies. Parenteral application is a pharmacological approach. Indeed, little is known about the distribution, metabolism, action and degradation, as well as the optimal moment and time course, for intravenous high-dose supplementation of vitamin C in the context of ED. Controlled studies are needed to elucidate whether early high-dose administration of vitamin C might help to keep the plasma level in a normal range and to prevent ED.  相似文献
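A minimal sketch of the saturation argument above, using a Michaelis-Menten-type absorption step. The Vmax and Km values are invented placeholders rather than measured vitamin C kinetics, and the model ignores transporter regulation and renal handling; it only illustrates why the absorbed amount plateaus with oral dosing while an intravenous dose bypasses the step entirely.

```python
# Hypothetical saturable (Michaelis-Menten-like) intestinal absorption step.
def absorbed_oral(dose_mg: float, vmax_mg: float = 400.0, km_mg: float = 200.0) -> float:
    """Amount reaching the circulation from a single oral dose (placeholder model)."""
    return vmax_mg * dose_mg / (km_mg + dose_mg)

for dose in (200, 500, 1000, 3000):
    print(f"oral {dose:>4} mg -> ~{absorbed_oral(dose):.0f} mg absorbed")
# The output plateaus near vmax_mg, whereas an intravenous dose delivers the
# full administered amount to the plasma regardless of dose size.
```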

18.
Acute kidney injury occurs in approximately one-quarter to one-third of patients with major burn injury. Apart from the usual suspects – such as older age, severity of burn injury, sepsis and multiple organ dysfunction – volume overload probably has an important role in the pathogenesis of acute kidney injury. Steinvall and collaborators present the third study on acute kidney injury (AKI) defined by the RIFLE classification in patients with major burn injury [1]. AKI was formerly considered relevant only when there was a need for renal replacement therapy. We now know that moderately decreased kidney function also has an impact on patient outcomes [2]. Only since the first consensus definition for AKI, however – the RIFLE classification [3], which was later modified into the AKI staging system [4] – have we been able to truly evaluate the epidemiology of AKI in diverse cohorts of patients. AKI has a population incidence greater than that of acute respiratory distress syndrome, and is comparable with that of sepsis [5]. The incidence in a general intensive care unit is on average 30% to 40%, but this rate varies according to the specific cohort. Despite the limitation that the study by Steinvall and colleagues includes only 127 patients with major burns, the study has several strengths. The authors present a very thorough evaluation of AKI, including many possible confounders. The cohort of patients also seems representative of burn unit patients in the western world [1]. What have these studies taught us, and how does the study by Steinvall and colleagues relate to the other two studies on this subject – those by Lopes and colleagues (n = 126) [6] and by Coca and colleagues (n = 304) [7]? Importantly, all three studies confirmed findings in other cohorts that increasing RIFLE class was associated with a stepwise increase in mortality. There was a large difference, however, in the incidence of AKI between the studies of Coca and colleagues and of Steinvall and colleagues (26.6% and 24.4%, respectively) compared with that of Lopes and colleagues (35.7%). This difference cannot be explained by differences in baseline characteristics, such as age and total burned surface area. Other explanations should therefore be explored. The study by Lopes and colleagues classifies patients according to the original RIFLE classification, using both urine output and serum creatinine concentration [6]. This is in contrast to the studies by Steinvall and colleagues and Coca and colleagues, which used only serum creatinine [1,7]. Especially in burn patients, the serum creatinine concentration may overestimate kidney function and thus underestimate kidney injury. The cornerstone of acute burn care is large-volume resuscitation to compensate for the massive fluid losses and decreased effective circulating volume. This may lead to hemodilution, and to falsely low serum creatinine concentrations that do not reflect true kidney function. Catabolism, leading to loss of muscle mass, may also contribute to low serum creatinine concentrations. As the muscles are the source of creatinine, less muscle mass will result in lower serum creatinine concentrations for the same glomerular filtration rate [8]. In other words, the two studies that only used creatinine criteria may have underestimated the true incidence of AKI. Steinvall and colleagues also report interesting data on the occurrence of AKI in relation to other organ dysfunctions. 
They found that approximately one-half of patients, especially those with more severe burn injury, developed AKI during the first week; the other half developed AKI during the following week. In the majority of patients, however, AKI was preceded by other organ dysfunctions or sepsis [1]. In burn injury, decreases in effective circulating volume are maximal during the first 8 hours. Apparently, the burn shock resuscitation schedule used was successful in preventing AKI in this very early phase of burn shock. So if burn shock is not the cause of AKI, what else is? This question brings us to the paradigm shift in the pathophysiology of AKI. While hypoperfusion and ischemia of the kidneys were formerly thought to be the main causes of AKI in sepsis, there are now more data indicating that renal perfusion is not decreased in sepsis [9]. Instead, inflammation and apoptosis probably play an important role [10]. Data on renal perfusion in burn injury are lacking, but the present findings suggest that renal ischemia is also less relevant, at least in the acute phase of burn injury. Furthermore, major burn trauma patients differ from other intensive care trauma patients in experiencing an inflammatory response that is often more severe and lasts much longer [11]. Another factor that may contribute to AKI in a second stage, after burn shock, is that volume resuscitation leads to the development of intra-abdominal hypertension and abdominal compartment syndrome [12,13]. This brings us to the issue of the optimal burn resuscitation schedule. Most units use Ringer's lactate and the Parkland resuscitation schedule (4 ml/kg body weight/% total burned surface area). Most patients receive more fluid, however – volumes up to 6 ml/kg/% total burned surface area have been reported [14]. Other resuscitation endpoints, or other types of fluids – such as hypertonic saline or colloids – may decrease the volume given, decrease the incidence of intra-abdominal hypertension and abdominal compartment syndrome, and decrease AKI [15-17]. In conclusion, AKI is an important complication in burn patients: it is frequent and it is associated with mortality. Inflammation and volume overload play an important role in its pathogenesis. After decades of care for burn patients, therefore, we definitely need good studies of the optimal volume resuscitation strategy.  相似文献
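As a worked example of the Parkland estimate quoted above (4 ml x body weight in kg x % total burned surface area over the first 24 hours), the sketch below uses a hypothetical 70 kg patient with 40% total burned surface area; the conventional split of half the volume in the first 8 hours is included for illustration only, and none of this is a resuscitation protocol.

```python
# Parkland estimate: 4 ml x weight (kg) x % total burned surface area per 24 h.
def parkland_24h_ml(weight_kg: float, tbsa_percent: float, factor: float = 4.0) -> float:
    return factor * weight_kg * tbsa_percent

total = parkland_24h_ml(70, 40)            # hypothetical patient: 11,200 ml in 24 h
first_8h, next_16h = total / 2, total / 2  # conventional split of the 24 h volume
print(f"total 24 h volume: {total:.0f} ml "
      f"({first_8h:.0f} ml in the first 8 h, {next_16h:.0f} ml over the next 16 h)")
```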

19.
Hyperosmolar lactate-based solutions have been used for fluid resuscitation in ICU patients. The positive effects observed with these fluids have been attributed to both lactate metabolism and the hypertonic nature of the solutions. In a recent issue of Critical Care, Duburcq and colleagues studied three types of fluid infused at the same volume in a porcine model of endotoxic shock. The control group was resuscitated with 0.9% NaCl, and the two other groups received either hypertonic sodium-lactate or hypertonic sodium-bicarbonate. The two hypertonic fluids proved to be more effective than 0.9% NaCl for resuscitation in this model. However, some parameters were more effectively corrected by hypertonic sodium-lactate than by hypertonic sodium-bicarbonate, suggesting that lactate metabolism was beneficial in these cases. Glucose is an energy substrate because its catabolism into CO2 and H2O allows the synthesis of ATP. During this process, glucose is first converted into pyruvate (glycolysis) before pyruvate enters mitochondria to be further catabolized (via the citric acid cycle). When pyruvate does not enter mitochondria, it leaves the cell in which it was produced to enter another one, where it is converted into glucose (gluconeogenesis) or catabolized to allow ATP production. Pyruvate can also be converted into lactate in all cells. Importantly, the reaction is reversible; therefore, both glucose and lactate can be metabolized into pyruvate. Moreover, lactate leaves or enters the cell via the same transporter as pyruvate. Depending on the distance between the cells that release them and those that take them up, pyruvate and lactate may or may not travel in the blood. Although, from a biochemical point of view, pyruvate and lactate are carbohydrates perfectly usable for ATP production, their metabolism has some particularities. Compared with glucose, pyruvate and lactate are easily metabolized by all tissues, even in the case of insulin resistance, because insulin does not regulate the pyruvate-lactate transporter. In ICU patients, both blood glucose [1] and lactate [2] concentrations positively correlate with the severity of illness. Does this mean that glucose and lactate worsen disease in the critically ill? The answer remains uncertain for glucose, considering the efforts made to control blood glucose in ICU patients; however, this does not preclude using glucose in the ICU. Is lactate toxic or harmful? Growing evidence convincingly suggests that this is not the case. Lactate metabolism has been studied in both volunteers and patients by measuring lactate clearance after sodium-lactate loads that increased blood lactate concentrations above 10 mmol/L [3,4]. No side effects were observed, except an expected alkalinization and a decrease in blood potassium concentration. Indeed, sodium-lactate has been used instead of sodium-bicarbonate to treat ventricular tachycardia [5]. Although counterintuitive, pioneering studies have shown the harmlessness and usefulness of sodium-lactate in ICU patients. 
Hyperosmolar sodium-lactate has been shown (i) to improve cardiac performance in patients undergoing elective cardiac surgery [6] and during acute heart failure [7]; (ii) to induce a negative cumulative fluid balance after coronary artery bypass grafting [8] and to decrease fluid accumulation during burn shock resuscitation [9]; (iii) to be effective in treating [10] and preventing [11] intracranial hypertension following severe traumatic brain injury; and (iv) to restore hemodynamic status in dengue shock syndrome with minimal fluid accumulation [12]. The mechanism by which hyperosmolar sodium-lactate improved these parameters was attributed to both lactate metabolism and hyperosmolarity, but no discrimination between these two mechanisms was made. In a recent study published in Critical Care, Duburcq and colleagues elegantly discriminated between these two mechanisms using a porcine model of endotoxic shock [13]. Once shock had been established, the animals received the same volume of one of three different crystalloid-based solutions: 0.9% NaCl (control group), hyperosmolar sodium-bicarbonate or hyperosmolar sodium-lactate. Importantly, the two hyperosmolar groups received the same amount of sodium and osmoles. Duburcq and colleagues observed that both hyperosmolar sodium-bicarbonate and hyperosmolar sodium-lactate prevented anuria and improved mean pulmonary arterial pressure, mixed venous oxygen saturation, oxygen extraction and the oxygen delivery/global oxygen consumption ratio better than the control solution. However, hyperosmolar sodium-lactate improved mean arterial pressure, microvascular reactivity, the arterial oxygen partial pressure/inspired oxygen fraction ratio, blood glucose concentration and fluid balance better than hyperosmolar sodium-bicarbonate and the control solution. This suggests that some of the positive effects of hyperosmolar sodium-lactate are due to the hyperosmolar nature of the solution, while others are directly related to lactate metabolism. Both sodium-bicarbonate and sodium-lactate provide metabolizable anions. This means that the negative charge (bicarbonate or lactate) is converted into uncharged compounds, resulting in a sodium load without chloride. In this respect, sodium-bicarbonate and sodium-lactate have similar effects on pH and electrolytes. Unlike bicarbonate, however, lactate is an energy substrate (3.61 kcal/g). Therefore, future experiments could use hyperosmolar sodium-bicarbonate supplemented with another energy substrate (glucose, for example) in order to further specify whether the advantage of lactate is related to its particular metabolism or simply to its energy load.  相似文献
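To put the 3.61 kcal/g figure above into perspective, here is a back-of-envelope estimate of the oxidizable energy carried by a hyperosmolar sodium-lactate infusion. The half-molar concentration, the 1 L volume and the approximate molar mass of the lactate anion are assumptions for illustration, not the composition used by Duburcq and colleagues.

```python
# Rough energy content of a hypothetical hyperosmolar sodium-lactate infusion.
LACTATE_MOLAR_MASS_G = 89.0   # g/mol for the lactate anion (approximate)
ENERGY_KCAL_PER_G = 3.61      # figure quoted in the text

volume_l = 1.0                # assumed infusion volume
conc_mol_per_l = 0.5          # assumed half-molar solution
grams_lactate = volume_l * conc_mol_per_l * LACTATE_MOLAR_MASS_G
print(f"~{grams_lactate:.0f} g lactate -> ~{grams_lactate * ENERGY_KCAL_PER_G:.0f} kcal")
# An equimolar sodium-bicarbonate solution delivers the same sodium and osmolar
# load but none of this oxidizable substrate, which is the contrast the authors
# propose to exploit in future experiments.
```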

20.
Dosing of antibiotics in critically ill patients is a significant challenge. The increasing number of patients undergoing extracorporeal membrane oxygenation further complicates the issue, due to inflammatory activation and to drug sequestration in the circuit. Since patients receiving extracorporeal membrane oxygenation commonly face severe infections, appropriate antibiotic selection and correct dosing are of paramount importance to improve survival. Therapeutic drug monitoring (whenever available) or population pharmacokinetics, based on readily available clinical and laboratory data, should help tailor antibiotic dosing to the individual patient. In a recent edition of Critical Care, on behalf of the Antibiotic, Sedative and Analgesic Pharmacokinetics during Extracorporeal Membrane Oxygenation study investigators, Shekar and colleagues assessed meropenem pharmacokinetics (PK) in patients receiving extracorporeal membrane oxygenation (ECMO), with or without continuous renal replacement therapy (CRRT) [1]. Patients on ECMO commonly have infections, at rates of more than 15 per 1,000 ECMO days according to data pooled from the Extracorporeal Life Support Organization Registry [2], and these infections are associated with increased mortality. There is also increasing evidence that the initial appropriateness of antibiotic therapy impacts the outcome of severe infections. To ensure maximal bacterial killing, antibiotics should achieve adequate exposure as soon as possible; that is, an adequate concentration for an adequate time, according to the antibiotic’s characteristics. Consequently, not only starting an antibiotic but also selecting an ideal dose is crucial for therapeutic success. High antibiotic doses are usually needed to achieve therapeutic concentrations early in the infection course. This has been shown for β-lactams [3], vancomycin [4] and aminoglycosides [5], implying the need for higher than recommended doses. These high doses may also be necessary to overcome bacterial resistance, especially in hospital-acquired infections. Antibiotics are usually prescribed in a traditional pattern, taking into account only the existence of renal or liver dysfunction and the susceptibility pattern of the microorganism. Moreover, the antibiotic dose is usually maintained throughout the treatment course, although significant PK changes occur from the resuscitation phase to the recovery phase [6]. Ideally, individualized dosing strategies should account for the altered PK and pathogen susceptibility in each patient. To achieve this goal, it is necessary to understand the full impact of therapeutic interventions and patient characteristics on antibiotic PK. Drug dosing in critically ill patients is especially challenging due to frequent and hard-to-predict changes in PK, notably the increased volume of distribution and variation in clearance [7], secondary to volume resuscitation, increased cardiac output and capillary leak. Several therapeutic procedures are also associated with PK changes, and CRRT and ECMO are among the most challenging. In fact, both the inflammatory activation induced by the extracorporeal circulation and the exposure of blood to foreign material, and drug sequestration in the circuit [8], contribute to alterations in antibiotic concentration and half-life. ECMO is among the most rapidly growing support techniques in intensive care [9], and its use in the United States has increased more than fourfold in the space of only 5 years. 
Understanding the technique’s impact on antibiotic PK is therefore essential. In their study, Shekar and colleagues showed that the volume of distribution of meropenem is increased in patients undergoing ECMO (with or without CRRT) and that, consequently, high initial antibiotic doses were commonly needed [1]. On the other hand, clearance was usually low and correlated with CRRT and with creatinine clearance, so subsequent doses had to be adjusted. Although conventional dosing was able to achieve a trough concentration >2 mg/L in all patients, a higher target (>8 mg/L) was only obtained when higher doses were used [1]. These conclusions were in line with another recently published study that assessed patients undergoing CRRT [10]: conventional dosing was not able to achieve the intended target concentrations in all patients, but increased doses also exposed several patients to toxicity. Therapeutic drug monitoring has been shown to facilitate the achievement of adequate antibiotic concentrations [11]. Unfortunately, such monitoring is only routinely available for vancomycin and aminoglycosides. Studies addressing β-lactam antibiotic PK, however, have also unveiled underdosing and toxic accumulation [12] due to large interindividual and intraindividual variability, suggesting the need to tailor the antibiotic dose to the patient and to adjust it according to PK changes [13,14]. The study by Shekar and colleagues is part of a larger multicentre effort aimed at determining the PK of multiple drugs, namely sedatives, opioids, antibiotics and antifungals, in ECMO patients [15]. This painstaking work of undeniable importance will culminate in evidence-based guidelines for drug dosing during ECMO. In conclusion, we can clearly no longer rely on the ‘one size fits all’ paradigm when choosing the antibiotic dose. Knowledge of antibiotic PK, patient status and immune function, bacterial virulence, susceptibility and inoculum, as well as the PK impact of different therapies, should all contribute to dose selection.  相似文献
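The interplay of volume of distribution and clearance described above can be sketched with the standard one-compartment relations (loading dose = Vd x target concentration; maintenance rate = clearance x target average concentration). The numbers below are hypothetical placeholders, not the meropenem parameters reported by Shekar and colleagues, and this is not dosing advice.

```python
# One-compartment sketch: Vd drives the loading dose, clearance drives maintenance.
def loading_dose_mg(vd_l: float, target_mg_per_l: float) -> float:
    return vd_l * target_mg_per_l            # dose needed to "fill" the volume of distribution

def maintenance_rate_mg_per_h(cl_l_per_h: float, target_avg_mg_per_l: float) -> float:
    return cl_l_per_h * target_avg_mg_per_l  # input rate matching elimination at steady state

vd, cl, target = 35.0, 6.0, 8.0   # hypothetical Vd (L), clearance (L/h) and target (mg/L)
print(f"loading dose ~{loading_dose_mg(vd, target):.0f} mg")
rate = maintenance_rate_mg_per_h(cl, target)
print(f"maintenance ~{rate:.0f} mg/h (~{rate * 24:.0f} mg/day)")
# A larger Vd (capillary leak, circuit priming) raises only the first number;
# a lower clearance (renal impairment, CRRT settings) lowers the second.
```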
