Similar Articles
20 similar articles found (search time: 62 ms)
1.
Purpose: To assess liver function deterioration, as assessed using model for end-stage liver disease (MELD) score variations, following transarterial chemoembolization (TACE) versus selective internal radiation therapy (SIRT) in patients with unresectable unilobar hepatocellular carcinoma (HCC).
Patients and methods: We retrospectively evaluated all patients who underwent a single conventional TACE or SIRT procedure in our department from May 2013 to May 2018 for unilobar unresectable HCC. A total of 86 patients (76 men, 10 women; mean age, 65.5 years) were included: 63 in the TACE group [56 men, 7 women; mean age, 65.1 ± 9.6 (SD) years] and 23 in the SIRT group [20 men, 3 women; mean age, 70 ± 9.2 (SD) years]. DeltaMELD, defined as the post-treatment minus the pre-treatment MELD score, was used as the measure of liver function deterioration and compared between patients who underwent single lobar treatment with SIRT versus TACE.
Results: Patients in the SIRT group had significantly higher tumor burden, alpha-fetoprotein serum level, and rates of macroscopic vessel invasion. Mean pre-treatment MELD scores did not differ between the TACE [mean, 8.41 ± 1.71 (SD); range: 7.24–9.24] and SIRT groups [mean, 8.36 ± 1.74 (SD); range: 7.07–9.21] (P = 0.896), nor did Child-Pugh class or albumin-bilirubin (ALBI) grade distribution. However, following treatment, mean DeltaMELD was greater in the TACE group [mean, 0.83 ± 1.83 (SD); range: −0.30 to 1.31] than in the SIRT group [mean, −0.13 ± 1.06 (SD); range: −0.49 to 0.32] (P = 0.021). At multivariate analysis, SIRT was independently associated with a lower DeltaMELD score than TACE (R = −0.955 [−1.68; −0.406]; P = 0.017).
Conclusion: Although performed in patients with higher tumor burden, SIRT resulted in a lower degree of liver function worsening as assessed using MELD score variations.
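The DeltaMELD endpoint above is simple arithmetic on the MELD score. As a hedged sketch (the abstract does not state which MELD variant was used; this assumes the classic UNOS formula without the sodium term, with the standard flooring and capping rules; function names are illustrative):

```python
import math

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> float:
    """Classic (pre-sodium) UNOS MELD score.

    Standard UNOS rules: laboratory values below 1.0 are floored at 1.0,
    and creatinine is capped at 4.0 mg/dL.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    return (3.78 * math.log(bili)
            + 11.2 * math.log(inr)
            + 9.57 * math.log(crea)
            + 6.43)

def delta_meld(pre_treatment_meld: float, post_treatment_meld: float) -> float:
    """DeltaMELD as defined in the abstract: post- minus pre-treatment score."""
    return post_treatment_meld - pre_treatment_meld
```

With all three laboratory values at their floors, `meld(1.0, 1.0, 1.0)` returns the minimum score of 6.43; a positive DeltaMELD indicates post-treatment deterioration, as observed in the TACE group.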

2.
Purpose: To retrospectively compare two puncture routes (transpleural vs. transpulmonary) for computed tomography (CT) fluoroscopy-guided cutting needle biopsy of lung nodules with pleural contact.
Patients and methods: A total of 102 patients (72 men; mean age, 71.1 ± 9.5 [SD] years) were included, and 102 biopsies of 102 lung nodules (mean size, 16.7 ± 5.9 [SD] mm; range: 6.0–29.4 mm; mean length of pleural contact, 10.1 ± 4.2 [SD] mm; range: 2.8–19.6 mm) were analyzed. All procedures were classified as biopsies via the direct transpleural route or via the transpulmonary route. Patient-, lesion-, and biopsy-related variables, diagnostic yields, and incidence of complications were compared between the two routes.
Results: Biopsy was performed via the direct transpleural route in 59 patients (57.8%) and via the transpulmonary route in 43 (42.2%). In the transpulmonary route group, the mean length of the intrapulmonary pathway was 17.7 ± 9.4 [SD] mm (range: 4.1–47.6 mm; P < 0.001), and an introducer needle trajectory angle of < 45° was significantly more frequent (8.5% [5/59] vs. 60.5% [26/43]; P < 0.001). There was no significant difference in diagnostic accuracy between the direct transpleural and transpulmonary routes (93.2% [55/59] vs. 90.7% [39/43]; P = 0.718). The frequencies of all complications (64.4% [38/59] vs. 97.7% [42/43]; P < 0.001), pneumothorax (33.9% [20/59] vs. 65.1% [28/43]; P = 0.003), pneumothorax requiring chest tube placement (3.4% [2/59] vs. 18.6% [8/43]; P = 0.016), and pulmonary hemorrhage (47.5% [28/59] vs. 76.7% [33/43]; P = 0.004) were significantly lower in the direct transpleural group.
Conclusion: The direct transpleural route is recommended for CT fluoroscopy-guided biopsy of lung nodules with pleural contact because it is safer than the transpulmonary route and yields similar diagnostic accuracy.

3.
Purpose: To compare morphological imaging features and CT texture histogram parameters between grade 3 pancreatic neuroendocrine tumors (G3-NET) and neuroendocrine carcinomas (NEC).
Materials and methods: Patients with pathologically proven G3-NET or NEC according to the 2017 World Health Organization classification who underwent CT and MRI examinations between 2006 and 2017 were retrospectively included. CT and MRI examinations were reviewed by two radiologists in consensus and analyzed with respect to tumor size, enhancement patterns, hemorrhagic content, liver metastases, and lymphadenopathies. Texture histogram analysis of the tumors was performed on arterial and portal venous phase CT images. Morphological imaging features and CT texture histogram parameters of G3-NETs and NECs were compared.
Results: Thirty-seven patients (21 men, 16 women; mean age, 56 ± 13 [SD] years; range: 28–82 years) with 37 tumors (mean diameter, 60 ± 46 [SD] mm) were included (CT available for all; MRI for 16/37, 43%). Twenty-three patients (23/37; 62%) had NEC and 14 (14/37; 38%) had G3-NET. NECs were larger than G3-NETs (mean, 70 ± 51 [SD] mm [range: 18–196 mm] vs. 42 ± 24 [SD] mm [range: 8–94 mm]; P = 0.039), with more frequent tumor necrosis (75% vs. 33%; P = 0.030) and lower attenuation on precontrast (30 ± 4 [SD] HU [range: 25–39 HU] vs. 37 ± 6 [SD] HU [range: 25–45 HU]; P = 0.002) and portal venous phase CT images (75 ± 18 [SD] HU [range: 43–108 HU] vs. 92 ± 19 [SD] HU [range: 46–117 HU]; P = 0.014). Hemorrhagic content on MRI was observed only in NEC (P = 0.007). The mean ADC value was lower in NEC ([1.1 ± 0.1 (SD)] × 10−3 mm2/s [range: (0.91–1.3) × 10−3 mm2/s] vs. [1.4 ± 0.2 (SD)] × 10−3 mm2/s [range: (1.1–1.6) × 10−3 mm2/s]; P = 0.005). CT histogram analysis showed that NECs were more heterogeneous on portal venous phase images (Entropy-0: 4.7 ± 0.2 [SD] [range: 4.2–5.1] vs. 4.5 ± 0.4 [SD] [range: 3.7–4.9]; P = 0.023).
Conclusion: Pancreatic NECs are larger, more frequently hypoattenuating, and more heterogeneous, with hemorrhagic content, than G3-NETs on CT and MRI.

4.
Journal of Vascular Surgery, 2019, 69(5):1367-1378
Background: Thoracic endovascular aortic repair (TEVAR) has become a mainstay of therapy for acute and chronic type B aortic dissection (TBAD). Dynamic aortic morphologic changes, untreated dissected aorta, and persistent false lumen perfusion have significant consequences for reintervention after TEVAR for TBAD. However, few reports contrast differences in secondary aortic intervention (SAI) after TEVAR for TBAD or describe their influence on mortality. This analysis examined the incidence, timing, and types of SAI after TEVAR for acute and chronic TBAD and determined their impact on survival.
Methods: All TEVAR procedures for acute and chronic TBAD (2005-2016) were retrospectively reviewed. Patients with staged (<30 days) or concomitant ascending aortic arch repair or replacement were excluded. Acuity was defined by symptom onset (0-30 days, acute; >30 days, chronic). SAI procedures were grouped into open (intended treatment zone or remote aortic site), major endovascular (TEVAR extension or endograft implanted at a noncontiguous site), and minor endovascular (side branch or false lumen embolization) categories. Kaplan-Meier methodology was used to estimate freedom from SAI and survival. Cox proportional hazards modeling was used to identify SAI predictors.
Results: TEVAR for TBAD was performed in 258 patients (acute, 49% [n = 128]; chronic, 51% [n = 130]). Mean follow-up was 17 ± 22 months, with an overall SAI rate of 27% (n = 70; acute, 22% [28]; chronic, 32% [42]; odds ratio, 1.7; 95% confidence interval, 0.9-2.9; P = .07). Median time to SAI was significantly shorter after acute than after chronic dissection (0.7 [0-12] vs 7 [0-91] months; P < .001); however, freedom from SAI was not different (1-year: acute, 67% ± 4%, vs chronic, 68% ± 5%; 3-year: acute, 65% ± 7%, vs chronic, 52% ± 8%; P = .7). Types of SAI were similar (acute vs chronic: open, 61% vs 55% [P = .6]; major endovascular, 36% vs 38% [P = .8]; minor endovascular, 21% vs 21% [P = 1]). The open conversion rate (either partial or total endograft explantation: acute, 10% [13/128]; chronic, 15% [20/130]; P = .2) and the incidence of retrograde dissection (acute, 6% [7/128]; chronic, 4% [5/130]; P = .5) were similar. There was no difference in survival for SAI patients (5-year: acute + SAI, 55% ± 9%, vs acute without SAI, 67% ± 8% [P = .3]; chronic + SAI, 72% ± 6%, vs chronic without SAI, 72% ± 7% [P = .7]). Factors associated with SAI included younger age, acute dissection with larger maximal aortic diameter at presentation, Marfan syndrome, and use of arch vessel adjunctive procedures with the index TEVAR. Indication for the index TEVAR (aneurysm, malperfusion, rupture, and pain or hypertension) and remote preoperative history of proximal arch procedure were not predictive of SAI.
Conclusions: SAI after TEVAR for TBAD is common. Acute TBAD has a higher proportion of early SAI; however, chronic TBAD appears to carry an ongoing risk of reintervention after the first postoperative year. SAI types are similar between groups, and the occurrence of aorta-related reintervention does not affect survival. Patient features and anatomy predict the need for SAI. These data should be taken into consideration in patient selection, device design, and surveillance strategies after TEVAR for TBAD.

5.
Purpose: To determine the accuracy and clinical significance of the planar scintigraphy lung shunt fraction (PLSF) and the single-photon emission computed tomography/computed tomography (SPECT/CT) lung shunt fraction (SLSF) before Y-90 transarterial radioembolization.
Materials and methods: Seventy patients (46 men, 24 women; mean age, 64 ± 9.5 [SD] years) who underwent 83 Y-90 transarterial radioembolization treatments for primary or secondary malignancies of the liver with a PLSF ≥ 7.5% were retrospectively evaluated. PLSF and SLSF were calculated from each patient's mapping technetium-99m (Tc-99m) macroaggregated albumin (MAA) study and compared to the SLSF realized after Y-90 delivery. A model using modern dose thresholds was created to identify patients who would require dose reduction due to a lung dose ≥ 30 Gy, with patients requiring a >50% dose reduction considered to be delivery cancelations.
Results: A significant difference was found between mean PLSF (14.7 ± 11.6 [SD]%; range: 7.5–84.1%) and mean SLSF (8.7 ± 8.5 [SD]%; range: 1.7–73.5%) (P < 0.001). The mean realized LSF (7.1 ± 3 [SD]%; range: 1.5–17.6%) was significantly less than the PLSF (P < 0.001) but not the SLSF (P = 0.07). PLSF overestimated the realized LSF by significantly more than the SLSF did (8.5 ± 5.3 [SD]% [range: −0.1–21.7%] vs. 0.8 ± 3.6 [SD]% [range: −5–13.2%], respectively) (P < 0.001). Based on the clinical significance model, 20 patients (20/83, 24.1%) would have required dose reduction or cancelation when using the PLSF but not even a dose reduction when using the SLSF. Significantly more deliveries would have been canceled if the PLSF were used as compared to the SLSF (22/83 [26.5%] vs. 6/83 [7.2%], respectively) (P < 0.001).
Conclusion: SLSF is significantly more accurate than PLSF at predicting the realized LSF, and this difference is of clinical significance in a substantial number of patients with a PLSF ≥ 7.5%.
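The dose-threshold model above can be sketched in a few lines. This is a hedged illustration, not the study's actual implementation: the MIRD-style constant (49.67 Gy per GBq/kg) and the conventional 1 kg lung mass are my assumptions, and all function names are hypothetical; only the LSF definition, the 30 Gy limit, and the >50%-reduction-equals-cancelation rule come from the abstract.

```python
def lung_shunt_fraction(lung_counts: float, liver_counts: float) -> float:
    """LSF = lung counts / (lung counts + liver counts) from the MAA mapping scan."""
    return lung_counts / (lung_counts + liver_counts)

def lung_dose_gy(activity_gbq: float, lsf: float, lung_mass_kg: float = 1.0) -> float:
    """MIRD-style Y-90 lung absorbed-dose estimate (49.67 Gy per GBq/kg),
    assuming the conventional 1 kg lung mass."""
    return 49.67 * activity_gbq * lsf / lung_mass_kg

def plan_delivery(activity_gbq: float, lsf: float, limit_gy: float = 30.0):
    """Mirror the abstract's model: reduce activity so the lung dose stays
    below the limit; treat a required reduction of more than 50% as a cancelation."""
    if lung_dose_gy(activity_gbq, lsf) < limit_gy:
        return "full dose", activity_gbq
    allowed_gbq = limit_gy / (49.67 * lsf)  # activity delivering exactly limit_gy
    if allowed_gbq < 0.5 * activity_gbq:
        return "cancel", 0.0
    return "reduce", allowed_gbq
```

Running the same planned activity through `plan_delivery` with a planar-derived versus a SPECT/CT-derived LSF is exactly the comparison that produced the 26.5% vs. 7.2% cancelation rates above.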

6.
Journal of Vascular Surgery, 2020, 71(2):400-407.e2
Objective: To compare short-term outcomes in patients who underwent thoracic endovascular aortic repair (TEVAR) with stent grafts alone or with a composite device design (stent graft plus bare-metal aortic stent) for acute type B aortic dissection in the setting of malperfusion.
Methods: This retrospective analysis included patients with acute (≤14 days of symptom onset) complicated type B dissection in the setting of malperfusion who were treated with stent grafts alone (TEVAR cohort) at two European institutions vs those who underwent TEVAR with a composite device design (Cook Medical, Bloomington, Ind) in the investigational STABLE I feasibility study and STABLE II pivotal study (STABLE cohort). Preoperative characteristics and 30-day outcomes (including mortality, malperfusion-related mortality, morbidity, and secondary interventions) were compared between the two groups.
Results: The TEVAR cohort (41 patients; mean age, 58.8 ± 12.7 years; 78.0% male) and the STABLE cohort (84 patients; mean age, 57.8 ± 11.7 years; 71.4% male) were largely similar in preoperative medical characteristics, with more STABLE patients presenting with a history of hypertension (79.8% vs 58.5%; P = .018). The TEVAR and STABLE groups had similar lengths of dissection (451.8 ± 112.7 mm vs 411.8 ± 116.4 mm; P = .10) and similar proximal and distal extent of dissection. At presentation, the two groups exhibited comparable organ system involvement in malperfusion: renal (53.7% TEVAR, 57.1% STABLE), gastrointestinal (41.5% TEVAR, 44.0% STABLE), lower extremities (34.1% TEVAR, 52.4% STABLE), and spinal cord (9.8% TEVAR, 2.4% STABLE). The 30-day rate of all-cause mortality was 17.1% (7/41) in the TEVAR group and 8.3% (7/84) in the STABLE group (P = .22). The 30-day rate of malperfusion-related mortality (deaths from bowel/mesenteric ischemia or multiple organ failure) was 12.2% (5/41) in the TEVAR group and 2.4% (2/84) in the STABLE group (P = .038). The 30-day morbidity for the TEVAR and STABLE groups, respectively, included bowel ischemia (9.8% [4/41] vs 2.4% [2/84]; P = .09), renal failure requiring dialysis (7.3% [3/41] vs 9.5% [8/84]; P > .99), paraplegia or paraparesis (4.9% [2/41] vs 3.6% [3/84]; P = .66), and stroke (2.4% [1/41] vs 10.7% [9/84]; P = .16). The occurrence of 30-day secondary intervention was similar in the TEVAR and STABLE groups (7.3% [3/41] vs 7.1% [6/84]; P > .99). True lumen expansion in the abdominal aorta was significantly greater in the STABLE group.
Conclusions: In patients with acute type B aortic dissection in the setting of branch vessel malperfusion, the use of a composite device with proximal stent grafts and a distal bare aortic stent appeared to result in lower malperfusion-related mortality than the use of stent grafts alone. The 30-day rates of morbidity and secondary interventions were similar between the groups.

7.
The Journal of Arthroplasty, 2021, 36(12):3938-3944
Background: The ideal dose of intravenous glucocorticoids to control pain in total hip arthroplasty (THA) remains unclear. This randomized controlled trial compared postoperative pain and tramadol requirement in patients undergoing unilateral primary THA who received one versus two perioperative doses of dexamethasone.
Methods: Patients consented to undergo blinded, simple randomization to either one (at anesthetic induction [1D-group]: 54 patients) or two (with an additional dose 8 hours after surgery [2D-group]: 61 patients) perioperative doses of 8-mg intravenous dexamethasone. Pain was evaluated with a visual analog scale at 8, 16, and 24 hours postoperatively and by tramadol requirement. Secondary outcomes included postoperative nausea and vomiting, time to ambulation, and length of stay.
Results: Age (mean, 66 ± 13 years), body mass index (mean, 29 ± 5 kg/m2), gender (60% female), and history of diabetes were similar between groups (P > .05). Pain was higher in the 1D-group at 16 hours (4 [interquartile range {IQR} 3-5] vs 2 [IQR 1-3]; P < .001) and 24 hours (2.5 [IQR 2-3] vs 1 [IQR 0-1]; P < .001) postoperatively. 1D-group patients had significantly higher tramadol consumption (50 [IQR 50-100] vs 0 [IQR 0-50]; P = .01) and more postoperative nausea and vomiting (18 [33.3%] vs 5 [8.2%]; P = .001). Fifty-five (90%) patients in the 2D-group and 32 (59%) in the 1D-group ambulated on postoperative day 0 (P = .0002). Fifty-eight (95%) patients in the 2D-group and 37 (68%) in the 1D-group were discharged on postoperative day 1 (P = .0002).
Conclusion: An additional dose of dexamethasone at 8 hours postoperatively significantly reduced pain, tramadol consumption, time to ambulation, and length of stay after primary THA.

8.
Purpose: To retrospectively review the ability of direct bilirubin serum level to predict mortality and complications in patients undergoing transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC), and to compare it to the predictive value of the currently used total bilirubin serum level.
Materials and methods: A total of 219 patients who underwent TACE for 353 HCCs at a single institution were included. There were 165 men and 54 women, with a mean age of 61.4 ± 7.6 (SD) years [range: 27–86 years]. The patients' electronic medical records were evaluated, and patients were divided into cohorts based on total bilirubin (<2, 2–3, and >3 mg/dL) as well as direct bilirubin (<1 and 1–2 mg/dL) serum levels.
Results: Direct bilirubin serum level was significantly greater in patients who did not survive 6 months than in those who did ([0.58 ± 0.46 (SD) mg/dL; range: <0.1–1.8 mg/dL] vs. [0.40 ± 0.31 (SD) mg/dL; range: <0.1–1.6 mg/dL], respectively) (P = 0.04), and likewise at 12 months ([0.49 ± 0.38 (SD) mg/dL; range: <0.1–1.8 mg/dL] vs. [0.38 ± 0.32 (SD) mg/dL; range: <0.1–1.6 mg/dL], respectively) (P = 0.03). Total bilirubin serum level was not significantly different between patients who did not and did survive 6 months ([1.54 ± 0.99 (SD) mg/dL; range: 0.3–3.9 mg/dL] vs. [1.27 ± 0.70 (SD) mg/dL; range: 0.3–3.75 mg/dL], respectively) (P = 0.16), although it was significantly different at 12 months ([1.46 ± 0.87 (SD) mg/dL; range: 0.3–3.9 mg/dL] vs. [1.22 ± 0.65 (SD) mg/dL; range: 0.3–3.9 mg/dL]) (P = 0.03). Akaike information criterion (AIC) analysis revealed that direct bilirubin level predicted overall survival (AIC = 941.19 vs. 1000.51) and complications (AIC = 352.22 vs. 357.42) more accurately than total bilirubin serum level.
Conclusion: Direct bilirubin serum level appears to outperform total bilirubin concentration for predicting complications and overall survival in patients undergoing TACE. Patients with relatively maintained direct bilirubin levels should be considered for TACE, particularly in the setting of bridging to transplant.

9.
Purpose: To evaluate potential differences in non-target embolization and vessel microsphere filling between a reflux-control microcatheter (RCM) and a standard end-hole microcatheter (SEHM) in a swine model.
Materials and methods: Radiopaque microspheres were injected through both RCM and SEHM (2.4-Fr and 2.7-Fr) in the kidneys of a preclinical swine model. Transarterial renal embolization procedures with RCM or SEHM were performed in both kidneys of 14 pigs. Renal arteries were selectively embolized with an automated injection protocol of radiopaque microspheres. Ex-vivo X-ray microtomography images of the kidneys were used to evaluate the embolization by quantifying the deposition of injected microspheres in the target vs. the non-target area of injection. X-ray microtomography images were blindly analyzed by five interventional radiologists. The degree of vessel filling and non-target embolization were each quantified on a scale from 1 to 5. Analysis of variance was used to compare the paired scores.
Results: Total volumes of radiopaque microspheres injected were similar for RCM (11.5 ± 3.6 [SD] mL; range: 6–17 mL) and SEHM (10.6 ± 5.2 [SD] mL; range: 4–19 mL) (P = 0.38). The ratio of enhanced voxels in the target (T) vs. non-target (NT) areas was greater with RCM (T = 98.3% vs. NT = 1.7%) than with SEHM (T = 89% vs. NT = 11%), but the difference was not significant (P = 0.30). The total score blindly given by the five interventional radiologists differed significantly between RCM (12.3 ± 2.1 [SD]; range: 6–15) and the standard catheter (11.3 ± 2.5 [SD]; range: 4–15) (P = 0.0073), with a significantly better non-target embolization score for RCM (3.8 ± 1.3 [SD]; range: 3.5–4.2) than for SEHM (3.2 ± 1.5 [SD]; range: 2.9–3.5) (P = 0.014).
Conclusion: In an animal model, RCMs reduced non-target embolization from 11% to 1.7% of delivered microspheres, increasing delivery to the target vessels to 98.3%, compared with SEHMs.

10.
Purpose: To compare the diagnostic performance of ultra-low dose (ULD) computed tomography (CT) to that of standard dose (STD) CT for the diagnosis of non-traumatic abdominal emergencies, using clinical follow-up as the reference standard.
Materials and methods: All consecutive patients requiring emergency abdominopelvic CT examination from March 2017 to September 2017 were prospectively included. ULD and STD CT acquisitions were obtained after intravenous administration of iodinated contrast medium (portal phase), at 125 mAs for STD and 55 mAs for ULD. Diagnostic performance was retrospectively evaluated on ULD and STD CT images using clinical follow-up as the reference diagnosis.
Results: A total of 308 CT examinations from 308 patients (145 men; mean age, 59.1 ± 20.7 [SD] years; age range: 18–96 years) were included, among which 241/308 (78.2%) showed abnormal findings. The effective dose was significantly lower with the ULD protocol (1.55 ± 1.03 [SD] mSv) than with the STD protocol (3.67 ± 2.56 [SD] mSv) (P < 0.001). Sensitivity was significantly lower for the ULD protocol (85.5% [95% CI: 80.4–89.4]) than for the STD protocol (93.4% [95% CI: 89.4–95.9]; P < 0.001), whereas specificities were similar (94.0% [95% CI: 85.1–98.0] vs. 95.5% [95% CI: 87.0–98.9], respectively). ULD sensitivity was equivalent to STD for bowel obstruction and colitis/diverticulitis (96.4% [95% CI: 87.0–99.6] and 86.5% [95% CI: 74.3–93.5] for ULD vs. 96.4% [95% CI: 87.0–99.6] and 88.5% [95% CI: 76.5–94.9] for STD, respectively) but lower for appendicitis, pyelonephritis, abscesses, and renal colic (75.0% [95% CI: 57.6–86.9], 77.3% [95% CI: 56.0–90.1], 90.5% [95% CI: 69.6–98.4], and 85% [95% CI: 62.9–95.4] for ULD vs. 93.8% [95% CI: 78.6–99.2], 95.5% [95% CI: 76.2–100.0], 100.0% [95% CI: 81.4–100.0], and 100.0% [95% CI: 80.6–100.0] for STD, respectively). Sensitivities differed significantly between the two protocols only for appendicitis (P = 0.041).
Conclusion: In an emergency context, for patients with non-traumatic abdominal emergencies, ULD CT showed inferior diagnostic performance compared to STD CT for most abdominal conditions, with the exception of bowel obstruction and colitis/diverticulitis detection.

11.
Transplantation Proceedings, 2021, 53(8):2476-2480
Background: The treatment of coronavirus disease 2019 (COVID-19) is based on the patient's clinical status and levels of inflammatory biomarkers. The comparative behavior of these biomarkers in kidney transplant (KT) patients with pneumonia from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) versus non-SARS-CoV-2 etiologies is unknown. The aim of this study was to compare the clinical presentation and inflammatory parameters at admission of KT patients with COVID-19 pneumonia and those with non-COVID-19 pneumonia over the same period.
Methods: Biomarkers were measured and compared between KT patients with COVID-19 pneumonia (n = 57) and non-COVID-19 pneumonia (n = 20) from March 2020 to March 2021.
Results: Both groups had comparable demographics. The KT patients with COVID-19 had fewer neutrophils (6824 ± 5000 vs 8969 ± 4206; P = .09) than the non-COVID group, although there was no significant difference in lymphocyte count. Non-COVID-19 pneumonia was associated with higher d-dimer (median, 921 [interquartile range (IQR), 495-1680] vs median, 2215 [IQR, 879-3934]; P = .09) and interleukin-6 (median, 35 [IQR, 20-128] vs median, 222 [IQR, 38-500]; P = .006) levels. The ferritin level was higher in the COVID-19 group (median, 809 [IQR, 442-1330] vs median, 377 [IQR, 276-885]; P = .008). In multivariable analysis, only d-dimer (hazard ratio [HR], 1; 95% confidence interval [CI], 1-1.002; P = .02) and ferritin (HR, 1; 95% CI, 0.9-0.9; P = .02) remained statistically significant.
Conclusion: COVID-19 pneumonia in KT patients shows a different inflammatory biomarker profile than other non-COVID pneumonias, which could be useful for identifying COVID-19 in KT patients. More detailed studies are necessary to understand the behavior of these biomarkers in KT patients with COVID-19.

12.
Background: Brain death (BD) is characterized by a complex inflammatory response that results in dysfunction of potentially transplantable organs. This process is modulated by cytokines, which amplify graft immunogenicity. We investigated the inflammatory response in an animal model of BD and analyzed the effects of thalidomide, a drug with powerful immunomodulatory properties.
Methods: BD was induced in male Lewis rats. We studied three groups: Control (sham-operated rats; n = 6), BD (rats subjected to brain death; n = 6), and BD + Thalid (BD rats treated with a single dose of thalidomide [200 mg/kg] administered by gavage; n = 6). Six hours after BD, serum levels of urea and creatinine, as well as systemic and renal tissue protein levels of TNF-α and IL-6, were analyzed. We also determined mRNA expression of ET-1 and macrophage infiltration by immunohistochemistry.
Results: BD induced a striking inflammatory status, demonstrated by a significant increase of plasma cytokines: TNF-α (9.4 ± 2.8 pg/mL [BD] vs. 2.8 ± 4.3 pg/mL [Control]) and IL-6 (6219.5 ± 1380.6 pg/mL [BD] vs. 1854.7 ± 822.6 pg/mL [Control]); and in the renal tissue: TNF-α (2.5 ± 0.3 relative expression [BD] vs. 1.0 ± 0.4 [Control]; p < 0.05) and IL-6 (4.0 ± 0.4 relative expression [BD] vs. 1.0 ± 0.3 [Control]; p < 0.05). Moreover, BD increased macrophage infiltration (2.47 ± 0.07 cells/field [BD] vs. 1.20 ± 0.05 cells/field [Control]; p < 0.05) and ET-1 gene expression (2.5 ± 0.3 relative expression [BD] vs. 1.0 ± 0.2 [Control]; p < 0.05). In addition, we observed deterioration in renal function, characterized by increased urea (194.7 ± 25.0 mg/dL [BD] vs. 108.0 ± 14.2 mg/dL [Control]; p < 0.05) and creatinine (1.4 ± 0.04 mg/dL [BD] vs. 1.0 ± 0.07 mg/dL [Control]; p < 0.05) levels. Thalidomide administration significantly reduced plasma cytokines: TNF-α (5.1 ± 1.4 pg/mL [BD + Thalid] vs. BD; p < 0.05) and IL-6 (1056.5 ± 488.3 pg/mL [BD + Thalid] vs. BD; p < 0.05), as well as their renal tissue levels: TNF-α (1.5 ± 0.2 relative expression [BD + Thalid] vs. BD; p < 0.05) and IL-6 (2.1 ± 0.3 relative expression [BD + Thalid] vs. BD; p < 0.05). Thalidomide treatment also significantly decreased ET-1 expression (1.4 ± 0.3 relative expression [BD + Thalid] vs. BD; p < 0.05) and macrophage infiltration (1.17 ± 0.06 cells/field [BD + Thalid] vs. BD; p < 0.05), and prevented renal functional failure, reducing urea (148.3 ± 4.4 mg/dL [BD + Thalid] vs. BD; p < 0.05) and creatinine (1.1 ± 0.14 mg/dL [BD + Thalid] vs. BD; p < 0.05) levels.
Conclusions: The immunomodulatory properties of thalidomide were effective in decreasing the systemic and local immunologic response, leading to diminished renal damage, as reflected in the decrease of urea and creatinine levels. These results suggest that thalidomide may represent a potential strategy for treating BD kidney donors.

13.
Journal of Vascular Surgery, 2020, 71(6):2056-2064
Objective: Limited data exist comparing atherectomy (At) with balloon angioplasty for infrapopliteal peripheral arterial disease. The objective of this study was to compare the outcomes of infrapopliteal At with angioplasty vs angioplasty alone in patients with critical limb ischemia.
Methods: This is a retrospective, single-center, longitudinal study comparing patients undergoing either infrapopliteal At with angioplasty or angioplasty alone for critical limb ischemia between January 2014 and October 2017. The primary outcome was the primary patency rate. Secondary outcomes were reintervention rates, assisted primary patency, secondary patency, major adverse cardiac events, major adverse limb events, amputation-free survival, overall survival, and wound healing rates. Data were analyzed in multivariate generalized linear models, with log-rank tests to determine survival in Kaplan-Meier curves.
Results: There were 342 infrapopliteal interventions, 183 percutaneous balloon angioplasties (PTA; 54%) and 159 atherectomies with PTA (At/PTA; 46%), performed on 290 patients with a mean age of 67 ± 12 years; 61% of the patients were male. The PTA and At/PTA groups had similar demographics, tissue loss (79% vs 84%; P = .26), ischemic rest pain (21% vs 16%; P = .51), mean follow-up (19 ± 9 vs 20 ± 9 months; P = .32), mean number of vessels treated (1.7 ± 0.8 vs 1.9 ± 0.8; P = .08), and mean lesion length treated (6.55 ± 5.00 cm vs 6.02 ± 4.00 cm; P = .08), respectively. Similar 3-month (96 ± 1% vs 94 ± 1%), 6-month (85 ± 2% vs 86 ± 3%), 12-month (68 ± 3% vs 69 ± 4%), and 18-month (57 ± 4% vs 62 ± 4%) primary patency rates were seen in the two groups (P = .87). At/PTA patients had significantly higher reintervention rates compared with PTA patients (28% vs 16%; P = .02). Similar assisted primary patency rates (67 ± 4% vs 69 ± 4%; P = .78) and secondary patency rates (61 ± 4% vs 66 ± 4%; P = .98) were seen in the PTA and At/PTA groups at 18 months. The 30-day major adverse cardiac event rates (3% vs 2%; P = .13) and 30-day major adverse limb event rates (5% vs 4%; P = .2) were similar in both groups. Wound healing rates (72 ± 3% vs 75 ± 2%; P = .12), 1-year amputation-free survival (68 ± 4.1% vs 70 ± 2%; P = .5), and 1-year overall survival (76 ± 4% vs 78 ± 4%; P = .39) did not differ between the PTA and At/PTA groups. The At/PTA group had higher local complication rates (7 [4%] vs 1 [0.5%]; P = .03).
Conclusions: At with angioplasty provides patency rates similar to those of angioplasty alone for infrapopliteal peripheral arterial disease but is associated with higher reintervention and local complication rates. Further appropriately designed studies are required to determine the exact role of At in this subset of patients.

14.
15.
Purpose: To compare the degree of perilymphatic enhancement on 4-hour post-contrast constant flip angle three-dimensional fluid-attenuated inversion recovery (3D-FLAIR) images obtained with a short repetition time (TR) with that on images obtained with a long TR.
Materials and methods: This single-center, prospective study included patients who underwent MRI of the inner ear with a heavily T2-weighted sequence and 3D-FLAIR sequences with a "short" TR of 10,000 ms (s3D-FLAIR) and a "long" TR of 16,000 ms (l3D-FLAIR). Signal intensity ratio (SIR) and contrast-to-noise ratio (CNR) obtained with s3D-FLAIR and l3D-FLAIR were quantitatively assessed using a region of interest (ROI) method and compared. The morphology of the endolymphatic space on both sequences was also evaluated.
Results: From March 2020 to July 2020, 20 consecutive patients were enrolled (9 women, 11 men; mean age, 52.1 ± 14.5 [SD] years; age range: 29–75 years). Mean SIR was significantly greater on l3D-FLAIR images (21.1 ± 8.8 [SD]; range: 7.6–46.1) than on s3D-FLAIR images (15.7 ± 6.7 [SD]; range: 5.9–33.4) (P < 0.01). Mean CNR was also significantly greater on l3D-FLAIR images (17 ± 8.5 [SD]; range: 2–40) than on s3D-FLAIR images (12 ± 6.3 [SD]; range: 3.2–29.8) (P < 0.01). Kappa values for inter-rater agreement for endolymphatic hydrops, vestibular atelectasis, and perilymphatic fistula were 0.93 (95% CI: 0.74–1), 1 (95% CI: 0.85–1), and 1 (95% CI: 0.85–1), respectively.
Conclusion: This study demonstrates that the sensitivity of 3D-FLAIR sequences to low concentrations of gadolinium in the perilymphatic space is improved by lengthening the TR, with SIR and CNR increased by 34.4% and 41.3%, respectively.

16.
Purpose: Accelerating MRI acquisitions, especially T2-weighted sequences, is essential to reduce both the duration of MRI examinations and kinetic artifacts in liver imaging. The purpose of this study was to compare the acquisition time and image quality of a single-shot fat-suppressed turbo spin-echo (TSE) T2-weighted sequence with deep learning reconstruction (HASTEDL) with those of a fat-suppressed T2-weighted BLADE TSE sequence in patients with focal liver lesions.
Materials and methods: Ninety-five patients (52 men, 43 women; mean age, 61 ± 14 [SD] years; age range: 28–87 years) with 42 focal liver lesions (17 hepatocellular carcinomas, 10 sarcoidosis lesions, 9 myeloma lesions, 3 liver metastases, and 3 focal nodular hyperplasias) who underwent liver MRI at 1.5 T including HASTEDL and BLADE sequences were retrospectively included. Overall image quality, noise level in the liver, lesion conspicuity, and sharpness of liver lesion contours were assessed by two independent readers. Liver signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were measured and compared between the two sequences, as were the mean durations of the sequences (Student t-test or Wilcoxon test for paired data).
Results: Median overall quality was significantly greater on HASTEDL images (3; IQR: 3, 3) than on BLADE images (2; IQR: 1, 3) (P < 0.001). Median noise level in the liver was significantly lower on HASTEDL images (0; IQR: 0, 0.5) than on BLADE images (1; IQR: 1, 2) (P < 0.001). On HASTEDL images, mean liver SNR (107.3 ± 39.7 [SD]) and mean focal liver lesion CNR (87.0 ± 76.6 [SD]) were significantly greater than on BLADE images (67.1 ± 23.8 [SD], P < 0.001, and 48.6 ± 43.9 [SD], P = 0.027, respectively). Acquisition time was significantly shorter with the HASTEDL sequence (18 ± 0 [SD] s; range: 18–18 s) than with the BLADE sequence (152 ± 47 [SD] s; range: 87–263 s) (P < 0.001).
Conclusion: Compared with the BLADE sequence, the HASTEDL sequence significantly reduces acquisition time while improving image quality, liver SNR, and focal liver lesion CNR.

17.
PurposeThe purpose of this study was to compare the ventricular-vascular coupling ratio (VVCR) between patients with repaired standard tetralogy of Fallot (TOF) and those with repaired TOF-pulmonary atresia (TOF-PA) using cardiovascular magnetic resonance (CMR).Materials and methodsPatients with repaired TOF aged > 6 years were prospectively enrolled for same-day CMR, echocardiography, and exercise stress testing following a standardized protocol. Sanz's method was used to calculate VVCR as right ventricular (RV) end-systolic volume divided by pulmonary artery stroke volume. Regression analysis was used to examine associations with exercise test parameters, New York Heart Association (NYHA) class, RV size and biventricular systolic function.ResultsA total of 248 subjects were included; of these, 222 had repaired TOF (group I; 129 males; mean age, 15.9 ± 4.7 [SD] years [range: 8–29 years]) and 26 had repaired TOF-PA (group II; 14 males; mean age, 17.0 ± 6.3 [SD] years [range: 8–29 years]). Mean VVCR for all subjects was 1.54 ± 0.64 [SD] (range: 0.43–3.80). Mean VVCR was significantly greater in the TOF-PA group (1.81 ± 0.75 [SD]; range: 0.78–3.20) than in the standard TOF group (1.51 ± 0.72 [SD]; range: 0.43–3.80) (P = 0.03). VVCR was also greater in the 68 NYHA class II subjects (1.79 ± 0.66 [SD]; range: 0.75–3.26) than in the 179 NYHA class I subjects (1.46 ± 0.61 [SD]; range: 0.43–3.80) (P < 0.001).ConclusionNon-invasive determination of VVCR using CMR is feasible in children and adolescents. VVCR was associated with NYHA class and was worse in subjects with repaired TOF-PA than in those with repaired standard TOF. VVCR shows promise as an indicator of pulmonary artery compliance and cardiovascular performance in this cohort.
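Sanz's method as stated in the abstract reduces to a simple ratio of two CMR-derived volumes; a minimal sketch (the function name and the mL units are illustrative assumptions):

```python
def vvcr(rv_end_systolic_volume_ml: float, pa_stroke_volume_ml: float) -> float:
    """Ventricular-vascular coupling ratio (Sanz's method):
    RV end-systolic volume divided by pulmonary artery stroke volume.
    Both inputs are volumes in mL, so the ratio is dimensionless."""
    if pa_stroke_volume_ml <= 0:
        raise ValueError("pulmonary artery stroke volume must be positive")
    return rv_end_systolic_volume_ml / pa_stroke_volume_ml
```

A larger RV end-systolic volume or a smaller stroke volume both push the ratio up, consistent with the higher (worse) mean VVCR reported for the TOF-PA and NYHA class II groups.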

18.
Background and PurposeThere are few data regarding the occurrence of RIFLE-based acute kidney dysfunction (AKD) after heart transplantation (HT) and its risk factors. The aim of this study was to apply the RIFLE criteria in patients who developed AKD following HT, to compare patients with and without AKD, and to determine the incidence and risk factors of AKD.Patients and MethodsWe retrospectively analyzed the records of 65 patients who underwent HT between 2003 and 2012. We investigated the 3 levels of renal dysfunction outlined in the RIFLE criteria: risk (R), injury (I), and failure (F). The appropriate class was assigned by comparing the baseline creatinine level to peak levels in the first 7 days after HT. Perioperative variables of heart transplant recipients were collected.ResultsThe mean age at transplantation was 32.8 ± 16.6 years, and 72.7% of patients were male. The incidence of AKD was 61%: risk occurred in 18%, injury in 16%, and failure in 27% of patients. Patients who had AKD were significantly older (37.9 ± 15.6 vs 24.6 ± 15.0 years; P = .008), had a higher body mass index (24.7 ± 6.7 vs 18.6 ± 4.3; P = .002), and more frequently had a history of hypertension (92% vs 8%; P = .011) and smoking (100% vs 0%; P = .008) than those who did not have AKD. Compared with patients who did not develop AKD postoperatively, those who did had higher preoperative creatinine levels (1.1 ± 0.3 vs 0.8 ± 0.4; P = .025), higher intraoperative mean arterial pressures (99.2 ± 14.1 vs 89.0 ± 11.4 mm Hg; P = .011), a higher frequency of intraoperative acidosis (81% vs 19%; P = .041), higher lactate levels (5.1 ± 3.8 vs 2.8 ± 1.7 mmol/L; P = .038), and more frequent postoperative use of cyclosporine (91% vs 9%; P = .025). Logistic regression analysis revealed that age (odds ratio [OR], 1.057; 95% confidence interval [CI], 1.010–1.106; P = .018) and use of cyclosporine (OR, 0.099; 95% CI, 0.010–0.935; P = .043) were independent risk factors for AKD.ConclusionsOur results suggest that, based on the RIFLE criteria, AKD occurs in more than half of patients after HT. Older age and use of cyclosporine are associated with AKD following HT.
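The class assignment described above (baseline vs. peak creatinine in the first 7 days) can be sketched as follows. The fold-increase thresholds (Risk ≥ 1.5×, Injury ≥ 2×, Failure ≥ 3×) follow the published RIFLE consensus creatinine criteria rather than anything stated in this abstract, and the function name is an assumption; note the full RIFLE criteria also include GFR and urine-output components not used here:

```python
def rifle_class(baseline_creatinine: float, peak_creatinine: float) -> str:
    """Assign a RIFLE severity class from the fold-increase of peak
    over baseline serum creatinine (creatinine criterion only):
    Risk >= 1.5x, Injury >= 2x, Failure >= 3x."""
    ratio = peak_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "Failure"
    if ratio >= 2.0:
        return "Injury"
    if ratio >= 1.5:
        return "Risk"
    return "No AKD"
```

Applied per patient, counting the non-"No AKD" results over the cohort would reproduce the incidence breakdown (risk/injury/failure) reported above.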

19.
PurposeThe purpose of this study was to evaluate the effectiveness and complication rate of computed tomography (CT)-guided epidural injection of steroids and local anesthetics for pain relief in patients with neuralgia due to acute or chronic herpes zoster (HZ).Materials and methodsA prospective study was conducted from April 2017 to February 2019 including patients with HZ neuralgia (HZN) at any stage (acute or chronic, the latter being defined as pain lasting more than 3 months and also called postherpetic neuralgia [PHN]). The sensory ganglion of the affected dermatome and/or the affected sensory nerve was targeted under CT guidance, and a mixture of two vials of methylprednisolone 40 mg/mL and 2 mL of lidocaine 1% was injected locally. Using a visual analogue scale (VAS, 0 to 10), pain was assessed prior to the procedure and at day 7, 1 month, 3 months and 6 months. Adverse effects were graded according to the Society of Interventional Radiology classification.ResultsTwenty patients were included (9 men, 11 women; mean age, 67 ± 13.9 [SD] years; range: 27–83 years). Of these, 14 patients had acute HZN and 6 had PHN. Mean VAS at baseline was 8.1 ± 1.2 (SD) (range: 6–10), with a significant decrease (P < 0.0001) at day 7 (3.4 ± 3.2 [SD]; range: 0–10), day 30 (3.4 ± 3.2 [SD]; range: 0–9), day 90 (2.9 ± 3.2 [SD]; range: 0–9), and day 180 (2.5 ± 3.1 [SD]; range: 0–9). Infiltrations were significantly more effective for acute HZN than for PHN (P < 0.001), and acute HZN required significantly fewer infiltrations for pain relief (P = 0.002). Only one grade A adverse event was reported.ConclusionCT-guided epidural injection of a mixture of steroids and local anesthetics is effective for HZN, with a persisting effect over 6 months.

20.
PurposeThe purpose of this study was to investigate right atrial and ventricular strain parameters on cardiac magnetic resonance (CMR) in patients with precapillary pulmonary hypertension (PPH) and whether they can aid in the assessment of PPH prognosis.Materials and methodsAdult patients with group 1 or 4 PPH were invited to participate in the study. Age- and sex-matched healthy volunteers were also recruited as controls. At baseline, patients underwent clinical examination, N-terminal pro-B-type natriuretic peptide measurement and CMR with feature tracking post-processing (CMR-FT). Healthy controls underwent only CMR-FT. The study's primary endpoint was clinical failure, defined as death, hospitalization or demonstrable clinical deterioration during follow-up. Patients who were unable to perform the 6-minute walking test due to musculoskeletal disorders were excluded from the study.ResultsThirty-six patients (8 men, 28 women; mean age, 50.6 ± 13.8 [SD] years [range: 18.6–78.5 years]) and 12 healthy control subjects (5 men, 7 women; mean age, 40.6 ± 13.5 [SD] years [range: 23.1–64.4 years]) were recruited. Right ventricular global longitudinal strain (GLS) was significantly impaired in PPH patients compared to healthy controls (−20.2 ± 5.3 [SD] % [range: −28.8 to −9.1%] vs. −28.4 ± 3.1% [range: −33.7 to −22.7%], respectively; P < 0.001). Right atrial GLS was also significantly impaired in PPH patients compared to healthy controls (−19.9 ± 4.5% [range: −28.6 to −3.6%] vs. −26.5 ± 4.2% [range: −32.8 to −15.8%], respectively; P < 0.001). Clinical failure occurred in 19 of 36 patients (53%). Among CMR parameters, right ventricular GLS predicted clinical failure most reliably (−22.6 ± 3.8 [SD] % [range: −27.6 to −12.7%] for patients without clinical failure vs. −18 ± 5.6 [SD] % [range: −28.8 to −9.1%] for patients with clinical failure; hazard ratio [HR] = 1.85; P = 0.007; area under the receiver operating characteristic curve = 0.75). Lower absolute right atrial GLS was significantly associated with clinical failure (−22.7 ± 3.0 [SD] % [range: −28.6 to −17.7%] for patients without clinical failure vs. −16.9 ± 5.8 [SD] % [range: −24.2 to −3.6%] for patients with clinical failure) (HR = 1.53; P = 0.035).ConclusionCMR feature tracking-derived myocardial strain parameters of both the right atrium and ventricle can assist clinicians in the prognosis of PPH.
