Similar Documents
20 similar documents found.
1.
Excellent outcomes among HIV+ kidney transplant (KT) recipients have been reported by the NIH consortium, but it is unclear if experience with HIV+ KT is required to achieve these outcomes. We studied associations between experience measures and outcomes in 499 HIV+ recipients (SRTR data 2004–2011). Experience measures examined included: (1) center‐level participation in the NIH consortium; (2) KT experiential learning curve; and (3) transplant era (2004–2007 vs. 2008–2011). There was no difference in outcomes among centers early in their experience (first 5 HIV+ KT) compared to centers having performed > 6 HIV+ KT (graft survival [GS] adjusted hazard ratio [aHR]: 1.05, 95% CI: 0.68–1.61, p = 0.82; patient survival [PS] aHR: 0.93, 95% CI: 0.56–1.53, p = 0.76), and participation in the NIH study was not associated with better outcomes (GS aHR: 1.08, 95% CI: 0.71–1.65, p = 0.71; PS aHR: 1.13, 95% CI: 0.68–1.89, p = 0.63). Transplant era was strongly associated with outcomes; HIV+ KTs performed in 2008–2011 had a 38% lower risk of graft loss (aHR: 0.62; 95% CI: 0.42–0.92, p = 0.02) and a 41% lower risk of death (aHR: 0.59; 95% CI: 0.39–0.90, p = 0.01) than those performed in 2004–2007. Outcomes after HIV+ KT have improved over time, but center‐level experience or consortium participation is not necessary to achieve excellent outcomes, supporting continued expansion of HIV+ KT in the US.
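The abstract's translation of hazard ratios into percent risk reductions (aHR 0.62 → 38% lower risk of graft loss; aHR 0.59 → 41% lower risk of death) is simple arithmetic; a minimal sketch:

```python
def pct_risk_change(ahr):
    """Percent lower risk implied by an adjusted hazard ratio below 1."""
    if ahr <= 0:
        raise ValueError("hazard ratio must be positive")
    return round((1 - ahr) * 100)

# aHR 0.62 for graft loss -> 38% lower risk; aHR 0.59 for death -> 41% lower
```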

2.
More than one‐third of US adults have limited health literacy, putting them at risk of adverse clinical outcomes. We evaluated the prevalence of limited health literacy among 1578 adult kidney transplant (KT) candidates (May 2014‐November 2017) and examined its association with listing for transplant and waitlist mortality in this pilot study. Limited health literacy was assessed at KT evaluation by using a standard cutoff score ≤5 on the Brief Health Literacy Screen (score range 0‐12, lower scores indicate worse health literacy). We used logistic regression and adjusted Cox proportional hazards models to identify risk factors for limited health literacy and to quantify its association with listing and waitlist mortality. We found that 8.9% of candidates had limited health literacy; risk factors included less than college education (adjusted odds ratio [aOR] = 2.87, 95% confidence interval [CI]: 1.86‐4.43), frailty (aOR = 1.85, 95% CI: 1.22‐2.80), comorbidity (Charlson comorbidity index [1‐point increase] aOR = 1.12, 95% CI: 1.04‐1.20), and cognitive impairment (aOR = 3.45, 95% CI: 2.20‐5.41) after adjusting for age, sex, race, and income. Candidates with limited health literacy had a 30% (adjusted hazard ratio = 0.70, 95% CI: 0.54‐0.91) decreased likelihood of listing and a 2.42‐fold (95% CI: 1.16‐ to 5.05‐fold) increased risk of waitlist mortality. Limited health literacy may be a salient mechanism in access to KT; programs to aid candidates with limited health literacy may improve outcomes and reduce disparities.
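The screening rule described above (Brief Health Literacy Screen, range 0‐12, standard cutoff ≤5) can be sketched directly:

```python
def limited_health_literacy(bhls_score):
    """Apply the study's cutoff on the Brief Health Literacy Screen:
    scores range 0-12 (lower = worse); a score <= 5 indicates
    limited health literacy."""
    if not 0 <= bhls_score <= 12:
        raise ValueError("BHLS score must be in the range 0-12")
    return bhls_score <= 5
```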

3.

Background

Objective measures for preoperative risk assessment are needed to inform surgical risk stratification. Previous studies using preoperative imaging have shown that the psoas muscle is a significant predictor of postoperative outcomes. Because psoas measurements are not always available, additional trunk muscles should be identified as alternative measures of risk assessment. Our research assessed the relationship between paraspinous muscle area, psoas muscle area, and surgical outcomes.

Methods

Using the Michigan Surgical Quality Collaborative database, we retrospectively identified 1309 surgical patients who had preoperative abdominal computerized tomography scans within 90 d of operation. Analytic morphomic techniques were used to measure the cross-sectional area of the paraspinous muscle at the T12 vertebral level. The primary outcome was 1-y mortality. Analyses were stratified by sex, and logistic regression was used to assess the relationship between muscle area and postoperative outcome.

Results

The measurements of paraspinous muscle area at T12 were normally distributed. There was a strong correlation between paraspinous muscle area at T12 and total psoas area at L4 (r = 0.72, P <0.001). Paraspinous area was significantly associated with 1-y mortality in both females (odds ratio = 0.70 per standard deviation increase in paraspinous area, 95% confidence interval 0.50–0.99, P = 0.046) and males (odds ratio = 0.64, 95% confidence interval 0.47–0.88, P = 0.006).

Conclusions

Paraspinous muscle area correlates with psoas muscle area, and larger paraspinous muscle area is associated with lower mortality rates after surgery. This suggests that the paraspinous muscle may be an alternative to the psoas muscle in the context of objective measures of risk stratification.
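The Results section reports odds ratios "per standard deviation increase in paraspinous area." Rescaling a per‐unit logistic‐regression coefficient to a per‐SD odds ratio is a one‐line computation; the inputs below are illustrative, not values from the study:

```python
import math

def odds_ratio_per_sd(beta_per_unit, sd):
    """Convert a logistic-regression coefficient expressed per unit of
    a predictor into an odds ratio per one-standard-deviation increase:
    OR_per_SD = exp(beta * SD)."""
    return math.exp(beta_per_unit * sd)
```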

4.
We examined a novel database wherein national US transplant registry identifiers were linked to records from a large pharmaceutical claims warehouse (2008–2015) to characterize antidepressant use before and after kidney transplantation, and associations [adjusted hazard ratio (aHR) 95% CI] with death and graft failure. Among 72 054 recipients, 12.6% filled antidepressant medications in the year before transplant, and use was more common among women and patients who were white, unemployed, and had limited functional status. Pre‐transplant antidepressant use was associated with 39% higher 1‐year mortality (aHR 1.39, 95% CI 1.18–1.64) and 15% higher all‐cause graft loss risk (aHR 1.15, 95% CI 1.02–1.30). More than 50% of patients who filled antidepressants pre‐transplant continued to fill them post‐transplant. Antidepressant use in the first year after transplant was associated with twofold higher risk of death (aHR 1.94, 95% CI 1.60–2.35), 38% higher risk of death‐censored graft failure, and 61% higher risk of all‐cause graft failure in the subsequent year. Pre‐listing antidepressant use was also associated with increased mortality, but transplantation conferred a survival benefit regardless of pre‐listing antidepressant use status. While associations may in part reflect underlying behaviors or comorbidities, kidney transplant candidates and recipients treated with antidepressant medications should be monitored and supported to reduce the risk of adverse outcomes.

5.
Excellent outcomes have been demonstrated in primary human immunodeficiency virus (HIV)–positive (HIV+) kidney transplant recipients, but a subset will lose their graft and seek retransplantation (re‐KT). To date, no study has examined outcomes among HIV+ re‐KT recipients. We studied risk for death and graft loss among 4149 (22 HIV+ vs. 4127 HIV‐negative [HIV−]) adult re‐KT recipients reported to the Scientific Registry of Transplant Recipients (SRTR) (2004–2013). Compared to HIV− re‐KT recipients, HIV+ re‐KT recipients were more commonly African American (63.6% vs. 26.7%, p < 0.001), infected with hepatitis C (31.8% vs. 5.0%, p < 0.001) and had longer median time on dialysis (4.8 years vs. 2.1 years, p = 0.02). There were no significant differences in length of time between the primary and re‐KT events by HIV status (1.5 years vs. 1.4 years, p = 0.52). HIV+ re‐KT recipients experienced a 3.11‐fold increased risk of death (adjusted hazard ratio [aHR]: 3.11, 95% confidence interval [CI]: 1.82–5.34, p < 0.001) and a 1.96‐fold increased risk of graft loss (aHR: 1.96, 95% CI: 1.14–3.36, p = 0.01) compared to HIV− re‐KT recipients. Re‐KT among HIV+ recipients was associated with increased risk for mortality and graft loss. Future research is needed to determine if a survival benefit is achieved with re‐KT in this vulnerable population.

6.
Despite its impact on quality of life and potential for complications, specific risk and protective factors for herpes zoster (HZ) after kidney transplantation (KT) remain to be clarified. We included 444 patients undergoing KT between November 2008 and March 2013. Peripheral blood lymphocyte subpopulations were measured at baseline and months 1 and 6. The risk factors for early (first post‐transplant year) and late HZ (years 1–5) were separately assessed. We observed 35 episodes of post‐transplant HZ after a median follow‐up of 48.3 months (incidence rate: 0.057 per 1000 transplant‐days). Median interval from transplantation was 18.3 months. Six patients (17.1%) developed disseminated infection. Postherpetic neuralgia occurred in 10 cases (28.6%). The receipt of anti‐cytomegalovirus (CMV) prophylaxis with (val)ganciclovir decreased the risk of early HZ [adjusted hazard ratio (aHR): 0.08; 95% CI: 0.01–1.13; P‐value = 0.062], whereas the natural killer (NK) cell count at month 6 was protective for the occurrence of late HZ [aHR (per 10‐cells/μl increase): 0.94; 95% CI: 0.88–1.00; P‐value = 0.054]. In conclusion, two easily ascertainable factors (whether the patient is receiving anti‐CMV prophylaxis and the NK cell count at month 6) might be potentially useful to tailor preventive strategies according to individual susceptibility to post‐transplant HZ.
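The incidence rate quoted above (0.057 per 1000 transplant‐days) is events divided by accumulated person‐time; a minimal sketch, with illustrative person‐time rather than the study's exact denominator:

```python
def incidence_per_1000_days(events, person_days):
    """Crude incidence rate per 1000 days of follow-up, the measure
    the abstract reports for post-transplant herpes zoster."""
    if person_days <= 0:
        raise ValueError("person-days must be positive")
    return events / person_days * 1000
```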

7.
Increased risk donors (IRDs) may inadvertently transmit blood‐borne viruses to organ recipients through transplant. Rates of IRD kidney transplants in children and the associated outcomes are unknown. We used the Scientific Registry of Transplant Recipients to identify pediatric deceased donor kidney transplants that were performed in the United States between January 1, 2005 and December 31, 2015. We used the Cox regression analysis to compare patient and graft survival between IRD and non‐IRD recipients, and a sequential Cox approach to evaluate survival benefit after IRD transplants compared with remaining on the waitlist and never accepting an IRD kidney. We studied 328 recipients with and 4850 without IRD transplants. The annual IRD transplant rates ranged from 3.4% to 13.2%. IRDs were more likely to be male (P = .04), black (P < .001), and die from head trauma (P = .006). IRD recipients had higher mean cPRA (0.085 vs 0.065, P = .02). After multivariate adjustment, patient survival after IRD transplants was significantly higher compared with remaining on the waitlist (adjusted hazard ratio [aHR]: 0.48, 95% CI: 0.26‐0.88, P = .018); however, patient (aHR: 0.93, 95% CI: 0.54‐1.59, P = .79) and graft survival (aHR: 0.89, 95% CI: 0.70‐1.13, P = .32) were similar between IRD and non‐IRD recipients. We recommend that IRDs be considered for transplant in children.

8.
Although neutropenia is a common complication after lung transplant, its relationship with recipient outcomes remains understudied. We evaluated a retrospective cohort of 228 adult lung transplant recipients between 2008 and 2013 to assess the association of neutropenia and granulocyte colony‐stimulating factor (GCSF) treatment with outcomes. Neutropenia was categorized as mild (absolute neutrophil count 1000‐1499), moderate (500‐999), or severe (<500) and as a time‐varying continuous variable. Associations with survival, acute rejection, and chronic lung allograft dysfunction (CLAD) were assessed with the use of Cox proportional hazards regression. GCSF therapy impact on survival, CLAD, and acute rejection development was analyzed by propensity score matching. Of 228 patients, 101 (42.1%) developed neutropenia. Recipients with severe neutropenia had higher mortality rates than those of recipients with no (adjusted hazard ratio [aHR] 2.97, 95% confidence interval [CI] 1.05‐8.41, P = .040), mild (aHR 14.508, 95% CI 1.58‐13.34, P = .018), or moderate (aHR 3.27, 95% CI 0.89‐12.01, P = .074) neutropenia. Surprisingly, GCSF treatment was associated with a higher risk for CLAD in mildly neutropenic patients (aHR 3.49, 95% CI 0.93‐13.04, P = .063), although it did decrease death risk in severely neutropenic patients (aHR 0.24, 95% CI 0.07‐0.88, P = .031). Taken together, our data point to an important relationship between neutropenia severity and GCSF treatment in lung transplant outcomes.
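The severity grading used in this study (mild 1000‐1499, moderate 500‐999, severe <500 cells/μL) maps directly to a small classifier:

```python
def neutropenia_grade(anc):
    """Categorize an absolute neutrophil count (cells/uL) using the
    study's thresholds: severe < 500, moderate 500-999, mild 1000-1499,
    otherwise no neutropenia."""
    if anc < 0:
        raise ValueError("ANC cannot be negative")
    if anc < 500:
        return "severe"
    if anc < 1000:
        return "moderate"
    if anc < 1500:
        return "mild"
    return "none"
```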

9.
Prediction models for post‐kidney transplantation mortality have had limited success (C‐statistics ≤0.70). Adding objective measures of potentially modifiable factors may improve prediction and, consequently, kidney transplant (KT) survival through intervention. The Short Physical Performance Battery (SPPB) is an easily administered objective test of lower extremity function consisting of three parts (balance, walking speed, chair stands), each with scores of 0–4, for a composite score of 0–12, with higher scores indicating better function. SPPB performance and frailty (Fried frailty phenotype) were assessed at admission for KT in a prospective cohort of 719 KT recipients at Johns Hopkins Hospital (8/2009 to 6/2016) and University of Michigan (2/2013 to 12/2016). The independent associations between SPPB impairment (SPPB composite score ≤10) and composite score with post‐KT mortality were tested using adjusted competing risks models treating graft failure as a competing risk. The 5‐year posttransplantation mortality for impaired recipients was 20.6% compared to 4.5% for unimpaired recipients (p < 0.001). Impaired recipients had a 2.30‐fold (adjusted hazard ratio [aHR] 2.30, 95% confidence interval [CI] 1.12–4.74, p = 0.02) increased risk of post‐KT mortality compared to unimpaired recipients. Each one‐point decrease in SPPB score was independently associated with a 1.19‐fold (95% CI 1.09–1.30, p < 0.001) higher risk of post‐KT mortality. SPPB‐derived lower extremity function is a potentially highly useful and modifiable objective measure for pre‐KT risk prediction.
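The SPPB scoring and impairment cutoff described above (three components scored 0–4, composite 0–12, impairment defined as ≤10) can be sketched as:

```python
def sppb_assessment(balance, gait_speed, chair_stands):
    """Compute the SPPB composite score (each component 0-4, total
    0-12, higher = better function) and the study's impairment flag
    (composite <= 10)."""
    for part in (balance, gait_speed, chair_stands):
        if not 0 <= part <= 4:
            raise ValueError("each SPPB component is scored 0-4")
    total = balance + gait_speed + chair_stands
    return total, total <= 10
```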

10.
For some patient subgroups, human immunodeficiency virus (HIV) infection has been associated with worse outcomes after kidney transplantation (KT); potentially modifiable factors may be responsible. The study goal was to identify factors that predict a higher risk of graft loss among HIV‐positive KT recipients compared with a similar transplant among HIV‐negative recipients. In this study, 82 762 deceased donor KT recipients (HIV positive: 526; HIV negative: 82 236) reported to the Scientific Registry of Transplant Recipients (SRTR) (2001–2013) were studied by interaction term analysis. Compared to HIV‐negative recipients, hepatitis C virus (HCV) coinfection amplified risk 2.72‐fold among HIV‐positive KT recipients (adjusted hazard ratio [aHR]: 2.72, 95% confidence interval [CI]: 1.75–4.22, p < 0.001). Forty‐three percent of the excess risk was attributable to the interaction between HIV and HCV (attributable proportion of risk due to the interaction [AP]: 0.43, 95% CI: 0.23–0.63, p = 0.02). Among HIV‐positive recipients with more than three HLA mismatches (MMs), risk was amplified 1.80‐fold compared to HIV‐negative recipients (aHR: 1.80, 95% CI: 1.31–2.47, p < 0.001); 42% of the excess risk was attributable to the interaction between HIV and more than three HLA MMs (AP: 0.42, 95% CI: 0.24–0.60, p = 0.01). High‐HIV‐risk (HIV‐positive/HCV‐positive with more than three HLA MMs) recipients had a 3.86‐fold increased risk compared to low‐HIV‐risk (HIV‐positive/HCV‐negative with three or fewer HLA MMs) recipients (aHR: 3.86, 95% CI: 2.37–6.30, p < 0.001). Avoidance of more than three HLA MMs in HIV‐positive KT recipients, particularly among coinfected patients, may mitigate the increased risk of graft loss associated with HIV infection.
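The "attributable proportion of risk due to the interaction" (AP) reported above is conventionally computed from the relative risks of the doubly exposed, singly exposed, and unexposed groups; the input values below are illustrative, not the study's component estimates:

```python
def attributable_proportion(rr_both, rr_a_only, rr_b_only):
    """Attributable proportion of risk due to additive interaction:
    AP = (RR11 - RR10 - RR01 + 1) / RR11, where RR11 is the relative
    risk with both exposures and RR10/RR01 the single-exposure risks,
    all relative to the doubly unexposed group."""
    if rr_both <= 0:
        raise ValueError("relative risks must be positive")
    return (rr_both - rr_a_only - rr_b_only + 1) / rr_both
```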

11.
Frailty is associated with increased mortality among lung transplant candidates. We sought to determine the association between frailty, as measured by the Short Physical Performance Battery (SPPB), and mortality after lung transplantation. In a multicenter prospective cohort study of adults who underwent lung transplantation, preoperative frailty was assessed with the SPPB (n = 318) and, in a secondary analysis, the Fried Frailty Phenotype (FFP; n = 299). We tested the association between preoperative frailty and mortality following lung transplantation with propensity score–adjusted Cox models. We calculated postestimation marginalized standardized risks for 1‐year mortality by frailty status using multivariate logistic regression. SPPB frailty was associated with an increased risk of both 1‐ and 4‐year mortality (adjusted hazard ratio [aHR]: 7.5; 95% confidence interval [CI]: 1.6‐36.0 and aHR 3.8; 95% CI: 1.8‐8.0, respectively). Each 1‐point worsening in SPPB was associated with a 20% increased risk of death (aHR: 1.20; 95% CI: 1.08‐1.33). Frail subjects had an absolute increased risk of death within the first year after transplantation of 12.2% (95% CI: 3.1%‐21%). In secondary analyses, FFP frailty was associated with increased risk of death within the first postoperative year (aHR: 3.8; 95% CI: 1.1‐13.2) but not over longer follow‐up. Preoperative frailty is associated with an increased risk of death after lung transplantation.

12.
Low case volume has been associated with poor outcomes in a wide spectrum of procedures. Our objective was to study the association of low case volume and worse outcomes in pediatric heart transplant centers, taking the novel approach of including waitlist outcomes in the analysis. We studied a cohort of 6482 candidates listed in the Organ Procurement and Transplantation Network for pediatric heart transplantation between 2002 and 2014; 4665 (72%) of the candidates underwent transplantation. Candidates were divided into groups according to the average annual transplantation volume of the listing center during the study period: more than 10, six to 10, three to five, or fewer than three transplantations. We used multivariate Cox regression analysis to identify independent risk factors for waitlist and posttransplantation mortality. Of the 6482 candidates, 24% were listed in low‐volume centers (fewer than three annual transplantations). Of these listed candidates in low‐volume centers, only 36% received a transplant versus 89% in high‐volume centers (more than 10 annual transplantations) (p < 0.001). Listing at a low‐volume center was the most significant risk factor for waitlist death (hazard ratio [HR] 4.5, 95% confidence interval [CI] 3.5–5.7 in multivariate Cox regression and HR 5.6, CI 4.4–7.3 in multivariate competing risk regression) and was significant for posttransplantation death (HR 1.27, 95% CI 1.0–1.6 in multivariate Cox regression). During the study period, one‐fourth of pediatric transplant candidates were listed in low‐volume transplant centers. These children had a limited transplantation rate and a much greater risk of dying while on the waitlist.
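The study's four volume strata (more than 10, six to 10, three to five, fewer than three average annual transplantations) amount to a simple binning rule:

```python
def annual_volume_group(avg_annual_transplants):
    """Bin a center's average annual pediatric heart transplant volume
    into the study's four groups."""
    if avg_annual_transplants < 0:
        raise ValueError("volume cannot be negative")
    if avg_annual_transplants > 10:
        return ">10"
    if avg_annual_transplants >= 6:
        return "6-10"
    if avg_annual_transplants >= 3:
        return "3-5"
    return "<3"
```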

13.
Frailty is associated with inferior survival and increased resource requirements among kidney transplant candidates, but assessments are time‐intensive and costly and require direct patient interaction. Waitlist hospitalization may be a proxy for patient fitness and could help identify those at risk of poor outcomes. We examined United States Renal Data System data from 51 111 adult end‐stage renal disease patients with continuous Medicare coverage who were waitlisted for transplant from January 2000 to December 2011. Heavily admitted patients had higher subsequent resource requirements, increased waitlist mortality and decreased likelihood of transplant (death after listing: 1–7 days: hazard ratio [HR] 1.24, 95% confidence interval [CI] 1.20–1.28; 8–14 days: HR 1.49, 95% CI 1.42–1.56; ≥15 days: HR 2.07, 95% CI 1.99–2.15; vs. 0 days). Graft and recipient survival was inferior, with higher admissions, although survival benefit was preserved. A model including waitlist admissions alone performed better (C statistic 0.76, 95% CI 0.74–0.80) in predicting postlisting mortality than estimated posttransplant survival (C statistic 0.69, 95% CI 0.67–0.73). Although those with a heavy burden of admissions may still benefit from kidney transplant, less utility is derived from allografts placed in this population. Current kidney allocation policy, which is based in part on longevity matching, could be significantly improved by consideration of hospitalization records of transplant candidates.
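The C statistics compared above measure how well a risk score ranks patients who died versus those who did not; for a binary outcome it can be computed from scratch as the fraction of concordant (death, survivor) pairs, counting ties as half:

```python
def c_statistic(risk_scores, died):
    """Concordance statistic for a binary outcome: the fraction of
    (death, survivor) pairs in which the death had the higher risk
    score; tied scores count as 0.5."""
    pos = [s for s, d in zip(risk_scores, died) if d]
    neg = [s for s, d in zip(risk_scores, died) if not d]
    if not pos or not neg:
        raise ValueError("need at least one death and one survivor")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```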

14.
Previous studies have reported contradictory results regarding the effect of pre‐transplant dialysis modality on the outcomes after kidney transplantation (KT). To minimize the confounding effect of donor‐related variables, we performed a donor‐matched retrospective comparison of 160 patients that received only one modality of pre‐transplant dialysis (peritoneal dialysis [PD] and hemodialysis [HD] in 80 patients each) and that subsequently underwent KT at our center between January 1990 and December 2007. Cox regression models were used to evaluate the association between pre‐transplant dialysis modality and primary study outcomes (death‐censored graft survival and patient survival). To control for imbalances in recipient‐related baseline characteristics, we performed additional adjustments for the propensity score (PS) for receiving pre‐transplant PD (versus HD). There were no significant differences according to pre‐transplant dialysis modality in death‐censored graft survival (PS‐adjusted hazard ratio [aHR]: 0.65; 95% confidence interval [95% CI]: 0.25–1.68) or patient survival (aHR: 0.58; 95% CI: 0.13–2.68). There were no differences in 10‐year graft function or in the incidence of post‐transplant complications either, except for a higher risk of lymphocele in patients undergoing PD (odds ratio: 4.31; 95% CI: 1.15–16.21). In conclusion, pre‐transplant dialysis modality in KT recipients does not impact short‐ or long‐term graft outcomes or patient survival.

15.
In November 2003, OPTN policy was amended to allow kidney transplant candidates to accrue waiting time while registered as status 7, or inactive. We evaluated trends in inactive listings and the association of inactive status with transplantation and survival, studying 262 824 adult first‐time KT candidates listed between 2000 and 2011. The proportion of waitlist candidates initially listed as inactive increased from 2.3% prepolicy change to 31.4% in 2011. Candidates initially listed as inactive were older, more often female, African American, and with higher body mass index. Postpolicy change, conversion from initially inactive to active status generally occurred early if at all: at 1 year after listing, 52.7% of initially inactive candidates had been activated; at 3 years, only 66.3% had been activated. Inactive status was associated with a substantially higher waitlist mortality (aHR 2.21, 95% CI: 2.15–2.28, p < 0.001) and lower rates of eventual transplantation (aRR 0.68, 95% CI: 0.67–0.70, p < 0.001). In summary, waitlist practice has changed significantly since November 2003, with a sharp increase in the number of inactive candidates. Using the full waitlist to estimate organ shortage or as a comparison group in transplant outcome studies is less appropriate in the current era.

16.
There is notable heterogeneity in the implementation of cytomegalovirus (CMV) prevention practices among CMV‐seropositive (R+) kidney transplant (KT) recipients. In this prospective observational study, we included 387 CMV R+ KT recipients from 25 Spanish centers. Prevention strategies (antiviral prophylaxis or preemptive therapy) were applied according to institutional protocols at each site. The impact on the 12‐month incidence of CMV disease was assessed by Cox regression. Asymptomatic CMV infection, acute rejection, graft function, non‐CMV infection, graft loss, and all‐cause mortality were also analyzed (secondary outcomes). Models were adjusted for a propensity score (PS) analysis for receiving antiviral prophylaxis. Overall, 190 patients (49.1%) received preemptive therapy, 185 (47.8%) antiviral prophylaxis, and 12 (3.1%) no specific intervention. Twelve‐month cumulative incidences of CMV disease and asymptomatic infection were 3.6% and 39.3%, respectively. Patients on prophylaxis had lower incidence of CMV disease [PS‐adjusted HR (aHR): 0.10; 95% confidence interval (CI): 0.01–0.79] and asymptomatic infection (aHR: 0.46; 95% CI: 0.29–0.72) than those managed preemptively, with no significant differences according to the duration of prophylaxis. All cases of CMV disease in the prophylaxis group occurred after prophylaxis discontinuation. There were no differences in any of the secondary outcomes. In conclusion, antiviral prophylaxis was associated with a lower occurrence of CMV disease in CMV R+ KT recipients, although such benefit should be balanced with the risk of late‐onset disease.

17.
We examined the effects of COVID-19 on solid organ waiting list mortality in the United States and compared effects across patient demographics (e.g., race, age, and sex) and donation service areas. Three separate piecewise exponential survival models estimated for each solid organ the overall, demographic-specific, and donation service area-specific differences in the hazard of waitlist mortality before and after the national emergency declaration on March 13, 2020. Kidney waiting list mortality was higher after than before the national emergency (adjusted hazard ratio [aHR], 1.37; 95% CI, 1.23–1.52). The hazard of waitlist mortality was not significantly different before and after COVID-19 for liver (aHR, 0.94), pancreas (aHR, 1.01), lung (aHR, 1.00), and heart (aHR, 0.94). Kidney candidates had notable variability in differences across donation service areas (aHRs: New York City, 2.52; New Jersey, 1.84; and Michigan, 1.56). Among kidney candidates, the only demographic group with increased waiting list mortality was Black candidates versus White candidates (aHR, 1.41; 95% CI, 1.07–1.86). The first 10 weeks after the declaration of a national emergency had a heterogeneous effect on waitlist mortality rate, varying by geography and ethnicity. This heterogeneity will complicate comparisons of transplant program performance during COVID-19.
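A piecewise exponential model assumes a constant hazard within each period, so its unadjusted period effect reduces to a ratio of event rates (events per unit person-time) after versus before the cutoff date; a minimal sketch with illustrative counts:

```python
def crude_rate_ratio(events_after, time_after, events_before, time_before):
    """Unadjusted analogue of the piecewise exponential period effect:
    ratio of the waitlist death rate after vs. before a cutoff date,
    computed from event counts and accumulated person-time."""
    if min(time_after, time_before) <= 0 or events_before == 0:
        raise ValueError("need positive person-time and baseline events")
    return (events_after / time_after) / (events_before / time_before)
```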

18.
Despite the Final Rule mandate for equitable organ allocation in the United States, geographic disparities exist in donor lung allocation, with the majority of donor lungs being allocated locally to lower‐priority candidates. We conducted a retrospective cohort study of 19 622 lung transplant candidates waitlisted between 2006 and 2015. We used multivariable adjusted competing risk survival models to examine the relationship between local lung availability and waitlist outcomes. The primary outcome was a composite of death and removal from the waitlist for clinical deterioration. Waitlist candidates in the lowest quartile of local lung availability had an 84% increased risk of death or removal compared with candidates in the highest (subdistribution hazard ratio [SHR]: 1.84, 95% confidence interval [CI]: 1.51‐2.24, P < .001). The transplantation rate was 57% lower in the lowest quartile compared with the highest (SHR: 0.43, 95% CI: 0.39‐0.47). The adjusted death or removal rate decreased by 11% with a 50% increase in local lung availability (SHR: 0.89, 95% CI: 0.85‐0.93, P < .001) and the adjusted transplantation rate increased by 19% (SHR: 1.19, 95% CI: 1.17‐1.22, P < .001). There are geographically disparate waitlist outcomes in the current lung allocation system. Candidates listed in areas of low local lung availability have worse waitlist outcomes.

19.
Height explains a substantial proportion of gender‐based disparity in waitlist mortality among liver transplant candidates. We sought to identify a clinically relevant height cutoff below which waitlist mortality increases significantly. We examined all nonstatus one adult liver transplant candidates from 2010 to 2014. We used a recursive application of the minimum P value approach with univariate competing risk regressions (deceased donor liver transplantation as the competing risk) to detect differences in waitlist mortality with regards to height. Of 69 883 candidates, 36% (24 819) were women and 64% (45 064) were men. Median height for all was 173 cm: 163 cm in women, 178 cm in men. The optimal search method of recursively evaluating smaller height intervals yielded 166 cm as the optimal height cutoff. Using height <166 cm as the cutoff, 72% of women and 9% of men met criteria. Compared to candidates ≥166 cm, “short stature” candidates had higher rates of death/delisting (28% vs 24%) and lower rates of transplantation (38% vs 44%) (P < .01 for both). After adjustment for clinical and demographic characteristics, height <166 cm remained associated with an 8% increased risk of waitlist mortality (95% CI 1.03‐1.14, P < .01). Short candidate height may be a motivation to explore split livers or living donors as accelerated liver transplantation options.

20.
The replication kinetics of nonpathogenic anelloviruses belonging to the Alphatorquevirus genus (such as torque teno virus) might reflect the overall state of posttransplant immunosuppression. We analyzed 221 kidney transplant (KT) recipients in whom plasma alphatorquevirus DNA load was quantified by real‐time polymerase chain reaction at baseline and regularly through the first 12 posttransplant months. Study outcomes included posttransplant infection and a composite of opportunistic infection and/or de novo malignancy (immunosuppression‐related adverse event [iRAE]). Alphatorquevirus DNA loads at month 1 were higher among patients who subsequently developed posttransplant infection (P = .023) or iRAE (P = .009). Likewise, those with iRAE beyond months 3 and 6 also exhibited higher peak viral loads over the preceding periods. Areas under the curve for log10 alphatorquevirus DNAemia estimated by months 1 or 6 were significantly higher in patients experiencing study outcomes. Alphatorquevirus DNA loads above 3.15 and 4.56 log10 copies/mL at month 1 predicted the occurrence of posttransplant infection (adjusted hazard ratio [aHR]: 2.88; 95% confidence interval [CI]: 1.13‐7.36; P = .027) and iRAE (aHR: 5.17; 95% CI: 2.01‐13.33; P = .001). In conclusion, posttransplant monitoring of plasma alphatorquevirus DNA kinetics may be useful to identify KT recipients at increased risk of immunosuppression‐related complications.
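The month-1 thresholds reported above (3.15 and 4.56 log10 copies/mL) translate directly into a risk-flagging rule for a measured DNA load:

```python
import math

def month1_dnaemia_flags(copies_per_ml):
    """Compare a month-1 plasma alphatorquevirus DNA load against the
    abstract's predictive thresholds, expressed in log10 copies/mL."""
    if copies_per_ml <= 0:
        raise ValueError("viral load must be positive")
    log_load = math.log10(copies_per_ml)
    return {
        "infection_risk": log_load > 3.15,  # posttransplant infection
        "irae_risk": log_load > 4.56,       # opportunistic infection/malignancy
    }
```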
