Importance
Immunotherapy has emerged as an effective treatment option for the management of advanced cancers. The effects of immune checkpoint inhibitors in the older patient population have not been adequately assessed.
Objective
To understand the impact of aging on the efficacy of CTLA-4 and PD-L1 inhibitors and on immune-related adverse events (irAEs) in the context of real-world management of advanced solid cancers.
Design, Setting, and Participants
This retrospective study included all non-study patients with histologically confirmed metastatic or inoperable solid cancers receiving immunotherapy at Kingston Health Sciences Centre. We defined "older patient" as age ≥75 years. All statistical analyses were conducted using IBM SPSS Statistics for Windows, version 24.0.
Main Outcomes and Measures
Study outcomes included immunotherapy treatment response, survival, and the number, type, and severity of irAEs.
Results
The study (N = 78) included 29 (37%) patients aged <65 years, 26 (33%) aged 65–74 years, and 23 (30%) aged ≥75 years. Melanoma, non-small cell lung cancer, and renal cell carcinoma accounted for 70%, 22%, and 8% of the study population, respectively. Ipilimumab (32%), nivolumab (33%), and pembrolizumab (35%) were distributed similarly across the study. Response rates were 28%, 27%, and 39% in the age <65, 65–74, and ≥75 groups, respectively (P = 0.585). Kaplan-Meier analysis showed a median survival of 28 months (95% CI, 12.28–43.9) in the age <65 group and 17 months (95% CI, 0–36.9) in the age 65–74 group; the estimated survival probability did not fall to 50% in the age ≥75 group, so median survival was not reached (P = 0.319). No statistically significant differences were found between age groups in the occurrence of irAEs, multiple irAEs, irAEs of grade 3 or higher severity, types of irAEs, or irAE resolution status.
Conclusions and Relevance
Our results suggest that patients aged ≥75 years gain as much benefit from immunotherapy as younger patients, without excess toxicity.
Our findings suggest that single-agent immunotherapy is generally well tolerated across age groups, with no significant difference in the type, frequency, or severity of irAEs. Future studies evaluating aging and combination immunotherapy are warranted.
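The survival comparison above rests on Kaplan-Meier estimation, where median survival is the first time point at which the estimated survival probability drops to 50% (and is "not reached" when the curve never falls that far, as in the age ≥75 group). A minimal sketch of the estimator, using hypothetical times and not the study's data:

```python
# Minimal Kaplan-Meier sketch (pure Python, illustrative data only).
# `times` are follow-up times (e.g., months); `events` are 1 for an observed
# event (death) and 0 for censoring.

def kaplan_meier(times, events):
    """Return [(time, survival_probability)] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # Group all subjects sharing this time: count deaths, drop everyone
        # (dead or censored) from the risk set afterwards.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve

def median_survival(curve):
    """First event time where estimated survival is <= 0.5, else None
    (median not reached, as reported for the oldest age group)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None
```

With all-event data `[2, 4, 6, 8]` the curve steps through 0.75, 0.5, 0.25, 0.0, giving a median of 4; a curve that stays above 0.5 returns `None`.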
Purpose: Non-ambulatory persons with cerebral palsy are prone to low bone mineral density. In ambulatory persons with cerebral palsy, bone mineral density deficits are expected to be small or absent, but a consensus conclusion is lacking. In this systematic review, bone mineral density in ambulatory persons with cerebral palsy (Gross Motor Function Classification System levels I–III) was studied.
Materials and methods: Medline, Embase, and Web of Science were searched. According to international guidelines, low bone mineral density was defined as a Z-score ≤ −2.0. In addition, we focused on Z-score ≤ −1.0 because this may indicate a tendency towards low bone mineral density.
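The two cutoffs above can be written as a simple classifier; the category labels are my own shorthand for the review's definitions, not terms from the paper:

```python
# Sketch of the review's Z-score cutoffs: <= -2.0 is low bone mineral density
# per international guidelines; <= -1.0 flags a tendency towards low BMD.
# Category strings are illustrative labels, not the paper's terminology.

def classify_bmd_z(z_score):
    if z_score <= -2.0:
        return "low BMD"
    if z_score <= -1.0:
        return "tendency toward low BMD"
    return "within expected range"
```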
Results: We included 16 studies, comprising 465 patients aged 1–65 years. Moderate and conflicting evidence for low bone mineral density (Z-score ≤ −2.0) was found for several body parts (total proximal femur, total body, distal femur, lumbar spine) in children with Gross Motor Function Classification System levels II and III. We found no evidence for low bone mineral density in children with Gross Motor Function Classification System level I or in adults, although there was a tendency towards low bone mineral density (Z-score ≤ −1.0) for several body parts.
Conclusions: Although more high-quality research is needed, results indicate that deficits in bone mineral density are not restricted to non-ambulatory people with cerebral palsy.
Implications for Rehabilitation
Although more high-quality research is needed, including adults and fracture risk assessment, the current study indicates that deficits in bone mineral density are not restricted to non-ambulatory people with CP.
Health care professionals should be aware that optimal nutrition, supplements when indicated, and an active lifestyle, preferably with weight-bearing activities, are important in ambulatory people with CP, also from a bone-quality point of view.
If indicated, medication and fall prevention training should be prescribed.
Partial nephrectomy (PN) is generally favored for cT1 tumors over radical nephrectomy (RN) when technically feasible. However, it can be unclear whether the additional risks of PN are worth the magnitude of renal function benefit.
Objective
To develop preoperative tools to predict long-term estimated glomerular filtration rate (eGFR) beyond 30 d following PN and RN, separately.
Design, setting, and participants
In this retrospective cohort study, patients who underwent RN or PN for a single nonmetastatic renal tumor between 1997 and 2014 at our institution were identified. Exclusion criteria were venous tumor thrombus and preoperative eGFR <15 ml/min/1.73 m2.
Intervention
RN and PN.
Outcome measurements and statistical analysis
Hierarchical generalized linear mixed-effect models with backward selection of candidate preoperative features were used to predict long-term eGFR following RN and PN, separately. Predictive ability was summarized using the marginal R², which ranges from 0 to 1, with higher values indicating increased predictive ability.
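For mixed-effect models, the marginal R² is commonly computed following Nakagawa and Schielzeth as the variance of the fixed-effect predictions over the total variance (fixed + random + residual); the abstract does not state the exact formula used, so this is a hedged sketch of that standard definition with made-up inputs:

```python
from statistics import pvariance

# Hedged sketch of the Nakagawa-Schielzeth marginal R-squared for a mixed model:
# variance explained by fixed effects alone, as a fraction of total variance.
# Inputs below are illustrative, not fitted values from the study.

def marginal_r2(fixed_effect_predictions, random_effect_variances, residual_variance):
    """fixed_effect_predictions: fitted values using fixed effects only;
    random_effect_variances: estimated variance components (e.g., per-patient
    intercepts for repeated eGFR measures); residual_variance: error variance."""
    var_fixed = pvariance(fixed_effect_predictions)
    total = var_fixed + sum(random_effect_variances) + residual_variance
    return var_fixed / total
```

With fixed-effect variance 1.0, a single random-intercept variance 2.0, and residual variance 1.0, the marginal R² is 1 / 4 = 0.25.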
Results and limitations
The analysis included 1152 patients (13 206 eGFR observations) who underwent RN and 1920 patients (18 652 eGFR observations) who underwent PN, with mean preoperative eGFRs of 66 ml/min/1.73 m2 (standard deviation [SD] = 18) and 72 ml/min/1.73 m2 (SD = 20), respectively. The model to predict eGFR after RN included age, diabetes, preoperative eGFR, preoperative proteinuria, tumor size, time from surgery, and an interaction between time from surgery and age (marginal R²). The model to predict eGFR after PN included age, presence of a solitary kidney, diabetes, hypertension, preoperative eGFR, preoperative proteinuria, surgical approach, time from surgery, and interaction terms between time from surgery and age, diabetes, preoperative eGFR, and preoperative proteinuria (marginal R²). Limitations include the lack of data on renal tumor complexity and the single-center design; generalizability needs to be confirmed in external cohorts.
Conclusions
We developed preoperative tools to predict renal function outcomes following RN and PN. Pending validation, these tools should be helpful for patient counseling and clinical decision-making.
Patient summary
We developed models to predict kidney function outcomes after partial and radical nephrectomy based on preoperative features. This should help clinicians during patient counseling and decision-making in the management of kidney tumors.
ABSTRACT
Adolescents and young adults smoke waterpipe tobacco (WT) and cigarillos, at least in part, based on erroneous beliefs that these products are safer than cigarettes. To address this challenge, we used a systematic, three-phase process to develop a health communication campaign to discourage WT and cigarillo smoking among at-risk (tobacco users and susceptible non-users) 16- to 25-year-olds. In Phase 1, we used a national phone survey (N = 896) to determine salient message beliefs. Participants reported that constituents (i.e., harmful chemicals) emitted by the products were worrisome. In Phase 2, we developed and evaluated four message executions, varying in imagery, tone, and the unappealing everyday products shown as containing the same constituents, using focus groups (N = 38). Participants rated one execution highly, resulting in our development of a campaign in which each message: (1) identified a tobacco product and a constituent in its smoke; (2) included an image of an unappealing product containing the constituent (e.g., pesticides, gasoline) to grab attention; and (3) used a humorous, sarcastic tone. In Phase 3, we tested the campaign messages (17 intervention and six control) with a nationally representative online survey (N = 1,636). Participants rated intervention and control messages highly, with few differences between them. Exposure to messages resulted in significant increases in all risk beliefs from pre to post (p < 0.05). For WT, intervention messages increased beliefs about addiction more than control messages (p < 0.05). This systematic, iterative approach resulted in messages that show promise for discouraging WT and cigarillo use.
To quantify eating disorder (ED) stability and diagnostic transition among a community-based sample of adolescent and young adult females in the United States.
Methods
Using 11 prospective assessments from 9,031 U.S. females ages 9–15 years at baseline of the Growing Up Today Study, we classified cases of the following EDs involving bingeing and purging: bulimia nervosa (BN), binge ED, purging disorder (PD), and subthreshold variants defined by less frequent (monthly vs. weekly) bingeing and purging behaviors. We measured number of years symptomatic and probability of maintaining symptoms, crossing to another diagnosis, or resolving symptoms across consecutive surveys.
Results
Study lifetime disorder prevalence was 2.1% for BN and roughly 6% each for binge ED and PD. Most cases reported symptoms during only one survey year. Twenty-six percent of cases crossed between diagnoses during follow-up. Among participants meeting full threshold diagnostic criteria, transition from BN was most prevalent, crossing most frequently from BN to PD (12.9% of BN cases). Within each disorder phenotype, 20%–40% of cases moved between subthreshold and full threshold criteria across consecutive surveys.
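Figures such as "12.9% of BN cases crossed to PD" come from tabulating transitions between diagnostic categories across consecutive surveys. A sketch of that bookkeeping, using invented diagnosis sequences rather than Growing Up Today Study data:

```python
from collections import Counter

# Hypothetical sketch: counting diagnostic crossover (e.g., BN -> PD) from
# per-participant diagnosis sequences over consecutive survey waves.
# Diagnosis labels and sequences below are illustrative only.

def transition_counts(sequences):
    """sequences: one list of diagnosis labels per participant, ordered by survey.
    Returns a Counter over (from_diagnosis, to_diagnosis) consecutive pairs."""
    counts = Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
    return counts

def crossover_fraction(counts, source):
    """Fraction of transitions out of `source` ending in a different label."""
    outgoing = {pair: n for pair, n in counts.items() if pair[0] == source}
    total = sum(outgoing.values())
    crossed = sum(n for (a, b), n in outgoing.items() if b != a)
    return crossed / total if total else 0.0
```

For sequences `[["BN", "BN", "PD"], ["BN", "none"]]`, two of the three transitions out of "BN" cross to a different category.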
Conclusions
Diagnostic crossover is not rare among adolescent and young adult females with an ED. Transition patterns from BN to PD add support for considering these classifications in the same diagnostic category of disorders that involve purging. The prevalence of crossover between monthly and weekly symptom frequency suggests that a continuum or staging approach may increase the utility of ED classification for prognostic and therapeutic intervention.
BACKGROUND AND PURPOSE: There is concern about an increase in radiation-induced malignancies with the application of modern radiation treatment techniques such as intensity-modulated radiotherapy (IMRT) and proton radiotherapy. We therefore analyzed X-ray scatter, neutron radiation, and the impact of the primary dose distribution on secondary cancer incidence. MATERIAL AND METHODS: The organ equivalent dose (OED) concept, with a linear-exponential and a plateau dose-response curve, was applied to dose distributions of 30 patients who received radiation therapy for prostate cancer. Three-dimensional conformal radiotherapy was used in eleven patients, another eleven patients received IMRT with 6-MV photons, and eight patients were treated with spot-scanned protons. The treatment plans were recalculated with 15-MV and 18-MV photons. Secondary cancer risk was estimated based on the OED for the different treatment techniques. RESULTS: A modest 15% increase in radiation-induced cancer risk results from IMRT using low energies (6 MV), compared with conventional four-field planning with 15-MV photons (plateau dose-response: 1%). The probability of developing a secondary cancer increases with IMRT at higher energies, by 20% and 60% for 15 MV and 18 MV, respectively (plateau dose-response: 2% and 30%). The use of spot-scanned protons can reduce secondary cancer incidence by as much as 50% (independent of the dose-response model). CONCLUSION: By including the primary dose distribution in the analysis of radiation-induced cancer incidence, the resulting increase in risk for secondary cancer using modern treatment techniques such as IMRT is not as dramatic as expected from earlier studies. With 6-MV photons, only a moderate risk increase is expected. Spot-scanned protons are the treatment of choice with regard to secondary cancer incidence.
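The OED concept averages a risk-equivalent dose over the organ's dose-volume distribution; with a linear-exponential dose-response the risk-equivalent dose is D·exp(−αD), and with a plateau response it is (1 − exp(−δD))/δ. A sketch under those standard forms, with illustrative parameter values rather than the study's fitted ones:

```python
import math

# Hedged sketch of the organ equivalent dose (OED) concept: a volume-weighted
# average of a risk-equivalent dose RED(D) over dose-volume bins (D_i, V_i).
# Parameter values and dose-volume pairs used here are illustrative only.

def oed_linear_exponential(dose_volume_pairs, alpha):
    """OED with linear-exponential dose-response: RED(D) = D * exp(-alpha * D)."""
    total_volume = sum(v for _, v in dose_volume_pairs)
    weighted = sum(v * d * math.exp(-alpha * d) for d, v in dose_volume_pairs)
    return weighted / total_volume

def oed_plateau(dose_volume_pairs, delta):
    """OED with plateau dose-response: RED(D) = (1 - exp(-delta * D)) / delta."""
    total_volume = sum(v for _, v in dose_volume_pairs)
    weighted = sum(v * (1.0 - math.exp(-delta * d)) / delta
                   for d, v in dose_volume_pairs)
    return weighted / total_volume
```

With α = 0 the linear-exponential OED reduces to the mean organ dose, which is a quick sanity check on an implementation.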
In cystic fibrosis (CF), perturbations of total daily energy expenditure (TDEE) may be a major determinant of altered nutrition and growth. Measurement of TDEE is problematic, though the flex heart rate method (FHRM) provides a close estimate of TDEE compared with the cost-prohibitive gold standard, the doubly labeled water method, and permits estimates of the energy cost of daily activities (ECA) above resting energy expenditure (REE). We hypothesized that alterations in ECA affect TDEE in CF. PURPOSE: To measure components of TDEE in adolescents with CF and normal lung function compared with controls, and to determine whether ECA can be improved by diet and exercise. METHODS: Clinically stable CF subjects (aged 9-13, n=12) and age- and gender-matched controls (n=13) had repeated measurements of TDEE by FHRM, REE, and maximal cardiopulmonary exercise testing (CPET) during a 6-week exercise and diet program. RESULTS: While mean REE was similar in both groups, ECA was significantly lower in CF adolescents than in controls (p=0.02). During CPET, maximal exercise in CF was characterized by hyperventilation, which was unrelated to ventilation-perfusion mismatching. There were no changes in REE after dietary intervention. CONCLUSION: ECA in CF adolescents with normal lung function is lower than in healthy controls. These findings support the hypothesis that clinically stable patients with CF have inefficient energy metabolism or alternatively conserve energy during activities of daily living.
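The flex heart rate method assigns resting energy expenditure to minutes at or below an individually determined "flex" heart rate and uses each subject's calibrated heart-rate-versus-energy-expenditure regression above it. A minimal sketch of that accounting; the threshold, slope, and intercept values are placeholders, not calibration values from this study:

```python
# Hedged sketch of the flex heart rate method for estimating total daily
# energy expenditure (TDEE). Minutes at or below the flex heart rate are
# costed at resting energy expenditure; minutes above it use an individually
# calibrated linear HR-vs-EE relationship. All numbers are illustrative.

def tdee_flex_hr(minute_heart_rates, flex_hr, ree_kcal_per_min, slope, intercept):
    """TDEE (kcal) from per-minute heart rates (one value per minute)."""
    total = 0.0
    for hr in minute_heart_rates:
        if hr <= flex_hr:
            total += ree_kcal_per_min          # resting cost below the flex point
        else:
            total += slope * hr + intercept    # individual calibration line
    return total
```

For example, with a flex HR of 80 bpm, REE of 1.0 kcal/min, and a calibration line EE = 0.05·HR − 2.0, minutes at 60 and 100 bpm cost 1.0 and 3.0 kcal, respectively.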