Similar Literature
20 similar documents found.
1.
2.
Cancer patients with COVID-19 have reduced survival. While most cancer patients, like the general population, have an almost 100% rate of seroconversion after COVID-19 infection or vaccination, patients with haematological malignancies have lower seroconversion rates and are far less likely to gain adequate protection. This raises the concern that patients with haematological malignancies, especially those receiving immunosuppressive therapies, may still develop fatal COVID-19 after vaccination. There is an urgent need to develop guidelines to help direct vaccination schedules and protective measures in oncology patients, differentiating those with haematological malignancies and those in an immunocompromised state.
Subject terms: Haematological cancer, Cancer therapy

As severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continues to spread globally at an alarming rate, it is having an unprecedented impact on cancer patients [1]. Much of the focus in the available literature has highlighted the burden that the coronavirus disease 2019 (COVID-19) pandemic has placed on cancer care, including delayed diagnoses and treatment and halted clinical trials. As more data become available, we are seeing that oncology patients have worse outcomes from COVID-19 infection, including a greater incidence of acute respiratory distress syndrome and higher morbidity and mortality rates [2–4]. Studies have revealed reduced survival from COVID-19 in highly susceptible cancer patients, including those of advanced age, those with multiple comorbidities and those with haematological malignancies [3, 5]. In a study of 3377 patients with haematological malignancies, the risk of death with COVID-19 was 34%, markedly higher than the 4.8% reported for solid tumours [6].

While most cancer patients, like the general population, have an almost 100% rate of seroconversion after COVID-19 infection or after receiving mRNA or adenovirus-based COVID-19 vaccines, patients with haematological malignancies, most notably those receiving anti-CD20 immunotherapy, have lower rates of seroconversion and are far less likely to gain adequate protection [2, 7]. In a study of 200 patients with cancer, seroconversion rates of 94%, 85% and 70% were reported in patients with solid cancers, haematological malignancies, and haematological malignancies receiving anti-CD20 therapies, respectively [2, 3]. Similar results were seen in a retrospective study of 160 adults with haematological malignancies vaccinated in the U.S. in early 2021 [7]. Interestingly, a recent study reported that rituximab prevents the anti-SARS-CoV-2 humoral response for at least 6 months after recovery from COVID-19 infection [8]. This raises the concern that patients with haematological malignancies, especially those receiving immunosuppressive therapies, may still develop fatal COVID-19 after vaccination. However, while this may be clinically significant, it is unclear how seroconversion correlates with clinical outcomes. Two Nature Medicine publications have recently reported that COVID-19 vaccine efficacy can be linked to neutralising antibody titres [9].

Data are now available demonstrating an improved serological response to a third dose of the mRNA COVID-19 vaccine in select immunocompromised groups, including solid-organ transplant recipients, renal-dialysis patients, and patients with haematological malignancies, most notably those who had not received anti-CD20 therapy within a year [10]. Many countries now recommend an additional booster in select immunocompromised cohorts. However, it is important to note that those who did not have an antibody response after two vaccine doses remained seronegative after a third dose of the same vaccine [10].

While the timing of anti-CD20 therapy impacts the humoral response, it does not modify the T cell response [10]. Given the importance of a T cell response in COVID-19, the emergence of a specific T cell response is another expected benefit of a booster vaccine, especially for patients receiving anti-CD20 therapy [10]. Therefore, the data support using a booster vaccine in immunocompromised patients, accepting that some individuals will still have vaccine failure.
Studies are underway to identify whether using a different vaccine type as a booster (“heterologous boosting”) may help produce antibodies [11].

Understanding the impact of immunosuppression on the effectiveness of these vaccines highlights the need for other prophylactic strategies in this immunosuppressed population, either to mitigate COVID-19 infection or to boost the immune response with tailored vaccine schedules [2]. Unfortunately, most COVID-19 clinical trials to date have excluded patients diagnosed with a malignancy; there is therefore minimal information on the safety and efficacy of these vaccines in this population [12].

Position statements and guidelines for COVID-19 vaccination in individuals with cancer receiving anti-cancer therapies are frequently released. While many suggest that COVID-19 vaccines be administered two weeks or more before chemotherapy, this recommendation has not proved practical, limited by vaccine availability and the difficulty of scheduling chemotherapy around the need for two vaccinations [13]. The push for earlier vaccine administration in these groups has been the priority. Advice from the British Society for Haematology provides guidance for clinicians caring for patients with blood cancers, including the most up-to-date information from the National Cancer Research Institute.

Decisions about the appropriateness of the COVID-19 vaccine for individuals affected by cancer are currently made on an individual basis by their healthcare team. Patients must be counselled about the unknown vaccine safety profile and effectiveness in immunocompromised populations, the potential for reduced immune responses and the need to continue to follow all current guidance to protect themselves against COVID-19. Therefore, there is a need to develop policies and guidelines to help direct vaccination schedules in oncology patients, differentiating those with haematological malignancies and those in an immunocompromised state. Interestingly, vaccine-safety information is also absent in the context of therapies that stimulate the immune system, such as the widely available immune checkpoint inhibitors. While the choice of cancer treatment can profoundly impact rates of seroconversion, there is currently no evidence that patients receiving these agents are at higher risk of adverse events following COVID-19 vaccination [14].

While patients with haematological malignancies represent a highly susceptible group with an urgent requirement for effective and available vaccines, with the limited data available at this time it is critical that these individuals continue to maintain COVID-19 protective measures after vaccination or infection, including wearing face masks, social distancing and the screening and vaccination of family members. Testing for seroconversion after vaccination or infection may also be warranted.

3.
Giant cell tumour (GCT) of bone accounts for 5 to 9% of primary bone tumours, with the long bones being the most common sites. Craniofacial GCTs, involving the sphenoid (Kujas et al., Arch Anat Cytol Pathol 47(1):7–12, 1999), ethmoid and temporal bones, are rare but do occur; about 2% of all GCTs are found in the craniofacial bones. We report a case of osteoclastoma of the maxilla that presented to us with complaints of swelling and pain over the left side of the cheek and nasal obstruction for the last 5 months. The whole mass was excised via the anterior wall under general anesthesia.

4.
The combination of COVID-19 vaccination with immunotherapy by checkpoint inhibitors in cancer patients could intensify immunological stimulation with potential reciprocal benefits. Here, we examine more closely the possible adverse events that can arise in each treatment modality. Our conclusion is that caution should be exercised when combining both treatments.
Subject terms: Cancer, Viral infection

The persistent spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; coronavirus disease 2019 [COVID-19]) has prompted the development of vaccine candidates at an accelerated rate and with remarkable efficacy. To date, the most advanced vaccines are those that deliver nucleic acid encoding the S protein, either as DNA in non-replicating adenoviral vectors, such as the AstraZeneca vaccine, or as lipid nanoparticle-encapsulated mRNA, such as the vaccines produced by Pfizer-BioNTech and Moderna. The Johnson & Johnson (Janssen) and Sputnik V vaccines, as well as vaccines based on platforms other than nucleic acids, are also joining the market in different regions. Given the disproportionate impact of COVID-19 on cancer patients, these vulnerable subjects should be included in the populations prioritised for early vaccination, along with transplant recipients, rheumatic disease patients and other immunosuppressed subjects [1].

There is a scarcity of data on the consequences of COVID-19 vaccination in cancer patients under specific treatments, as recently stressed by Korompoki et al. [2], especially data from phase III vaccine trials [3]. One recent short article by Waissengrin et al. [4] reports on the BNT162b2 messenger RNA (mRNA) COVID-19 vaccine administered in cancer patients under checkpoint inhibitors (CPIs). Based on a relatively limited number of cases, the authors note that, compared with matched controls, CPI therapy was associated with a consistent though variable increase in all COVID-19 vaccination side effects, which is a potential cause for concern. The authors nevertheless consider their data as supporting the short-term safety of the mRNA COVID-19 vaccine in patients under CPIs. A larger set of patients would certainly be necessary to confirm their findings and deliver a clear message on this point. On the other hand, the report by Waissengrin et al. also mentions the apparent short-term absence of immunologically related adverse events (IRAEs) in a subgroup of 134 patients treated with CPIs who received the BNT162b2 mRNA COVID-19 vaccine [4]. However, the authors recognise the possibility that rare IRAEs [4] could be identified in larger cohorts of patients under COVID-19 vaccination.

This possibility was clinically confirmed in the recent case report by Au et al. describing a close temporal association between BNT162b2 vaccination and the onset of a harmful cytokine release syndrome (CRS) in a patient with colorectal cancer on long-standing anti-programmed cell death protein 1 (PD-1) monotherapy [5]. The authors suggest that this CRS could be due to the vaccine and could occur on the background of immune activation secondary to the PD-1 blockade, with an increase in T cell proliferation and effector function.

The reports by Waissengrin et al. [4] and Au et al. [5] illustrate the view that the interaction between immunotherapy and COVID-19 vaccination should be considered from both sides, i.e. how vaccination could influence immunotherapy and, conversely, how immunotherapy could impact COVID-19 vaccination (Fig. 1). Both treatments stimulate the immune system in their own way, particularly at the T cell and dendritic cell levels; their coexistence could therefore lead to effects that potentiate their respective activity. In this respect, it has recently been reported that influenza vaccination improves the survival of patients under CPIs without having any detrimental effects in terms of safety [6, 7].
Overall, however, COVID-19 vaccination generates more severe side effects than influenza vaccines; consequently, its impact on CPI-related side effects cannot be overlooked.

Fig. 1. The reciprocal interaction between COVID-19 vaccination and cancer treatment by checkpoint inhibitors. ICI: immune checkpoint inhibitors.

CPIs induce IRAEs at a rate of 20–50% for any grade, and the risk of developing these toxicities is higher in elderly patients [8, 9]. Of note, one of these IRAEs is colitis, which can impact microbiota integrity with potential immune consequences. This could result from the complex interrelationship between the overall immune status and the microbiota, which has been well elucidated [10]. Interestingly, the influence of the microbiota on the immune response to vaccination has also been reported [11]. Therefore, more attention should be paid to a potential loss of COVID-19 vaccination efficacy in patients under CPIs due to the possible occurrence of IRAEs, and more particularly colitis.

In the context of rare unexpected findings under CPIs, one should also consider the rare but nevertheless concerning phenomenon of tumour hyper-progression (THP) [12]. This form of tumour flare, although infrequent, can cause a potentially fatal locoregional progression of the disease. The pathophysiology of THP is not clearly established but includes an expansion of activated T lymphocytes in the tumour itself and its microenvironment [13]. This excessive tumour infiltration by lymphocytes could be amplified by an increased bulk of activated lymphocytes resulting from the boosting effect of the vaccination itself (Fig. 1). We have recently shown that it may be possible to identify patients under CPIs at risk of this THP by discriminating germline genetic profiles [14]. This could serve as a tool for screening CPI-treated patients scheduled to receive COVID-19 vaccination.

The articles by Waissengrin et al. [4] and Au et al. [5] express more or less strong messages of caution and point to the need to gain a deeper knowledge of the reciprocal interaction between COVID-19 vaccination and CPI cancer treatment by investigating large cohorts of patients. To this end, along with the classical parameters of patient follow-up, more specific immunology-based investigations should be conducted to examine all aspects of the long-term immune response. This is the main objective of the recently launched VOICE trial [15]. This prospective, multicentre trial aims to closely examine, on a long-term basis, whether immunotherapy and chemotherapy, alone or combined, influence COVID-19 vaccination in treated patients. Study parameters include the antibody response, the SARS-CoV-2-specific T cell response and the functional and phenotypical characterisation of the cellular immune response. This type of long-term follow-up is particularly necessary when considering current developments in CPI treatment and the increasing role played by the adjuvant setting. The immunotherapeutic management of malignant melanoma [16] and lung cancer [17] clearly illustrates these developments. The association of chemotherapy and targeted therapies with CPI treatment in most therapeutic situations also generates potential difficulties in data interpretation, further increasing the need for multi-cohort studies. The ESMO recently emphasised the importance of monitoring COVID-19 vaccine effects in cancer patients through specific studies and registries [18].
In France, the ANRS S0001S COV-POPART cohort study was recently launched (NCT04824651). It monitors 8650 vaccinated subjects with various pathologies, including cancer, to evaluate their relative capacity to produce antibodies against SARS-CoV-2 (the study includes a control group of 1850 subjects without the targeted pathologies). Concerning cancer treatment, and more particularly early clinical trials, an international group of experts recently recommended that trials of anti-cancer drugs with unknown safety profiles should be avoided until 2 to 4 weeks after the second dose of the COVID-19 vaccine [19]. More generally, the reciprocal interaction between COVID-19 vaccination and cancer treatments should be examined in greater depth.

In spite of a complex and still evolving SARS-CoV-2 situation, COVID-19 vaccination is increasingly combined with CPI treatment in cancer patients, and each is likely to impact the other. At first sight, this combination should boost immunological stimulation with potential reciprocal benefits. However, the clinical picture described in this article tempers this judgement. Our aim is to deliver a message of caution and to raise the awareness of caregivers and prescribers of the particular attention that should be paid to patients at risk.

5.
Chondrosarcoma of the faciomaxillary area constitutes only 4% of non-epithelial tumours of the nasal cavity, paranasal sinuses and nasopharynx, making it a rare malignancy (Indian J Otolaryngol Head Neck Surg 60(3):284–286, [1]), and its myxoid variety is rarer still. It is a slow-growing tumour, occurring mostly in middle-aged men. Primary chondrosarcoma of the nasal and paranasal sinus region, including the nasal septum, rarely extends into the cranial or intracranial areas unless there is recurrence (Indian J Ophthalmol 41(4):189–191, [2]). When it does occur, early diagnosis is difficult because patients generally present with common, nonspecific sinonasal complaints. A 45-year-old male patient came to the ENT OPD with complaints of epistaxis, diplopia, facial swelling and loosening of teeth. On examination, extensive swelling was present involving the right maxilla, palate and ethmoid, with lateral displacement of the right eye. FNAC of the swelling showed a non-specific infected cystic lesion. On endoscopic examination there was erosion of the right lateral wall of the nose, with mucoid material filling the maxilla; its wall was expanded and lined with velvety red mucosa. Maxillary antral biopsy was reported as myxoid chondrosarcoma. CT scan revealed an extensive lesion of the maxilla and ethmoid extending up to the optic nerve and brain. This case of myxoid chondrosarcoma is presented because it is a rare diagnosis. It presented with advanced disease involving the nasomaxilloethmoid region and extending up to the optic canal and middle cranial fossa. In a thorough review of the Indian literature, this tumour has rarely been diagnosed.

6.
7.
In Reply     
The sensitivity analyses suggested in the Letter to the Editor by de Vries et al. were performed, but no material change in the relative risk for bladder cancer was found. This is not surprising given the limited contribution of the studies excluded in the sensitivity analyses.

Regarding thiazolidinediones and cancer, we ran the sensitivity analyses suggested by Dr. de Vries et al., even though data from the three studies [1–3] were not totally overlapping. The pooled relative risk (RR) for bladder cancer for any thiazolidinedione (TZD) use changed from 1.13 (95% confidence interval [CI]: 1.05–1.23) to 1.12 (95% CI: 1.03–1.22) after the study by Azoulay et al. [1] was excluded and to 1.13 (95% CI: 1.04–1.23) after excluding the study by Wei et al. [2]. With reference to pioglitazone, the pooled RR changed from 1.20 (95% CI: 1.07–1.34) to 1.18 (95% CI: 1.05–1.32) after excluding the study by Azoulay et al. [1] and to 1.20 (95% CI: 1.04–1.39) after excluding the study by Wei et al. [2]. With reference to all cancers, the RR remained 0.96 (95% CI: 0.91–1.01) after any of the three studies was excluded [1–3].

The limited change in the RR for bladder cancer is not surprising, because the two studies together [1, 2] accounted for about 10% of the weight of the studies of bladder cancer included in the meta-analysis [4], and the three studies [1–3] accounted for about 3.5% of the weight for all cancer sites.
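The leave-one-out re-pooling described above can be sketched with a fixed-effect, inverse-variance meta-analysis on the log relative-risk scale. The following is a minimal illustration only; the study names are taken from the text, but the numerical inputs are hypothetical and are not the data behind the cited meta-analysis.

```python
import math

# Hypothetical (study, relative_risk, lower_95ci, upper_95ci) tuples; illustrative only.
studies = [
    ("Azoulay", 1.22, 1.03, 1.44),
    ("Wei",     1.16, 0.90, 1.50),
    ("Other A", 1.10, 0.95, 1.27),
    ("Other B", 1.12, 1.00, 1.25),
]

def pooled_rr(data):
    """Fixed-effect inverse-variance pooling of log relative risks."""
    weights, weighted_logs = [], []
    for _, rr, lo, hi in data:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI width
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_logs.append(w * log_rr)
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

print("All studies:", pooled_rr(studies))
# Leave-one-out sensitivity analysis: drop each study in turn and re-pool.
for excluded in studies:
    subset = [s for s in studies if s is not excluded]
    print(f"Excluding {excluded[0]}:", pooled_rr(subset))
```

With only a small share of the total weight coming from any one study, the pooled estimate barely moves when that study is dropped, which is the point made in the reply above.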

8.
Allowing selected patients with few distant metastases to undergo potentially curative local ablation, the designation “oligometastatic” has become a widely popular concept in oncology. However, accumulating evidence suggests that many of these patients harbour an unrecognised microscopic disease, leading either to the continuous development of new metastases or to an overt polymetastatic state and questioning thus an indiscriminate use of potentially harmful local ablation. In this paper, reviewing data on oligometastatic disease, we advocate the importance of identifying a true oligometastatic disease, characterised by a slow speed of development, instead of relying solely on a low number of lesions as the term “oligometastatic” implies. This is particularly relevant in clinical practice, where terminology has been shown to influence decision making. To define a true oligometastatic disease in the context of its still elusive biology and interaction with the immune system, we propose using clinical criteria. As discussed further in the paper, these criteria can be classified into three categories involving a low probability of occult metastases, low tumour growth rate and low tumour burden. Such cases with slow tumour-cell shedding and slow proliferation leave a sufficiently broad window-of-opportunity to detect and treat accessible lesions, increasing thus the odds of a cure.
Subject terms: Cancer models, Metastasis

In 1995, Hellman and Weichselbaum summarised available evidence on local ablation of distant metastases, deducing the concept of oligometastases as an intermediate state between a localised tumour and a widespread metastatic disease [1]. The authors emphasised the principal condition of a limited number and sites of metastases (from Greek “oligos” meaning few) that could offer some patients a potentially curative therapeutic opportunity, but no specific diagnostic criteria were provided. Fuelled also by the accelerated availability and use of new methods of local ablation, including stereotactic ablative body radiotherapy (SABR) and radiofrequency ablation, the next 25 years were marked by an exponential rise of interest in oligometastatic disease [2].

Emerging prospective clinical trials, both single-arm and randomised, have often relied on the number of distant metastases not surpassing five [3]. This criterion has been adopted in the recent consensus document of the European Society for Radiotherapy and Oncology (ESTRO) and American Society for Radiation Oncology (ASTRO), complying with the need for standardisation to meaningfully advance scientific research [4]. However, mounting evidence, also originating from the trials discussed below, has pointed to the drawbacks of a definition based on snapshot imaging, as will be further discussed in this paper.

The STOMP (Surveillance or Metastasis-Directed Therapy for Oligometastatic Prostate Cancer Recurrence) trial was a phase II study randomly assigning asymptomatic prostate cancer patients with a biochemical recurrence after primary treatment to either surveillance or metastasis-directed therapy (surgery or SABR). The patients had to have a maximum of three extracranial lesions detected on choline positron emission tomography/computed tomography (PET/CT) imaging. Although the study met its primary endpoint of improved median androgen deprivation therapy-free survival (13 versus 21 months; hazard ratio, 0.60; 80% CI, 0.40–0.90; log-rank P = 0.11), no difference was observed in the rates of polymetastatic progression (55% versus 61%) [5].

Similarly, the landmark SABR-COMET (Stereotactic Ablative Radiotherapy for the Comprehensive Treatment of Oligometastases) trial, the first study to investigate the impact of local ablation of oligometastases on overall survival, randomised 99 patients with different types of primary tumours and five or fewer metastases to either palliative standard of care alone or standard of care complemented with SABR. The addition of local intervention enhanced median overall survival from 28 to 41 months (hazard ratio, 0.57; 95% CI, 0.30–1.10; log-rank P = 0.090), but the proportion of patients presenting with new metastases was almost identical in both arms (58% versus 60%), possibly owing to subclinical dissemination [6].

Unrecognised microscopic disease could also have contributed to the results of the following study focusing on a surgical approach. The phase III PulMiCC (Pulmonary Metastasectomy versus Continued Active Monitoring in Colorectal Cancer) trial explored whether pulmonary metastasectomy offered a benefit over active surveillance in colorectal cancer patients. Despite being prematurely terminated after enrolling only 65 patients due to difficulties in accrual, the two arms were well balanced, and the trial provocatively demonstrated a lower-than-expected difference in the estimated 5-year overall survival, which was 38% for metastasectomy (1–5 lesions) versus 29% for surveillance [7].
On the other hand, the phase III CLOCC (Chemotherapy + Local Ablation Versus Chemotherapy) trial showed that the addition of local therapy of liver metastases by radiofrequency ablation, with or without resection, to systemic therapy significantly prolonged median overall survival from 40.5 to 45.6 months among 119 patients with colorectal cancer. Importantly, the allowed number of liver lesions was up to nine, and one-third of the study population had more than five metastases [8].

Although the situation in colorectal cancer is rather unique in that the liver is the first location of metastatic disease, owing to the predominant dissemination through the portal system, a tentative interpretation of these four trials could be that relying solely on the number of metastases is not sufficient to define a true oligometastatic disease. Even though some false oligometastatic cases harbouring unrecognised micrometastases derive a survival advantage from local ablation of all visible lesions, which in principle is a substitute for cytoreduction, it cannot be excluded that the crucial part of this benefit is conveyed by the systemic treatment that the patients receive in parallel or afterwards. Consequently, treating a false oligometastatic disease with local ablation may ultimately be harmful because of possible procedural complications, particularly in the case of invasive methods, leading to a prolonged interruption of systemic treatment or interfering with its initiation. The SABR-COMET trial noted a 20% increase in grade 2 or worse adverse events in the interventional arm. There were also three treatment-related deaths (4.5%) in the interventional arm but none in the standard-of-care arm [6]. Hence, the need for correct identification of patients with a true oligometastatic disease seems warranted. At the same time, we acknowledge the existence of specific situations in oncology (e.g. oligoprogression and oligopersistence, explained further in the text) where a strict distinction between a true and false oligometastatic disease may be marginal. These outliers should always be judged individually, taking into account the patient's preference, symptomatology, comorbidities and available treatment alternatives.

However, even when focusing on a true oligometastatic disease, we should keep in mind that the metastatic competence of malignant tumours progressively increases, influenced by many factors. The investigators of the prospective longitudinal cohort study TRACERx Renal (TRAcking renal cell Cancer Evolution through Therapy) analysed almost 1000 biopsies from 100 patients with metastatic clear-cell renal cell carcinoma and found that tumours initially presenting with an indolent disease course in the form of oligometastases gradually continued to progress towards a widespread phenotype [9]. Moreover, according to a retrospective study of different primary tumours, patients with a rate of new lung metastases below 0.6 per year live longer than those with a rate above 3.6 per year [10].

Taken together, it is the time factor that represents the key trait of a disease that can be cured by radical local therapy of all visible lesions. It is the time factor that defines the speed of tumour-cell shedding and proliferation. The slower a cancer develops, the higher the chances of a local approach succeeding, because of the widening therapeutic window-of-opportunity. A true oligometastatic disease therefore stands for slowly developing metastases, which we propose to call “argometastases” (from Greek “argos” meaning slow).
But does terminology matter? A 2017 systematic review of seven studies covering several oncologic and non-oncologic conditions concluded that different terminology used for the same pathology impacts decision making [11]. Although the term “oligometastatic” was not explored in that study, we assume that the conclusions pertain to it as well. Moreover, oligometastatic presentation is rare, and it is well known that misdiagnosis and late diagnosis rank among the most important issues of rare diseases [12, 13]. In this respect, given that the number of lesions is the term's intrinsic feature, some physicians facing patients with few metastases may be automatically tempted to propose local ablation if this is technically feasible. However, technical feasibility does not equal clinical relevance, the latter of which means recognising a true oligometastatic disease. Its optimal definition will probably only be possible if biological characteristics, including genetic determinants, epigenetic modifiers and immune response markers, are integrated [14]. At present, despite continuous advances in this field, we are still far from their adoption in clinical practice.

Therefore, we would like to point out and summarise clinical findings that can be used to optimise the use of local ablation in patients presenting with newly diagnosed metastases. These recommendations do not cover situations where patients have disseminated cancer that is overall controlled by a systemic treatment except for several progressing lesions (oligoprogression), which can be easily treated, for example, by SABR. Nor do they cover the consolidation of a few persisting metastases after otherwise successful systemic treatment (oligopersistence).

We have classified clinical findings associated with a true oligometastatic disease into three categories encompassing a low probability of occult metastases, low tumour growth rate and low tumour burden (Table 1) [15]. Several retrospective analyses demonstrated a positive predictive value of a longer disease-free interval after primary treatment for overall survival [16–18]. Due to the stochastic nature of this relationship, there is no cut-off to define the presence or absence of occult disease, and we expect the probability distribution to be continuous. Accordingly, a synchronous manifestation (de novo oligometastases) has a worse prognosis than a metachronous manifestation (oligorecurrence), which occurs after at least 3–6 months have elapsed since primary treatment [19]. The probability of occult dissemination also increases with the development of every new visible metastasis [15]. This corresponds to the observation that the lower the number of metastases, the better the prognosis, with the best outcomes seen in patients with a single distant lesion [16, 17]. Analogously, a controlled primary tumour is a prerequisite for controlled cancer cell shedding, admitting that distant metastases can themselves be a source of further spread.

Table 1. Clinical characteristics of a true oligometastatic disease (“argometastases”).
Low probability of occult metastases
- Metachronous presentation with a long disease-free interval (a)
- Controlled primary tumour
- No suspicious micronodules of unknown origin (b)
- No regional lymph node involvement at initial diagnosis
- Favourable distant organ site involvement (c)
- Possibility to lower the detection threshold by auxiliary imaging and laboratory methods (d)
- Susceptible tumour origin (histotype)

Low tumour growth rate
- Low growth rate according to tumour growth kinetics on a series of follow-up imaging (if available)

Low tumour burden
- Limited size and number of lesions and limited number of organ sites, allowing a safe and complete local ablation (e)
(a) Here, disease-free interval is defined as the time between oligometastatic presentation and completion of previous anticancer therapy. The probability of occult metastases progressively declines with increasing disease-free intervals.
(b) Usually between 2 and 8 mm (corresponding to the so-called grey zone) and found in different organs, typically in the lungs.
(c) Not only in terms of tumour location, which should allow a safe and complete ablation, but also in terms of tumour type associated with a survival benefit of local ablation (e.g. colorectal cancer oligometastases in the liver or head and neck cancer oligometastases in the lungs).
(d) Auxiliary imaging includes, for example, PET/CT, particularly with new tracers such as PSMA-targeted PET/CT; auxiliary laboratory methods comprise tumour marker tests and potentially also a liquid biopsy.
(e) Taking also into consideration the location of lesions within a given organ site.

The above-mentioned approach to the disease-free interval should be distinguished from situations in which a disease-free interval (or a similar measure) is used to assess the efficacy of local ablation. In the former case, it evaluates the time period prior to local ablation in order to help identify true oligometastases and is usually applied in clinical practice; in the latter case, it looks at the time period after local ablation and is commonly employed in clinical trials, but it can also be used in routine practice as feedback information, because we expect a true oligometastatic disease not to recur after successful local ablation. The latter approach has one more implication in that it provides prevalence estimates. As an example, 5- and 10-year disease-free survival rates after liver metastasectomy in colorectal cancer patients are 25% and 20%, respectively. Keeping in mind that liver dissemination occurs in about half of these patients and only a minority of them undergo resection, these data confirm the rarity of a true oligometastatic phenotype [20].

Regional lymph node involvement at initial diagnosis is another factor predicting subclinical haematogenous dissemination, particularly in the case of synchronous oligometastases but probably also in the metachronous setting [17, 19, 21]. The impact of primary tumour origin and histology is well known, with some cancer types (e.g. colorectal cancer or clear-cell renal cell carcinoma) drawing more benefit from local treatment than others [1, 22]. However, the phenotypic intertumoural heterogeneity is considerable and still not sufficiently understood, as testified by the emerging concept of oligometastases in diseases traditionally considered typical examples of leukaemia-like dissemination, such as pancreatic cancer [23]. Moreover, although little is known about the role of organ tropism in the development of a true oligometastatic disease, the site of metastatic outgrowth seems to impact the success rates of local ablation, as documented by the different outcomes in patients with colorectal cancer and liver involvement (CLOCC trial) or lung involvement (PulMiCC trial) [7, 8]. Another example is head and neck cancer, where long-term survivorship after distant recurrence has been linked to human papillomavirus (HPV)-positive oropharyngeal carcinoma with lung oligometastases [24]. There are several factors that can explain these observations.
Apart from a possible bias induced by the retrospective collection of data, the rarity of some metastatic manifestations, cross-trial comparisons and differences in technical feasibility and preferred modalities according to anatomic locations, growing evidence suggests an implication of the microenvironment, particularly the immune system [25].

Perhaps the greatest potential for detecting micrometastases lies with imaging and laboratory methods. Currently, the detection threshold of imaging is about 2 mm, but such small lesions are non-specific. Usually, a size of about 8 mm triggers further investigations to conclude on their origin, either by means of imaging methods and/or biopsy (Fig. 1) [26]. A tissue sample is almost always mandatory to differentiate the original tumour from second primaries and non-malignant conditions. The presence of suspicious nodules in the grey zone between 2 and 8 mm poses a diagnostic challenge and prevents certainty in excluding occult metastases unless a biopsy is performed, which on the other hand may require more invasive interventions to obtain the tissue, possibly accompanied by an increased risk of complications (e.g. a pulmonary wedge resection).

Fig. 1. A simplified model of distant dissemination showing the difference between a true and false oligometastatic disease. In both scenarios (a, b), cancer cell shedding starts at time t0, leading to a detectable metastatic outgrowth at time t1. Lesions of at least 8 mm in diameter appearing at time t2 are amenable to a proper diagnostic workup including radiology, nuclear medicine and pathology. However, smaller lesions (2–8 mm) are often non-specific, thus requiring follow-up. Although at time t3 an oligometastatic state can be confirmed radiologically in both scenarios, only situation A corresponds to a true oligometastatic disease because there are no non-specific micronodules in the grey zone (2–8 mm) and, more importantly, no unrecognised subclinical dissemination. Figure includes modified templates from Servier Medical Art.

Imaging plays a decisive role in assessing growth kinetics (based on a chronological series of examinations), tumour burden (defined by the size (volume) and number of lesions and the number of organ sites) and the location of lesions within a given organ site. All three parameters are inherently connected and determine the technical feasibility and safety of local treatment. Notably, tumour doubling time varies both on a case-by-case basis and within the same patient. According to volumetric analyses, it ranges from less than one week to more than 1 year, albeit usually being in the order of several months [27] (a short numerical sketch of the doubling-time calculation is given at the end of this entry). At an individual level, growth curves most closely follow a Gompertzian model: initial exponential size increments characterised by constant doubling times progressively slow down as the tumour becomes larger [28]. Taking this into account, follow-up imaging to evaluate growth kinetics or to assess the nature of suspicious lesions in the grey zone may be justified in selected patients but should always be carefully considered. A new promising method for improving the detection of metastatic lesions is prostate-specific membrane antigen (PSMA)-targeted PET/CT. The recent randomised phase II ORIOLE trial showed that if all PSMA-positive lesions are treated with SABR, the proportion of prostate cancer patients developing new metastases at 6 months is significantly lower than if some lesions are left untreated (16% versus 63%, P = 0.006) [29].
Hence, tailoring imaging modalities to tumour types is one of the promising avenues for future research.

Laboratory methods comprise both traditional tumour marker tests, which have been validated in some malignancies, such as prostate-specific antigen (PSA) in prostate cancer, and presently still investigational liquid biopsies based on the detection of different elements, such as cell-free circulating tumour DNA, circulating tumour cells, microRNA or exosomes, in body fluids, typically in the peripheral blood [30]. According to the PREDATOR study, postoperative analysis of molecular residual disease by circulating tumour DNA testing significantly correlates with disease-free survival in metastatic colorectal cancer patients undergoing metastasectomy with curative intent [31]. Data on pre-interventional liquid biopsy are still limited but could potentially contribute to the quantification of disease burden and the measurement of disease kinetics (e.g. circulating tumour DNA doubling time) [32, 33].

Our hypothetical model has several limitations, some of which have already been addressed, especially the lack of prospective validation. Besides that, a restricted insight into tumour biology prevents us from integrating the multifaceted effects of the heterogeneous behaviour of the primary tumour and its different metastases in terms of cancer cell shedding and proliferation, and those of the outstanding phenomenon known as a dormant state, which allows cancer cells to preserve their tumour-generating capacity and reawaken several years later [34]. Accumulating data confirm that an oligometastatic stage is a dynamic process, and the evolutionary trajectories of malignant dissemination can even be bidirectional, as shown in a preclinical study in which the investigators managed to reverse a polymetastatic to an oligometastatic phenotype by epigenetic manipulation using microRNAs [35]. In this respect, patient outcomes differ according to exposure to different systemic drugs, including conventional chemotherapy, targeted agents and modern immunotherapy. Potentially impacting the characteristics and behaviour of oligometastases, these drugs can be given at various time points in the disease course, including but not limited to the above-mentioned scenarios of oligoprogression and oligopersistence. Finally, we acknowledge that, due to its multiparametric complexity, determining a true oligometastatic state with currently available diagnostic tools may be impossible in some cases. In such situations, the therapist's expertise remains crucial, and decisions can be guided, for example, by local tumour growth imminently threatening to cause symptoms or to lead to a missed opportunity for local ablation. In the same way, the risk of serious adverse events, either existing or impending, has a profound influence on the treatment choice. Nevertheless, with the advent of new technologies in clinical practice, the gap of uncertainty will undoubtedly narrow.

In conclusion, when employing local treatments in patients with few metastases, tumour dynamics seems to be the major determinant of therapeutic success. Cancers with slow tumour-cell shedding and slow proliferation leave a sufficiently broad window-of-opportunity to detect and treat accessible lesions. In case sporadic micronodules later develop into overt metastases, the indolent behaviour of such tardily appearing “argometastases” gives us another fair opportunity to eradicate them.
Scientific terminology is a powerful tool that may eventually steer our decision making, not only in daily practice but also when dealing with a rare and sometimes over-diagnosed entity, as a true oligometastatic disease probably is.
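As a purely numerical aside to the growth-kinetics discussion in this entry, volume doubling time can be estimated from two follow-up scans. The sketch below assumes simple exponential growth between the two time points and uses hypothetical measurements; a Gompertzian curve would yield progressively longer doubling times as the lesion enlarges.

```python
import math

def doubling_time_days(v1_mm3, v2_mm3, interval_days):
    """Volume doubling time assuming exponential growth between two scans:
    DT = t * ln(2) / ln(V2 / V1)."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

def sphere_volume(diameter_mm):
    """Approximate the volume of a roughly spherical nodule from its diameter."""
    return (math.pi / 6) * diameter_mm ** 3

# Hypothetical example: a nodule growing from 6 mm to 8 mm in diameter over 90 days.
v1, v2 = sphere_volume(6.0), sphere_volume(8.0)
print(f"Estimated volume doubling time: {doubling_time_days(v1, v2, 90):.0f} days")
# A doubling time of a few weeks argues against a slowly developing
# ("argometastatic") lesion; a doubling time of many months supports it.
```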

9.
Static medialisation of the paralysed vocal cord, which is most commonly performed today for vocal cord palsy, does not offer a very good voice post-operatively. Colledge and Ballance's (1927) operation of anastomosis of the phrenic nerve to the recurrent laryngeal nerve for laryngeal palsy and Tucker's (1976) nerve-muscle pedicle technique have not offered significant reanimation of the paralysed muscles of the vocal cord. Moreover, it is virtually impossible to offer dynamism to the paralysed muscles themselves; but dynamism can conveniently be transmitted to the paralysed vocal cord by appropriate muscle transplantation, as has been done in palatopharyngoplasty for rhinolalia aperta (Ghosh 1983, 1986). Isshiki's laryngoplasty operations (1977) also offer only static correction. In view of the above shortcomings, the present statico-dynamic operation was conceptualised. A new technique of statico-dynamic medialisation of the paralysed vocal cord for improvement of voice is described here. In a single operation such as this, both arytenoid adduction and vocal cord adduction are expected to be achieved. A rectangular island of the thyroid cartilage lamina, attached to the inner perichondrium, is created on the paralysed side by drilling an endless canal on the lateral aspect of the thyroid lamina down to the level of the inner perichondrium. The mobile cartilage island, along with the vocal cord and the arytenoid, is fixed in a medialised position. Dynamism is quintessential for normal vocal cord function. For this, the superior and inferior bellies of the omohyoideus, based superiorly and inferiorly respectively, are passed over the island of cartilage, crossing each other to form the ‘crossed musculoplasty’. By their contractions, further adduction of the island along with its attached vocal cord is brought about, thus further improving the quality of voice.

10.
11.
This article presents a case of repeated spontaneous pregnancies and successful deliveries in a young woman treated with repeated stem cell transplantation and gonadotropin-releasing hormone agonist therapy.

Two years ago we published an exceptional case [1] of a young patient who successfully delivered a healthy neonate after spontaneous conception despite two (repeated) stem cell transplantations and aggressive conditioning chemotherapy, given in parallel with monthly gonadotropin-releasing hormone agonist (GnRH-a) cotreatment (Decapeptyl CR, 3.75 mg; Ferring, Saint-Prex, Switzerland), and irradiation for lymphoma [1]. Against all the odds, this patient conceived again and again delivered a second normal neonate, most probably attributable to the GnRH-a cotreatment during chemotherapy.

In brief, this young woman received chemotherapy and, in parallel, monthly depot GnRH-a injections in 1995, when she was 15 years old, for stage IV anaplastic lymphoma [1]. Less than 1 year afterward she underwent autologous stem cell transplantation (SCT) with carmustine, etoposide, cytarabine, and cyclophosphamide (the BEAC protocol) for persistent disease [1], again with GnRH-a pre- and cotreatment during chemotherapy [1–3]. Her first spontaneous pregnancy occurred at the age of 24 years, but that pregnancy ended in miscarriage. One month later, she conceived again, and that pregnancy developed normally until 25 weeks of gestation, when recurrence of the lymphoma was diagnosed, with subsequent intrauterine growth retardation and demise after dexamethasone, etoposide, ifosfamide, and cisplatin (DVIP) chemotherapy during pregnancy [1]. After pregnancy termination, she again received a GnRH-a in parallel with DVIP and BEAC conditioning, followed by a second autologous SCT. An attempt at in vitro fertilization was discontinued because of a poor response, but 3 months later she spontaneously conceived, and after a normal gestation she delivered, in August 2006, a normal, term, 3,450 gram, female neonate [1]. About 1 year later she spontaneously conceived, for the fourth time, and on August 9, 2008 she again successfully delivered a normal, 3,450 gram, female neonate, with an Apgar score of 10 at 5 minutes.

SCT almost invariably induces ovarian failure, irrespective of patient age or treatment protocol [1, 4–6]. Only 0.6% of patients conceive after one autologous or allogeneic SCT, according to a large survey on fertility after SCT involving 37,362 women [5]. The estimated odds for spontaneous conception after two SCTs are therefore negligible (0.006 × 0.006 = 0.000036) [1, 3–5]. Carter et al. [6] conducted a retrospective study on reproductive function and pregnancy outcomes in 619 women and partners of men treated with autologous or allogeneic hematopoietic SCT. They found that only 3% of their female survivors succeeded in conceiving after one SCT [6]. Thus, theoretically, according to their findings, the estimated odds for spontaneous conception after two SCTs are 0.03 × 0.03 = 0.0009, which is <1:1,000. Although several reports on spontaneous conceptions and deliveries after SCT have been published [7], we are not aware of any publication of repeated successful deliveries after repeated SCT in the same patient.
To the best of our knowledge, this is the first such case.

The administration of a GnRH-a before and in parallel with chemotherapy simulates a prepubertal hormonal milieu and, through this mechanism and/or possibly others [1–3], might have minimized the gonadotoxic effect of chemotherapy and increased the chance of spontaneous ovulations and successful conceptions and deliveries [1–3]. Indeed, similarly, Remérand et al. [8] recently reported four successful pregnancies in a patient who had been treated with allogeneic bone marrow transplantation when she was 4 years old. GnRH-a treatment simulates this prepubertal hormonal milieu, in keeping with our patient's recent repeated spontaneous gestations despite SCT [8]. Because most of the methods involving ovarian or egg cryopreservation are not yet clinically established and unequivocally successful, physicians should inform these young women of the possible beneficial effect of a GnRH-a in minimizing gonadal damage and preserving ovarian function and fertility, in addition to the options of cryopreservation of embryos and ova [1–3].
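The probability figures quoted above follow from treating conception after each transplant as an independent event with the published per-SCT rate; the sketch below simply reproduces that arithmetic (the independence assumption is a strong simplification).

```python
# Back-of-the-envelope arithmetic from the text, assuming conception after each
# stem cell transplantation (SCT) is an independent event (a strong simplification).
conception_rate_per_sct = {
    "large post-SCT fertility survey [5]": 0.006,  # 0.6% per SCT
    "Carter et al. cohort [6]": 0.03,              # 3% per SCT
}

for source, p in conception_rate_per_sct.items():
    p_two = p ** 2  # probability of conceiving after two SCTs under independence
    print(f"{source}: per-SCT rate {p:.1%} -> after two SCTs {p_two:.6f} "
          f"(about 1 in {round(1 / p_two):,})")
# Reproduces the figures quoted above: 0.006**2 = 0.000036 and 0.03**2 = 0.0009 (<1:1,000).
```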

12.
To date, there are no effective interventions to prevent the onset or reduce the severity of chemotherapy-induced peripheral neuropathy (CIPN). Exercising during chemotherapy treatment has displayed a range of clinical benefits, yet only limited published studies have investigated whether exercise is protective against CIPN. This Editorial discusses a randomised controlled study of the efficacy of strength or balance exercise to prevent CIPN.
Subject terms: Lifestyle modification, Chemotherapy

Chemotherapy-induced peripheral neuropathy is a common and dose-limiting side-effect of numerous neurotoxic chemotherapies [1]. Sensory or motor CIPN symptoms can affect around half of cancer patients, with symptoms often leading to functional impairments that can impact quality of life [2]. As noted by leading oncology institutions, there are no established interventions to prevent the onset or severity of CIPN, with dose adjustment one of the few methods of minimising symptom burden [3, 4].

Exercise as an intervention has become more commonly used in oncology clinics. Established benefits include improved physical function, cardiometabolic profile, psychological wellbeing and symptom profile, as well as associations with reduced recurrence and improved survival [5]. It is unsurprising that many institutions are endorsing the promotion of exercise to patients [6].

Few studies have explored the effect of exercise on CIPN. Regular resistance, aerobic and balance exercise can be beneficial in improving the symptoms associated with CIPN, including impaired balance, strength and functional capacity [7]. However, the preventative effects of exercise on CIPN symptomology, including numbness, tingling and pain, are less well known. As CIPN symptoms can result in reduced quality of life due to the resulting changes in lifestyle and functional capacity [1], preventing the onset of CIPN symptoms and the associated functional side-effects is critical to improving many aspects of survivors' lives.

In this issue of the British Journal of Cancer, Müller et al. present findings of the ‘PIC study’, a well-designed three-arm randomised controlled trial evaluating the preventative potential of balance or resistance exercise versus usual care on objective CIPN symptoms during neurotoxic chemotherapy. While exercise has recently been investigated in the context of treatment for already developed CIPN, few studies have investigated exercise as a preventative therapy. Additionally, and in particular, most studies lack the objective, neurological assessment of CIPN that was conducted in this study.

In this study, 170 participants (mostly female breast cancer patients receiving taxane chemotherapy) received exercise training for 105 min/week throughout the duration of their chemotherapy (17.2 ± 5.3 weeks). Participants were randomised to receive either: (1) Resistance Training (RT): 2 × 45 min supervised machine-based sessions at 70–80% of maximum lifting capacity plus 1 × 15 min home-prescribed core-strengthening session; (2) Sensorimotor (balance) Exercise Training (SMT): 3 × 35 min sessions either at home or supervised in a group setting; or (3) Usual Care (UC), which did not receive any physical activity support but was offered either intervention after completing treatment. Both intervention arms received weekly phone calls to monitor compliance and adverse events.

This study is one of the few exercise interventions during chemotherapy to include a detailed neurophysiological assessment, with many studies relying solely on clinician-graded and patient-reported symptoms. While these are important tools in both clinical and research settings, high-quality exercise studies including objective measures of CIPN have been lacking, particularly those investigating the preventative effect. This study used the Total Neuropathy Score-reduced (TNSr), a composite scoring tool assessing patient-reported and objective clinical CIPN signs, including sensory and motor symptoms, deep tendon reflexes and strength.
Peroneal and sural nerve conduction studies were performed, including action potentials and nerve conduction velocities. Postural control was objectively measured using an AccuSway force plate, with eyes open for maximum duration (single-leg stance) and with eyes closed for 30 seconds (bipedal stance). Muscle strength was assessed using a quadriceps maximum voluntary contraction with an isokinetic dynamometer. Patient-reported outcomes were used to assess CIPN (EORTC-QLQ-CIPN20), quality of life (EORTC-QLQ-C30) and fear of falling (FES-I). Assessments were conducted before and shortly after chemotherapy, as well as 3 and 6 months later.

Critically, this study did not find that either the resistance or balance exercise intervention prevented objective or subjective CIPN symptoms in intention-to-treat analyses. When the high number of participants who did not adhere to the prescribed interventions was excluded, the adherent exercisers displayed reduced subjective sensory CIPN symptoms in the feet during chemotherapy compared with the UC group (P = 0.039, ES = 1.27). Improved self-reported sensory CIPN symptoms have been similarly reported in previous research on exercise during chemotherapy [8, 9]. Additionally, balance was preserved after completing treatment in both exercise groups compared with a reduction in the control group (SMT: P = 0.045, ES = 0.27; RT: P = 0.023, ES = 0.28). Although these findings did not persist to the follow-up timepoints, they could have important clinical ramifications for fear of falling and falls risk, which can be exacerbated by CIPN symptoms during the critical treatment period [10].

Adherent exercisers in the study experienced benefits consistently reported in the exercise-oncology literature, including improved muscle strength (P < 0.001, ES = 0.57), quality of life (P = 0.005, ES = 0.64), physical function (P = 0.014, ES = 0.63) and fatigue (P = 0.016, ES = 0.45) [11], suggesting that exercise during chemotherapy should be promoted if performed in a safe and structured manner. Adherent exercisers also had higher chemotherapy compliance (96.6 ± 4.8% versus 92.2 ± 9.4% in the control group, P = 0.045), which could have important clinical implications because these patients received nearer to their prescribed dose. The relationship between exercise and chemotherapy dose has been investigated, with a potentially beneficial effect [12], although it could be hypothesised that patients with reduced physical function may experience more complications during treatment and be less able to exercise regularly. Importantly, however, the question regarding the preventative effect of exercise on CIPN still needs refining.

Although the findings of this study are promising, they should be interpreted with caution. The study initially aimed to recruit 300 participants, with the final sample size being n = 170 given the difficulties of recruiting participants (25% recruitment rate). Studies requiring supervised exercise interventions, with additional hospital visits commencing from the beginning of chemotherapy treatment, can be difficult for patients to commit to.
Attendance in both interventions was around 50% (mainly due to time constraints and motivational issues), with only 35 participants across both interventions classified as adhering to ≥67% of the prescribed interventions, highlighting the difficulty of delivering structured exercise interventions during this period.

Exercise to prevent CIPN remains one of the few interventions that the American Society of Clinical Oncology suggests requires further clinical trials based on preliminary evidence [3]. Although studies such as that by Müller et al. assist in answering this question, future studies with higher adherence rates during chemotherapy are needed to assess the preventative potential. Additionally, as it would be clinically important to prescribe a combination of aerobic, resistance and balance exercise to patients exposed to neurotoxic chemotherapy who are at risk of CIPN, future studies should incorporate multi-modal exercise, including an aerobic component, which was not included in the current study and has been shown to be important for assisting patients with CIPN [7].

13.
Medullary thyroid carcinoma (MTC) has a poor prognosis and a high mortality rate, and its early diagnosis is a challenge. For almost two decades, routine serum calcitonin (CT) measurement has been used as a tool for early MTC diagnosis, with conflicting results. In 2006, the European Thyroid Association (ETA) recommended serum CT measurement in the initial workup of thyroid nodules, whereas the American Thyroid Association (ATA) declined to recommend for or against this approach.
In late 2009, the revised ATA guidelines were published, and in June 2010 the ETA released new guidelines for the diagnosis and management of thyroid nodules that had been drafted in collaboration with the American Association of Clinical Endocrinologists and with the Associazione Medici Endocrinologi, and the picture became even more complex. The ATA still takes no stand for or against screening but acknowledges that, if testing is done, a CT value >100 pg/ml should be considered suspicious and an indication for treatment. As for the ETA, it seems to have taken a step back from its 2006 position, and it now advocates CT screening only in the presence of clinical risk factors. These new positions are more cautious and less straightforward because prospective, randomized, large-scale, long-term trial data are lacking. Are such studies feasible? Can they solve the CT dilemma? In the absence of adequate evidence, selective aggressive case finding should be pursued to improve MTC prognosis.
Medullary thyroid carcinoma (MTC) is derived from the calcitonin (CT)-secreting parafollicular cells (C cells) of the thyroid and represents 4%–10% of all thyroid malignancies [1]. Approximately 25% of all MTCs are hereditary forms that can be detected by molecular screening for RET proto-oncogene mutations [1–3]. In families known to harbor a RET mutation, carriers can be identified before disease onset and offered preventive care [1]. Three in four MTCs are sporadic tumors [1], which usually present as palpable thyroid nodules. Although several clinical features can raise the suspicion of MTC, diagnosis of these tumors is often a challenge. In fact, in most series, the sensitivity of fine-needle aspiration cytology for presurgical MTC diagnosis is around 40%–50% [4], substantially lower than that reported for thyroid malignancies arising from follicular cells.
When patients present with lymph node involvement or distant metastases, the outcome is usually poor [1], so early diagnosis and radical surgical treatment are the main means of reducing MTC-related morbidity and mortality. The fact that virtually all MTCs are associated with elevated circulating levels of CT at the time of diagnosis led several groups to suggest that routine measurement of serum CT in patients with nodular thyroid disease might improve the early diagnosis of MTC (references in [4]). In 2006, the European Thyroid Association (ETA) published a consensus statement [5] recommending this approach, but in guidelines issued in roughly the same period [6], the American Thyroid Association (ATA) declined to take a stand on this issue, emphasizing instead unresolved questions related to cost-effectiveness. The ATA noted that, in most studies, screening accuracy required confirmation of basal CT elevations with additional assays of pentagastrin-stimulated CT, a problem largely confined to the U.S., where pentagastrin was no longer available. On the basis of the available evidence, several other aspects of the issue also appeared controversial at that time.
First, the published studies had used different CT assays that varied widely in sensitivity [4], hindering reliable comparison of the results and making it impossible to draw meaningful conclusions on the diagnostic accuracy of the approach. Furthermore, the studies varied widely in terms of their false-positivity rates [4], the basal CT thresholds that required confirmatory poststimulation testing [4], and the cutoffs for distinguishing benign from malignant disease. The latter issue is further complicated by the fact that, although C-cell hyperplasia (CCH) is undeniably a precancerous lesion in familial settings [1, 7], there is no clear-cut evidence that sporadic CCH progresses to MTC.
Three years later, in 2009, the ATA issued specific guidelines for MTC management [7] and revised its recommendations for thyroid nodules and differentiated thyroid cancer [8]. Shortly thereafter, the ETA also released updated guidelines [9], which had been drawn up in collaboration with the American Association of Clinical Endocrinologists (AACE) and the Italian Associazione Medici Endocrinologi (AME). Once again, the ATA declined to take a stand for or against CT screening in patients with thyroid nodules but specified that, if testing was undertaken, a basal CT value >100 pg/ml should be regarded as suspicious, prompting further evaluation and appropriate treatment [8]. As for the ETA, its straightforward support of screening in 2006 was attenuated in the new AACE–AME–ETA recommendations, which call for mandatory measurement of CT levels only when MTC is specifically suspected and/or there is a family history of MTC [9].
In short, the issue seemed to have become paradoxically less clear. To some extent, the distance between the ATA and AACE–AME–ETA positions had shrunk: both groups acknowledged that CT assays may be of help in the preoperative diagnosis of MTC. However, the task forces behind these guidelines obviously felt that the available evidence was still not strong enough to support systematic CT screening in all patients with thyroid nodules. An effective diagnostic screening procedure should meet certain criteria: it should have high sensitivity as well as high specificity, it should allow detection of the disease early enough to translate into lower morbidity and mortality rates, and, last but not least, it needs to be cost-effective. Based on the evidence at hand, routine CT screening for MTC certainly did not meet all these criteria.
However, several enlightening reports appeared on this subject between 2006 and 2009 [10–15]. They confirmed, first of all, that CT screening in patients with nodular thyroid disease is a highly sensitive tool for the presurgical diagnosis of MTC [10–12], far more sensitive than cytology [10]. Second, they strengthened the claim that CT screening allows earlier diagnosis of MTC than the classical approach [15], with data showing higher rates of tumors diagnosed at the pT1 stage [12] and better postsurgical outcomes in settings where CT is routinely measured (particularly in patients with basal CT levels <100 pg/ml) [10, 12]. In addition, two recent independent reports, one from Europe and the other from the U.S., suggested that CT screening is actually cost-effective [13, 14], but unfortunately the conclusions in both cases were based on analyses of hypothetical data that may or may not reflect real-life scenarios.
Other issues have also remained unresolved.
Even with screening, for example, a substantial proportion of MTCs are still being diagnosed late, when surgical eradication is no longer possible [10], and a significant number of the MTCs are incidentally discovered microcarcinomas [10–12], which have never been shown conclusively to progress toward more advanced disease. Finally, even with the most recent ultrasensitive assays, false-positivity rates for basal CT testing remain high (and positive predictive values [PPVs] remain low) [10–12]. In clinical epidemiology, the prevalence of the disease in the study population is generally regarded as the major determinant of the PPV. For an uncommon disease like MTC, the majority of positive results in screening will inevitably be false positives, no matter how sensitive and specific the assay (see the worked example at the end of this commentary). Basal CT assays alone will not be sufficient: a two-level approach that includes stimulated CT measurement is needed to improve the PPV. It is important to note, at this point, that the limited availability of pentagastrin should no longer be considered an obstacle to this approach, because recent studies have convincingly demonstrated that CT stimulation can be achieved equally well with calcium injections [16].
Whether routine CT screening should or should not be adopted, and what its impact on MTC-related morbidity and mortality would be, are questions that can be answered only in large, long-term, prospective, randomized multicenter initiatives with standard enrollment criteria, CT assay techniques, and cutoffs for interpreting the results. An effort of this type might finally provide answers to the questions that continue to be debated in connection with MTC diagnosis and outcome.
In the meantime, how can our currently available knowledge be translated into clinical practice? The only reasonable approach is selective aggressive case finding. Attention should be focused on clinical risk factors for MTC. In affected families with documented RET mutations, basal and stimulated CT assays play pivotal roles in decisions regarding the timing of prophylactic surgery for mutation carriers (whereas individuals who test negative for the mutation need no further workup or surveillance) [7]. Periodic CT assays are also indicated for relatives of MTC patients belonging to the rare kindreds that are negative for RET mutations but still meet the clinical criteria for multiple endocrine neoplasia (MEN) 2A or MEN 2B [7]. As for the relatives of patients whose MTCs are not associated with RET mutations and are clearly sporadic, their risk for MTC is <1% [17], so they do not require CT surveillance [7]. More aggressive workup is needed, however, when thyroid nodules present clinical features that are suggestive of MTC (location in the upper third of a lobe, pain on palpation, hypoechogenicity with microcalcifications, lymph node abnormalities, and association with flushing and/or diarrhea). In these cases, fine-needle aspiration biopsy (FNAB) should be supplemented with serum CT assays. CT measurement should also be considered and discussed with patients whose FNAB is indeterminate or nondiagnostic and with those who have multinodular goiters characterized by cytology-negative dominant nodule(s). In all these cases, the results of testing have to be interpreted with an eye to the wide range of other conditions that can cause minimal or mild non–C-cell-related elevations in serum CT, including smoking, renal failure, autoimmune thyroiditis, nonthyroidal neuroendocrine neoplasms, and heterophilic antibodies [4].
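To make the prevalence argument concrete, the following is a minimal sketch in Python of how the positive predictive value of a screening test collapses when the target disease is rare. The prevalence, sensitivity, and specificity figures are hypothetical, chosen only for illustration; they are not taken from the MTC screening studies cited above.

# Minimal illustration of how low disease prevalence depresses the PPV of a
# screening test, even when sensitivity and specificity are high. All numbers
# are hypothetical and are not drawn from the MTC literature cited in the text.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assume MTC is present in ~0.4% of patients with thyroid nodules and that the
# basal CT assay is 98% sensitive and 95% specific (assumed values).
ppv = positive_predictive_value(prevalence=0.004, sensitivity=0.98, specificity=0.95)
print(f"PPV = {ppv:.1%}")  # about 7%: most positive basal CT results are false positives

Under these assumed figures, only about 7% of positive basal CT results would correspond to true MTC, which is why a confirmatory second level of testing (such as stimulated CT) or restriction of testing to higher-prevalence subgroups, as in selective case finding, is needed to raise the PPV.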

14.
With tremendous advances in sequencing and analysis in recent years, a wealth of genetic information has become available to identify and classify breast cancer into five main subtypes: luminal A, luminal B, claudin-low, human epidermal growth factor receptor 2-enriched, and basal-like. Current treatment decisions are often based on these classifications, and while more beneficial than any single treatment for all breast cancers, targeted therapeutics have exhibited limited success with most of the subtypes. Luminal B breast cancers are associated with early relapse following endocrine therapy and often exhibit a poor prognosis similar to that of the aggressive basal-like breast cancers. Identifying genetic components that contribute to the luminal B endocrine-resistant phenotype has become imperative. To this end, numerous groups have identified activation of the phosphatidylinositol 3-kinase (PI3K) pathway as a common recurring event in luminal B cancers with poor outcome. Examining the pathways downstream of PI3K, Fu and colleagues have recreated a human model of the luminal B subtype of breast cancer. The authors were able to reduce expression of phosphatase and tensin homolog (PTEN), the negative regulator of PI3K, using inducible short hairpin RNAs. By varying the expression of PTEN, the authors effectively conferred endocrine resistance and recapitulated the luminal B gene expression signature. Using this system in vitro and in vivo, they then tested the ability of selective kinase inhibitors downstream of PI3K to enhance current endocrine therapies. A combination of fulvestrant, which blocks ligand-dependent and -independent estrogen receptor signaling, with protein kinase B inhibition was found to overcome endocrine resistance. These findings squarely place PTEN expression levels at the nexus of luminal B breast cancers and indicate that patients with PTEN-low estrogen receptor-positive tumors might benefit from combined endocrine and PI3K pathway therapies.
The phosphatidylinositol 3-kinase (PI3K) pathway has been the focus of intense pre-clinical and clinical investigation due to its high frequency of alteration in human cancers. Luminal B breast cancers are no exception and have proven exceedingly difficult to treat clinically. With current endocrine therapies having limited success in luminal B breast cancer, Fu and colleagues in Breast Cancer Research [1] have tested several combinations of kinase inhibitors with antiestrogen treatment to determine whether this one-two punch is more effective at inhibiting breast cancer cell growth. Luminal B breast cancers typically exhibit activation of the PI3K pathway and have a worse outcome [2,3]. While luminal B tumors present a lower frequency of PIK3CA mutations than luminal A tumors, they do display a greater frequency of phosphatase and tensin homolog (PTEN) aberrations [4]. Consequently, these PTEN-reduced tumor cells display greater PI3K pathway activation [2] and resistance to endocrine therapies [5–8].
In the present study, Fu and colleagues [1] generated human estrogen receptor-positive (ER+) breast cancer cell lines that contained inducible PTEN short hairpin RNAs, thus allowing them to dial down PTEN expression to varying levels. Moderate decreases in PTEN expression resulted in hyperactivation of the PI3K pathway and a concomitant gene expression change most similar to luminal B breast cancers [9].
Notably, these changes were readily apparent with only moderate decreases in PTEN expression, arguing that complete loss of PTEN, as observed in many ER-negative breast cancers [10], is not required to elicit PI3K pathway hyperactivation in ER+ cells. Reduction in PTEN expression rendered these ER+ cells resistant to antiestrogen treatments, with the authors again dialing PTEN expression downward to achieve greater resistance in vitro and in vivo. These striking results provide the first substantial evidence that PTEN expression is intimately linked to antiestrogen sensitivity.
While targeted therapies for luminal B breast cancers have not yet become clinically apparent, the authors’ initial findings point towards heightened PI3K signaling as a possible key contributor to aberrant proliferation and tumorigenesis. Given their results linking PTEN levels and antiestrogen resistance, the authors sought to combine inhibition of PI3K and mitogen-activated protein kinase pathway components with antiestrogen treatment to elicit an anti-tumor response. Employing antiestrogen treatment with clinically relevant concentrations of mammalian target of rapamycin (mTOR), protein kinase B (AKT) and mitogen-activated protein kinase kinase inhibitors led to significant attenuation of cell growth. Furthermore, when antiestrogen treatment was combined with mTOR and AKT inhibitors, cell growth was markedly suppressed and apoptosis was increased, strongly indicating a synergistic effect of antiestrogen treatment alongside mTOR and AKT inhibition. Although the levels of PTEN and the antiestrogen used dictated the extent of the response in vitro, the in vivo combination of fulvestrant with an AKT inhibitor significantly accelerated tumor regression (three-fold) compared with either agent alone. While this study did not include direct PI3K pharmacological inhibitors, one might expect that a broad-spectrum anti-PI3K agent would also prove efficacious in combination with fulvestrant.
This study is consistent with previous work showing that PTEN loss and PIK3CA mutation are not mutually exclusive [11] and builds on evidence that PIK3CA mutations do not segregate with high or low PTEN-expressing tumors [10]. Moreover, PIK3CA mutations are associated with a better outcome in ER+ breast cancer, while PTEN deficiency is correlated with a poor prognosis [2,10]. However, these initial studies were somewhat limited by their binary (yes-or-no) assessment of PTEN expression. The current study implies that small changes in PTEN expression are sufficient to confer a growth advantage and a treatment-resistance phenotype on breast cancer cells. Thus, regardless of PIK3CA status, PTEN levels could be used as a predictive marker for endocrine therapy. However, a clear limitation of the current study is its heavy reliance on established breast cancer cell lines. Additional work in physiological settings (for example, patient-derived xenografts) would provide further validation that this might be a viable clinical strategy. While implementing a PTEN detection strategy and expression-level cutoff clinically could prove challenging, the utility of estrogen deprivation in combination with AKT inhibitors holds tremendous promise for effectively treating ER+ tumors with reduced PTEN.

15.
The phase II study of leuprolide for ovarian function preservation in hematopoietic stem cell transplantation patients by Cheng, Takagi, Milbourne et al. (The Oncologist 2012; 17:000–000) is reviewed.
The authors are congratulated for their prospective, phase II study of gonadotropin-releasing hormone analog (GnRH-a) cotreatment for preservation of ovarian function in young women undergoing hematopoietic stem cell transplantation (HSCT). The late effects of cancer treatment have recently attracted interest among a spectrum of health care providers, and protection against the iatrogenic infertility caused by gonadotoxic chemotherapy has assumed a high priority. Whereas several avenues are offered to patients before gonadotoxic chemotherapy, none is ideal and none guarantees survivors’ future fertility.
The authors enrolled 60 eligible patients; 59 underwent HSCT and 44 were evaluable (median age, 25 years; median follow-up, 355 days) [1]. Only seven of the 44 patients (16%) regained ovarian function. Of the 33 who received myeloablative regimens, six (18%) regained ovarian function, and among the 11 who received nonmyeloablative regimens, only one (9%) regained ovarian function (not significant). The authors concluded that the GnRH-a leuprolide did not preserve ovarian function in patients who underwent HSCT using either myeloablative or nonmyeloablative regimens [1]. However, despite the good intention of the authors, this study falls short of reaching solid conclusions because of several methodological problems.
Whereas in previous studies [2–9] the GnRH-a was administered before the initial chemotherapy, on first exposure to the gonadotoxic treatment, in the present study [1] the agonist was started 2 months before the conditioning chemotherapy preceding HSCT, regardless of previous chemotherapy exposure. Indeed, all patients except two had received at least one prior chemotherapy regimen. The median number of prior chemotherapy regimens before HSCT was two (range, 0–8), and 12 patients also received prior local radiation. Therefore, ovarian reserve was probably already affected by previous gonadotoxic exposure, and thus the starting point of the participating patients was suboptimal from the very beginning. Supporting this speculation are the relatively high gonadotropin levels permitted for study eligibility. For follicle-stimulating hormone (FSH) and luteinizing hormone measurements, normal premenopausal levels are 2.6–12.6 IU/L. The inclusion of patients with higher FSH concentrations, up to 20 IU/L, would include subjects with diminished ovarian reserve, for example, as a result of previous exposure to gonadotoxic chemotherapy and/or radiotherapy. Anecdotal case reports suggest that administration of a GnRH-a with every gonadotoxic chemotherapy regimen may be effective in preserving fertility. Such treatment has been associated with multiple spontaneous pregnancies and successful deliveries despite multiple autologous HSCTs, 10 years apart [10]. This is the only published case of repeated spontaneous pregnancies and two successful deliveries after repeated autologous HSCTs and GnRH-a treatment. It was reported in this journal and followed the report of similar GnRH-protected multiple pregnancies in a patient who had undergone allogeneic bone marrow transplantation. These cases suggest that the prepubertal milieu induced by GnRH-a cotreatment might have contributed to the preserved fertility despite repeated bone marrow transplant (BMT) [10, 11].
The odds against repeated pregnancies following one, or several, SCTs are exceedingly high [10–13].
Another methodological problem is the use of a very high dosage of the GnRH-a, twice the dosage used in previous studies [2–9]. Whereas previous studies used a monthly injection of 3.75 mg triptorelin or leuprolide (11.25 mg every 3 months) or 3.6 mg goserelin, the authors of the current study administered 22.5 mg leuprolide as a 3-month depot i.m. injection within 2 months of HSCT. These high doses brought about intolerable side effects and led nine of the 59 patients to refuse the second dose of leuprolide and leave the study [1]. Administration of the agonist 10–14 days before chemotherapy is sufficient to overcome the flare-up effect of the agonist and establishes the hypogonadotropic milieu before starting chemotherapy. Because >6 months of exposure to a hypogonadotropic milieu may be associated with irreversible bone loss, osteopenia, and osteoporosis, especially in hematologic patients receiving glucocorticoids, it is advisable to keep the hypogonadotropic period as short as possible. Therefore, it is probably unnecessary and potentially undesirable to administer a GnRH-a 2 months before starting chemotherapy [1].
The authors found that five of 15 patients who underwent autologous transplantation (33%) resumed cyclic ovarian activity, compared with only two of the 29 patients who underwent allogeneic transplantation (7%) (p = .04) [1]. This is in keeping with our results showing that GnRH-a administration could significantly minimize the rate of premature ovarian failure in young women survivors after autologous HSCT, but not in those who underwent allogeneic HSCT [14].
To draw unequivocal conclusions, a similar study should prospectively randomize patients and compare reproductive outcomes in those receiving a GnRH-a in parallel with every gonadotoxic chemotherapy exposure with the outcomes in those receiving the same chemotherapy without a GnRH-a. Such a study should have sufficient power, and the GnRH-a should be administered at the lowest effective dosage to minimize side effects and dropout and to keep the possible risk for osteopenia as low as possible. Such a prospective randomized controlled trial was recently published in young breast cancer patients, with clear and solid conclusions [3]. The clinical and tumor characteristics of the 133 patients randomized to chemotherapy alone and the 148 patients randomized to chemotherapy plus triptorelin were similar. Twelve months after the last cycle of chemotherapy (last follow-up, August 18, 2009), the rate of early menopause was 25.9% in the chemotherapy-alone group and 8.9% in the chemotherapy plus triptorelin group, an absolute difference of −17% (95% confidence interval, −26% to −7.9%; P < .001). The odds ratio for treatment-related early menopause was 0.28 (95% confidence interval, 0.14 to 0.59; P < .001). Triptorelin-induced temporary ovarian suppression during chemotherapy in premenopausal patients with early-stage breast cancer thus reduced the occurrence of chemotherapy-induced early menopause. The publication of similar studies of the reproductive efficacy of GnRH protection in SCT patients would be a significant step forward.
See the accompanying article on pages 233–238 of this issue.
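As a quick check on the headline arithmetic of the triptorelin trial quoted above, the short Python sketch below reproduces the absolute difference and the odds ratio from the two reported early-menopause rates. The confidence intervals are not recomputed here, because they depend on the exact patient counts and model used in the trial.

# Reproduce the headline arithmetic of the triptorelin trial quoted above:
# early menopause in 25.9% of the chemotherapy-alone group versus 8.9% of the
# chemotherapy-plus-triptorelin group.
p_control = 0.259      # chemotherapy alone
p_triptorelin = 0.089  # chemotherapy plus triptorelin

absolute_difference = p_triptorelin - p_control
odds_ratio = (p_triptorelin / (1 - p_triptorelin)) / (p_control / (1 - p_control))

print(f"Absolute difference: {absolute_difference:+.1%}")  # about -17%
print(f"Odds ratio: {odds_ratio:.2f}")                     # about 0.28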

16.
Lo SS, Albain KS. The Oncologist 2011;16(11):1482–1483
The study of Joh et al., published in this issue of The Oncologist, is reviewed.
There are a number of multigene assays that have been shown to reliably determine prognosis in patients with early-stage breast cancer [1–4]. However, only one test, the 21-gene recurrence score (RS) assay Oncotype DX®, has been studied for use in predicting chemotherapy benefit among patients randomized in phase III studies of chemotherapy plus tamoxifen versus tamoxifen alone [1, 3, 4]. Thus, this test is commonly used by physicians to help determine which patients with early-stage, estrogen receptor–positive breast cancer should receive adjuvant cytotoxic chemotherapy to reduce the likelihood of recurrence and improve survival over and above the benefit from endocrine therapy alone. In the current issue of The Oncologist, Joh et al. [5] present the results of a retrospective study of the impact of this assay on physician decision making in the adjuvant treatment setting. Their results are consistent with other studies published on this topic to date: the RS assay does indeed impact physician decision making, and the frequency with which it does so is consistent across studies.
Two published prospective studies showed that the results of the RS assay changed the medical oncologist’s adjuvant treatment recommendation pre- to post-RS assay result in 31.5% and 32% of cases, respectively [6, 7]. The majority of the changes were from chemotherapy plus hormone therapy to hormone therapy alone (22.5% and 21%) [6, 7]. A meta-analysis of nine studies, which also included seven retrospective studies, demonstrated a 36% overall rate of change in the treatment decision [8]. In patients for whom chemotherapy followed by hormone therapy was initially recommended, results of the RS assay led to 51% of cases being switched to a recommendation of hormone therapy alone. RS testing led to 13% of cases being switched from an initial recommendation of hormone therapy alone to chemotherapy followed by hormone therapy.
Although most of the studies included in this meta-analysis were from North American sites, the impact of the RS assay on decision making is also consistent on a global level. Similar results were reported in a U.K. study in which the initial treatment recommendation changed in 37.8% of cases [9], a German study in which the initial treatment recommendation changed in 38.1% of cases [10], and a prospective Spanish study that showed a 32% rate of treatment recommendation change [6].
The retrospective study of Joh et al. [5] is an interesting addition to the other published retrospective studies on this topic in that it also evaluated the decision making of other breast specialists (breast surgeons and breast pathologists) in addition to medical oncologists. Physicians more frequently overestimated the recurrence risk when using traditional clinical–pathologic tumor features than when using the RS. The authors make the point that, based on clinical–pathologic features alone, patients could often be overtreated with chemotherapy, because current studies indicate that patients with breast cancers of the biology reflected by a lower RS most likely gain little benefit from chemotherapy. It remains uncertain what the correct RS cutoff should be, below which chemotherapy could be omitted.
The results of the recently closed intergroup Trial Assigning IndividuaLized Options for treatment (Rx) [TAILORx], which randomized patients whose tumors had an RS of 11–25 to chemotherapy or not, are eagerly awaited to clarify the best adjuvant treatment approach for this group of women.
The authors appropriately comment on the limitations of their study. In addition to the points they raise, because the entire denominator of patients presenting to physicians was not included, some bias is perhaps introduced, as reflected in the high percentage of cases with a low RS. Furthermore, it is not clear whether the physicians participating in this investigation used tools such as Adjuvant! Online to formulate the pre-RS risk determination, or whether a proliferation index such as Ki-67 was routinely available.
In this study, the authors found that the RS significantly correlated with tumor grade, mitotic activity, lymphovascular invasion, hormone receptor status and human epidermal growth factor receptor 2/neu status, factors routinely found on a standard pathology report. They allude to an as-yet-unpublished algorithm by their group to predict an RS accurately. Indeed, a number of studies are under way globally to develop predictive tools for use when multigene assays such as Oncotype DX® are not available or cannot be afforded [11–13]. These algorithms require validation in phase III studies with an endocrine therapy–alone control arm. Then, users need to apply rigid quality control, especially for those algorithms dominated by immunohistochemical scoring of multiple single genes, because most of the studies reported to date employed central laboratories with expert breast pathologists. In particular, algorithms that provide chemotherapy prediction data note the dominant contribution of a proliferation index, with cutpoints not yet standardized [14]. Overall, it should be noted that cost savings from obtaining the RS assay and basing treatment choices on that result, rather than on clinical–pathologic variables alone, have been demonstrated in North American, European, and Asian countries [15–19].
This study, together with all others published to date, documents that the 21-gene RS assay does impact physician adjuvant treatment decision making, and does so to a similar magnitude worldwide. The randomized, ongoing Microarray In Node negative Disease may Avoid ChemoTherapy [MINDACT] study will inform the question of survival outcome when selecting therapy based on clinical–pathologic factors versus results of a multigene assay. However, it appears that the best predictive tools of the future for chemotherapy benefit will most likely combine clinical–pathologic and multigene variables in one algorithm [1, 20]. Until these new tools are validated, use of the 21-gene RS assay for treatment decision making remains a valid and useful option for many patients and their physicians.
Editor’s Note: See the accompanying article, “The effect of Oncotype DX recurrence score on treatment recommendations for patients with estrogen receptor positive early stage breast cancer and correlation with estimation of recurrence risk by breast cancer specialists,” by J.E. Joh, N.N. Esposito, J.V. Kiluk, et al., on pages 1520–1526 of this issue.

17.
Additional research is needed to improve the ability to detect life-threatening cancer at an early, curable stage and to prevent the development of such cancer. Many research groups are working to discover more effective and safer methods to detect and prevent life-threatening breast cancer. The results from such research studies will ultimately allow women’s expectations for breast cancer prevention and early detection to be met.
In his commentary titled “Breast Cancer Prevention: Can Women’s Expectations Be Met?” [1], Dr. Ponzone raises an important and timely question. Dr. Ponzone asks whether breast “cancer prevention” and “early detection” are attainable goals and whether these phrases have the same meaning to women at risk of breast cancer as to health professionals. This is a critically important issue, because researchers and health care providers strive to reduce the incidence of and mortality from breast cancer by working to develop safe and effective methods to prevent it.
As Dr. Ponzone points out, mammography “is not without its drawbacks” [1]. Mammography, although associated with reduced breast cancer-specific mortality in some studies [2, 3], has not been found to reduce breast cancer-specific mortality in others [4]. In addition, mammograms can detect noninvasive cancers, some of which might not evolve to invasive breast cancer (the problem of overdiagnosis) [5]. However, I believe it is misguided to conclude that “preventive measures for a given individual might have only modest impact” and that “efforts of cancer specialists should focus more on improving the length and quality of life of patients through therapeutic advances.” Although cancer specialists should work to develop more effective therapies for women with all stages of breast cancer, the greatest impact on breast cancer incidence and mortality will come from appropriately applying risk-based cancer preventive and early detection strategies.
The word “prevention” is often interpreted differently by the general population and health care providers. For health care experts, interventions that reduce the incidence of disease (in this case, cancer), even if incompletely, are considered to have prevented the disease in some individuals. However, for most of the general population, interventions that “prevent” disease are considered to be 100% effective (i.e., to reduce the incidence to zero) and to have minimal toxicity. The common perception is that an individual receiving preventive treatment will have no side effects and will never develop the disease to be prevented (cancer, in this case). The common example of such a “preventive intervention” is the polio vaccine, given in childhood with minimal toxicity and almost 100% efficacy [6]. Other acceptable “preventive interventions” include treatment with statins to reduce cholesterol levels to prevent heart disease [7], antihypertensive drugs to prevent strokes [8], and bisphosphonate drugs to prevent bone fractures [9]. However, in each of these cases, the intervention is neither 100% effective nor risk-free. It is remarkable that the general population accepts medical intervention to prevent heart disease, strokes, and bone fractures but often does not accept “preventive interventions” to prevent cancer.
There are currently available interventions that clearly prevent many breast cancers in high-risk women.
These include bilateral prophylactic mastectomy, which prevents up to 90% of breast cancers in very high-risk women [10, 11]; antiestrogen preventive therapy (with selective estrogen receptor modulators such as tamoxifen or raloxifene), which prevents approximately 50% of breast cancers [12]; and aromatase inhibitors, which prevent up to 70% of breast cancers in moderately high-risk women [13]. These interventions prevent breast cancer in many women but are often not accepted because of the possible side effects. The behavioral interventions that Dr. Ponzone mentions (avoidance of environmental carcinogens and lifestyle factors such as diet and exercise) likely also prevent some cancers; however, these highly tolerable interventions are less effective than the surgical or medical interventions mentioned. In clinical practice, these various preventive interventions are being used in a tiered fashion according to risk. Thus, for women at extremely high risk of breast cancer (such as those carrying BRCA1 or BRCA2 mutations), bilateral prophylactic mastectomies are considered and frequently performed. For women at moderately high risk (e.g., those with precancerous lesions such as atypical ductal hyperplasia), preventive therapy with tamoxifen, raloxifene, or an aromatase inhibitor is being prescribed and accepted by many women. The remaining women (those at low to moderate risk of breast cancer) might benefit from behavioral interventions such as exercise, diet, and alcohol avoidance alone. The current interest in healthy lifestyles has led Dr. Graham Colditz to suggest that, by avoiding exposure to carcinogens, receiving vaccination against oncogenic viruses, and implementing lifestyle measures to minimize tobacco use and obesity, it is possible to reduce cancer incidence by 50% or more [14]. Although it is currently difficult to determine whether an individual woman will benefit from these behavioral interventions, such measures are generally healthful and thus should be recommended.
Dr. Ponzone also cites the recent report by Tomasetti and Vogelstein as evidence that cancer prevention interventions are unlikely to be generally useful. Drs. Tomasetti and Vogelstein investigated the relationship between the lifetime risk of specific cancer types and the total number of divisions of “normal self-renewing cells” [15]. These investigators reached the provocative conclusion that only one third of cancer risk can be attributed to inherited predispositions or environmental factors, with the remaining two thirds of cancer risk attributable to random DNA mutations occurring in normal, noncancerous cells. These investigators attributed this random DNA mutation rate to “bad luck” and concluded that such findings suggest that cancer preventive interventions such as avoiding environmental or endogenous carcinogens will do little to reduce the risk of these cancers. The conclusion that much of cancer risk can be attributed to DNA mutations is certainly correct; however, the conclusion that the rate of DNA mutation has little to do with endogenous and exogenous exposure to carcinogens and mutagens is unlikely to be true.
The report by Drs. Tomasetti and Vogelstein has been criticized by others [16–18]. However, it is important to point out several major issues with their analysis here. Central to the study by Tomasetti and Vogelstein is the hypothesis that cancer risk can be directly related to the number of stem cell divisions in normal tissue [15].
In their report, they showed a positive linear relationship between the lifetime risk of cancer (abstracted from incidence data from the Surveillance, Epidemiology, and End Results Program database) and the number of stem cell divisions in normal tissues over an average lifetime (estimated from immunostaining for stem cell markers or from biologic studies). However, they carefully selected the tumor types to include in their study. Tomasetti and Vogelstein left out important common cancers that might not fit their linear relationship (e.g., breast, prostate, and ovary) [15]. Equally problematic is the “expansion” of some tumors into nontraditional subsets that are treated as separate tumors (e.g., splitting osteosarcomas into five different subtypes, each weighted equally with esophageal, testicular, and head and neck cancer). This process of selecting specific tumors that fit their hypothesis, and leaving out those that do not, greatly weakens the validity of their conclusion and does not allow their analysis to be generally applicable to many cancer types.
Dr. Ponzone also cites problems with the “early detection” of breast cancer. Mammograms are certainly able to detect breast cancer at an early stage. However, the current debate has been focused on whether mammograms detect too many cancers that are not life-threatening [2–5]. This problem of “overdiagnosis” of nonlethal cancers is a major focus of current early detection research. Similar to “prevention,” the phrase “early detection” often implies to the general population a test that is 100% effective in detecting cancer (i.e., is 100% sensitive), with no false-positive results (i.e., 100% specific). However, no screening test will be 100% sensitive and 100% specific. Although mammograms will not detect all breast cancers, currently, with computer-aided detection, mammograms are 85% sensitive and 92% specific [19]. Thus, mammography remains the reference standard breast screening test. However, a need certainly exists to develop breast screening tests that more effectively detect lethal cancers without identifying nonlethal cancers.
The concept of breast cancer “early detection” is also evolving. Clinicians now use a risk-based approach to detect breast cancer. For low- to average-risk women, the generally accepted screening guidelines for the general population are being used. Although debate is ongoing concerning the age at which mammographic screening should start (40 or 50 years or older) and whether mammograms should be obtained yearly or every other year [20–23], such screening approaches should be applied only to women at average (population) risk. For high-risk women, more aggressive screening approaches are generally used (and are authorized for payment by Medicare and insurance companies). For women with a lifetime risk of 20%–25% or higher, including women with BRCA1 or BRCA2 mutations, annual mammograms and annual breast magnetic resonance imaging scans have been recommended. Bilateral breast ultrasonography is also often added to mammography for breast cancer screening in women with lobular premalignant lesions (e.g., atypical lobular hyperplasia and lobular carcinoma in situ). Thus, a risk-based approach is also now being used for breast cancer screening.
So, are women’s expectations for breast cancer prevention and early detection being met? For the highest-risk women, the answer appears to be yes. However, for most women (in particular, those at low to moderate risk), the answer is clearly no.
For such women, it is clear that additional research is needed to improve the ability to detect life-threatening cancer at an early, curable stage and to prevent the development of these cancers. Many research groups are working to discover more effective and safer methods to detect and prevent life-threatening breast cancers. Promising prevention strategies include using novel medical therapies such as drugs targeting precancerous cells [24], natural products [25], cancer vaccines [26], and combinations of exercise, diet, and antidiabetic drugs such as metformin [27, 28]. Novel early detection strategies are also being developed that use blood-based DNA, RNA, or protein markers to detect life-threatening cancer [29]. The results from such research studies will ultimately allow women’s expectations for breast cancer prevention and early detection to be met.

18.
19.
The study by Partridge et al., published in this edition of The Oncologist, is examined.
Young breast cancer patients face different challenges at diagnosis and during treatment for breast cancer than postmenopausal patients do. Very young breast cancer patients, especially those aged <35 years, have been described in multiple reports to have a worse prognosis [1–3]. Additionally, because women are not routinely recommended to consider initiating screening mammography until their 40s or 50s, a younger woman is more likely to develop a cancer at an advanced stage and to be diagnosed because of the onset of symptoms. There remains concern that a delay in the diagnosis of breast cancer in a younger woman is one of the factors contributing to these worse outcomes in young breast cancer patients.
The American Cancer Society has estimated that ∼5% of breast cancer cases are diagnosed each year in women aged <40 years [4]. Several case series have demonstrated worse outcomes in younger breast cancer patients. Nixon et al. [1] reported on 107 patients aged <35 years with either stage I or stage II breast cancer and compared them with their older counterparts. The younger patients were found to have a significantly higher recurrence risk and risk of developing distant metastatic disease. Kroman et al. [3] reported that younger women who had node-negative cancers or tumors <2 cm were significantly more likely to die from their cancer if they did not receive systemic adjuvant therapy. However, this effect was mitigated if they received cytotoxic therapy. Race may also play a role in the diagnosis of breast cancer in young women. Black women have a twofold higher chance of being diagnosed at <35 years of age than white women. In addition, black women have higher presenting stages and higher mortality rates from breast cancer in the U.S. than white women [5, 6].
When looking at the underlying biology of younger women with breast cancer, the majority of reports show a higher rate of hormone receptor–negative tumors [1–3] than in postmenopausal breast cancer patients. When evaluating very low-risk disease, such as node-negative tumors that are ≤1 cm, age was identified as an independent risk factor for the recurrence-free survival interval. This was more significant in women with human epidermal growth factor receptor (HER)-2+ and hormone receptor–positive tumors and was not demonstrated to be significant in patients with triple-negative tumors [7]. Additionally, Anders et al. [8] used microarray data from 784 women with early-stage breast cancers, again demonstrating lower percentages of hormone receptor positivity and higher-grade tumors. There were 367 gene sets found to significantly differentiate the tumors arising in younger women from those in older women. Examples of gene sets that were differentiated by age included those encoding epidermal growth factor receptor and mammalian target of rapamycin as well as BRCA1, among many others [8]. This information was particularly intriguing and hypothesis-generating regarding combination therapy that could take advantage of such pathways in younger breast cancer patients. Therefore, the question remains: is it truly the age of the patient herself that is the underlying cause of this described worse prognosis, or is age just a surrogate marker for more aggressive tumor subtypes that occur more frequently in younger patients?
Another confounding factor in this debate is the question surrounding a delay in diagnosis.
There have been conflicting reports on whether a delay in diagnosis and a delay from the time of first symptom to the time of treatment initiation influence breast cancer prognosis. One large systematic review demonstrated that delays of 3–6 months from the time of diagnosis to the time of initiation of treatment were associated with a significantly lower 5-year survival rate [9]. However, it was noted that, in the included studies that did account for stage, a delay was not associated with a worse overall survival outcome. Younger patients who had not previously been designated as high risk and had not undergone earlier screening would be expected to present with more advanced stages of disease than postmenopausal women whose cancer was diagnosed on screening mammography. In the most recent screening mammography recommendations of the U.S. Preventive Services Task Force, mammogram recommendations were changed to start after age 49, to increase the interval between screenings, and to individualize the decision for women in their 40s [10]. This change was a result of the lack of available randomized data showing a lower mortality rate with screening in this age group, although recently Hellquist et al. [11] used data collected from the Swedish mammography screening program to evaluate their experience with screening women aged 40–49 years and showed a statistically significant relative risk for mortality of 0.71 for those who underwent screening, estimating that 1,252 women needed to be screened to save one life. There currently are no recommendations for screening mammography in women aged <40 years unless they have been identified as being at high risk for developing breast cancer, for example, women with a hereditary predisposition such as a BRCA1 or BRCA2 mutation. Therefore, because routine screening strategies currently target women aged >40 years, except in special populations, younger women who develop breast cancer are more likely to present with clinical symptoms.
In the current retrospective study by Partridge et al. [12], the authors use the National Comprehensive Cancer Network database to evaluate 21,818 women with breast cancer, stages I–IV, of whom 2,445 were aged ≤40 years. The purpose of the study was to evaluate whether age alone was an independent risk factor for a delay from the time of the first symptom to the time the woman sought evaluation and was initially diagnosed with breast cancer. This study therefore asks a slightly different question: does age factor into actually seeking evaluation and undergoing diagnostic procedures? Additionally, the authors evaluated other socioeconomic factors, such as race, education, employment status, and type of initial sign or symptom. When evaluating age as a factor in delay to breast cancer diagnosis, age ≤40 years was initially found to be associated with a >60-day delay in diagnosis (odds ratio [OR], 1.52; 95% confidence interval [CI], 1.39–1.67; p < .0001). However, when the multivariate model was adjusted for the initial sign or symptom, there was no longer a statistically significant difference. Also in multivariate modeling, women who had an initial sign or symptom had a significantly greater association with a >60-day delay in diagnosis (OR, 3.31; 95% CI, 3.08–3.56).
Women with screening-detected tumors were more likely to be diagnosed with an earlier stage of breast cancer (64% of women whose cancer was detected by screening were diagnosed with stage I tumors, compared with 28% of women who presented with an initial sign or symptom).
This was a very well-conducted study using a very detailed database that spans several cancer centers throughout the U.S. The study has the expected limitations of a retrospective database study, including recall bias and the lack of information regarding previous screenings and whether patients had been identified as having a high risk for developing breast cancer. Also, the women who were included had been referred to or sought treatment at a large comprehensive cancer center and may not reflect presentation patterns elsewhere. It would be interesting to know whether the extent of the delay differs not by age but by tumor biology: if a tumor has a more aggressive growth pattern, such as a triple receptor–negative or HER-2+ tumor, will that more substantially influence the time to diagnosis? For this analysis, the authors did not include tumor receptor information in their modeling. With these inherent limitations understood, the study does provide insight into whether age is a barrier to seeking treatment and being diagnosed with breast cancer.
The failure of age to be consistently associated with a delay in the diagnosis of breast cancer in this study when controlling for stage of disease leads back to the original concern: is it age or is it biology? If a delay does not appear to be a significant factor for young breast cancer patients, and thus is unlikely to be the underlying cause of worse outcomes, the underlying biology of the disease that commonly presents in younger women should continue to be the main target. It will also be important to identify those women diagnosed at a young age with less biologically aggressive tumors who should be spared systemic therapy. To improve our therapies as well as patient outcomes, we need to continue to develop strategies for identifying women at higher risk for developing breast cancer, for improving knowledge on the importance of family history, for improving genetic risk assessment, for identifying known and yet-to-be-identified hereditary cancer syndromes, and, finally, for personalizing screening, prevention, and treatment strategies.
See the accompanying article on pages 775–782 of this issue.

20.
A meta-analysis of epidemiological studies reported no increased risk for cancer in users of thiazolidinediones; however, subanalyses showed a small 1.1- to 1.2-fold increased risk for bladder cancer with thiazolidinedione use. This analysis was probably distorted by “duplicate publication bias.”
A meta-analysis of epidemiological studies reported no increased risk for cancer in users of thiazolidinediones (TZDs). Subanalyses showed a small 1.1- to 1.2-fold increased risk for bladder cancer with TZD use [1]. This analysis was probably distorted by “duplicate publication bias,” because it included three different studies that used the same data source, the United Kingdom General Practice Research Database [2–4]. One study evaluated breast cancer [2], and the other two studies evaluated bladder cancer [3, 4]. One of the basics of meta-analysis is that it should not include correlated data. Although the periods of data collection and the choices of study design differed, the study populations in these papers overlapped substantially. As a result, the statistical power of the meta-analysis is artificially increased. Because every study showed a positive association between TZDs and cancer, the pooled effect estimate is likely to be overestimated, in particular for bladder cancer [5]. This has previously been demonstrated in trials of the efficacy of ondansetron to prevent postoperative nausea and vomiting [6]. Sensitivity analysis (exclusion of duplicate studies) is probably a useful technique to deal with this issue. We wonder what the overall findings would have been had only one General Practice Research Database study been included in each (sub)analysis presented [1].
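To illustrate why counting overlapping studies inflates statistical power, here is a minimal Python sketch of fixed-effect, inverse-variance pooling of log odds ratios. The effect estimates and standard errors are invented for illustration and are not the estimates from the thiazolidinedione studies discussed above; the point is only that counting one data source several times shrinks the pooled standard error without adding information and can turn a borderline association into an apparently significant one.

import math

# Fixed-effect, inverse-variance pooling of log odds ratios.
# All effect estimates and standard errors below are invented for illustration;
# they are not the estimates from the TZD studies discussed in the text.
def pooled_or(log_ors, ses):
    weights = [1.0 / se ** 2 for se in ses]
    est = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return tuple(math.exp(v) for v in (est, est - 1.96 * se, est + 1.96 * se))

# Each data source counted once: a borderline, non-significant association.
once = pooled_or([0.11, 0.13, 0.09], [0.105, 0.12, 0.13])

# The first data source contributes three overlapping papers: its weight is
# tripled, the pooled standard error shrinks, and the 95% CI now excludes 1.0
# even though no new information has been added.
triplicated = pooled_or([0.11, 0.11, 0.11, 0.13, 0.09],
                        [0.105, 0.105, 0.105, 0.12, 0.13])

print("counted once: OR %.2f (95%% CI %.2f-%.2f)" % once)         # OR 1.12 (0.98-1.28)
print("triplicated:  OR %.2f (95%% CI %.2f-%.2f)" % triplicated)  # OR 1.12 (1.01-1.23)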
