Similar articles
20 similar articles found (search time: 890 ms)
1.
Primary vesicoureteral reflux (pVUR) is one of the most common causes of pediatric kidney failure. Linkage scans suggest that pVUR is genetically heterogeneous with two loci on chromosomes 1p13 and 2q37 under autosomal dominant inheritance. Absence of pVUR in parents of affected individuals raises the possibility of a recessive contribution to pVUR. We performed a genome-wide linkage scan in 12 large families segregating pVUR, comprising 72 affected individuals. To avoid potential misspecification of the trait locus, we performed a parametric linkage analysis using both dominant and recessive models. Analysis under the dominant model yielded no signals across the entire genome. In contrast, we identified a unique linkage peak under the recessive model on chromosome 12p11-q13 (D12S1048), which we confirmed by fine mapping. This interval achieved a peak heterogeneity LOD score of 3.6 with 60% of families linked. This heterogeneity LOD score improved to 4.5 with exclusion of two high-density pedigrees that failed to link across the entire genome. The linkage signal on chromosome 12p11-q13 originated from pedigrees of varying ethnicity, suggesting that recessive inheritance of a high frequency risk allele occurs in pVUR kindreds from many different populations. In conclusion, this study identifies a major new locus for pVUR and suggests that in addition to genetic heterogeneity, recessive contributions should be considered in all pVUR genome scans.

Vesicoureteral reflux (VUR; OMIM no. 193000) is the retrograde flow of urine from the bladder to the ureters and the kidneys during micturition.
Uncorrected, VUR can lead to repeated urinary tract infections, renal scarring and reflux nephropathy, accounting for up to 25% of pediatric end stage renal disease.1,2 VUR is commonly seen as an isolated disorder (primary VUR; pVUR), but it can also present in association with complex congenital abnormalities of the kidney and urinary tract or with specific syndromic disorders, such as renal-coloboma and branchio-oto-renal syndromes.3–8

pVUR has a strong hereditary component, with monozygotic twin concordance rates of 80%.9–12 Sibling recurrence rates of 30% to 65% have suggested segregation of a single gene or oligogenes with large effects.9,12–14 Interestingly, however, the three published genome-wide linkage scans of pVUR have strongly suggested multifactorial determination.15–17 Two pVUR loci have been identified with genome-wide significance on chromosomes 1p13 and 2q37 under an autosomal dominant transmission with locus heterogeneity.15,16 Multiple suggestive signals have also been reported, but remarkably, these studies show little overlap.15–17 These data suggest that pVUR may be extremely heterogeneous, with mutations in different genes each accounting for a fraction of cases. The genes underlying pVUR loci have not yet been identified, but two recent studies have reported segregating mutations in the ROBO2 gene in up to 5% of pVUR families.18,19

Despite evidence for genetic heterogeneity and different subtypes of disease, genetic studies have all modeled pVUR as an autosomal dominant trait.15–17,20 Recessive inheritance has generally not been considered because the absence of affected parents can be explained by spontaneous resolution of pVUR with older age.
However, many pVUR cohorts are composed of affected sibships or pedigrees compatible with autosomal recessive transmission, suggesting the potential for alternative modes of inheritance.9–12,16,17,20–22 Systematic family screening to clarify the mode of inheritance is not feasible for pVUR because the standard diagnostic tool, the voiding cystourethrogram (VCUG), is invasive and would expose participants to radiation. Formal assessment of a recessive contribution in sporadic pVUR has also been difficult because studies have been conducted in populations with low consanguinity rates.9–12,16,17,20–22 However, recent studies have identified an unexpected recessive contribution to several complex traits such as patent ductus arteriosus or autism.23,24 Thus, in addition to genetic heterogeneity, genes with alternative modes of transmission may segregate among pVUR families, and misspecification of the inheritance model may complicate mapping studies of this trait.

Several approaches can be considered to address the difficulties imposed by complex inheritance, variable penetrance, and genetic heterogeneity. Studying large, well-characterized cohorts with newer single-nucleotide polymorphism (SNP)-based technologies can maximize inheritance information across the genome and increase the power of linkage studies.25 In addition, in the setting of locus heterogeneity and uncertainty about the mode of transmission, analysis under both a dominant and a recessive model has greater power compared with nonparametric methods and more often results in detection of the correct mode of transmission without incurring a significant penalty for multiple testing.26–29 We combined these approaches in this study and successfully localized a major gene for VUR, which unexpectedly demonstrates autosomal recessive transmission.
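The two-point LOD statistic behind linkage peaks like these compares the likelihood of the observed meioses at a candidate recombination fraction θ against free recombination (θ = 0.5). A minimal sketch for a single fully informative pedigree, with hypothetical counts; real analyses sum pedigree likelihoods over families and model locus heterogeneity (the HLOD reported in this study):

```python
import math

def lod_score(recombinants: int, meioses: int, theta: float) -> float:
    """Two-point LOD: log10 of L(theta) / L(theta = 0.5) for a fully
    informative pedigree with binomially distributed recombinants."""
    k, n = recombinants, meioses
    l_theta = (theta ** k) * ((1 - theta) ** (n - k))
    l_null = 0.5 ** n
    return math.log10(l_theta / l_null)

# Zero recombinants in 12 informative meioses already clears the
# classical threshold of 3.0 at tight linkage (theta near 0).
print(round(lod_score(0, 12, 0.001), 2))  # → 3.61
```

In practice, linkage packages maximize this likelihood over θ (and, for heterogeneity LOD scores, over the proportion of linked families) rather than evaluating it at a fixed value.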

2.
Late referral of patients with chronic kidney disease is associated with increased morbidity and mortality, but the contribution of center-to-center and geographic variability of pre-ESRD nephrology care to mortality of patients with ESRD is unknown. We evaluated the pre-ESRD care of >30,000 incident hemodialysis patients, 5088 (17.8%) of whom died during follow-up (median 365 d). Approximately half (51.3%) of incident patients had received at least 6 mo of pre-ESRD nephrology care, as reported by attending physicians. Pre-ESRD nephrology care was independently associated with survival (odds ratio 1.54; 95% confidence interval 1.45 to 1.64). There was substantial center-to-center variability in pre-ESRD care, which was associated with increased facility-specific death rates. As the proportion of patients who were in a treatment center and receiving pre-ESRD nephrology care increased from lowest to highest quintile, the mortality rate decreased from 19.6 to 16.1% (P = 0.0031). In addition, treatment centers in the lowest quintile of pre-ESRD care were clustered geographically. In conclusion, pre-ESRD nephrology care is highly variable among treatment centers and geographic regions. 
Targeting these disparities could have substantial clinical impact, because the absence of ≥6 mo of pre-ESRD care by a nephrologist is associated with a higher risk for death.

Nephrology care before starting hemodialysis (HD) is an important determinant of health status of patients with ESRD1,2 and is associated with hypoalbuminemia,3 anemia,4 absence of a functioning arteriovenous vascular access,5 reduced quality of life,6 and decreased kidney transplantation.7 Delayed care is associated with progression of kidney disease8,9 and increased mortality after start of HD.10–13 Early nephrology referral for individuals with chronic kidney disease (CKD) is recommended14,15 for creation of an arteriovenous fistula (AVF) 6 mo before the anticipated start of HD.16

Despite these guidelines, incident patients with ESRD frequently present without antecedent nephrology care.17 Differences between treatment center and geographic areas, similar to variations reported for the care of prevalent patients with ESRD, are possible factors that might contribute to variable pre-ESRD care.17–19 If clinically relevant center-to-center and geographic variations in pre-ESRD care exist, then interventions might be designed to reduce the risk for delayed or absent care. This report describes the variable prevalence and clinical consequences for both individual patients and their treatment center populations of delayed pre-ESRD nephrology care in a large population-based sample of incident patients with ESRD.
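The adjusted odds ratio and confidence interval quoted above come from multivariable modeling of the full cohort, but the underlying machinery is the log-odds-ratio standard error from a 2×2 table. A sketch using Woolf's method, with hypothetical counts (survival with versus without ≥6 mo of pre-ESRD care):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio for a 2x2 table (a/b = outcome yes/no in the exposed
    group, c/d = outcome yes/no in the unexposed group), with a Woolf
    95% confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, not the study's data.
or_, lo, hi = odds_ratio_ci(820, 180, 730, 270)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI whose lower bound stays above 1.0, as in the study's 1.45 to 1.64, indicates an association unlikely to be explained by sampling variation alone.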

3.
An uncontrolled trial reported that sodium thiosulfate reduces formation of calcium kidney stones in humans, but this has not been established in a controlled human study or animal model. Using the genetic hypercalciuric rat, an animal model of calcium phosphate stone formation, we studied the effect of sodium thiosulfate on urine chemistries and stone formation. We fed genetic hypercalciuric rats normal food with or without sodium thiosulfate for 18 wk and measured urine chemistries, supersaturation, and the upper limit of metastability of urine. Eleven of 12 untreated rats formed stones compared with only three of 12 thiosulfate-treated rats (P < 0.002). Urine calcium and phosphorus were higher and urine citrate and volume were lower in the thiosulfate-treated rats, changes that would increase calcium phosphate supersaturation. Thiosulfate treatment lowered urine pH, which would lower calcium phosphate supersaturation. Overall, there were no statistically significant differences in calcium phosphate supersaturation or upper limit of metastability between thiosulfate-treated and control rats. In vitro, thiosulfate only minimally affected ionized calcium, suggesting a mechanism of action other than calcium chelation. In summary, sodium thiosulfate reduces calcium phosphate stone formation in the genetic hypercalciuric rat. Controlled trials testing the efficacy and safety of sodium thiosulfate for recurrent kidney stones in humans are needed.

Nephrolithiasis is one of the most common disorders of the urinary tract, affecting approximately 12% of men and 6% of women during their lifetimes in industrialized countries.1 Approximately 80% of kidney stones are composed primarily of calcium salts. Despite the high prevalence of kidney stone disease, there has been little progress in developing new therapies to prevent stone formation, especially in patients who have formed a kidney stone and who are at significantly increased risk for forming additional stones.
The lack of progress in identifying new therapies for nephrolithiasis has been disappointing to the many patients who experience recurrent stone formation.

Sodium thiosulfate (STS), Na2S2O3, is a compound with a long history of medicinal use.2,3 Currently, it is used for treatment of cyanide toxicity and as a neutralizing agent to reduce the toxicity of cisplatin chemotherapy.4–6 The effectiveness of STS in these diseases lies in its antioxidant activity and the availability of a sulfur group for donation. In 1985, Yatzidis7 reported on using STS as treatment for recurrent calcium nephrolithiasis. In a 4-yr study of 34 patients, he reported an 80% reduction in stone rates, compared with the patients’ own pretreatment stone formation rate. Unfortunately, no follow-up, prospective, controlled trials have been performed to determine the effectiveness of STS in preventing recurrent stone formation; however, anecdotal reports of successful treatment of calciphylaxis with STS in patients with end-stage kidney disease have stimulated interest in this compound as a potential therapy for disorders of calcium deposition, including stone disease.8–16 The mechanism by which STS affects calcium deposition is not known.

Before pursuing new studies in humans, we chose first to study this drug in the genetic hypercalciuric stone-forming (GHS) rats because 40 to 50% of humans with kidney stones will have hypercalciuria, making it the most common metabolic abnormality.
The GHS rat colony has been bred for hypercalciuria and now excretes approximately 8 to 10 times more urine calcium than similarly fed control rats.17 The pathophysiology of the hypercalciuria seems similar to that in humans in that it involves intestinal hyperabsorption,18,19 reduced renal tubular reabsorption,20 and increased bone mineral lability.21,22 Virtually all of the GHS rats form kidney stones, whereas control rats have no evidence of stone formation.23 On a standard rat diet, the kidney stones formed contain only calcium and phosphate.24 Here we report the results of a controlled trial to determine whether STS reduces stone formation in an animal model of spontaneous calcium phosphate stone formation.

4.
Chronic kidney disease (CKD) guidelines recommend evaluating patients with GFR <60 ml/min per 1.73 m2 for complications, but little evidence supports the use of a single GFR threshold for all metabolic disorders. We used data from the NephroTest cohort, including 1038 adult patients who had stages 2 through 5 CKD and were not on dialysis, to study the occurrence of metabolic complications. GFR was measured using renal clearance of 51Cr-EDTA (mGFR) and estimated using two equations derived from the Modification of Diet in Renal Disease study. As mGFR decreased from 60 to 90 to <20 ml/min per 1.73 m2, the prevalence of hyperparathyroidism increased from 17 to 85%, anemia from 8 to 41%, hyperphosphatemia from 1 to 30%, metabolic acidosis from 2 to 39%, and hyperkalemia from 2 to 42%. Factors most strongly associated with metabolic complications, independent of mGFR, were younger age for acidosis and hyperphosphatemia, presence of diabetes for acidosis, diabetic kidney disease for anemia, and both male gender and the use of inhibitors of the renin-angiotensin system for hyperkalemia. mGFR thresholds for detecting complications with 90% sensitivity were 50, 44, 40, 39, and 37 ml/min per 1.73 m2 for hyperparathyroidism, anemia, acidosis, hyperkalemia, and hyperphosphatemia, respectively. Analysis using estimated GFR produced similar results. 
In summary, this study describes the onset of CKD-related complications at different levels of GFR; anemia and hyperparathyroidism occur earlier than acidosis, hyperkalemia, and hyperphosphatemia.

Since the National Kidney Foundation published its definition and classification of chronic kidney disease (CKD),1 evidence has accumulated showing that it is a common disease,2,3 associated with morbidity and mortality risks far broader and higher than those of simple progression to kidney failure.4–6 Early detection of CKD and its metabolic complications is now a priority for delaying disease progression and for primary prevention of many CKD-associated chronic diseases, including cardiovascular, mineral, and bone diseases5,7–9; however, data on the natural history of these complications according to reference methods are sparse, and there is little evidence about the most appropriate timing for their detection.

CKD metabolic complications, which include anemia, metabolic acidosis, and mineral and electrolyte disorders, may be asymptomatic for a long time.10–21 According to Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines,1 all patients at stage 3 CKD or above (i.e., those with a GFR <60 ml/min per 1.73 m2) should be evaluated for all complications. This threshold, however, was defined from clinical and population-based studies, all of which used equation-estimated GFR (eGFR),1 a method sensitive to both the choice of equation and serum creatinine (Scr) calibration, particularly for the highest GFR values.22,23 Population-based studies, with one exception,24 have also lacked the power to search for complication-specific GFR thresholds below 60 ml/min per 1.73 m2.
Moreover, although a few studies showed the influence of some patient characteristics, such as ethnic origin and diabetes, on the prevalence of various complications,24–29 neither their potential impact nor the effect of clinical factors on metabolic disorders has been investigated systematically.

Our primary purpose, therefore, was to define GFR thresholds, measured with a reference method (mGFR: 51Cr-EDTA renal clearance), and factors associated with CKD-related metabolic complications in a clinical cohort of 1038 patients with stages 2 through 5 CKD. Because mGFR is rarely performed in clinical practice, we also estimated these thresholds with eGFR and studied how the results differed according to method.
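A complication-specific threshold "with 90% sensitivity," as reported above, is simply the lowest GFR cutoff below which at least 90% of affected patients fall. A minimal sketch on synthetic data; the values here are hypothetical, whereas the study derived its thresholds from the NephroTest measurements:

```python
def threshold_for_sensitivity(gfr, affected, target=0.90):
    """Smallest GFR cutoff t such that screening everyone with
    GFR <= t captures at least `target` of affected patients."""
    affected_gfr = sorted(g for g, a in zip(gfr, affected) if a)
    n = len(affected_gfr)
    # Candidate cutoffs need only be the affected patients' own values.
    for t in affected_gfr:
        if sum(g <= t for g in affected_gfr) / n >= target:
            return t
    return None

# Hypothetical mGFR values and anemia flags for 10 patients.
gfr = [18, 22, 25, 31, 36, 44, 52, 61, 70, 85]
anemia = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(threshold_for_sensitivity(gfr, anemia))  # → 44
```

Complications that appear earlier in CKD (here, anemia at higher GFR) push this cutoff upward, which is the pattern behind the study's ordering of thresholds from 50 down to 37 ml/min per 1.73 m2.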

5.
Administration of activated protein C (APC) protects from renal dysfunction, but the underlying mechanism is unknown. APC exerts both antithrombotic and cytoprotective properties, the latter via modulation of protease-activated receptor-1 (PAR-1) signaling. We generated APC variants to study the relative importance of the two functions of APC in a model of LPS-induced renal microvascular dysfunction. Compared with wild-type APC, the K193E variant exhibited impaired anticoagulant activity but retained the ability to mediate PAR-1-dependent signaling. In contrast, the L8W variant retained anticoagulant activity but lost its ability to modulate PAR-1. By administering wild-type APC or these mutants in a rat model of LPS-induced injury, we found that the PAR-1 agonism, but not the anticoagulant function of APC, reversed LPS-induced systemic hypotension. In contrast, both functions of APC played a role in reversing LPS-induced decreases in renal blood flow and volume, although the effects on PAR-1-dependent signaling were more potent. Regarding potential mechanisms for these findings, APC-mediated PAR-1 agonism suppressed LPS-induced increases in the vasoactive peptide adrenomedullin and infiltration of iNOS-positive leukocytes into renal tissue. However, the anticoagulant function of APC was responsible for suppressing LPS-induced stimulation of the proinflammatory mediators ACE-1, IL-6, and IL-18, perhaps accounting for its ability to modulate renal hemodynamics. Both variants reduced active caspase-3 and abrogated LPS-induced renal dysfunction and pathology. 
We conclude that although PAR-1 agonism is solely responsible for APC-mediated improvement in systemic hemodynamics, both functions of APC play distinct roles in attenuating the response to injury in the kidney.

Acute kidney injury (AKI) leading to renal failure is a devastating disorder,1 with a prevalence varying from 30 to 50% in the intensive care unit.2 AKI during sepsis results in significant morbidity and is an independent risk factor for mortality.3,4 In patients with severe sepsis or shock, the reported incidence ranges from 23 to 51%,5–7 with mortality as high as 70%, versus 45% among patients with AKI alone.1,8

The pathogenesis of AKI during sepsis involves hemodynamic alterations along with microvascular impairment.4 Although many factors change during sepsis, suppression of the plasma serine protease, protein C (PC), has been shown to be predictive of early death in sepsis models,9 and clinically has been associated with early death resulting from refractory shock and multiple organ failure in severe sepsis.10 Moreover, low levels of PC have been highly associated with renal dysfunction and pathology in models of AKI.11 During vascular insult, PC becomes activated by the endothelial thrombin-thrombomodulin complex, and the activated protein C (APC) exhibits both antithrombotic and cytoprotective properties.
We have previously demonstrated that APC administration protects from renal dysfunction during cecal ligation and puncture and after endotoxin challenge.11,12 In addition, recombinant human APC [drotrecogin alfa (activated)] has been shown to reduce mortality in patients with severe sepsis at high risk of death.13 Although the ability of APC to protect from organ injury in vivo is well documented,11,14,15 the precise mechanism mediating the response has not been ascertained.

APC exerts anticoagulant properties via feedback inhibition of thrombin by cleavage of factors Va and VIIIa.16 However, APC bound to the endothelial protein C receptor (EPCR) can also exhibit direct potent cytoprotective properties by cleaving protease-activated receptor-1 (PAR-1).17 Various cell culture studies have demonstrated that the direct modulation of PAR-1 by APC results in cytoprotection by several mechanisms, including suppression of apoptosis,18,19 leukocyte adhesion,19,20 inflammatory activation,21 and suppression of endothelial barrier disruption.22,23 In vivo, the importance of the antithrombotic activity of APC is well established in model systems24,25 and in humans.26 However, the importance of PAR-1-mediated effects of APC also has been clearly defined in protection from ischemic brain injury27 and in sepsis models.28 Hence, there has been significant debate whether the in vivo efficacy of APC is attributed primarily to its anticoagulant (inhibition of thrombin generation) or cytoprotective (PAR-1-mediated) properties.17,29

The same active site of APC is responsible for inhibition of thrombin generation by the cleavage of factor Va and for PAR-1 agonism. Therefore, we sought to generate point mutations that would not affect catalytic activity but would alter substrate recognition, to distinguish the two functions. Using these variants, we examined the relative role of the two known functions of APC in a model of LPS-induced renal microvascular dysfunction.

6.
Donor characteristics such as age and cause of death influence the incidence of delayed graft function (DGF) and graft survival; however, the relative influence of donor characteristics (“nature”) versus transplant center characteristics (“nurture”) on deceased-donor kidney transplant outcomes is unknown. We examined the risks for DGF and allograft failure within 19,461 recipient pairs of the same donor’s kidneys using data from the US Renal Data System. For the 11,894 common-donor pairs transplanted at different centers, a recipient was twice as likely to develop DGF when the recipient of the contralateral kidney developed DGF (odds ratio [OR] 2.05; 95% confidence interval [CI] 1.82 to 2.30). Similarly, for 7567 common-donor pairs transplanted at the same center, the OR for DGF was 3.02 (95% CI 2.62 to 3.48). For pairs transplanted at the same center, there was an additional 42% risk for DGF compared with pairs transplanted at different centers. After adjustment for DGF, the within-pair ORs for allograft failure by 1 yr were 1.92 (95% CI 1.33 to 2.77) and 1.77 (95% CI 1.25 to 2.52) for recipients who underwent transplantation at the same center and different centers, respectively.
These data suggest that both unmeasured donor characteristics and transplant center characteristics contribute to the risk for DGF and that the former also contribute significantly to allograft failure.

Delayed graft function (DGF) is an important predictor of graft failure after kidney transplantation.1–3 The incidence of DGF after deceased-donor kidney transplants ranges between 23 and 50%.4–6 Although some studies have been mixed, several large studies have shown that DGF influences graft failure both through its association with and independent of acute rejection.5,7–10 DGF also adversely affects cost, length of hospitalization, and patient rehabilitation.11–13 Allograft failure results in half of the deceased-donor kidneys being lost at 11 yr after transplantation.14

There are many known determinants of DGF and allograft failure. Studies have implicated a number of immunologic and nonimmunologic characteristics, including donor factors, recipient factors, and the transplant procedure.4,6,15–21 A limited effort has been made to evaluate the relative contribution of these risk factors by exploiting the variation in the response of recipients of kidneys from the same donor.18,22–24 This approach is similar to studies of monozygotic twins reared apart, which seek to quantify the relative importance of environmental and genetic factors on the basis of variability within twin pairs and among twin pairs.22,25 Analyses that examine outcomes in two recipients of kidneys from the same deceased donor can be used to determine the donor’s relative contribution to the recipients’ outcomes.

We retrospectively evaluated a national cohort of deceased-donor transplant recipients to better understand the complex relationship between donor (“nature”) and transplant center effects (“nurture”) associated with DGF and kidney allograft failure.
We examined the within-pair correlation of these outcomes among recipients of kidneys from the same deceased donor and adjusted for transplant center effect by estimating separate odds ratios (ORs) for recipient pairs who underwent transplantation at the same transplant center and at different transplant centers. The transplant center effect was detected by determining the difference in outcomes for the paired kidneys from the same deceased donor transplanted at the same versus different centers.

7.
Despite the high prevalence of chronic kidney disease (CKD), relatively few individuals with CKD progress to ESRD. A better understanding of the risk factors for progression could improve the classification system of CKD and strategies for screening. We analyzed data from 65,589 adults who participated in the Nord-Trøndelag Health (HUNT 2) Study (1995 to 1997) and found 124 patients who progressed to ESRD after 10.3 yr of follow-up. In multivariable survival analysis, estimated GFR (eGFR) and albuminuria were independently and strongly associated with progression to ESRD: Hazard ratios for eGFR 45 to 59, 30 to 44, and 15 to 29 ml/min per 1.73 m2 were 6.7, 18.8, and 65.7, respectively (P < 0.001 for all), and for micro- and macroalbuminuria were 13.0 and 47.2 (P < 0.001 for both). Hypertension, diabetes, male gender, smoking, depression, obesity, cardiovascular disease, dyslipidemia, physical activity and education did not add predictive information. Time-dependent receiver operating characteristic analyses showed that considering both the urinary albumin/creatinine ratio and eGFR substantially improved diagnostic accuracy. Referral based on current stages 3 to 4 CKD (eGFR 15 to 59 ml/min per 1.73 m2) would include 4.7% of the general population and identify 69.4% of all individuals progressing to ESRD. Referral based on our classification system would include 1.4% of the general population without losing predictive power (i.e., it would detect 65.6% of all individuals progressing to ESRD). 
In conclusion, all levels of reduced eGFR should be complemented by quantification of urinary albumin to optimally predict progression to ESRD.

Since the publication of the Kidney Disease Outcomes Quality Initiative (K/DOQI) clinical practice guidelines on the classification of chronic kidney disease in 2002,1 several studies based on this classification system have shown very high prevalence estimates of chronic kidney disease (CKD) in the general population (10 to 13%).2,3 Screening for CKD is therefore increasingly suggested1,4; however, only a small proportion of patients with stage 3 to 4 CKD progress to ESRD.5 There is an ongoing discussion on whether the current CKD criteria are appropriate.6–8 Developing a risk score to better identify the patients who are at increased risk for ESRD would be of major importance for the current efforts to establish clinical guidelines and public health plans for CKD.4,9,10

Several predictors of progression to ESRD have been identified,9 but their independent predictive power has not been well studied either in the general population or in high-risk subgroups. Intuitively, a low estimated GFR (eGFR) is an important risk factor for ESRD, and eGFR is the backbone of the current CKD classification. High urine albumin is a well-established major risk factor for progression.9 Only a few studies have examined the renal risk as a function of the combination of eGFR and albuminuria.11–14 These studies are of restricted value, however, because of exclusion of patients with diabetes14; inclusion of men only12; inclusion of only patients with diabetes13; or absence of information on potentially important risk factors, such as smoking, obesity, dyslipidemia, and cardiovascular disease.11,14

CKD screening beyond patients with known hypertension or diabetes has been proposed,1,4 but such screening programs have remained unsatisfactory because of their limited predictive power.
We used the data of the Second Nord-Trøndelag Health Study (HUNT 2), Norway, to improve such prediction. HUNT 2 is a large population-based study with a high participation rate.15 Our aim was to examine how accurately subsequent progression to ESRD could be predicted by a combined variable of baseline eGFR and urine albumin. We also tested whether further potential renal risk factors provided additional independent prediction.
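The comparison above (fraction of the general population referred versus fraction of eventual ESRD cases captured) can be reproduced for any candidate referral rule. A sketch with hypothetical per-person data, using an albuminuria flag as a stand-in for the study's albumin/creatinine-ratio criterion:

```python
def referral_performance(referred, progressed):
    """Fraction of the cohort a rule refers, and its sensitivity:
    the share of ESRD progressors the rule captures."""
    n = len(referred)
    fraction_referred = sum(referred) / n
    captured = sum(1 for r, p in zip(referred, progressed) if r and p)
    sensitivity = captured / sum(progressed)
    return fraction_referred, sensitivity

# Hypothetical cohort of 10: refer on eGFR 15-59 alone, or require
# albuminuria as well (mimicking the combined classification).
egfr = [28, 45, 50, 55, 58, 64, 72, 80, 90, 95]
albuminuria = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
esrd = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
rule_gfr = [15 <= g <= 59 for g in egfr]
rule_combined = [15 <= g <= 59 and a == 1 for g, a in zip(egfr, albuminuria)]
print(referral_performance(rule_gfr, esrd))       # refers more people
print(referral_performance(rule_combined, esrd))  # refers fewer, similar capture
```

On these toy numbers the combined rule refers a much smaller share of the cohort at nearly the same sensitivity, which is the trade-off the study reports (1.4% versus 4.7% of the population for 65.6% versus 69.4% of progressors).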

8.

Background:

The relationship between cardiovascular disease (CVD) risk factors and dietary intake is unknown among individuals with spinal cord injury (SCI).

Objective:

To investigate the relationship between consumption of selected food groups (dairy, whole grains, fruits, vegetables, and meat) and CVD risk factors in individuals with chronic SCI.

Methods:

A cross-sectional substudy of individuals with SCI to assess CVD risk factors and dietary intake in comparison with age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Dietary history, blood pressure, waist circumference (WC), fasting blood glucose, high-sensitivity C-reactive protein (hs-CRP), lipids, glucose, and insulin data were collected from 100 SCI participants who were 38 to 55 years old with SCI >1 year and compared to 100 matched control participants from the CARDIA study.

Results:

Statistically significant differences between SCI and CARDIA participants were identified in WC (39.2 vs 36.2 in.; P < .001) and high-density lipoprotein cholesterol (HDL-C; 39.2 vs 47.5 mg/dL; P < .001). Blood pressure, total cholesterol, triglycerides, glucose, insulin, and hs-CRP were similar between SCI and CARDIA participants. No significant relation between CVD risk factors and selected food groups was seen in the SCI participants.

Conclusion:

SCI participants had adverse WC and HDL-C compared to controls. This study did not identify a relationship between consumption of selected food groups and CVD risk factors.

Key words: cardiovascular disease risk factors, dietary intake, spinal cord injury

Cardiovascular disease (CVD) is a leading cause of death in individuals with chronic spinal cord injuries (SCIs).1–5 This is partly because SCI is associated with several metabolic CVD risk factors, including dyslipidemia,6–10 glucose intolerance,6,11–14 and diabetes.15–17 In addition, persons with SCI exhibit elevated markers of inflammation18,19 and endothelial activation20 that are correlated with higher CVD prevalence.21–23 Obesity, and specifically central obesity, another CVD risk factor,24–26 is also common in this population.12,27–29

Dietary patterns with higher amounts of whole grains and fiber have been shown to improve lipid abnormalities,30 glucose intolerance, diabetes mellitus,31–34 hypertension,35 and markers of inflammation36 in the general population.
These dietary patterns are also associated with lower levels of adiposity.31 Ludwig et al reported that the strong inverse associations between dietary fiber and multiple CVD risk factors – excessive weight gain, central adiposity, elevated blood pressure, hypertriglyceridemia, low high-density lipoprotein cholesterol (HDL-C), high low-density lipoprotein cholesterol (LDL-C), and high fibrinogen – were mediated, at least in part, by insulin levels.37 Whole-grain food intake is also inversely associated with fasting insulin, insulin resistance, and the development of type 2 diabetes.32,38,39

Studies in the general population have also shown a positive association between the development of metabolic syndrome as well as heart disease and consumption of a Western diet, a diet characterized by high intake of processed and red meat and low intake of fruit, vegetables, whole grains, and dairy.40,41 Red meat, which is high in saturated fat, has been shown to have an association with adverse levels of cholesterol and blood pressure and the development of obesity, metabolic syndrome, and diabetes.40,42,43

Numerous studies have shown that individuals with chronic SCI have poor diet quality.44–49 A Canadian study found that only 26.7% of their sample was adherent to the recommendations about the consumption of fruit, vegetables, and grains from the “Eating Well with Canada’s Food Guide.”44 Individuals with chronic SCI have also been found to have low fiber and high fat intakes when their diets were compared to dietary recommendations from the National Cholesterol Education Program,46 the 2000 Dietary Guidelines for Americans,49 and the recommended Dietary Reference Intakes and the Acceptable Macronutrient Distribution Range.47,48

However, unlike in the general population, the relationship between dietary intake and obesity and CVD risk factors is unknown in the chronic SCI population.
If a dietary pattern consisting of higher intake of whole grains and dietary fiber is favorably associated with obesity and CVD risk factors in individuals with chronic SCI, then trials of increased whole grain and fiber intake could be conducted to document health benefits and inform recommendations. The purpose of this pilot study is to investigate the association between selected food group intake and CVD risk factors in individuals with chronic SCI as compared to age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Data will also be used to plan future studies in the relatively understudied field of CVD and nutrition in individuals with SCI.  相似文献   

9.

Background:

Functional electrical stimulation (FES) therapy has been shown to be one of the most promising approaches for improving voluntary grasping function in individuals with subacute cervical spinal cord injury (SCI).

Objective:

To determine the effectiveness of FES therapy, as compared to conventional occupational therapy (COT), in improving voluntary hand function in individuals with chronic (≥24 months post injury), incomplete (American Spinal Injury Association Impairment Scale [AIS] B-D), C4 to C7 SCI.

Methods:

Eight participants were randomized to the intervention group (FES therapy; n = 5) or the control group (COT; n = 3). Both groups received 39 hours of therapy over 13 to 16 weeks. The primary outcome measure was the Toronto Rehabilitation Institute-Hand Function Test (TRI-HFT), and the secondary outcome measures were the Graded Redefined Assessment of Strength, Sensibility and Prehension (GRASSP), the Functional Independence Measure (FIM) self-care subscore, and the Spinal Cord Independence Measure (SCIM) self-care subscore. Outcome assessments were performed at baseline, after 39 sessions of therapy, and at 6 months following the baseline assessment.

Results:

After 39 sessions of therapy, the intervention group improved by 5.8 points on the TRI-HFT’s Object Manipulation Task, whereas the control group changed by only 1.17 points. Similarly, after 39 sessions of therapy, the intervention group improved by 4.6 points on the FIM self-care subscore, whereas the control group did not change at all.

Conclusion:

The results of the pilot data justify a clinical trial to compare FES therapy and COT alone to improve voluntary hand function in individuals with chronic incomplete tetraplegia.
Key words: chronic patients, functional electrical stimulation, grasping, therapy, upper limb
In the United States and Canada, there is a steady rate of incidence and an increasing rate of prevalence of individuals living with spinal cord injury (SCI). For individuals with tetraplegia, hand function is essential for achieving a high level of independence in activities of daily living.15 For the majority of individuals with tetraplegia, the recovery of hand function has been rated as their highest priority.5
Traditionally, functional electrical stimulation (FES) has been used as a permanent neuroprosthesis to achieve this goal.614 More recently, researchers have worked toward the development of surface FES technologies that are meant to be used as short-term therapies rather than a permanent prosthesis. This therapy is frequently called FES therapy or FET. Most of the studies published to date, where FES therapy was used to help improve upper limb function, have been done in both the subacute and chronic stroke populations1523 and two have been done in the subacute SCI population.13 With respect to the chronic SCI population, there are no studies to date that have looked at the use of FES therapy for retraining upper limb function. In a review by Kloosterman et al,24 the authors have discussed studies that have used various combinations of therapies for improving upper extremity function in chronic SCI individuals; however, the authors found that the only study that showed significant improvements before and after was the study published by Needham-Shropshire et al.25 This study examined the effectiveness of neuromuscular stimulation (NMS)–assisted arm ergometry for strengthening triceps brachii. 
In this study, electrical stimulation was used to facilitate arm ergometry, and it was not used in the context of retraining reaching, grasping, and/or object manipulation.
Since 2002, our team has been investigating whether FES therapy has the capacity to improve voluntary hand function in complete and incomplete subacute cervical SCI patients who are less than 180 days post injury at the time of recruitment in the study.13 In randomized controlled trials (RCTs) conducted by our team, we found that FES therapy is able to restore voluntary reaching and grasping functions in individuals with subacute C4 to C7 incomplete SCI.13 The changes observed were transformational; individuals who were unable to grasp at all were able to do so after only 40 one-hour sessions of the FES therapy, whereas the control group showed significantly less improvement. Inspired by these results, we decided to conduct a pilot RCT with chronic (≥24 months following injury) C4 to C7 SCI patients (American Spinal Injury Association Impairment Scale [AIS] B-D), which is presented in this article. The purpose of this pilot study was to determine whether FES therapy is able to restore voluntary hand function in chronic tetraplegic individuals. Based on the results of our prior phase I1 and phase II2,3 RCTs in the subacute SCI population, we hypothesized that individuals with chronic tetraplegia who underwent the FES therapy (intervention group) would have greater improvements in voluntary hand function, especially in their ability to grasp and manipulate objects, and perform activities of daily living when compared to individuals who receive similar volume and duration of conventional occupational therapy (COT; control group).  相似文献   

10.
American Indians have a higher prevalence of albuminuria than the general population, likely resulting from a combination of environmental and genetic risk factors. To localize gene regions influencing variation in urinary albumin-to-creatinine ratio, we performed a linkage analysis and explored gene-by-diabetes, -hypertension, and -obesity interactions in a large cohort of American Indian families. We recruited >3600 individuals from 13 American Indian tribes from three centers (Arizona, North and South Dakota, and Oklahoma). We performed multipoint variance component linkage analysis in each center as well as in the entire cohort after controlling for center effects. We used two modeling strategies: Model 1 incorporated age, gender, and interaction terms; model 2 also controlled for diabetes, BP, body mass index, HDL, LDL, triglycerides, and smoking status. We evaluated interactions with diabetes, hypertension, and obesity using additive, interaction-specific linkage and stratified analyses. Loci suggestive for linkage to urinary albumin-to-creatinine ratio included 1q, 6p, 9q, 18q, and 20p. Gene-by-diabetes interaction was present with a quantitative trait locus specific to the diabetic stratum in the Dakotas isolated on 18q21.2 to 21.3 using model 1 (logarithm of odds = 3.3). Gene-by-hypertension interaction was present with quantitative trait loci specific to the hypertensive stratum in the Dakotas on 7q21.11 using model 1 (logarithm of odds = 3.4) and 10q25.1 using model 2 (logarithm of odds = 3.3). 
These loci replicate findings from multiple other genome scans of kidney disease phenotypes with distinct populations and are worthy of further study.
Albuminuria is a well-established risk factor for cardiovascular disease (CVD) and the development and progression of chronic kidney disease.14 For every 0.4-mg/mmol increase in urinary albumin-to-creatinine ratio (UACR), the risk for a major cardiovascular event increases by 6%.3 Individuals who have diabetes without proteinuria have a negligible annual risk for developing chronic renal insufficiency, whereas those with macroalbuminuria have a risk of 2.3% per year.5
The prevalence of albuminuria in the American population is 12%.6 American Indians are at higher risk, with prevalence estimates in this population ranging between 21 and 36%.79 Risk factors for albuminuria, such as diabetes, hypertension, and obesity, are overrepresented in the American Indian population.9 Nonetheless, it is likely that genetic risk factors also contribute to the high prevalence of albuminuria in this population. An increase in the prevalence of albuminuria with increasing percentage of self-identified American Indian heritage has been reported.8,10
The heritability for albuminuria ranges from approximately 0.17 to 0.20 in general populations,11,12 0.20 in families with diabetes,13 and 0.12 to 0.49 in families with hypertension,14,15 depending on ethnicity. Genome-wide scans for albuminuria have mostly yielded regions with suggestive evidence for linkage. 
Evidence of linkage for UACR to 20q12 was isolated by one study of Mexican Americans (logarithm of odds [LOD] = 3.5).12 Suggestive evidence of linkage of UACR to 18q22 has been replicated by four studies of distinct populations, three with diabetes and another with hypertension.1619 It has been suggested that the genes controlling urinary albumin excretion are the same in relatives both with and without diabetes.13 In individuals with diabetes, however, the degree of albuminuria is an order of magnitude higher, suggesting gene–gene or gene–environment interaction.20
The goal of this study was two-fold. First, we aimed to isolate chromosomal regions influencing the phenotypic variation in urinary albumin excretion in a large, diverse cohort of American Indian families. Second, we aimed to identify evidence for gene-by-diabetes, -hypertension, or -obesity interaction on albuminuria.  相似文献   
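The linkage evidence in this abstract (and in the pVUR study above) is reported as LOD scores. As a minimal illustration of the underlying quantity only — not the multipoint variance-component method these authors used — a classical two-point LOD score compares the likelihood of observed recombinant counts at a candidate recombination fraction θ against free recombination (θ = 0.5); the counts below are invented for the example:

```python
import math

def two_point_lod(recombinants: int, non_recombinants: int, theta: float) -> float:
    """LOD = log10[L(theta) / L(0.5)] for phase-known meioses.

    L(theta) = theta^R * (1 - theta)^NR; under free recombination
    (theta = 0.5) every meiosis contributes a likelihood of 0.5.
    """
    n = recombinants + non_recombinants
    log_l_theta = (recombinants * math.log10(theta)
                   + non_recombinants * math.log10(1 - theta))
    log_l_null = n * math.log10(0.5)
    return log_l_theta - log_l_null

# Hypothetical data: 2 recombinants in 20 phase-known meioses, tested at theta = 0.1
lod = two_point_lod(2, 18, 0.1)  # crosses the conventional threshold of 3
```

A LOD above 3 (odds of roughly 1000:1 in favor of linkage) is the conventional genome-wide significance threshold referenced in these abstracts; variance-component LOD scores are computed differently but are interpreted on the same scale.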

11.
Chronic kidney disease (CKD, stages 1 to 4) affects approximately 13.1% of United States adults and leads to ESRD, cardiovascular disease, and premature death. Here, we assessed adherence to a subset of Kidney Disease Outcomes Quality Initiative preventive health care guidelines and identified associations between adherence and incident atherosclerotic heart disease (ASHD). Using the Medicare 5% data set, 1999 to 2005 (about 1.2 million patients per year), we created 3-yr rolling cohorts. We classified CKD and diabetes during year 1, assessed preventive care during year 2, and evaluated ASHD outcomes during year 3. We defined preventive care by the receipt of laboratory measurements (serum creatinine, lipids, calcium and phosphorus, parathyroid hormone, and, for patients with diabetes, hemoglobin A1c), influenza vaccination, and by at least one outpatient visit to a nephrologist. Among patients with CKD, 80% received ≥2 serum creatinine tests during the year, and only 11% received parathyroid hormone testing. Cumulative incidence of the combined ASHD outcome was 25% and 11% for patients with and without prevalent cardiovascular disease, respectively. Except for serum creatinine testing, preventive care associated with lower ASHD rates in the subsequent year, ranging from 10% lower for those who received influenza vaccinations and ≥2 A1c tests, to 43% lower for calcium-phosphorus assessment. Receiving ≥2 serum creatinine tests associated with a 13% higher rate of ASHD. A higher number of preventive measures associated with lower rates of ASHD. 
In summary, these data support an association between preventive measures and reduced cardiovascular morbidity and mortality.
Chronic kidney disease (CKD, stages 1 to 4) is estimated to affect 13.1% (12.0% to 14.1%) of the adult noninstitutionalized civilian United States population, or 26.3 million adults according to the 2000 census.1 The prevalence rate increased approximately 30% between the early 1990s and the early 2000s.1 In 2002, the Kidney Disease Outcomes Quality Initiative (KDOQI) Clinical Practice Guidelines committee, organized by the National Kidney Foundation (NKF), noted that the three primary adverse consequences of CKD are kidney failure, cardiovascular disease (CVD), and premature death.2 The committee further noted that CVD is common, treatable, and potentially preventable in CKD patients, and that CKD appears to be a risk factor for CVD.2 In 1998, the NKF Task Force on Cardiovascular Disease in Chronic Renal Disease recommended that CKD patients be considered in the highest risk group for CVD events.3
Two recent studies demonstrate increasing incidence of cardiovascular events4 and increasing prevalence of cardiovascular risk factors5 with decreasing GFR. 
An analysis of secondary cardiovascular events following myocardial infarction demonstrates increasing probability of subsequent cardiovascular events with decreasing GFR.6 United States Renal Data System (USRDS) analyses demonstrate hospitalization rates for congestive heart failure (CHF), ischemic heart disease, and arrhythmias two to seven times higher for Medicare patients with CKD than for those without CKD,7 and that CKD patients with no prior evidence of CVD were 60% more likely to develop CVD during the subsequent year than non-CKD patients.8,9
The USRDS notes that CKD patients are three to five times more likely to die than to reach ESRD10 and that nearly half of CKD patient deaths occur out of the hospital, presumably sudden cardiac death.8 A recent meta-analysis found that the relative risk of all-cause mortality comparing CKD to non-CKD patients ranged from 0.94 to 5.00 in all cohorts analyzed and was significantly more than 1.0 in 93% of the cohorts; it also found an increasing risk of all-cause mortality with decreasing GFR.11
The Healthy People 2010 initiative of the Centers for Disease Control and Prevention includes objectives intended to preserve renal function and slow CKD progression through early detection and intervention.12 The NKF has published numerous clinical practice guidelines addressing ESRD and CKD,2,1315 with the goals of identifying CKD early, slowing its progression, and reducing associated morbidity and mortality.
Our objectives were to assess adherence to KDOQI recommendations for CKD care and subsequent associations between preventive care and incident atherosclerotic heart disease (ASHD) in the general Medicare population with evidence of CKD. Preventive care measures assessed include monitoring of serum creatinine, lipids, calcium-phosphorus, parathyroid hormone (PTH), and glycated hemoglobin (A1c) in diabetic patients; influenza vaccinations; and outpatient nephrologist office visits. 
Subsequent ASHD outcomes studied include acute ischemic heart disease events, angina pectoris, cardiac arrest, coronary revascularization procedures, and all-cause death. Data were from the 5% Medicare 1999 to 2005 random sample limited data set standard analytic files.  相似文献   

12.
Enzymatic pathways involving catechol-O-methyltransferase (COMT) catabolize circulating catecholamines. A G-to-A polymorphism in the fourth exon of the COMT gene results in a valine-to-methionine amino acid substitution at codon 158, which leads to thermolability and low (“L”), as opposed to high (“H”), enzymatic activity. We enrolled 260 patients after bypass surgery to test the hypothesis that COMT gene variants impair circulating catecholamine metabolism, predisposing to shock and acute kidney injury (AKI) after cardiac surgery. In accordance with the Hardy-Weinberg equilibrium, we identified 64 (24.6%) homozygous (LL), 123 (47.3%) heterozygous (HL), and 73 (28.1%) homozygous (HH) patients. Postoperative catecholamines were higher in homozygous LL patients compared with heterozygous HL and homozygous HH patients (P < 0.01). During their intensive care stay, LL patients had both a significantly greater frequency of vasodilatory shock (LL: 69%, HL: 57%, HH: 47%; P = 0.033) and a significantly longer median duration of shock (LL: 18.5 h, HL: 14.0 h, HH: 11.0 h; P = 0.013). LL patients also had a greater frequency of AKI (LL: 31%, HL: 19.5%, HH: 13.7%; P = 0.038), and their AKI was more severe as defined by a need for renal replacement therapy (LL: 7.8%, HL: 2.4%, HH: 0%; P = 0.026). The LL genotype was associated with longer intensive care and hospital length of stay (P < 0.001 and P = 0.002, respectively), and we observed a trend toward higher mortality. Cross-validation analysis revealed a similar graded relationship of adverse outcomes by genotype. In summary, this study identifies COMT LL homozygosity as an independent risk factor for shock, AKI, and hospital stay after cardiac surgery. (ClinicalTrials.gov number, NCT00334009)
Shock and acute kidney injury (AKI) are associated with increased mortality after cardiac surgery.1,2 Cardiopulmonary bypass represents a common clinical setting of sympathetic nervous system activation and cardiovascular instability. 
Postoperative hypotension and vasodilation with increased requirements for catecholamines occur despite adequate intravascular filling, cardiac output, and increased plasma catecholamine concentrations.1,3 High circulating catecholamine levels may contribute to persistent vasodilatation via α-adrenoceptor downregulation and desensitization,4 depression of vasopressin synthesis, and adenosine triphosphate-sensitive potassium channel activation in vascular smooth muscle cells.5 Circulating catecholamines are primarily catabolized through enzymatic pathways involving the enzyme catechol-O-methyltransferase (COMT).6 A functional G-to-A polymorphism in the fourth exon of the COMT gene results in a valine-to-methionine amino acid transition at codon 158 (COMT Val158Met polymorphism), leading to thermolability and lower (L), compared with higher (H) activity of the enzyme.7,8 Genetically determined COMT activity, among others,9,10 influences outcomes in patients with ischemic heart disease.11,12 In the kidney, COMT is essential for catecholamine degradation along the distal parts of proximal tubules and thick ascending limb of loop of Henle.13 We hypothesized that the COMT LL genotype coding for low enzyme activity would shift metabolism toward increased plasma catecholamine concentrations and predispose to increased duration of vasodilatory shock and higher AKI incidence after cardiac surgery. To test this notion, we conducted a prospective observational cohort study in cardiac surgery patients.  相似文献   
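The genotype distribution reported in this abstract (64 LL, 123 HL, 73 HH among 260 patients) can be checked against Hardy-Weinberg expectations with a one-degree-of-freedom chi-square test. A minimal sketch using only the counts given in the abstract (the function name and form are illustrative, not from the study):

```python
def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Chi-square statistic comparing observed genotype counts with
    Hardy-Weinberg expectations (1 df; critical value 3.84 at alpha = 0.05)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of the 'a' allele
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts from the abstract: LL = 64, HL = 123, HH = 73
chi2 = hwe_chi_square(64, 123, 73)  # well below 3.84, consistent with HWE
```

The statistic comes out far below the 3.84 critical value, which is consistent with the authors' statement that the cohort conforms to Hardy-Weinberg equilibrium.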

13.

Background:

The high prevalence of pain and depression in persons with spinal cord injury (SCI) is well known. However, the link between pain intensity, interference, and depression, particularly in the acute period of injury, has not received sufficient attention in the literature.

Objective:

To investigate the relationship of depression, pain intensity, and pain interference in individuals undergoing acute inpatient rehabilitation for traumatic SCI.

Methods:

Participants completed a survey that included measures of depression (PHQ-9), pain intensity (“right now”), and pain interference (Brief Pain Inventory: general activity, mood, mobility, relations with others, sleep, and enjoyment of life). Demographic and injury characteristics and information about current use of antidepressants and pre-injury binge drinking also were collected. Hierarchical multiple regression was used to test depression models in 3 steps: (1) age, gender, days since injury, injury level, antidepressant use, and pre-injury binge drinking (controlling variables); (2) pain intensity; and (3) pain interference (each tested separately).

Results:

With one exception, pain interference was the only statistically significant independent variable in each of the final models. Although pain intensity accounted for only 0.2% to 1.2% of the depression variance, pain interference accounted for 13% to 26% of the variance in depression.

Conclusion:

Our results suggest that pain intensity alone is insufficient for understanding the relationship of pain and depression in acute SCI. Instead, the ways in which pain interferes with daily life appear to have a much greater bearing on depression than pain intensity alone in the acute setting.
Key words: depression, pain, spinal cord injuries
The high incidence and prevalence of pain following spinal cord injury (SCI) is well established16 and associated with numerous poor health outcomes and low quality of life (QOL).1,7,8 Although much of the literature on pain in SCI focuses on pain intensity, there is emerging interest in the role of pain interference or the extent to which pain interferes with daily activities of life.7,9 With prevalence as high as 77% in SCI, pain interference impacts life activities such as exercise, sleep, work, and household chores.2,7,1013 Pain interference also has been associated with disease management self-efficacy in SCI.14 There is a significant relationship between pain intensity and interference in persons with SCI.7 Like pain, the high prevalence of depression after SCI is well-established.1517 Depression and pain often co-occur,18,19 and their overlap ranges from 30% to 60%.19 Pain is also associated with greater duration of depressed mood.20 Pain and depression share common biological pathways and neurotransmitter mechanisms,19 and pain has been shown to attenuate the response to depression treatment.21,22
Despite the interest in pain and depression after SCI and implications for the treatment of depression, their co-occurrence has received far less attention in the literature.23 Greater pain has been associated with higher levels of depression in persons with SCI,16,24 although this is not a consistent finding.25 Similarly, depression in persons with SCI who also have pain appears to be worse than for persons with non-SCI pain, suggesting that the link between pain and depression may be more intense in the context of SCI.26 In one 
of the few studies of pain intensity and depression in an acute SCI rehabilitation setting, Cairns et al27 found a co-occurrence of pain and depression in 22% to 35% of patients. This work also suggested an evolution of the relationship between pain and depression over the course of the inpatient stay, such that they become associated by discharge. Craig et al28 found that pain levels at discharge from acute rehabilitation predicted depression at 2-year follow-up. Pain interference also has been associated with emotional functioning and QOL in persons with SCI1,7,29,30 and appears to mediate the relationship between ambulation and depression.31
Studies of pain and depression in persons with SCI are often too methodologically limited to examine the independent contributions of pain intensity and interference to depression in an acute setting. For example, they include only pain intensity16,23,25,28,30; classify subjects by either pain plus depression23 or pain versus no pain8,28,30; use pain intensity and interference as predictor and outcome, respectively1; collapse pain interference domains into a single score1; or use only univariate tests (eg, correlations).7,8,25,30 In addition, the vast majority focus on the chronic period of injury. To fill a gap in knowledge, we examined the independent contributions of pain intensity and pain interference to depression, while accounting for injury and demographic characteristics, antidepressant treatment, and pre-injury binge drinking in a sample of persons with acute SCI. We hypothesized that when accounting for both pain intensity and interference in the model, interference would have an independent and significant relationship with depression, above and beyond pain intensity.  相似文献   
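The hierarchical regression strategy described in this entry enters predictor blocks in sequence and attributes to each block the increment in explained variance (ΔR²) — the quantity behind the "0.2% to 1.2%" versus "13% to 26%" contrast in the Results. A minimal sketch on synthetic data (all variable names and generated values are illustrative, not the study's):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
age = rng.normal(40, 12, n)                 # step 1: control variable
pain_intensity = rng.uniform(0, 10, n)      # step 2: pain intensity
pain_interference = 0.3 * pain_intensity + rng.normal(0, 2, n)  # step 3
depression = 0.05 * age + 1.2 * pain_interference + rng.normal(0, 2, n)

r2_step1 = r_squared(np.column_stack([age]), depression)
r2_step2 = r_squared(np.column_stack([age, pain_intensity]), depression)
r2_step3 = r_squared(np.column_stack([age, pain_intensity, pain_interference]),
                     depression)

delta_r2_intensity = r2_step2 - r2_step1      # variance added by intensity
delta_r2_interference = r2_step3 - r2_step2   # variance added by interference
```

Because the synthetic outcome is driven by interference rather than intensity, the final ΔR² dwarfs the intermediate one, mirroring the pattern the study reports; in nested OLS the R² values are necessarily non-decreasing across steps.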

14.
Proteinuria and increased renal reabsorption of NaCl characterize the nephrotic syndrome. Here, we show that protein-rich urine from nephrotic rats and from patients with nephrotic syndrome activates the epithelial sodium channel (ENaC) in cultured M-1 mouse collecting duct cells and in Xenopus laevis oocytes heterologously expressing ENaC. The activation depended on urinary serine protease activity. We identified plasmin as a urinary serine protease by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Purified plasmin activated ENaC currents, and inhibitors of plasmin abolished urinary protease activity and the ability to activate ENaC. In nephrotic syndrome, tubular urokinase-type plasminogen activator likely converts filtered plasminogen to plasmin. Consistent with this, the combined application of urokinase-type plasminogen activator and plasminogen stimulated amiloride-sensitive transepithelial sodium transport in M-1 cells and increased amiloride-sensitive whole-cell currents in Xenopus laevis oocytes heterologously expressing ENaC. Activation of ENaC by plasmin involved cleavage and release of an inhibitory peptide from the ENaC γ subunit ectodomain. These data suggest that a defective glomerular filtration barrier allows passage of proteolytic enzymes that have the ability to activate ENaC.
Nephrotic syndrome is characterized by proteinuria, sodium retention, and edema. 
Increased renal sodium reabsorption occurs in the cortical collecting duct (CCD),1,2 where a rate-limiting step in transepithelial sodium transport is the epithelial sodium channel (ENaC), which is composed of the three homologous subunits: α, β, γ.3
ENaC activity is regulated by hormones, such as aldosterone and vasopressin (AVP)4,5; however, adrenalectomized rats and AVP-deficient Brattleboro rats are capable of developing nephrotic syndrome,1,6 and nephrotic patients do not consistently display elevated levels of sodium-retaining hormones,7,8 suggesting that renal sodium hyper-reabsorption is independent of systemic factors. Consistent with this, sodium retention is confined to the proteinuric kidney in the unilateral puromycin aminonucleoside (PAN) nephrotic model.2,9,10
There is evidence that proteases contribute to ENaC activation by cleaving the extracellular loops of the α- and γ-subunits.1113 Proteolytic activation of ENaC by extracellular proteases critically involves the cleavage of the γ subunit,1416 which probably leads to the release of a 43-residue inhibitory peptide from the ectodomain.17 Both cleaved and noncleaved channels are present in the plasma membrane,18,19 allowing proteases such as channel activating protease 1 (CAP1/prostasin),20 trypsin,20 chymotrypsin,21 and neutrophil elastase22 to activate noncleaved channels from the extracellular side.23,24 We hypothesized that the defective glomerular filtration barrier in nephrotic syndrome allows the filtration of ENaC-activating proteins into the tubular fluid, leading to stimulation of ENaC. The hypothesis was tested in the PAN nephrotic model in rats and with urine from patients with nephrotic syndrome.  相似文献   

15.
Nephrotoxicity is common with the use of the chemotherapeutic agent cisplatin, but the cellular mechanisms that modulate the extent of injury are unknown. Cisplatin downregulates expression of the taurine transporter gene (TauT) in LLC-PK1 proximal tubular renal cells, and forced overexpression of TauT protects against cisplatin-induced apoptosis in vitro. Because the S3 segments of proximal tubules are the sites of both cisplatin-induced injury and adaptive regulation of the taurine transporter, we hypothesized that TauT functions as an anti-apoptotic gene and protects renal cells from cisplatin-induced nephrotoxicity in vivo. Here, we studied the regulation of TauT in cisplatin nephrotoxicity in a human embryonic kidney cell line and in LLC-PK1 cells, as well as in TauT transgenic mice. Cisplatin-induced activation of p53 repressed TauT and overexpression of TauT prevented the progression of cisplatin-induced apoptosis and renal dysfunction in TauT transgenic mice. Although cisplatin activated p53 and PUMA (a p53-responsive proapoptotic Bcl-2 family protein) in the kidneys of both wildtype and TauT transgenic mice, only wildtype animals demonstrated acute kidney injury. 
These data suggest that functional TauT plays a critical role in protecting against cisplatin-induced nephrotoxicity, possibly by attenuating a p53-dependent pathway.
Acute kidney injury due to ischemic or toxic renal damage is a common disorder with mortality of approximately 50%.1,2 As a highly effective chemotherapeutic agent, cisplatin has been used to treat a wide variety of solid tumors.3 However, 25% to 35% of patients experience a significant decline in renal function after the administration of a single dose of cisplatin.4 Several mechanisms, including oxidation, inflammation, genotoxic damage, and cell cycle arrest, have been implicated in cisplatin nephrotoxicity.510
Elevated levels of the tumor suppressor gene p53 have been found in the kidneys of animal models of acute kidney injury induced by cisplatin administration.11 Jiang et al.12 have demonstrated that p53 activation is an early signal in cisplatin-induced apoptosis in renal tubular cells. The Varmus group13 has found that transgenic mice overexpressing p53 undergo progressive renal failure through a novel mechanism by which p53 appears to alter cellular differentiation, rather than by growth arrest or the direct induction of apoptosis. These findings suggest that altered expression of certain p53 target gene(s) involved in renal development may be responsible for p53-induced progressive renal injury in p53 transgenic mice.
Our studies have shown that TauT is negatively regulated by p53 in renal cells.14 Interestingly, the progressive renal injury seen in p53 transgenic mice is similar to that previously observed in the offspring of taurine-deficient cats, which showed ongoing kidney damage and abnormal renal and retinal development,15 suggesting that the taurine transporter gene is an important target of p53 during kidney development and renal injury. 
It is worth noting that cisplatin accumulates in cells from all nephron segments, but is preferentially taken up by the highly susceptible proximal tubule cells within the S3 segment, which is the site for renal adaptive regulation of TauT.16,17 A recent study showed that taurine was able to attenuate cisplatin-induced nephrotoxicity and protect renal tubular cells from tubular atrophy and apoptosis.18 Therefore, downregulation of TauT by p53 may play an important role in cisplatin-induced nephrotoxicity.  相似文献   

16.
There are no accurate, noninvasive tests to diagnose BK polyomavirus nephropathy, a common infectious complication after renal transplantation. This study evaluated whether the qualitative detection of cast-like, three-dimensional polyomavirus aggregates (“Haufen”) in the urine accurately predicts BK polyomavirus nephropathy. Using negative-staining electron microscopy, we sought Haufen in 194 urine samples from 139 control patients and in 143 samples from 21 patients with BK polyomavirus nephropathy. Haufen detection was correlated with pathology in concomitant renal biopsies and BK viruria (decoy cell shedding and viral load assessments by PCR) and BK viremia (viral load assessments by PCR). Haufen originated from renal tubules containing virally lysed cells, and the detection of Haufen in the urine correlated tightly with biopsy-confirmed BK polyomavirus nephropathy (concordance rate 99%). A total of 77 of 143 urine samples from 21 of 21 patients with BK polyomavirus nephropathy (disease stages A–C) contained Haufen, and during follow-up (3 to 120 wk), their presence or absence closely mirrored the course of renal disease. All controls were Haufen-negative; however, high viremia or viruria was detected in 8% and 41% of control samples, respectively. κ statistics showed fair to good agreement of viruria and viremia with BK polyomavirus nephropathy or with Haufen shedding and demonstrated an excellent agreement between Haufen and polyomavirus nephropathy (κ 0.98). Positive and negative predictive values of Haufen for BK polyomavirus nephropathy were 97% and 100%, respectively. This study shows that shedding of urinary Haufen, and not BK viremia and viruria, accurately marks BK polyomavirus nephropathy. It suggests that the detection of Haufen may serve as a noninvasive means to diagnose BK polyomavirus nephropathy in the urine.
BK polyomavirus nephropathy (BKN) affects 1 to 9% of renal allografts.15 No specific and potent antipolyomavirus therapy is available. 
Therapeutic attempts include reduction in overall immunosuppression, often combined with antiviral drugs (e.g., leflunomide, cidofovir).6–10 Outcome of BKN largely depends on the histologic stage at the time of diagnosis. Whereas early-stage BKN (stage A) fares favorably, advanced disease (stage C) generally results in chronic allograft failure/loss, making an early diagnosis of BKN imperative for long-term graft survival.2,7,11–15

The definitive diagnosis of BKN requires a renal biopsy. To facilitate the diagnostic workup, screening strategies are used for clinical risk assessment. They are based on signs of BK virus (BKV) replication in urine (viruria) and plasma (viremia): urine cytology for quantification of polyomavirus inclusion-bearing “decoy cells” and PCR assays for quantitative BKV load measurements.2,7,13,16–18 However, these screening techniques have only limited predictive value for the accurate diagnosis of BKN; they are not “kidney disease specific” and cannot reliably distinguish between clinically insignificant polyomavirus replication and manifest intrarenal disease (BKN).7,19–24

Given the dependence on renal biopsies for accurately diagnosing BKN and the limitations of currently available screening tests, we propose a new diagnostic method. In voided urine samples from patients with biopsy-proven BKN, we observed three-dimensional, cast-like polyomavirus aggregates by electron microscopy (EM), hereafter termed “Haufen” (after the German word for “cluster” or “stack”). Such densely arranged viral aggregates have not been described before. We hypothesized that Haufen are morphologic biomarkers of a productive intrarenal BKV infection. The data presented here support our hypothesis and show that the detection of Haufen can be used clinically to diagnose BKN accurately and noninvasively.
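The diagnostic-accuracy figures quoted above (PPV, NPV, κ) all follow from a 2×2 table of test result versus biopsy diagnosis. The sketch below uses hypothetical counts (not the study's raw data) to show the arithmetic; the function names are our own.

```python
# Illustrative sketch with hypothetical counts, not the study's raw data:
# predictive values and Cohen's kappa for a binary urine test (Haufen
# present/absent) against a biopsy-proven diagnosis.

def predictive_values(tp, fp, fn, tn):
    """Return (PPV, NPV) from a 2x2 confusion table."""
    ppv = tp / (tp + fp)   # P(disease | test positive)
    npv = tn / (tn + fn)   # P(no disease | test negative)
    return ppv, npv

def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between test and reference diagnosis."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n     # observed agreement
    # expected agreement if test and diagnosis were independent
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical example: 77 true positives, 2 false positives,
# 0 false negatives, 250 true negatives.
ppv, npv = predictive_values(77, 2, 0, 250)
kappa = cohens_kappa(77, 2, 0, 250)
print(f"PPV={ppv:.2%}  NPV={npv:.2%}  kappa={kappa:.2f}")
```

With these made-up counts the test misses no cases (NPV = 100%), and κ lands near the high agreement the study reports, illustrating how κ discounts the agreement expected by chance.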

17.

Background:

Understanding the related fates of muscle density and bone quality after chronic spinal cord injury (SCI) is an important initial step in determining endocrine-metabolic risk.

Objective:

To examine the associations between muscle density and indices of bone quality at the distal lower extremity of adults with chronic SCI.

Methods:

A secondary data analysis was conducted in 70 adults with chronic SCI (C2-T12; American Spinal Injury Association Impairment Scale [AIS] A-D; ≥2 years post injury). Muscle density and cross-sectional area (CSA) and bone quality indices (trabecular bone mineral density [TbBMD] at the distal tibia [4% site] and cortical thickness [CtTh], cortical area [CtAr], cortical BMD [CtBMD], and polar moment of inertia [PMI] at the tibial shaft [66% site]) were measured using peripheral quantitative computed tomography. Calf lower extremity motor score (cLEMS) was used as a clinical measure of muscle function. Multivariable linear regression analyses were performed to determine the strength of the muscle-bone associations after adjusting for confounding variables (sex, impairment severity [AIS A/B vs AIS C/D], duration of injury, and wheelchair use).

Results:

Muscle density was positively associated with TbBMD (b = 0.85 [0.04, 1.66]), CtTh (b = 0.02 [0.001, 0.034]), and CtBMD (b = 1.70 [0.71, 2.69]) (P < .05). Muscle CSA was most strongly associated with CtAr (b = 2.50 [0.12, 4.88]) and PMI (b = 731.8 [161.7, 1301.9]) (P < .05), whereas cLEMS was most strongly associated with TbBMD (b = 7.69 [4.63, 10.76]) (P < .001).

Conclusion:

Muscle density and function were most strongly associated with TbBMD at the distal tibia in adults with chronic SCI, whereas muscle size was most strongly associated with bone size and geometry at the tibial shaft.

Key words: bone mineral density, bone quality, muscle density, muscle size, osteoporosis, peripheral quantitative computed tomography, spinal cord injury

Spinal cord injury (SCI) is associated with sublesional muscle atrophy,1–3 changes in muscle fiber type,4,5 reductions in hip and knee region bone mineral density (BMD),6–8 and increased central and regional adiposity after injury.9,10 Adverse changes in muscle and bone health in individuals with SCI contribute to an increased risk of osteoporosis,11–13 fragility fractures,14 and endocrine-metabolic disease (eg, diabetes, dyslipidemia, heart disease).15–17 Cross-sectional studies have shown a high prevalence of lower extremity fragility fractures among individuals with SCI, ranging from 1% to 34%.18–20 Fragility fractures are associated with negative health and functional outcomes, including an increased risk of morbidity and hospitalization,21,22 mobility limitations,23 and a reduced quality of life.24 Notably, individuals with SCI have a normal life expectancy, yet fracture rates increase annually from 1% per year in the first year to 4.6% per year in individuals greater than 20 years post injury.25,26

Muscle and bone are thought to function as a muscle-bone unit, wherein muscle contractions impose loading forces on bone that produce changes in bone geometry and structure.27,28 A growing body of evidence has shown that individuals with SCI (predominantly those with motor complete injury) exhibit similar patterns of decline in muscle cross-sectional area (CSA) and BMD in the acute and subacute stages following injury.4,11,29 Prospective studies have shown a decrease in BMD of 1.1% to 47% per year6,7,30 and up to 73% in the 2 to 7 years following SCI.8,14,31,32 Decreases in muscle CSA have been
well-documented following SCI, with greater disuse atrophy observed after complete SCI versus incomplete SCI, presumably due to the absence of voluntary muscle contractions and associated mobility limitations.1,2,16 Muscle quality is also compromised early after SCI, resulting in sublesional accumulation of adipose tissue in the chronic stage of injury3,33,34; the exact time course of this event has been poorly elucidated to date. Adipose tissue deposition within and between skeletal muscle is linked to an increase in noncontractile muscle tissue and a reduction in muscle force-generating capacity on bone.35,36 Skeletal muscle fat infiltration is up to 4 times more likely to occur in individuals with SCI,1,16,37 contributing to metabolic complications (eg, glucose intolerance),16 reduced muscle strength and function,38 and mobility limitations3 – all factors that may be associated with a deterioration in bone quality after SCI.

The association between lean tissue mass and bone size (eg, BMD and bone mineral content) in individuals with SCI has been well-established using dual energy x-ray absorptiometry (DXA).9,10,29,34 However, DXA is unable to measure true volumetric BMD (vBMD), bone geometry, and bone structure. Peripheral quantitative computed tomography (pQCT) is an imaging technique that improves our capacity to measure indices of bone quality, muscle density, and muscle CSA at fracture-prone sites (eg, tibia).3,39 Recent evidence from cross-sectional pQCT studies has shown that muscle CSA and calf lower extremity motor score (cLEMS) were associated with indices of bone quality at the tibia in individuals with SCI.13,40 However, neither study measured muscle density (a surrogate of fatty infiltration) when evaluating the functional muscle-bone unit.
Fatty infiltration of muscle is common after SCI1,16,37 and may affect muscle function or the muscle-bone unit, but the association between muscle density and bone quality indices at the tibia in individuals with chronic SCI is unclear. Muscle density measured using pQCT may be an acceptable surrogate of muscle quality when it is difficult to assess muscle strength due to paralysis.3,39 Additionally, investigating which muscle outcome (muscle density, CSA, or cLEMS) is most strongly associated with vBMD and bone structure may inform modifiable targets for improving bone quality and reducing fracture risk after chronic SCI.

The primary objective of this secondary analysis was to examine the associations between pQCT-derived calf muscle density and trabecular vBMD at the tibia among adults with chronic SCI. The secondary objective was to examine the associations between calf muscle density, CSA, and function and tibial vBMD, cortical CSA and thickness, and polar moment of inertia (PMI). First, we hypothesize that calf muscle density will be a positive correlate of trabecular and cortical vBMD, cortical CSA and thickness, and PMI at the tibia in individuals with chronic SCI. Second, we hypothesize that of the key muscle variables (cLEMS, CSA, and density), calf muscle density and cLEMS will be most strongly associated with trabecular vBMD, whereas calf muscle CSA will be most strongly associated with cortical CSA and PMI.
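The regression coefficients reported in the Results (e.g., b = 0.85 [0.04, 1.66] for muscle density vs TbBMD) come from multivariable models adjusted for sex, impairment severity, injury duration, and wheelchair use. As a minimal sketch of the unadjusted analogue, the snippet below fits a simple least-squares slope and Pearson r on entirely hypothetical paired data; values and variable names are our own assumptions, not the study's dataset.

```python
# Minimal sketch, hypothetical data: unadjusted simple-regression slope
# and Pearson correlation between calf muscle density and distal-tibia
# trabecular BMD (both in mg/cm^3). The study's actual models were
# multivariable; this shows only the univariate arithmetic.
import math

x = [55, 60, 62, 48, 70, 65, 58, 52, 68, 63]            # muscle density
y = [110, 128, 125, 95, 150, 140, 118, 104, 146, 132]   # TbBMD

mx, my = sum(x) / len(x), sum(y) / len(y)
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx                   # b: TbBMD change per unit of density
r = sxy / math.sqrt(sxx * syy)      # Pearson correlation coefficient
print(f"slope b = {slope:.2f}, r = {r:.2f}")
```

In a full analysis the confounders would enter as additional columns of a design matrix; the least-squares machinery is the same, applied to the multivariable normal equations.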

18.
Connective tissue growth factor (CTGF) is an important profibrotic factor in kidney diseases. Blockade of endogenous CTGF ameliorates experimental renal damage and inhibits synthesis of extracellular matrix in cultured renal cells. CTGF regulates several cellular responses, including adhesion, migration, proliferation, and synthesis of proinflammatory factors. Here, we investigated whether CTGF participates in the inflammatory process in the kidney by evaluating the nuclear factor-kappa B (NF-κB) pathway, a key signaling system that controls inflammation and immune responses. Systemic administration of CTGF to mice for 24 h induced marked infiltration of inflammatory cells (T lymphocytes and monocytes/macrophages) in the renal interstitium and led to elevated renal NF-κB activity. Administration of CTGF increased renal expression of chemokines (MCP-1 and RANTES) and cytokines (IFN-γ, IL-6, and IL-4) that recruit immune cells and promote inflammation. Treatment with an NF-κB inhibitor, parthenolide, inhibited CTGF-induced renal inflammatory responses, including the up-regulation of chemokines and cytokines. In cultured murine tubuloepithelial cells, CTGF rapidly activated the NF-κB pathway and the cascade of mitogen-activated protein kinases, demonstrating crosstalk between these signaling pathways. CTGF, via mitogen-activated protein kinase and NF-κB activation, increased proinflammatory gene expression. These data show that, in addition to its profibrotic properties, CTGF contributes to the recruitment of inflammatory cells in the kidney by activating the NF-κB pathway.

Connective tissue growth factor (CTGF) is a member of the C-terminal cysteine-rich protein (CCN) family of early response genes.
CTGF is a 38-kD cysteine-rich secreted protein that is up-regulated in proliferative disorders or fibrotic lesions in several human diseases, including skin disorders, atherosclerosis, pulmonary fibrosis, and kidney diseases.1,2 In human biopsies of different renal pathologies and in experimental models of kidney injury, renal CTGF overexpression was correlated with cellular proliferation and extracellular matrix (ECM) accumulation, both in glomerular and interstitial areas.2–4 In the diabetic kidney, elevated CTGF expression co-localizes with sites of epithelial-to-mesenchymal transition (EMT) on the tubular epithelium.5 In cultured renal cells, recombinant CTGF significantly increases ECM production and induces transition of tubuloepithelial cells to myofibroblasts.6–8 In experimental diabetic nephropathy in mice, blockade of endogenous CTGF by antisense oligonucleotides has beneficial effects on the progression of renal damage.9 In cultured renal cells, CTGF blockade inhibits ECM accumulation and EMT caused by angiotensin II and transforming growth factor-β (TGF-β).3,10 These data suggest that CTGF could be an important target for the treatment of renal fibrosis.

CTGF also induces other cellular responses. Depending on the cell type, CTGF regulates cell growth, proliferation, and apoptosis. CTGF is a downstream mediator of TGF-β-induced apoptosis of mesothelial cells,11 but contributes to the survival of hepatic stellate cells.12 CTGF may play a role as a secreted tumor suppressor protein13 or contribute to promoting tumor cell growth and invasion.14 Some studies have suggested that CTGF could also be involved in the inflammatory response.
CTGF is a chemotactic factor for monocytes15 and regulates cellular adhesion and migration in mesangial cells.16 Moreover, in cultured mesangial cells, CTGF enhances the production of proinflammatory factors, including chemotactic molecules, and activates nuclear factor-kappa B (NF-κB).17 However, there are no data on the in vivo effect of CTGF on the renal inflammatory process.

The molecular mechanisms involved in CTGF signaling are far from being understood. CTGF interacts with tyrosine kinase receptors and integrins that activate multiple signaling systems, including the NF-κB and mitogen-activated protein kinase (MAPK) pathways.12,17–19 Although the regulation of the inflammatory response in the kidney is a complex process, the activation of NF-κB plays a pivotal role. Experimental studies have shown that NF-κB blockade by different methods, including I-κB overexpression, NF-κB decoy oligonucleotides, NF-κB inhibitors (parthenolide among others), or, indirectly, statins, glucocorticoids, and antioxidants, prevents renal damage.20–23 Activation of renal NF-κB has been described in human kidney diseases, associated with overexpression of proinflammatory factors.24,25 We have now investigated whether CTGF could modulate the inflammatory response in the kidney and the mechanisms underlying this process, evaluating the involvement of the NF-κB signaling pathway.

19.
20.
Despite optimal immunosuppressive therapy, more than 50% of kidney transplants fail because of chronic allograft dysfunction. A noninvasive means to diagnose chronic allograft dysfunction may allow earlier interventions that could improve graft half-life. In this proof-of-concept study, we used mass spectrometry to analyze differences in the urinary polypeptide patterns of 32 patients with chronic allograft dysfunction (14 with pure interstitial fibrosis and tubular atrophy and 18 with chronic active antibody-mediated rejection) and 18 control subjects (eight stable recipients and 10 healthy control subjects). Unsupervised hierarchical clustering showed good segregation of samples in groups corresponding mainly to the four biomedical conditions. Moreover, the composition of the proteome of the pure interstitial fibrosis and tubular atrophy group differed from that of the chronic active antibody-mediated rejection group, and an independent validation set confirmed these results. The 14 protein ions that best discriminated between these two groups correctly identified 100% of the patients with pure interstitial fibrosis and tubular atrophy and 100% of the patients with chronic active antibody-mediated rejection. 
In summary, this study establishes urinary polypeptide patterns for two histologic lesions associated with distinct graft outcomes and constitutes a first step toward designing a specific, noninvasive diagnostic tool for chronic allograft dysfunction.

During the past three decades, the incidence and prevalence of ESRD have increased each year all over the world.1 Kidney transplantation is the treatment of choice for ESRD because it prolongs survival,2 improves quality of life, and is less costly than dialysis3; however, despite these improvements, a substantial proportion of grafts develop progressive dysfunction and fail within a decade, even with the use of appropriate dosages of immunosuppressive drugs to prevent acute rejection.4 Chronic allograft dysfunction (CAD) causes more than 50% of graft losses.5–7 Although patients can return to dialysis after transplant failure, loss of a functioning graft is associated with a three-fold increase in the risk for death,2,8,9 a substantial decrease in quality of life in survivors, and a four-fold increase in cost.1,3

The decline in function, often associated with hypertension and proteinuria, constitutes a clinical syndrome that has been called chronic allograft nephropathy (CAN).
The histopathologic hallmarks in these patients are chronic interstitial fibrosis, tubular atrophy, vascular occlusive changes, and glomerulosclerosis, usually evaluated by the Banff working classification.10 Major outcomes discussed at the last Banff Conference included the elimination of the nonspecific term CAN and recognition of the entity “chronic active antibody-mediated rejection” (CAAR).11 The rationale for this update was the improper use of “CAN” as a generic term for all causes of chronic renal allograft dysfunction with interstitial fibrosis and tubular atrophy (IF/TA), which hampers accurate diagnosis and appropriate therapy, and the increasing recognition of the role of alloantibody in chronic renal allograft deterioration and the corresponding histologic changes, making the identification of an antibody-mediated component of chronic rejection feasible.11

Effective strategies to prevent the deterioration of renal function should focus on the early detection and treatment of patients who develop CAD. In addition to elevated serum creatinine, usually associated with proteinuria and arterial hypertension, more specific and sensitive markers are needed to identify high-risk patients or initial lesions without any changes in serum creatinine or proteinuria.5,11

New analytic tools that allow rapid screening and accurate protein identification in body fluids are now emerging within the field of proteomic science. High-throughput mass spectrometry (MS) methods allow simultaneous detection of a large number of proteins in a large set of biologic tissues or samples.
Protein fingerprinting MS methods using modern matrix-assisted laser desorption/ionization time-of-flight MS (MALDI-MS) instrumentation can detect hundreds of peak signals that, as a whole, can be considered a reflection of the body's physiologic status.12 To date, MALDI-MS has been successfully used to detect patterns of substantial overexpression of proteins in cancer cells.13–15 Urine seems to be an ideal source of potential biomarkers, and urine proteomic approaches have been used in numerous attempts to define biomarkers for a variety of nephro-urologic disorders.16–18 The aim of this study was to evaluate whether chromatography by solid-phase extraction coupled to MS could differentiate urinary polypeptide patterns in patients with pure IF/TA, patients with CAAR, and two control groups: healthy individuals and stable renal transplant recipients.
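The unsupervised hierarchical clustering described above groups samples by the similarity of their peak-intensity profiles. As a minimal illustration of the idea (not the study's pipeline, which used many more peaks and samples), the sketch below implements single-linkage agglomerative clustering in pure Python on four hypothetical three-peak profiles.

```python
# Illustrative sketch with hypothetical peak intensities, not the study's
# data: single-linkage agglomerative clustering of mass-spectrometry
# profiles, the kind of unsupervised grouping described above.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(samples, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(samples))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(euclidean(samples[a], samples[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Hypothetical 3-peak intensity profiles: two IF/TA-like, two CAAR-like.
profiles = [
    [10.0, 1.0, 0.5],   # sample 0 (IF/TA-like)
    [ 9.5, 1.2, 0.4],   # sample 1 (IF/TA-like)
    [ 1.0, 8.0, 6.0],   # sample 2 (CAAR-like)
    [ 0.8, 8.5, 5.5],   # sample 3 (CAAR-like)
]
groups = [sorted(c) for c in single_linkage(profiles, 2)]
print(sorted(groups))   # expected grouping: [[0, 1], [2, 3]]
```

With well-separated profiles the two lesion types fall into distinct clusters; in practice, the discriminating power comes from selecting the most informative protein ions (here, fourteen in the study) before clustering or classification.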
