Similar Literature (20 results)
1.
Despite the high prevalence of chronic kidney disease (CKD), relatively few individuals with CKD progress to ESRD. A better understanding of the risk factors for progression could improve the classification system of CKD and strategies for screening. We analyzed data from 65,589 adults who participated in the Nord-Trøndelag Health (HUNT 2) Study (1995 to 1997) and found 124 patients who progressed to ESRD after 10.3 yr of follow-up. In multivariable survival analysis, estimated GFR (eGFR) and albuminuria were independently and strongly associated with progression to ESRD: Hazard ratios for eGFR 45 to 59, 30 to 44, and 15 to 29 ml/min per 1.73 m2 were 6.7, 18.8, and 65.7, respectively (P < 0.001 for all), and for micro- and macroalbuminuria were 13.0 and 47.2 (P < 0.001 for both). Hypertension, diabetes, male gender, smoking, depression, obesity, cardiovascular disease, dyslipidemia, physical activity, and education did not add predictive information. Time-dependent receiver operating characteristic analyses showed that considering both the urinary albumin/creatinine ratio and eGFR substantially improved diagnostic accuracy. Referral based on current stages 3 to 4 CKD (eGFR 15 to 59 ml/min per 1.73 m2) would include 4.7% of the general population and identify 69.4% of all individuals progressing to ESRD. Referral based on our classification system would include 1.4% of the general population without losing predictive power (i.e., it would detect 65.6% of all individuals progressing to ESRD). In conclusion, all levels of reduced eGFR should be complemented by quantification of urinary albumin to optimally predict progression to ESRD.

Since the publication of the Kidney Disease Outcomes Quality Initiative (K/DOQI) clinical practice guidelines on the classification of chronic kidney disease in 2002,1 several studies based on this classification system have shown very high prevalence estimates of chronic kidney disease (CKD) in the general population (10 to 13%).2,3 Screening for CKD is therefore increasingly suggested1,4; however, only a small proportion of patients with stage 3 to 4 CKD progress to ESRD.5 There is an ongoing discussion on whether the current CKD criteria are appropriate.6–8 Developing a risk score to better identify the patients who are at increased risk for ESRD would be of major importance for the current efforts to establish clinical guidelines and public health plans for CKD.4,9,10

Several predictors of progression to ESRD have been identified,9 but their independent predictive power has not been well studied either in the general population or in high-risk subgroups. Intuitively, a low estimated GFR (eGFR) is an important risk factor for ESRD, and eGFR is the backbone of the current CKD classification. High urine albumin is a well-established major risk factor for progression.9 Only a few studies have examined the renal risk as a function of the combination of eGFR and albuminuria.11–14 These studies are of restricted value, however, because of exclusion of patients with diabetes14; inclusion of men only12; inclusion of only patients with diabetes13; or absence of information on potentially important risk factors, such as smoking, obesity, dyslipidemia, and cardiovascular disease.11,14

CKD screening beyond patients with known hypertension or diabetes has been proposed,1,4 but such screening programs have remained unsatisfactory because of their limited predictive power. We used the data of the Second Nord-Trøndelag Health Study (HUNT 2), Norway, to improve such prediction.
HUNT 2 is a large population-based study with a high participation rate.15 Our aim was to examine how accurately subsequent progression to ESRD could be predicted by a combined variable of baseline eGFR and urine albumin. We also tested whether further potential renal risk factors provided additional independent prediction.
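
The multivariable survival analysis described above is, in form, a Cox proportional-hazards model with categorical eGFR and albuminuria terms. Below is a minimal, hypothetical sketch of that model form using the Python lifelines library; the toy data, column names, and coding are illustrative assumptions, not the HUNT 2 dataset or the authors' code.

```python
# Hedged sketch: Cox proportional-hazards model of progression to ESRD
# with baseline eGFR stage and albuminuria as predictors (toy data only).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years":     [10.0, 9.5, 8.0, 7.2, 5.1, 6.3, 4.4, 3.0],  # follow-up time
    "esrd":      [0, 0, 1, 0, 1, 0, 1, 1],                   # 1 = progressed to ESRD
    "egfr_lt60": [0, 0, 0, 1, 1, 0, 0, 1],                   # eGFR < 60 vs >= 60 (reference)
    "macroalb":  [0, 0, 0, 0, 0, 1, 1, 1],                   # macroalbuminuria vs normo (reference)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="esrd")
cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio
```

In the actual study, the eGFR term would enter as three stage dummies (45 to 59, 30 to 44, and 15 to 29 ml/min per 1.73 m2) against a reference of 60 or above, giving the stage-specific hazard ratios quoted in the abstract.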

2.
Chronic kidney disease (CKD) guidelines recommend evaluating patients with GFR <60 ml/min per 1.73 m2 for complications, but little evidence supports the use of a single GFR threshold for all metabolic disorders. We used data from the NephroTest cohort, including 1038 adult patients who had stages 2 through 5 CKD and were not on dialysis, to study the occurrence of metabolic complications. GFR was measured using renal clearance of 51Cr-EDTA (mGFR) and estimated using two equations derived from the Modification of Diet in Renal Disease study. As mGFR decreased from 60–90 to <20 ml/min per 1.73 m2, the prevalence of hyperparathyroidism increased from 17 to 85%, anemia from 8 to 41%, hyperphosphatemia from 1 to 30%, metabolic acidosis from 2 to 39%, and hyperkalemia from 2 to 42%. Factors most strongly associated with metabolic complications, independent of mGFR, were younger age for acidosis and hyperphosphatemia, presence of diabetes for acidosis, diabetic kidney disease for anemia, and both male gender and the use of inhibitors of the renin-angiotensin system for hyperkalemia. mGFR thresholds for detecting complications with 90% sensitivity were 50, 44, 40, 39, and 37 ml/min per 1.73 m2 for hyperparathyroidism, anemia, acidosis, hyperkalemia, and hyperphosphatemia, respectively. Analysis using estimated GFR produced similar results. In summary, this study describes the onset of CKD-related complications at different levels of GFR; anemia and hyperparathyroidism occur earlier than acidosis, hyperkalemia, and hyperphosphatemia.

Since the National Kidney Foundation published its definition and classification of chronic kidney disease (CKD),1 evidence has accumulated showing that it is a common disease,2,3 associated with morbidity and mortality risks far broader and higher than those of simple progression to kidney failure.4–6 Early detection of CKD and its metabolic complications is now a priority for delaying disease progression and for primary prevention of many CKD-associated chronic diseases, including cardiovascular, mineral, and bone diseases5,7–9; however, data on the natural history of these complications according to reference methods are sparse, and there is little evidence about the most appropriate timing for their detection.

CKD metabolic complications, which include anemia, metabolic acidosis, and mineral and electrolyte disorders, may be asymptomatic for a long time.10–21 According to Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines,1 all patients at stage 3 CKD or above (i.e., those with a GFR <60 ml/min per 1.73 m2) should be evaluated for all complications. This threshold, however, was defined from clinical and population-based studies, all of which used equation-estimated GFR (eGFR),1 a method sensitive to both the choice of equation and serum creatinine (Scr) calibration, particularly for the highest GFR values.22,23 Population-based studies, with one exception,24 have also lacked the power to search for complication-specific GFR thresholds below 60 ml/min per 1.73 m2.
Moreover, although a few studies showed the influence of some patient characteristics, such as ethnic origin and diabetes, on the prevalence of various complications,24–29 neither their potential impact nor the effect of clinical factors on metabolic disorders has been investigated systematically.

Our primary purpose, therefore, was to define GFR thresholds, measured with a reference method (mGFR: 51Cr-EDTA renal clearance), and factors associated with CKD-related metabolic complications in a clinical cohort of 1038 patients with stages 2 through 5 CKD. Because mGFR is rarely performed in clinical practice, we also estimated these thresholds with eGFR and studied how the results differed according to method.
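
A complication-specific threshold achieving 90% sensitivity, as reported above, can be read off as a percentile of the case distribution. The sketch below illustrates the idea with invented numbers; it is not the NephroTest analysis or its exact estimator.

```python
# Sketch: the smallest mGFR cutoff that still captures 90% of patients
# *with* a given complication (rule: "evaluate everyone with mGFR <= cutoff")
# is the 90th percentile of the case distribution. Toy values only.
import numpy as np

# mGFR (ml/min per 1.73 m^2) for patients with, e.g., hyperparathyroidism
mgfr_cases = np.array([14, 19, 23, 28, 31, 35, 38, 42, 44, 49.0])

cutoff = np.quantile(mgfr_cases, 0.90)
sensitivity = np.mean(mgfr_cases <= cutoff)
print(f"cutoff = {cutoff:.1f}, sensitivity = {sensitivity:.0%}")
```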

3.
Primary vesicoureteral reflux (pVUR) is one of the most common causes of pediatric kidney failure. Linkage scans suggest that pVUR is genetically heterogeneous, with two loci on chromosomes 1p13 and 2q37 under autosomal dominant inheritance. Absence of pVUR in parents of affected individuals raises the possibility of a recessive contribution to pVUR. We performed a genome-wide linkage scan in 12 large families segregating pVUR, comprising 72 affected individuals. To avoid potential misspecification of the trait locus, we performed a parametric linkage analysis using both dominant and recessive models. Analysis under the dominant model yielded no signals across the entire genome. In contrast, we identified a unique linkage peak under the recessive model on chromosome 12p11-q13 (D12S1048), which we confirmed by fine mapping. This interval achieved a peak heterogeneity LOD score of 3.6 with 60% of families linked. This heterogeneity LOD score improved to 4.5 with exclusion of two high-density pedigrees that failed to link across the entire genome. The linkage signal on chromosome 12p11-q13 originated from pedigrees of varying ethnicity, suggesting that recessive inheritance of a high-frequency risk allele occurs in pVUR kindreds from many different populations. In conclusion, this study identifies a major new locus for pVUR and suggests that, in addition to genetic heterogeneity, recessive contributions should be considered in all pVUR genome scans.

Vesicoureteral reflux (VUR; OMIM no. 193000) is the retrograde flow of urine from the bladder to the ureters and the kidneys during micturition. Uncorrected, VUR can lead to repeated urinary tract infections, renal scarring, and reflux nephropathy, accounting for up to 25% of pediatric end-stage renal disease.1,2 VUR is commonly seen as an isolated disorder (primary VUR; pVUR), but it can also present in association with complex congenital abnormalities of the kidney and urinary tract or with specific syndromic disorders, such as renal-coloboma and branchio-oto-renal syndromes.3–8

pVUR has a strong hereditary component, with monozygotic twin concordance rates of 80%.9–12 Sibling recurrence rates of 30% to 65% have suggested segregation of a single gene or oligogenes with large effects.9,12–14 Interestingly, however, the three published genome-wide linkage scans of pVUR have strongly suggested multifactorial determination.15–17 Two pVUR loci have been identified with genome-wide significance on chromosomes 1p13 and 2q37 under an autosomal dominant transmission with locus heterogeneity.15,16 Multiple suggestive signals have also been reported, but remarkably, these studies show little overlap.15–17 These data suggest that pVUR may be extremely heterogeneous, with mutations in different genes each accounting for a fraction of cases. The genes underlying pVUR loci have not yet been identified, but two recent studies have reported segregating mutations in the ROBO2 gene in up to 5% of pVUR families.18,19

Despite evidence for genetic heterogeneity and different subtypes of disease, genetic studies have all modeled pVUR as an autosomal dominant trait.15–17,20 Recessive inheritance has generally not been considered because the absence of affected parents can be explained by spontaneous resolution of pVUR with older age.
However, many pVUR cohorts are composed of affected sibships or pedigrees compatible with autosomal recessive transmission, suggesting the potential for alternative modes of inheritance.9–12,16,17,20–22 Systematic family screening to clarify the mode of inheritance is not feasible for pVUR because the standard diagnostic tool, the voiding cystourethrogram (VCUG), is invasive and would expose participants to radiation. Formal assessment of a recessive contribution in sporadic pVUR has also been difficult because studies have been conducted in populations with low consanguinity rates.9–12,16,17,20–22 However, recent studies have identified an unexpected recessive contribution to several complex traits, such as patent ductus arteriosus or autism.23,24 Thus, in addition to genetic heterogeneity, genes with alternative modes of transmission may segregate among pVUR families, and misspecification of the inheritance model may complicate mapping studies of this trait.

Several approaches can be considered to address the difficulties imposed by complex inheritance, variable penetrance, and genetic heterogeneity. Studying large, well-characterized cohorts with newer single-nucleotide polymorphism (SNP)-based technologies can maximize inheritance information across the genome and increase the power of linkage studies.25 In addition, in the setting of locus heterogeneity and uncertainty about the mode of transmission, analysis under a dominant and a recessive model has greater power compared with nonparametric methods and more often results in detection of the correct mode of transmission without incurring a significant penalty for multiple testing.26–29 We combined these approaches in this study and successfully localized a major gene for VUR, which unexpectedly demonstrates autosomal recessive transmission.
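
For readers unfamiliar with the statistic quoted above: under Smith's admixture model (standard linkage-analysis background, not spelled out in the abstract itself), the heterogeneity LOD score maximizes over both the recombination fraction θ and the proportion α of linked families:

$$\mathrm{HLOD} = \max_{\alpha,\;\theta}\; \sum_{i=1}^{n} \log_{10}\!\left[\,\alpha\,\frac{L_i(\theta)}{L_i(1/2)} + (1-\alpha)\,\right],$$

where $L_i(\theta)$ is the likelihood of pedigree $i$ at recombination fraction $\theta$. The reported peak HLOD of 3.6 with 60% of families linked corresponds to $\hat{\alpha}\approx 0.6$.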

4.
People with ESRD are at increased risk for cancer, but it is uncertain when this increased risk begins in the spectrum of chronic kidney disease (CKD). The aim of our study was to determine whether moderate CKD increases the risk for cancer among older people. We linked the Blue Mountains Eye Study, a prospective population-based cohort study of 3654 residents aged 49 to 97 yr, and the New South Wales Cancer Registry. During a mean follow-up of 10.1 yr, 711 (19.5%) cancers occurred in 3654 participants. Men but not women with at least stage 3 CKD had a significantly increased risk for cancer (test of interaction for gender P = 0.004). For men, the excess risk began at an estimated GFR (eGFR) of 55 ml/min per 1.73 m2 (adjusted hazard ratio [HR] 1.39; 95% confidence interval [CI] 1.00 to 1.92) and increased linearly as GFR declined. For every 10-ml/min decrement in eGFR, the risk for cancer increased by 29% (adjusted HR 1.29; 95% CI 1.10 to 1.53), with the greatest risk at an eGFR <40 ml/min per 1.73 m2 (adjusted HR 3.01; 95% CI 1.72 to 5.27). The risk for lung and urinary tract cancers but not prostate was higher among men with CKD. In conclusion, moderate CKD (stage 3) may be an independent risk factor for the development of cancer among older men but not women, and the effect of CKD on risk may vary for different types of cancer.

Chronic kidney disease (CKD) is common in older people. Among those aged ≥50 yr, the prevalence of moderate (stage 3) CKD or worse, defined as estimated GFR (eGFR) <60 ml/min per 1.73 m2, is >20% in the United States and Australia.1,2 CKD is associated with significant morbidity and premature death. Cardiovascular complications and deaths are increased in the CKD population independent of traditional risk factors such as diabetes, hypertension, and dyslipidemia.3–5 Increased cancer risk is also well defined in the end-stage kidney disease (ESKD) and kidney transplant populations.6–8 The overall cancer incidence after transplantation is approximately three-fold greater than in the general population.

Observational studies have suggested an increased cancer risk in people with early-stage CKD, before requiring dialysis or transplantation.9,10 An excess risk of 1.2 times for all cancers was reported during the 5 yr before renal replacement therapy in a population-based cohort study of dialysis and transplant patients, but inclusion was limited to those who progressed to ESKD, and comorbidity data were limited.6 Recently, an association between elevated albumin-to-creatinine ratio and cancer incidence was reported in a longitudinal population-based study of older individuals.11 Previous studies have not evaluated the threshold of CKD that is associated with an increased risk for cancer, adjusted for measurement error in estimating the severity of CKD, or determined the independent effect of CKD after accounting for known risk factors for cancer. The aim of our study was to estimate the independent effect of mild to moderately reduced kidney function on the risk for incident cancers among older people and to identify the threshold at which any excess risk begins.
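
A hazard ratio quoted per 10-unit decrement implies a log-linear dose-response in the underlying model, so (under that assumption, which the abstract's "increased linearly" wording supports) the ratio compounds multiplicatively with the size of the decrement:

$$\mathrm{HR}(\Delta) = 1.29^{\Delta/10}, \qquad \text{e.g.}\quad \mathrm{HR}(20\ \text{ml/min}) = 1.29^{2} \approx 1.66.$$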

5.
Late referral of patients with chronic kidney disease is associated with increased morbidity and mortality, but the contribution of center-to-center and geographic variability of pre-ESRD nephrology care to mortality of patients with ESRD is unknown. We evaluated the pre-ESRD care of >30,000 incident hemodialysis patients, 5088 (17.8%) of whom died during follow-up (median 365 d). Approximately half (51.3%) of incident patients had received at least 6 mo of pre-ESRD nephrology care, as reported by attending physicians. Pre-ESRD nephrology care was independently associated with survival (odds ratio 1.54; 95% confidence interval 1.45 to 1.64). There was substantial center-to-center variability in pre-ESRD care, which was associated with increased facility-specific death rates. As the proportion of patients in a treatment center receiving pre-ESRD nephrology care increased from lowest to highest quintile, the mortality rate decreased from 19.6 to 16.1% (P = 0.0031). In addition, treatment centers in the lowest quintile of pre-ESRD care were clustered geographically. In conclusion, pre-ESRD nephrology care is highly variable among treatment centers and geographic regions. Targeting these disparities could have substantial clinical impact, because the absence of ≥6 mo of pre-ESRD care by a nephrologist is associated with a higher risk for death.

Nephrology care before starting hemodialysis (HD) is an important determinant of health status of patients with ESRD,1,2 and its absence is associated with hypoalbuminemia,3 anemia,4 absence of a functioning arteriovenous vascular access,5 reduced quality of life,6 and decreased kidney transplantation.7 Delayed care is associated with progression of kidney disease8,9 and increased mortality after start of HD.10–13 Early nephrology referral for individuals with chronic kidney disease (CKD) is recommended14,15 for creation of an arteriovenous fistula (AVF) 6 mo before the anticipated start of HD.16

Despite these guidelines, incident patients with ESRD frequently present without antecedent nephrology care.17 Differences between treatment centers and geographic areas, similar to variations reported for the care of prevalent patients with ESRD, are possible factors that might contribute to variable pre-ESRD care.17–19 If clinically relevant center-to-center and geographic variations in pre-ESRD care exist, then interventions might be designed to reduce the risk for delayed or absent care. This report describes the variable prevalence of delayed pre-ESRD nephrology care in a large population-based sample of incident patients with ESRD and its clinical consequences for both individual patients and their treatment center populations.
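
The facility-level quintile analysis described above has a simple mechanical form: rank centers by the share of their incident patients with ≥6 mo of pre-ESRD care, bin the centers into fifths, and compare death rates across bins. A hedged pandas sketch with an invented toy table (not the study data) follows.

```python
# Sketch: split treatment centers into quintiles of pre-ESRD care and
# compare facility death rates across quintiles. Toy values only.
import pandas as pd

centers = pd.DataFrame({
    "center_id":    range(10),
    "pct_pre_esrd": [0.21, 0.35, 0.40, 0.48, 0.52, 0.55, 0.61, 0.66, 0.72, 0.80],
    "death_rate":   [0.21, 0.20, 0.19, 0.18, 0.18, 0.17, 0.17, 0.16, 0.16, 0.15],
})

centers["quintile"] = pd.qcut(centers["pct_pre_esrd"], 5, labels=[1, 2, 3, 4, 5])
print(centers.groupby("quintile", observed=True)["death_rate"].mean())
```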

6.

Background:

The relationship between cardiovascular disease (CVD) risk factors and dietary intake is unknown among individuals with spinal cord injury (SCI).

Objective:

To investigate the relationship between consumption of selected food groups (dairy, whole grains, fruits, vegetables, and meat) and CVD risk factors in individuals with chronic SCI.

Methods:

A cross-sectional substudy of individuals with SCI to assess CVD risk factors and dietary intake in comparison with age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Dietary history, blood pressure, waist circumference (WC), fasting blood glucose, high-sensitivity C-reactive protein (hs-CRP), lipids, glucose, and insulin data were collected from 100 SCI participants who were 38 to 55 years old with SCI >1 year and compared to 100 matched control participants from the CARDIA study.

Results:

Statistically significant differences between SCI and CARDIA participants were identified in WC (39.2 vs 36.2 in.; P < .001) and high-density lipoprotein cholesterol (HDL-C; 39.2 vs 47.5 mg/dL; P < .001). Blood pressure, total cholesterol, triglycerides, glucose, insulin, and hs-CRP were similar between SCI and CARDIA participants. No significant relation between CVD risk factors and selected food groups was seen in the SCI participants.

Conclusion:

SCI participants had adverse WC and HDL-C compared to controls. This study did not identify a relationship between consumption of selected food groups and CVD risk factors.

Key words: cardiovascular disease risk factors, dietary intake, spinal cord injury

Cardiovascular disease (CVD) is a leading cause of death in individuals with chronic spinal cord injuries (SCIs).1–5 This is partly because SCI is associated with several metabolic CVD risk factors, including dyslipidemia,6–10 glucose intolerance,6,11–14 and diabetes.15–17 In addition, persons with SCI exhibit elevated markers of inflammation18,19 and endothelial activation20 that are correlated with higher CVD prevalence.21–23 Obesity, and specifically central obesity, another CVD risk factor,24–26 is also common in this population.12,27–29

Dietary patterns with higher amounts of whole grains and fiber have been shown to improve lipid abnormalities,30 glucose intolerance, diabetes mellitus,31–34 hypertension,35 and markers of inflammation36 in the general population. These dietary patterns are also associated with lower levels of adiposity.31 Ludwig et al reported that the strong inverse associations between dietary fiber and multiple CVD risk factors – excessive weight gain, central adiposity, elevated blood pressure, hypertriglyceridemia, low high-density lipoprotein cholesterol (HDL-C), high low-density lipoprotein cholesterol (LDL-C), and high fibrinogen – were mediated, at least in part, by insulin levels.37 Whole-grain food intake is also inversely associated with fasting insulin, insulin resistance, and the development of type 2 diabetes.32,38,39

Studies in the general population have also shown a positive association between the development of metabolic syndrome as well as heart disease and consumption of a Western diet, a diet characterized by high intake of processed and red meat and low intake of fruit, vegetables, whole grains, and dairy.40,41 Red meat, which is high in saturated fat, has been shown to have an association with adverse levels of cholesterol and blood pressure and the development of obesity, metabolic syndrome, and diabetes.40,42,43

Numerous studies have shown that individuals with chronic SCI have poor diet quality.44–49 A Canadian study found that only 26.7% of its sample was adherent to the recommendations about the consumption of fruit, vegetables, and grains from the “Eating Well with Canada’s Food Guide.”44 Individuals with chronic SCI have also been found to have low fiber and high fat intakes when their diets were compared to dietary recommendations from the National Cholesterol Education Program,46 the 2000 Dietary Guidelines for Americans,49 and the recommended Dietary Reference Intakes and the Acceptable Macronutrient Distribution Range.47,48

However, unlike in the general population, the relationship between dietary intake and obesity and CVD risk factors is unknown in the chronic SCI population. If a dietary pattern consisting of higher intake of whole grains and dietary fiber is favorably associated with obesity and CVD risk factors in individuals with chronic SCI, then trials of increased intake of whole grains and fiber could be conducted to document health benefits and inform recommendations. The purpose of this pilot study is to investigate the association between selected food group intake and CVD risk factors in individuals with chronic SCI as compared to age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study.
Data will also be used to plan future studies in the relatively understudied field of CVD and nutrition in individuals with SCI.
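
The abstract does not name its statistical test, but for matched pairs like these (one SCI participant per age-, gender-, and race-matched CARDIA control), a paired comparison is one natural choice. A hedged sketch with invented HDL-C values follows; it illustrates the design, not the authors' analysis.

```python
# Sketch: paired t-test on a CVD risk factor (HDL-C) between SCI participants
# and their matched controls. Arrays are toy data; pairs share an index.
import numpy as np
from scipy import stats

hdl_sci     = np.array([38.0, 41.5, 35.2, 44.0, 39.8])
hdl_control = np.array([46.1, 49.0, 44.7, 51.3, 45.9])

t, p = stats.ttest_rel(hdl_sci, hdl_control)
print(f"mean difference = {np.mean(hdl_sci - hdl_control):.1f} mg/dL, p = {p:.3f}")
```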

7.
Administration of activated protein C (APC) protects from renal dysfunction, but the underlying mechanism is unknown. APC exerts both antithrombotic and cytoprotective properties, the latter via modulation of protease-activated receptor-1 (PAR-1) signaling. We generated APC variants to study the relative importance of the two functions of APC in a model of LPS-induced renal microvascular dysfunction. Compared with wild-type APC, the K193E variant exhibited impaired anticoagulant activity but retained the ability to mediate PAR-1-dependent signaling. In contrast, the L8W variant retained anticoagulant activity but lost its ability to modulate PAR-1. By administering wild-type APC or these mutants in a rat model of LPS-induced injury, we found that the PAR-1 agonism, but not the anticoagulant function of APC, reversed LPS-induced systemic hypotension. In contrast, both functions of APC played a role in reversing LPS-induced decreases in renal blood flow and volume, although the effects on PAR-1-dependent signaling were more potent. Regarding potential mechanisms for these findings, APC-mediated PAR-1 agonism suppressed LPS-induced increases in the vasoactive peptide adrenomedullin and infiltration of iNOS-positive leukocytes into renal tissue. However, the anticoagulant function of APC was responsible for suppressing LPS-induced stimulation of the proinflammatory mediators ACE-1, IL-6, and IL-18, perhaps accounting for its ability to modulate renal hemodynamics. Both variants reduced active caspase-3 and abrogated LPS-induced renal dysfunction and pathology. We conclude that although PAR-1 agonism is solely responsible for APC-mediated improvement in systemic hemodynamics, both functions of APC play distinct roles in attenuating the response to injury in the kidney.

Acute kidney injury (AKI) leading to renal failure is a devastating disorder,1 with a prevalence varying from 30 to 50% in the intensive care unit.2 AKI during sepsis results in significant morbidity and is an independent risk factor for mortality.3,4 In patients with severe sepsis or shock, the reported incidence ranges from 23 to 51%,5–7 with mortality as high as 70%, versus 45% among patients with AKI alone.1,8

The pathogenesis of AKI during sepsis involves hemodynamic alterations along with microvascular impairment.4 Although many factors change during sepsis, suppression of the plasma serine protease protein C (PC) has been shown to be predictive of early death in sepsis models,9 and clinically has been associated with early death resulting from refractory shock and multiple organ failure in severe sepsis.10 Moreover, low levels of PC have been highly associated with renal dysfunction and pathology in models of AKI.11 During vascular insult, PC becomes activated by the endothelial thrombin-thrombomodulin complex, and the activated protein C (APC) exhibits both antithrombotic and cytoprotective properties.
We have previously demonstrated that APC administration protects from renal dysfunction during cecal ligation and puncture and after endotoxin challenge.11,12 In addition, recombinant human APC [drotrecogin alfa (activated)] has been shown to reduce mortality in patients with severe sepsis at high risk of death.13 Although the ability of APC to protect from organ injury in vivo is well documented,11,14,15 the precise mechanism mediating the response has not been ascertained.

APC exerts anticoagulant properties via feedback inhibition of thrombin by cleavage of factors Va and VIIIa.16 However, APC bound to the endothelial protein C receptor (EPCR) can also exhibit direct potent cytoprotective properties by cleaving protease-activated receptor-1 (PAR-1).17 Various cell culture studies have demonstrated that the direct modulation of PAR-1 by APC results in cytoprotection by several mechanisms, including suppression of apoptosis,18,19 leukocyte adhesion,19,20 inflammatory activation,21 and suppression of endothelial barrier disruption.22,23 In vivo, the importance of the antithrombotic activity of APC is well established in model systems24,25 and in humans.26 However, the importance of PAR-1-mediated effects of APC also has been clearly defined in protection from ischemic brain injury27 and in sepsis models.28 Hence, there has been significant debate whether the in vivo efficacy of APC is attributed primarily to its anticoagulant (inhibition of thrombin generation) or cytoprotective (PAR-1-mediated) properties.17,29

The same active site of APC is responsible for inhibition of thrombin generation by the cleavage of factor Va and for PAR-1 agonism. Therefore, we sought to generate point mutations that would not affect catalytic activity but would alter substrate recognition to distinguish the two functions. Using these variants, we examined the relative role of the two known functions of APC in a model of LPS-induced renal microvascular dysfunction.

8.

Background:

Functional electrical stimulation (FES) therapy has been shown to be one of the most promising approaches for improving voluntary grasping function in individuals with subacute cervical spinal cord injury (SCI).

Objective:

To determine the effectiveness of FES therapy, as compared to conventional occupational therapy (COT), in improving voluntary hand function in individuals with chronic (≥24 months post injury), incomplete (American Spinal Injury Association Impairment Scale [AIS] B-D), C4 to C7 SCI.

Methods:

Eight participants were randomized to the intervention group (FES therapy; n = 5) or the control group (COT; n = 3). Both groups received 39 hours of therapy over 13 to 16 weeks. The primary outcome measure was the Toronto Rehabilitation Institute-Hand Function Test (TRI-HFT), and the secondary outcome measures were Graded Redefined Assessment of Strength Sensibility and Prehension (GRASSP), Functional Independence Measure (FIM) self-care subscore, and Spinal Cord Independence Measure (SCIM) self-care subscore. Outcome assessments were performed at baseline, after 39 sessions of therapy, and at 6 months following the baseline assessment.

Results:

After 39 sessions of therapy, the intervention group improved by 5.8 points on the TRI-HFT’s Object Manipulation Task, whereas the control group changed by only 1.17 points. Similarly, after 39 sessions of therapy, the intervention group improved by 4.6 points on the FIM self-care subscore, whereas the control group did not change at all.

Conclusion:

The results of the pilot data justify a clinical trial comparing FES therapy with COT alone for improving voluntary hand function in individuals with chronic incomplete tetraplegia.

Key words: chronic patients, functional electrical stimulation, grasping, therapy, upper limb

In the United States and Canada, there is a steady rate of incidence and an increasing rate of prevalence of individuals living with spinal cord injury (SCI). For individuals with tetraplegia, hand function is essential for achieving a high level of independence in activities of daily living.1–5 For the majority of individuals with tetraplegia, the recovery of hand function has been rated as their highest priority.5

Traditionally, functional electrical stimulation (FES) has been used as a permanent neuroprosthesis to achieve this goal.6–14 More recently, researchers have worked toward development of surface FES technologies that are meant to be used as short-term therapies rather than permanent prostheses. This therapy is frequently called FES therapy or FET. Most of the studies published to date, where FES therapy was used to help improve upper limb function, have been done in the subacute and chronic stroke populations,15–23 and 2 have been done in the subacute SCI population.1–3 With respect to the chronic SCI population, there are no studies to date that have looked at the use of FES therapy for retraining upper limb function. In a review by Kloosterman et al,24 the authors discussed studies that used various combinations of therapies for improving upper extremity function in individuals with chronic SCI; however, the authors found that the only study that showed significant improvements before and after was the study published by Needham-Shropshire et al.25 That study examined the effectiveness of neuromuscular stimulation (NMS)-assisted arm ergometry for strengthening triceps brachii; electrical stimulation was used to facilitate arm ergometry, not in the context of retraining reaching, grasping, and/or object manipulation.

Since 2002, our team has been investigating whether FES therapy has the capacity to improve voluntary hand function in complete and incomplete subacute cervical SCI patients who are less than 180 days post injury at the time of recruitment in the study.1–3 In randomized controlled trials (RCTs) conducted by our team, we found that FES therapy is able to restore voluntary reaching and grasping functions in individuals with subacute C4 to C7 incomplete SCI.1–3 The changes observed were transformational; individuals who were unable to grasp at all were able to do so after only 40 one-hour sessions of the FES therapy, whereas the control group showed significantly less improvement. Inspired by these results, we decided to conduct a pilot RCT with chronic (≥24 months following injury) C4 to C7 SCI patients (American Spinal Injury Association Impairment Scale [AIS] B-D), which is presented in this article. The purpose of this pilot study was to determine whether the FES therapy is able to restore voluntary hand function in chronic tetraplegic individuals.
Based on the results of our prior phase I1 and phase II2,3 RCTs in the subacute SCI population, we hypothesized that individuals with chronic tetraplegia who underwent the FES therapy (intervention group) would have greater improvements in voluntary hand function, especially in their ability to grasp and manipulate objects and perform activities of daily living, when compared to individuals who received a similar volume and duration of conventional occupational therapy (COT; control group).
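
With groups this small (n = 5 vs n = 3), a rank-based comparison of change scores is one defensible analysis choice; the abstract does not say which test was used. The sketch below is a hypothetical illustration with invented change scores, not the trial's statistics.

```python
# Sketch: compare TRI-HFT change scores between tiny pilot groups with a
# Mann-Whitney U test (too few participants for normality assumptions).
import numpy as np
from scipy import stats

change_fes = np.array([7.0, 5.5, 4.0, 6.5, 6.0])  # toy intervention-group changes
change_cot = np.array([1.0, 2.0, 0.5])             # toy control-group changes

u, p = stats.mannwhitneyu(change_fes, change_cot, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```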

9.
Reduced serum levels of the calcification inhibitor fetuin-A associate with increased cardiovascular mortality in dialysis patients. Fetuin-A-deficient mice display calcification of various tissues but, notably, not of the vasculature. This absence of vascular calcification may result from the protection of an intact endothelium, which becomes severely compromised in the setting of atherosclerosis. To test this hypothesis, we generated fetuin-A/apolipoprotein E (ApoE)-deficient mice and compared them with ApoE-deficient and wild-type mice with regard to atheroma formation and extraosseous calcification. We assigned mice to three treatment groups for 9 wk: (1) standard diet, (2) high-phosphate diet, or (3) unilateral nephrectomy (causing chronic kidney disease [CKD]) plus high-phosphate diet. Serum urea, phosphate, and parathyroid hormone levels were similar in all genotypes after the interventions. Fetuin-A deficiency did not affect the extent of aortic lipid deposition, neointima formation, and coronary sclerosis observed with ApoE deficiency, but the combination of fetuin-A deficiency, hyperphosphatemia, and CKD led to a 15-fold increase in vascular calcification in this model of atherosclerosis. Fetuin-A deficiency almost exclusively promoted intimal rather than medial calcification of atheromatous lesions. High-phosphate diet and CKD also led to an increase in valvular calcification and aorta-associated apoptosis, with wild-type mice having the least, ApoE-deficient mice intermediate, and fetuin-A/ApoE-deficient mice the most. In addition, the combination of fetuin-A deficiency, high-phosphate diet, and CKD in ApoE-deficient mice greatly enhanced myocardial calcification, whereas the absence of fetuin-A did not affect the incidence of renal calcification. In conclusion, fetuin-A inhibits pathologic calcification in both the soft tissue and vasculature, even in the setting of atherosclerosis.

Hemodialysis (HD) patients experience a cardiovascular mortality of up to 20% per year, and vascular calcification is a strong independent risk factor of cardiovascular death.1,2 Pathologic calcification is driven both by an elevated serum calcium phosphate product and by differentiation of vascular or mesenchymal cells into osteoblast-like cells that become mineralization competent.

Serum is a metastable solution with respect to calcium phosphate precipitation.
Once started, calcification proceeds rapidly in the presence of calcifiable templates such as collagen, elastin, and cell debris.3–5 Fetuin-A accounts for approximately 50% of the capacity of serum to inhibit spontaneous apatite formation from solutions supersaturated in calcium and phosphate.6 The inhibition is achieved by rapid formation of soluble colloidal fetuin-A calcium phosphate complexes, termed calciprotein particles (CPPs).7–9

We previously showed that HD and calciphylaxis patients have depressed fetuin-A serum levels accompanied by a reduced capacity of their serum to inhibit calcium phosphate precipitation.5 In cross-sectional studies in HD patients, fetuin-A deficiency was identified as an inflammation-related predictor of cardiovascular and all-cause mortality, respectively.10,11 In patients without chronic kidney disease (CKD), fetuin-A levels correlated inversely with advanced coronary calcification.12,13 Fetuin-A-deficient (Ahsg−/−) mice maintained on the DBA/2 background exhibit a fully penetrant phenotype with extensive soft tissue calcification, whereas C57BL/6 Ahsg−/− mice represent “borderline calcifying” mice in which rapid calcification can be induced by additional metabolic challenges or induction of CKD.5,14

Calcification of the aorta or larger vessels is conspicuously absent in Ahsg−/− mice; therefore, the role for fetuin-A as an inhibitor of vascular calcification was uncertain.15,16 Absent vascular calcification in Ahsg−/− mice may be related to the protective mechanisms of an intact endothelium, which is severely compromised in humans with atherosclerosis and thus may serve as a nidus for subsequent calcification. To test this hypothesis, we created fetuin-A/apolipoprotein E double-deficient (Ahsg−/−/ApoE−/−) mice maintained on the C57BL/6 genetic background to dissect the contributions of fetuin-A deficiency, CKD, and an elevated calcium-phosphorus product (Ca × P) to atheroma formation and vascular calcification in an established murine model of atherosclerosis.

10.

Background:

The high prevalence of pain and depression in persons with spinal cord injury (SCI) is well known. However, the link between pain intensity, interference, and depression, particularly in the acute period of injury, has not received sufficient attention in the literature.

Objective:

To investigate the relationship of depression, pain intensity, and pain interference in individuals undergoing acute inpatient rehabilitation for traumatic SCI.

Methods:

Participants completed a survey that included measures of depression (PHQ-9), pain intensity (“right now”), and pain interference (Brief Pain Inventory: general activity, mood, mobility, relations with others, sleep, and enjoyment of life). Demographic and injury characteristics and information about current use of antidepressants and pre-injury binge drinking also were collected. Hierarchical multiple regression was used to test depression models in 3 steps: (1) age, gender, days since injury, injury level, antidepressant use, and pre-injury binge drinking (controlling variables); (2) pain intensity; and (3) pain interference (each tested separately).

Results:

With one exception, pain interference was the only statistically significant independent variable in each of the final models. Although pain intensity accounted for only 0.2% to 1.2% of the depression variance, pain interference accounted for 13% to 26% of the variance in depression.

Conclusion:

Our results suggest that pain intensity alone is insufficient for understanding the relationship of pain and depression in acute SCI. Instead, the ways in which pain interferes with daily life appear to have a much greater bearing on depression than pain intensity alone in the acute setting.

Key words: depression, pain, spinal cord injuries

The high incidence and prevalence of pain following spinal cord injury (SCI) is well established1–6 and associated with numerous poor health outcomes and low quality of life (QOL).1,7,8 Although much of the literature on pain in SCI focuses on pain intensity, there is emerging interest in the role of pain interference, or the extent to which pain interferes with daily activities of life.7,9 With prevalence as high as 77% in SCI, pain interference impacts life activities such as exercise, sleep, work, and household chores.2,7,10–13 Pain interference also has been associated with disease management self-efficacy in SCI.14 There is a significant relationship between pain intensity and interference in persons with SCI.7

Like pain, the high prevalence of depression after SCI is well established.15–17 Depression and pain often co-occur,18,19 and their overlap ranges from 30% to 60%.19 Pain is also associated with greater duration of depressed mood.20 Pain and depression share common biological pathways and neurotransmitter mechanisms,19 and pain has been shown to attenuate the response to depression treatment.21,22

Despite the interest in pain and depression after SCI and implications for the treatment of depression, their co-occurrence has received far less attention in the literature.23 Greater pain has been associated with higher levels of depression in persons with SCI,16,24 although this is not a consistent finding.25 Similarly, depression in persons with SCI who also have pain appears to be worse than for persons with non-SCI pain, suggesting that the link between pain and depression may be more intense in the context of SCI.26 In one of the few studies of pain intensity and depression in an acute SCI rehabilitation setting, Cairns et al27 found a co-occurrence of pain and depression in 22% to 35% of patients. This work also suggested an evolution of the relationship between pain and depression over the course of the inpatient stay, such that they become associated by discharge. Craig et al28 found that pain levels at discharge from acute rehabilitation predicted depression at 2-year follow-up. Pain interference also has been associated with emotional functioning and QOL in persons with SCI1,7,29,30 and appears to mediate the relationship between ambulation and depression.31

Studies of pain and depression in persons with SCI are often too methodologically limited to examine the independent contributions of pain intensity and interference to depression in an acute setting. For example, they include only pain intensity16,23,25,28,30; classify subjects by either pain plus depression23 or pain versus no pain8,28,30; use pain intensity and interference as predictor and outcome, respectively1; collapse pain interference domains into a single score1; or use only univariate tests (eg, correlations).7,8,25,30 In addition, the vast majority focus on the chronic period of injury. To fill this gap in knowledge, we examined the independent contributions of pain intensity and pain interference to depression, while accounting for injury and demographic characteristics, antidepressant treatment, and pre-injury binge drinking, in a sample of persons with acute SCI.
We hypothesized that when accounting for both pain intensity and interference in the model, interference would have an independent and significant relationship with depression, above and beyond pain intensity.
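
The 3-step hierarchical regression described in the Methods (controls, then pain intensity, then one interference domain, tracking the R² increment at each step) has a straightforward implementation. The sketch below simulates toy data in which depression tracks interference more than intensity, mirroring the reported pattern; the column names, effect sizes, and data are invented, not the study's.

```python
# Hedged sketch: 3-step hierarchical OLS with incremental R^2 (toy data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.uniform(18, 70, n),
    "male": rng.integers(0, 2, n),
    "days_since_injury": rng.uniform(5, 90, n),
    "tetraplegia": rng.integers(0, 2, n),
    "antidepressant": rng.integers(0, 2, n),
    "binge": rng.integers(0, 2, n),
    "pain_now": rng.uniform(0, 10, n),
})
# Simulated outcome: depression driven mostly by interference, not intensity.
df["interference"] = 0.4 * df["pain_now"] + rng.normal(0, 2.0, n)
df["phq9"] = 2.0 + 0.1 * df["pain_now"] + 1.0 * df["interference"] + rng.normal(0, 3.0, n)

controls = "age + male + days_since_injury + tetraplegia + antidepressant + binge"
r2_prev = 0.0
for label, rhs in [
    ("step 1: controls", controls),
    ("step 2: + pain intensity", controls + " + pain_now"),
    ("step 3: + pain interference", controls + " + pain_now + interference"),
]:
    r2 = smf.ols(f"phq9 ~ {rhs}", data=df).fit().rsquared
    print(f"{label:28s} R2 = {r2:.3f}  (increment = {r2 - r2_prev:.3f})")
    r2_prev = r2
```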

11.
12.

Background:

Chronic spinal cord injury (SCI) is associated with an increase in risk factors for cardiovascular disease (CVD). In the general population, atherosclerosis in women occurs later than in men and usually presents differently. Associations between risk factors and incidence of CVD have not been studied in women with SCI.

Objective:

To determine which risk factors for CVD are associated with increased carotid intima-media thickness (CIMT), a common indicator of atherosclerosis, in women with SCI.

Methods:

One hundred twenty-two females older than 18 years, with traumatic SCI sustained at least 2 years prior to entering the study, were evaluated. Participants were asymptomatic and without evidence of CVD. Exclusion criteria were acute illness, overt heart disease, diabetes, and treatment with cardiac drugs, lipid-lowering medication, or antidiabetic agents. Measures for all participants were age, race, smoking status, level and completeness of injury, duration of injury, body mass index, serum lipids, fasting glucose, hemoglobin A1c, and ultrasonographic measurements of CIMT. Hierarchical multiple linear regression was conducted to predict CIMT from demographic and physiologic variables.

Results:

Several variables were significantly correlated with CIMT in univariate analyses, including glucose, hemoglobin A1c, age, and race/ethnicity, but only age remained significant in the hierarchical regression analysis.

Conclusions:

Our data indicate the importance of CVD in women with SCI.

Key words: age, cardiovascular disease, carotid intima-media thickness, hemoglobin A1c, risk factors, smoking

The secondary conditions of metabolic syndrome and cardiovascular disease (CVD) resulting from spinal cord injury (SCI) are not well understood. In particular, persons with SCI have an increase in metabolic risk factors for CVD,1–5 but researchers have not determined whether this increase is associated with an increased incidence of CVD. The association has not been shown in reports on mortality or prevalence rates for CVD in people with SCI6–12 or in the few studies that have appraised CVD in people with SCI using physiologic assessments.13–18 Either the question was not addressed, or the evidence is insufficient due to low sample sizes and a lack of objective, prospective epidemiological studies assessing this question. Nevertheless, studies consistently show that metabolic syndrome is prevalent among individuals with SCI.1–5,12 Metabolic syndrome consists of multiple interrelated risk factors that increase the risk for atherosclerotic heart disease by 1.5- to 3-fold.19,20

Compounding the uncertainty about the association of metabolic risk factors with CVD in SCI are possible gender differences.21–24 Findings from studies of men with SCI might not apply to women with SCI. For example, the correlation between physical activity and high-density lipoprotein (HDL) levels in men with SCI is not found for women with SCI.25,26 Furthermore, able-bodied women develop atherosclerosis later than do able-bodied men, and they usually present differently.27 Some studies indicate that abnormal glucose metabolism may play a particularly important role in CVD in women27; data from our group suggest that this is the case in women with SCI as well.15 Although women constitute 18% to 20% of the SCI population, no studies have evaluated cardiovascular health in women with chronic SCI.

Carotid intima-media thickness (CIMT) is the most robust, highly tested, and often used noninvasive endpoint for assessing the progression of subclinical atherosclerosis in men and women of all ages.28–46 For people with SCI, CIMT is a reliable surrogate measure of asymptomatic CVD.15,47 The incidence of asymptomatic CVD appears to increase with the duration of SCI,15 where duration of injury is a cardiac risk factor independent of age.17 Moreover, CIMT is greater in men with SCI than in matched able-bodied controls,48 indicating a subclinical and atypical presentation of CVD. A variety of studies have confirmed the usefulness of high-resolution B-mode ultrasound measurement of CIMT for quantitation of subclinical atherosclerosis.49

To better discern the association of risk factors with measures of subclinical atherosclerotic disease in women with SCI, we performed blood tests and ultrasonographic measurements of CIMT on 122 females with chronic SCI who were free of overt CVD. We tested for the 3 metabolic risk factors that are consistently identified in the varied definitions of metabolic syndrome: abnormal carbohydrate metabolism, abnormally high triglycerides, and abnormally low HDL cholesterol. We also tested for 4 other CVD risk factors: high levels of low-density lipoprotein (LDL), high total cholesterol, high body mass index (BMI), and a history of smoking.
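
The pattern reported above (HbA1c correlated with CIMT univariately but not after age entered the model) is the classic signature of confounding by age. The toy simulation below reproduces that signature with invented numbers and parameters; it illustrates the statistical phenomenon, not the study's data.

```python
# Sketch: a univariate correlate of CIMT (HbA1c) loses significance once
# age enters the model, because both variables drift with age. Toy data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 122
age = rng.uniform(20, 70, n)
hba1c = 4.5 + 0.02 * age + rng.normal(0, 0.4, n)    # HbA1c drifts upward with age
cimt = 0.50 + 0.005 * age + rng.normal(0, 0.05, n)  # CIMT here depends on age only

r = np.corrcoef(hba1c, cimt)[0, 1]
print(f"univariate correlation of HbA1c with CIMT: r = {r:.2f}")

X = sm.add_constant(np.column_stack([age, hba1c]))
fit = sm.OLS(cimt, X).fit()
print("p-values [intercept, age, HbA1c]:", np.round(fit.pvalues, 3))
```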

13.

Background:

Understanding the related fates of muscle density and bone quality after chronic spinal cord injury (SCI) is an important initial step in determining endocrine-metabolic risk.

Objective:

To examine the associations between muscle density and indices of bone quality at the distal lower extremity of adults with chronic SCI.

Methods:

A secondary data analysis was conducted in 70 adults with chronic SCI (C2-T12; American Spinal Injury Association Impairment Scale [AIS] A-D; ≥2 years post injury). Muscle density and cross-sectional area (CSA) and bone quality indices (trabecular bone mineral density [TbBMD] at the distal tibia [4% site] and cortical thickness [CtTh], cortical area [CtAr], cortical BMD [CtBMD], and polar moment of inertia [PMI] at the tibial shaft [66% site]) were measured using peripheral quantitative computed tomography. Calf lower extremity motor score (cLEMS) was used as a clinical measure of muscle function. Multivariable linear regression analyses were performed to determine the strength of the muscle-bone associations after adjusting for confounding variables (sex, impairment severity [AIS A/B vs AIS C/D], duration of injury, and wheelchair use).

Results:

Muscle density was positively associated with TbBMD (b = 0.85 [0.04, 1.66]), CtTh (b = 0.02 [0.001, 0.034]), and CtBMD (b = 1.70 [0.71, 2.69]) (P < .05). Muscle CSA was most strongly associated with CtAr (b = 2.50 [0.12, 4.88]) and PMI (b = 731.8 [161.7, 1301.9]) (P < .05), whereas cLEMS was most strongly associated with TbBMD (b = 7.69 [4.63, 10.76]) (P < .001).

Conclusion:

Muscle density and function were most strongly associated with TbBMD at the distal tibia in adults with chronic SCI, whereas muscle size was most strongly associated with bone size and geometry at the tibial shaft.

Key words: bone mineral density, bone quality, muscle density, muscle size, osteoporosis, peripheral quantitative computed tomography, spinal cord injury

Spinal cord injury (SCI) is associated with sublesional muscle atrophy,1–3 changes in muscle fiber type,4,5 reductions in hip and knee region bone mineral density (BMD),6–8 and increased central and regional adiposity after injury.9,10 Adverse changes in muscle and bone health in individuals with SCI contribute to an increased risk of osteoporosis,11–13 fragility fractures,14 and endocrine-metabolic disease (eg, diabetes, dyslipidemia, heart disease).15–17 Cross-sectional studies have shown a higher prevalence of lower extremity fragility fractures among individuals with SCI, ranging from 1% to 34%.18–20 Fragility fractures are associated with negative health and functional outcomes, including an increased risk of morbidity and hospitalization,21,22 mobility limitations,23 and a reduced quality of life.24 Notably, individuals with SCI have a normal life expectancy, yet fracture rates increase annually from 1% per year in the first year to 4.6% per year in individuals greater than 20 years post injury.25,26

Muscle and bone are thought to function as a muscle-bone unit, wherein muscle contractions impose loading forces on bone that produce changes in bone geometry and structure.27,28 A growing body of evidence has shown that individuals with SCI (predominantly those with motor complete injury) exhibit similar patterns of decline in muscle cross-sectional area (CSA) and BMD in the acute and subacute stages following injury.4,11,29 Prospective studies have documented a decrease in BMD of 1.1% to 47% per year6,7,30 and up to 73% in the 2 to 7 years following SCI.8,14,31,32 Decreases in muscle CSA have been well documented following SCI, with greater disuse atrophy observed after complete SCI versus incomplete SCI, presumably due to the absence of voluntary muscle contractions and associated mobility limitations.1,2,16 Muscle quality is also compromised early after SCI, resulting in sublesional accumulation of adipose tissue in the chronic stage of injury3,33,34; the exact time course of this event has been poorly elucidated to date. Adipose tissue deposition within and between skeletal muscle is linked to an increase in noncontractile muscle tissue and a reduction in muscle force-generating capacity on bone.35,36 Skeletal muscle fat infiltration is up to 4 times more likely to occur in individuals with SCI,1,16,37 contributing to metabolic complications (eg, glucose intolerance),16 reduced muscle strength and function,38 and mobility limitations3 – all factors that may be associated with a deterioration in bone quality after SCI.

The association between lean tissue mass and bone size (eg, BMD and bone mineral content) in individuals with SCI has been well established using dual energy x-ray absorptiometry (DXA).9,10,29,34 However, DXA is unable to measure true volumetric BMD (vBMD), bone geometry, and bone structure.
Peripheral quantitative computed tomography (pQCT) is an imaging technique that improves our capacity to measure indices of bone quality and muscle density and CSA at fracture-prone sites (eg, tibia).3,39 Recent evidence from cross-sectional pQCT studies has shown that muscle CSA and calf lower extremity motor score (cLEMS) were associated with indices of bone quality at the tibia in individuals with SCI.13,40 However, neither study measured muscle density (a surrogate of fatty infiltration when evaluating the functional muscle-bone unit). Fatty infiltration of muscle is common after SCI1,16,37 and may affect muscle function or the muscle-bone unit, but the association between muscle density and bone quality indices at the tibia in individuals with chronic SCI is unclear. Muscle density measured using pQCT may be an acceptable surrogate of muscle quality when it is difficult to assess muscle strength due to paralysis.3,39 Additionally, investigating which muscle outcome (muscle density, CSA, cLEMS) is most strongly associated with vBMD and bone structure may inform modifiable targets for improving bone quality and reducing fracture risk after chronic SCI.

The primary objective of this secondary analysis was to examine the associations between pQCT-derived calf muscle density and trabecular vBMD at the tibia among adults with chronic SCI. The secondary objective was to examine the associations between calf muscle density, CSA, and function and tibial vBMD, cortical CSA and thickness, and polar moment of inertia (PMI). First, we hypothesize that calf muscle density will be a positive correlate of trabecular and cortical vBMD, cortical CSA and thickness, and PMI at the tibia in individuals with chronic SCI. Second, we hypothesize that of the key muscle variables (cLEMS, CSA, and density), calf muscle density and cLEMS will be most strongly associated with trabecular vBMD, whereas calf muscle CSA will be most strongly associated with cortical CSA and PMI.
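
As a quick consistency check on intervals like b = 0.85 [0.04, 1.66] in the Results (assuming they are standard normal-approximation 95% CIs, which the abstract does not state explicitly):

$$\mathrm{SE} \approx \frac{1.66 - 0.04}{2 \times 1.96} \approx 0.41, \qquad z = \frac{0.85}{0.41} \approx 2.1 \;\Rightarrow\; p \approx 0.04,$$

which is consistent with the reported P < .05.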

14.

Background:

A large percentage of individuals with spinal cord injury (SCI) report shoulder pain that can limit independence and quality of life. The pain is likely related to the demands placed on the shoulder by transfers and propulsion. Shoulder pathology has been linked to altered scapular mechanics; however, current methods to evaluate scapular movement are invasive, require ionizing radiation, are subject to skin-based motion artifacts, or require static postures.

Objective:

To investigate the feasibility of applying 3-dimensional ultrasound methods, previously used to look at scapular position in static postures, to evaluate dynamic scapular movement.

Method:

This study evaluated the feasibility of the novel application of a method combining 2-dimensional ultrasound and a motion capture system to determine 3-dimensional scapular position during dynamic arm elevation in the scapular plane with and without loading.

Results:

Incremental increases in scapular rotations were noted for extracted angles of 30°, 45°, 60°, and 75° of humeral elevation. Group differences were evaluated between a group of 16 manual wheelchair users (MWUs) and a group of age- and gender-matched able-bodied controls. MWUs had greater scapular external rotation and baseline pathology on clinical exam. MWUs also had greater anterior tilting, with this difference further accentuated during loading. The relationship between demographics and scapular positioning was also investigated, revealing that increased age, pathology on clinical exam, years since injury, and body mass index were correlated with scapular rotations associated with impingement (internal rotation, downward rotation, and anterior tilting).

Conclusion:

Individuals with SCI, as well as other populations susceptible to shoulder pathology, may benefit from the application of this imaging modality to quantitatively evaluate scapular positioning and effectively target therapeutic interventions.

Key words: kinematics, scapula, ultrasound, wheelchair user

The shoulder is a common site of injury across many populations. Because it is the most mobile joint in the body, the high prevalence of disorders is not surprising. Individuals are at increased risk for shoulder pathology when exposed to high forces, sustained postures, and repetitive movements.1 Wheelchair users are exposed to all of these factors in activities of daily living. Among manual wheelchair users (MWUs), 35% to 67% report shoulder pain.2–7 In this population, the presence of shoulder dysfunction significantly affects function and decreases quality of life.8,9 Because altered scapular kinematics has been linked to a multitude of shoulder problems, identifying kinematic changes may allow for earlier detection of pathology and targeting of appropriate interventions.10–25 However, evaluating dynamic scapular movement is challenging, as the scapula rotates about 3 axes while also gliding underneath overlying tissue. Direct visualization of the bone is ideal but is often limited by cost, availability, and exposure to radiation, and skin-based systems are prone to error.26–33

The overall goal of this study was to investigate the feasibility of applying 3-dimensional ultrasound methods, previously used to assess scapular position in static postures, to evaluate dynamic scapular movement.34 The specific goals were as follows:
  1. Evaluate intermediate angles of functional elevation during dynamic movement (30°, 45°, 60°, and 75°); a sketch of this extraction step follows the list. We hypothesize that we will see incremental increases in external rotation, upward rotation, and posterior tipping throughout the movement to maintain the distance between the acromion and humerus.
  2. Compare dynamic scapular movement between MWUs and able-bodied controls (ABs). We anticipate that the nature of wheelchair propulsion and the demands of activities of daily living will reveal differences between this population and ABs, whose daily demands on the shoulder are comparatively lower.
  3. Evaluate the effect of loading on scapular movement, as other studies have suggested that differences in kinematics are clearer in the presence of loading.10,35,36
  4. Investigate the relationship between shoulder pathology, age, years since injury, and body mass index (BMI) and scapular positioning.
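For goal 1, pulling scapular rotations out at fixed humeral-elevation angles from a continuous trial amounts to interpolating each rotation time series against humeral elevation. A minimal sketch with hypothetical trial data (the angle values are invented for illustration):

    import numpy as np

    # Hypothetical trial: humeral elevation and one scapular rotation over time
    humeral_elev = np.array([12, 25, 38, 52, 64, 78], dtype=float)  # degrees
    upward_rot = np.array([3, 7, 12, 18, 23, 29], dtype=float)      # degrees

    # np.interp expects the x-coordinates (humeral elevation) to be increasing
    targets = [30.0, 45.0, 60.0, 75.0]
    extracted = np.interp(targets, humeral_elev, upward_rot)
    for t, v in zip(targets, extracted):
        print(f"upward rotation at {t:.0f} deg of elevation: {v:.1f} deg")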

15.
Arteriovenous (AV) access failure resulting from venous neointimal hyperplasia is a major cause of morbidity in patients with ESRD. To understand the role of chronic kidney disease (CKD) in the development of neointimal hyperplasia, we created AV fistulae (common carotid artery to jugular vein in an end-to-side anastomosis) in mice with or without CKD (renal ablation or sham operation). At 2 and 3 wk after operation, neointimal hyperplasia at the site of the AV anastomosis increased 2-fold in animals with CKD compared with controls, but cellular proliferation in the neointimal hyperplastic lesions did not significantly differ between the groups, suggesting that the enhanced neointimal hyperplasia in the setting of CKD may be secondary to a migratory phenotype of vascular smooth muscle cells (VSMC). In ex vivo migration assays, aortic VSMC harvested from mice with CKD migrated significantly more than VSMC harvested from control mice. Moreover, animals with CKD had higher serum levels of osteopontin, which stimulates VSMC migration. When we treated animals with bone morphogenic protein-7, which promotes VSMC differentiation, before creation of the AV anastomosis, the effect of CKD on the development of neointimal hyperplasia was eliminated. In summary, CKD accelerates development of neointimal hyperplasia at the anastomotic site of an AV fistula, and administration of bone morphogenic protein-7 neutralizes this effect.

Arteriovenous (AV) access dysfunction, such as stenosis and thrombosis, constitutes a major cause of morbidity for patients on chronic hemodialysis for end-stage kidney disease.1 Although AV fistulae constructed with native vessels are the best vascular access available, owing to a lower incidence of stenosis, thrombosis, and infection compared with vascular grafts or central venous catheters, their failure rate of up to 66% at 2 yr2 remains unacceptably high: hemodialysis access-related hospitalizations are on the rise, and their costs are well over one billion dollars per annum in the United States alone.3

The cause of failure is predominantly occlusive neointimal hyperplastic (NH) lesion formation at the anastomosis and/or the outflow veins, followed by in situ thrombosis.4–7 Unlike the restenosis seen with preocclusive atherosclerotic arteries after angioplasty and stenting, neointimal (new intimal) hyperplasia arises at an anastomosis involving an artery or a synthetic graft (e.g., expanded polytetrafluoroethylene [ePTFE] or Dacron) and a vein in the upper extremities. Although these blood vessels are predisposed to calcification, pre-existing NH, and needle-stick injury, they are usually free of atherosclerotic plaque. Therefore, directional migration of vascular smooth muscle cells (VSMCs) into the luminal surface is critical to anastomotic NH lesion formation.8,9

Several animal models with native or synthetic graft accesses have been used to gain insight into the pathologic mechanisms of NH lesion development.10,11 However, these studies lacked the critical component of chronic kidney disease (CKD), and whether CKD plays a role in NH lesion formation remains unknown. CKD has been implicated in the development of atherosclerosis, along with a host of other deranged factors such as hemodynamic forces, inflammatory mediators, platelet activation, the coagulation cascade, and metabolic factors.12,13 In this study, we used a murine model of CKD, modified from Gagnon and Gallimore,14 to assess the effect of CKD on NH formation after AV fistula creation.

16.
Proteinuria and increased renal reabsorption of NaCl characterize the nephrotic syndrome. Here, we show that protein-rich urine from nephrotic rats and from patients with nephrotic syndrome activates the epithelial sodium channel (ENaC) in cultured M-1 mouse collecting duct cells and in Xenopus laevis oocytes heterologously expressing ENaC. The activation depended on urinary serine protease activity. We identified plasmin as a urinary serine protease by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Purified plasmin activated ENaC currents, and inhibitors of plasmin abolished urinary protease activity and the ability to activate ENaC. In nephrotic syndrome, tubular urokinase-type plasminogen activator likely converts filtered plasminogen to plasmin. Consistent with this, the combined application of urokinase-type plasminogen activator and plasminogen stimulated amiloride-sensitive transepithelial sodium transport in M-1 cells and increased amiloride-sensitive whole-cell currents in Xenopus laevis oocytes heterologously expressing ENaC. Activation of ENaC by plasmin involved cleavage and release of an inhibitory peptide from the ENaC γ subunit ectodomain. These data suggest that a defective glomerular filtration barrier allows passage of proteolytic enzymes that have the ability to activate ENaC.

Nephrotic syndrome is characterized by proteinuria, sodium retention, and edema. Increased renal sodium reabsorption occurs in the cortical collecting duct (CCD),1,2 where a rate-limiting step in transepithelial sodium transport is the epithelial sodium channel (ENaC), which is composed of three homologous subunits: α, β, and γ.3

ENaC activity is regulated by hormones such as aldosterone and vasopressin (AVP)4,5; however, adrenalectomized rats and AVP-deficient Brattleboro rats are capable of developing nephrotic syndrome,1,6 and nephrotic patients do not consistently display elevated levels of sodium-retaining hormones,7,8 suggesting that renal sodium hyper-reabsorption is independent of systemic factors. Consistent with this, sodium retention is confined to the proteinuric kidney in the unilateral puromycin aminonucleoside (PAN) nephrotic model.2,9,10

There is evidence that proteases contribute to ENaC activation by cleaving the extracellular loops of the α- and γ-subunits.11–13 Proteolytic activation of ENaC by extracellular proteases critically involves cleavage of the γ subunit,14–16 which probably leads to the release of a 43-residue inhibitory peptide from the ectodomain.17 Both cleaved and noncleaved channels are present in the plasma membrane,18,19 allowing proteases such as channel activating protease 1 (CAP1/prostasin),20 trypsin,20 chymotrypsin,21 and neutrophil elastase22 to activate noncleaved channels from the extracellular side.23,24 We hypothesized that the defective glomerular filtration barrier in nephrotic syndrome allows the filtration of ENaC-activating proteins into the tubular fluid, leading to stimulation of ENaC. This hypothesis was tested in the PAN nephrotic model in rats and with urine from patients with nephrotic syndrome.

17.
Donor characteristics such as age and cause of death influence the incidence of delayed graft function (DGF) and graft survival; however, the relative influence of donor characteristics (“nature”) versus transplant center characteristics (“nurture”) on deceased-donor kidney transplant outcomes is unknown. We examined the risks for DGF and allograft failure within 19,461 recipient pairs of the same donor's kidneys using data from the US Renal Data System. For the 11,894 common-donor pairs transplanted at different centers, a recipient was twice as likely to develop DGF when the recipient of the contralateral kidney developed DGF (odds ratio [OR] 2.05; 95% confidence interval [CI] 1.82 to 2.30). Similarly, for 7567 common-donor pairs transplanted at the same center, the OR for DGF was 3.02 (95% CI 2.62 to 3.48). For pairs transplanted at the same center, there was an additional 42% risk for DGF compared with pairs transplanted at different centers. After adjustment for DGF, the within-pair ORs for allograft failure by 1 yr were 1.92 (95% CI 1.33 to 2.77) and 1.77 (95% CI 1.25 to 2.52) for recipients who underwent transplantation at the same center and different centers, respectively. These data suggest that both unmeasured donor characteristics and transplant center characteristics contribute to the risk for DGF and that the former also contribute significantly to allograft failure.

Delayed graft function (DGF) is an important predictor of graft failure after kidney transplantation.1–3 The incidence of DGF after deceased-donor kidney transplants ranges between 23 and 50%.4–6 Although some studies have been mixed, several large studies have shown that DGF influences graft failure both through its association with and independent of acute rejection.5,7–10 DGF also adversely affects cost, length of hospitalization, and patient rehabilitation.11–13 Allograft failure results in half of deceased-donor kidneys being lost by 11 yr after transplantation.14

There are many known determinants of DGF and allograft failure. Studies have implicated a number of immunologic and nonimmunologic characteristics, including donor factors, recipient factors, and the transplant procedure.4,6,15–21 Limited effort has been made to evaluate the relative contribution of these risk factors by exploiting the variation in outcomes between recipients of kidneys from the same donor.18,22–24 This approach is similar to studies of monozygotic twins reared apart, which seek to quantify the relative importance of environmental and genetic factors on the basis of variability within twin pairs and among twin pairs.22,25 Analyses that examine outcomes in two recipients of kidneys from the same deceased donor can be used to determine the donor's relative contribution to the recipients' outcomes.

We retrospectively evaluated a national cohort of deceased-donor transplant recipients to better understand the complex relationship between donor (“nature”) and transplant center (“nurture”) effects associated with DGF and kidney allograft failure. We examined the within-pair correlation of these outcomes among recipients of kidneys from the same deceased donor and adjusted for the transplant center effect by estimating separate odds ratios (ORs) for recipient pairs who underwent transplantation at the same transplant center and at different transplant centers.
The transplant center effect was detected by determining the difference in outcomes for the paired kidneys from the same deceased donor transplanted at the same versus different centers.
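A minimal sketch of the within-pair association being quantified: cross-classify each recipient's DGF status against the contralateral-kidney recipient's, then compute an unadjusted odds ratio with a Wald confidence interval. The counts below are hypothetical, and the study's actual estimates came from models with covariate adjustment:

    import numpy as np

    # Hypothetical 2x2 counts: rows = own DGF yes/no,
    # columns = contralateral recipient's DGF yes/no
    a, b = 150, 400     # own DGF: contralateral yes / no
    c, d = 380, 2070    # no own DGF: contralateral yes / no

    or_hat = (a * d) / (b * c)
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
    print(f"within-pair OR {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")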

18.

Background:

The predictors and patterns of upright mobility in children with a spinal cord injury (SCI) are poorly understood.

Objective:

The objective of this study was to develop a classification system that measures children’s ability to integrate ambulation into activities of daily living (ADLs) and to examine upright mobility patterns as a function of their score and classification on the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam.

Methods:

This is a cross-sectional, multicenter study that used a convenience sample of subjects who were participating in a larger study on the reliability of the ISNCSCI. A total of 183 patients between 5 and 21 years old were included in this study. Patients were asked if they had participated in upright mobility in the last month and, if so, in what environment and with what type of bracing. Patients were then categorized into 4 groups: primary ambulators (PrimA), unplanned ambulators (UnPA), planned ambulators (PlanA), and nonambulators.
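Purely as an illustration of how the survey answers could map onto the four groups, here is a hypothetical categorization function; the study does not spell out its decision rule, so every criterion below is an assumption:

    def categorize(walked_last_month: bool,
                   walking_is_primary_mode: bool,
                   needs_advance_planning: bool) -> str:
        # Hypothetical mapping of survey answers to the four ambulator groups
        if not walked_last_month:
            return "nonambulator"
        if walking_is_primary_mode:
            return "PrimA"   # primary ambulator
        if needs_advance_planning:
            return "PlanA"   # planned ambulator (e.g., braces donned in advance)
        return "UnPA"        # unplanned ambulator (walks spontaneously)

    print(categorize(True, False, False))   # -> UnPA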

Results:

Multivariate analyses found that only lower extremity strength predicted being a PrimA, whereas being an UnPA was predicted by both lower extremity strength and lack of preservation of S4–5 pinprick sensation. Being a PlanA was associated only with upper extremity strength.

Conclusions:

This study introduced a classification system based on the ability of children with SCI to integrate upright mobility into their ADLs. As in adults, lower extremity strength was a strong predictor of independent mobility (PrimA and UnPA). Lack of pinprick sensation predicted unplanned ambulation but not primary ambulation. Finally, upper extremity strength was a predictor of planned ambulation.

Key words: ambulation, ISNCSCI, pediatrics, spinal cord injury

After a spinal cord injury (SCI), learning to walk often becomes the focus of rehabilitation for children and their families.1,2 Although the majority of children with SCI do not return to full-time functional ambulation, those who accomplish some level of walking report positive outcomes such as feeling “normal” again, being eye-to-eye with peers, and having easier social interactions.3 Although not frequently reported by patients, there is some evidence of physiological benefits as well.3–9 Regardless of age, upright mobility has been positively associated with community participation and life satisfaction.10–12 For children, upright mobility allows them to explore their physical environment, which facilitates independence and learning as part of the typical developmental process.13,14

With the use of standers, walkers, and other assistive devices, as well as a variety of lower extremity orthoses, it is reasonable to expect that some children with spinal injuries achieve upright stance and mobility.7,9,13–21 However, there are 2 main challenges for clinicians and patients: understanding the factors that either encourage or discourage upright activities, and identifying how best to determine whether upright mobility is successful and meaningful. The literature on adults suggests that upright mobility depends on physiological and psychosocial factors. Physiological factors include the patient's current age, neurological level, muscle strength, and comorbidities.14,22–27 Psychosocial factors include satisfaction with the appearance of the gait pattern, cosmesis, social support for donning/doffing braces, and assistance with transfers and during ambulation.3,9,19,28–32

The identification of outcome measures that provide a meaningful indication of successful upright mobility has been difficult. The World Health Organization (WHO) describes 2 constructs for considering outcomes – capacity and performance.33 Capacity refers to maximal capability in a laboratory setting. An example of a capacity measure is the Walking Index for Spinal Cord Injury (WISCI), an ordinal scale used to quantify walking capacity based on assistive device, type of orthosis, and amount of assistance required.34,35 Other capacity measures include the Timed Up and Go test and the 6-minute walk test.36,37 Performance, on the other hand, refers to actual activity during a patient's daily routine in typical, real-life environments.33 For example, the FIM is an observation scale that scores the patient's typical daily performance.36,38–40 The FIM is considered a burden-of-care measure that determines the amount of actual assistance provided to a patient during typical routines and environments, which may or may not reflect maximal ability or capacity.
Performance measures provide an adequate clinical snapshot of a patient's daily function (what they do), whereas capacity measures are better research tools because they can detect subtle changes in ambulation (what they can do).

In children, no capacity outcome measures of ambulation have been tested for validity or reliability. Availability of reliable and valid performance measures is also lacking. The WeeFIM is a performance measure for children, but it is not SCI specific. It is scored on the child's burden of care, that is, on the maximal assistance required rather than on the child's maximal independence or highest capacity of performance during a typical day. Another commonly used scale for children is the Hoffer Scale, which relies on the physician's or therapist's subjective determination of the purpose of the upright mobility activities (for function or for exercise).41,42 Because parents and school systems are encouraged to integrate “exercise” ambulation into daily activities, it may not be possible to distinguish between therapeutic and functional ambulation in the home, school, or community environments. In the schools, a teacher/therapist might incorporate upright mobility into the classroom setting by donning a child's braces and then having her/him ambulate a short distance to stand at an easel in art class or to stand upright when talking to friends during recess. In this situation, walking serves the dual purpose of being functional and therapeutic.

For this study, it was decided not to rely on a subjective determination of therapeutic versus functional ambulation as the main outcome measure. Instead, we were interested in the children and adolescents who have successfully integrated independent mobility into their daily activities, regardless of frequency, distance, or purpose. Recent literature suggests that spontaneity is important for children's and adolescents' participation in functional and social activities. For example, a survey of patients using functional electrical stimulation for hand function found a reduction in dependence on others for donning splints, which facilitated independence with activities of daily living (ADLs) in adolescents.43–45 In a more recent study, Mulcahey et al46 found that reduced spontaneity in adolescents was a barrier to social activity; during cognitive interviews, children reported not participating in sleepovers because of the need to plan their bowel/bladder programs.

To date, there are no measures that integrate spontaneity of standing and/or upright mobility into the daily activities of children. Toward that aim, this study introduces a new scale that attempts to categorize children into 4 mutually exclusive groups: primary ambulators, unplanned ambulators, planned ambulators, and nonambulators. The purpose of this study was to examine ambulation patterns among children and adolescents with SCI as a function of neurological level, motor level, and injury severity, as defined by the motor, sensory, and anorectal examinations of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). A secondary aim was to determine how performance on the ISNCSCI exam was associated with the ability of children to independently integrate ambulation into their daily routines.

19.
Cyclosporine A (CsA) is a substrate of P-glycoprotein, an efflux transporter encoded by the ABCB1 gene. Compared with carriers of the wild-type gene, carriers of T allelic variants in exons 21 or 26 have reduced P-glycoprotein activity and, secondarily, increased intracellular concentrations of CsA; therefore, carriers of T variants might be at increased risk for CsA-related adverse events. We evaluated the associations between ABCB1 genotypes (in exons 12, 21, and 26) and CsA-related outcomes in 147 renal transplant recipients who were receiving CsA-based immunosuppression and were included in the Mycophenolate Steroids Sparing study. During a median of 65.5 mo of follow-up, carriers of T allelic variants in exons 21 or 26 had a three-fold risk for delayed graft function (DGF), a trend toward slower recovery of renal function and lower GFR at study end, and significantly higher incidences of new-onset diabetes and cytomegalovirus reactivation compared with carriers of the wild-type genotype. T variants in exons 21 and 26 were independently associated with 3.8- and 3.5-fold higher risks for DGF, respectively (P = 0.022 and P = 0.034). The incidence of acute rejection and the mean CsA dose and blood levels were comparable across genotype groups. In conclusion, renal transplant recipients with T allelic variants in ABCB1 exons 21 or 26 are at increased risk for CsA-related adverse events. Genetic evaluation may help to identify patients at risk and to modulate CsA therapy to optimize graft and patient outcomes.

The introduction of cyclosporine A (CsA) therapy in the early 1980s opened a new era in organ transplantation. Compared with steroid- and azathioprine-based regimens, immunosuppressive protocols including this inhibitor of calcineurin—a key enzyme involved in T cell activation1—decreased the incidence of acute rejection from 40%–50% to 20%–30% and increased one-year graft survival from 60% to 80%–90%.2 Thirty years later, CsA remains a cornerstone of immunosuppressive therapy for recipients of both renal and nonrenal transplants worldwide.
However, standard recommended doses are associated with nephrotoxicity, resulting in delayed graft function (DGF) and progressive deterioration of renal function in the long term.3,4 Moreover, CsA worsens glucose tolerance and the lipid profile; increases systemic BP; and, similarly to other immunosuppressants, enhances the risk of opportunistic infections, lymphoproliferative disorders, and cancer.1,5

To minimize side effects without increasing the risk of rejection, treatment is titrated to target CsA blood levels according to well-established guidelines.6 However, the therapeutic index remains narrow, whereas the frequency and severity of CsA-related adverse effects vary considerably among patients, even at comparable CsA levels.1,6 This suggests that heterogeneous individual susceptibility may result in increased risk in some patients despite exposure to CsA levels that, in the majority of cases, are devoid of significant toxicity.6 Thus, identifying markers or predictors of individual response might help to tailor CsA therapy and optimize the risk/benefit profile of CsA-based immunosuppression.

Drug efficacy and tolerability are influenced by several factors, including the activity of proteins and enzymes involved in drug transport and metabolism.7 CsA is a substrate of an efflux transporter—P-glycoprotein (P-gp), encoded by the multidrug resistance-1 gene (now referred to as ABCB1)—which actively transports lipophilic drugs and other xenobiotics from the intracellular to the extracellular domain.8 This transporter is expressed in lymphocytes9 and other leukocytes,10 as well as in hepatocytes and on the brush border of enterocytes and proximal tubular cells.8 Reduced expression or functional inhibition of this efflux pump invariably results in increased intracellular and tissue drug concentrations,8,9 but may have unpredictable effects on CsA blood levels. Indeed, CsA blood levels have increased, decreased, or remained unchanged in different settings, likely because of different balances between enhanced distribution into the tissue compartment and decreased excretion into the gastrointestinal lumen or urinary tract.7 Increased CsA concentrations in lymphocytes,11 polymorphonuclear cells,12 and other circulating leukocytes13 have been associated with increased production and release of reactive oxygen species (ROS). ROS production by leukocytes is the primary defense against invading micro-organisms but has also been implicated in the pathogenesis of ischemia-reperfusion damage of engrafted tissues or organs.3,14 Thus, increased intracellular CsA disposition with enhanced ROS production might amplify oxidative stress and tissue damage to the graft after reperfusion.15 Increased intralymphocyte drug concentration is also associated with more effective inhibition of lymphocyte proliferation by CsA in vitro,16 an effect that, in vivo, might translate into more effective protection against graft rejection,17 but also into excess risk of opportunistic infections, lymphoproliferative disorders, or cancer.

P-glycoprotein expression and activity are reduced in carriers of one or two T allelic variants in exons 12, 21, and 26 compared with carriers of the wild-type ABCB1 gene.8,18,19 Thus, at comparable CsA exposure, carriers of the allelic variants are expected to have higher intracellular and tissue CsA levels than wild-type carriers and, conceivably, should be exposed to more, and more severe, CsA-related events.
We formally tested this hypothesis in a large cohort of renal transplant recipients prospectively monitored in the setting of a randomized controlled clinical trial, the Mycophenolate Steroids Sparing (MYSS) study, which compared the risk/benefit profiles of mycophenolate mofetil and azathioprine in immunosuppressive regimens including the CsA microemulsion Neoral.20
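A minimal sketch of the unadjusted genotype-by-DGF comparison such an analysis starts from, using a Fisher exact test on hypothetical counts (the study's own estimates came from multivariable models):

    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table: rows = exon-21 T-allele carrier / wild-type,
    # columns = DGF yes / no
    table = [[24, 56],   # carriers
             [ 8, 59]]   # wild-type
    or_hat, p = fisher_exact(table)
    print(f"unadjusted OR {or_hat:.2f}, p = {p:.4f}")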
