Similar Articles (20 results)
1.
Despite the high prevalence of chronic kidney disease (CKD), relatively few individuals with CKD progress to ESRD. A better understanding of the risk factors for progression could improve the classification system of CKD and strategies for screening. We analyzed data from 65,589 adults who participated in the Nord-Trøndelag Health (HUNT 2) Study (1995 to 1997) and found 124 patients who progressed to ESRD after 10.3 yr of follow-up. In multivariable survival analysis, estimated GFR (eGFR) and albuminuria were independently and strongly associated with progression to ESRD: Hazard ratios for eGFR 45 to 59, 30 to 44, and 15 to 29 ml/min per 1.73 m2 were 6.7, 18.8, and 65.7, respectively (P < 0.001 for all), and for micro- and macroalbuminuria were 13.0 and 47.2 (P < 0.001 for both). Hypertension, diabetes, male gender, smoking, depression, obesity, cardiovascular disease, dyslipidemia, physical activity and education did not add predictive information. Time-dependent receiver operating characteristic analyses showed that considering both the urinary albumin/creatinine ratio and eGFR substantially improved diagnostic accuracy. Referral based on current stages 3 to 4 CKD (eGFR 15 to 59 ml/min per 1.73 m2) would include 4.7% of the general population and identify 69.4% of all individuals progressing to ESRD. Referral based on our classification system would include 1.4% of the general population without losing predictive power (i.e., it would detect 65.6% of all individuals progressing to ESRD). 
In conclusion, quantification of urinary albumin should complement all levels of reduced eGFR to optimally predict progression to ESRD.

Since the publication of the Kidney Disease Outcomes Quality Initiative (K/DOQI) clinical practice guidelines on the classification of chronic kidney disease in 2002,1 several studies based on this classification system have shown very high prevalence estimates of chronic kidney disease (CKD) in the general population (10 to 13%).2,3 Screening for CKD is therefore increasingly suggested1,4; however, only a small proportion of patients with stage 3 to 4 CKD progress to ESRD.5 There is an ongoing discussion on whether the current CKD criteria are appropriate.6–8 Developing a risk score to better identify the patients who are at increased risk for ESRD would be of major importance for the current efforts to establish clinical guidelines and public health plans for CKD.4,9,10

Several predictors of progression to ESRD have been identified,9 but their independent predictive power has not been well studied either in the general population or in high-risk subgroups. Intuitively, a low estimated GFR (eGFR) is an important risk factor for ESRD, and eGFR is the backbone of the current CKD classification. High urine albumin is a well-established major risk factor for progression.9 Only a few studies have examined renal risk as a function of the combination of eGFR and albuminuria.11–14 These studies are of restricted value, however, because of exclusion of patients with diabetes14; inclusion of men only12; inclusion of only patients with diabetes13; or absence of information on potentially important risk factors, such as smoking, obesity, dyslipidemia, and cardiovascular disease.11,14

CKD screening beyond patients with known hypertension or diabetes has been proposed,1,4 but such screening programs have remained unsatisfactory because of their limited predictive power.
We used the data of the Second Nord-Trøndelag Health Study (HUNT 2), Norway, to improve such prediction. HUNT 2 is a large population-based study with a high participation rate.15 Our aim was to examine how accurately subsequent progression to ESRD could be predicted by a combined variable of baseline eGFR and urine albumin. We also tested whether further potential renal risk factors provided additional independent prediction.
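The study argues for a referral rule that combines eGFR with the urinary albumin/creatinine ratio rather than using eGFR stages alone. A minimal sketch of such a combined rule is below; the exact cutoffs used by the HUNT 2 authors are not reproduced here, so the thresholds (ACR in mg/mmol, micro- vs macroalbuminuria bounds) are assumptions for illustration only.

```python
# Illustrative combined eGFR + albuminuria referral rule.
# All cutoffs below are assumed for demonstration, not the study's own.

def referral_category(egfr_ml_min, acr_mg_mmol):
    """Return a coarse risk label from eGFR and albumin/creatinine ratio."""
    if egfr_ml_min < 15:
        return "kidney failure"
    macro = acr_mg_mmol > 25          # assumed macroalbuminuria cutoff
    micro = 2.5 <= acr_mg_mmol <= 25  # assumed microalbuminuria range
    if egfr_ml_min < 30:
        return "refer"                # stage 4: refer regardless of ACR
    if egfr_ml_min < 60:
        return "refer" if (micro or macro) else "monitor"
    return "refer" if macro else "routine"

print(referral_category(50, 1.0))   # stage 3a, normoalbuminuria -> monitor
print(referral_category(50, 30.0))  # stage 3a plus macroalbuminuria -> refer
```

The point of the combined variable is visible in the two example calls: the same eGFR leads to different dispositions depending on albuminuria, which is how the authors shrink the referred fraction of the population without losing predictive power.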

2.
Chronic kidney disease (CKD, stages 1 to 4) affects approximately 13.1% of United States adults and leads to ESRD, cardiovascular disease, and premature death. Here, we assessed adherence to a subset of Kidney Disease Outcomes Quality Initiative preventive health care guidelines and identified associations between adherence and incident atherosclerotic heart disease (ASHD). Using the Medicare 5% data set, 1999 to 2005 (about 1.2 million patients per year), we created 3-yr rolling cohorts. We classified CKD and diabetes during year 1, assessed preventive care during year 2, and evaluated ASHD outcomes during year 3. We defined preventive care by the receipt of laboratory measurements (serum creatinine, lipids, calcium and phosphorus, parathyroid hormone, and, for patients with diabetes, hemoglobin A1c), influenza vaccination, and by at least one outpatient visit to a nephrologist. Among patients with CKD, 80% received ≥2 serum creatinine tests during the year, and only 11% received parathyroid hormone testing. Cumulative incidence of the combined ASHD outcome was 25% and 11% for patients with and without prevalent cardiovascular disease, respectively. Except for serum creatinine testing, preventive care associated with lower ASHD rates in the subsequent year, ranging from 10% lower for those who received influenza vaccinations and ≥2 A1c tests, to 43% lower for calcium-phosphorus assessment. Receiving ≥2 serum creatinine tests associated with a 13% higher rate of ASHD. A higher number of preventive measures associated with lower rates of ASHD. 
In summary, these data support an association between preventive measures and reduced cardiovascular morbidity and mortality.

Chronic kidney disease (CKD, stages 1 to 4) is estimated to affect 13.1% (12.0% to 14.1%) of the adult noninstitutionalized civilian United States population, or 26.3 million adults according to the 2000 census.1 The prevalence rate increased approximately 30% between the early 1990s and the early 2000s.1 In 2002, the Kidney Disease Outcomes Quality Initiative (KDOQI) Clinical Practice Guidelines committee, organized by the National Kidney Foundation (NKF), noted that the three primary adverse consequences of CKD are kidney failure, cardiovascular disease (CVD), and premature death.2 The committee further noted that CVD is common, treatable, and potentially preventable in CKD patients, and that CKD appears to be a risk factor for CVD.2 In 1998, the NKF Task Force on Cardiovascular Disease in Chronic Renal Disease recommended that CKD patients be considered in the highest risk group for CVD events.3

Two recent studies demonstrate increasing incidence of cardiovascular events4 and increasing prevalence of cardiovascular risk factors5 with decreasing GFR.
An analysis of secondary cardiovascular events following myocardial infarction demonstrates increasing probability of subsequent cardiovascular events with decreasing GFR.6 United States Renal Data System (USRDS) analyses demonstrate hospitalization rates for congestive heart failure (CHF), ischemic heart disease, and arrhythmias two to seven times higher for Medicare patients with CKD than for those without CKD,7 and that CKD patients with no prior evidence of CVD were 60% more likely to develop CVD during the subsequent year than non-CKD patients.8,9

The USRDS notes that CKD patients are three to five times more likely to die than to reach ESRD10 and that nearly half of CKD patient deaths occur out of the hospital, presumably from sudden cardiac death.8 A recent meta-analysis found that the relative risk of all-cause mortality comparing CKD to non-CKD patients ranged from 0.94 to 5.00 in all cohorts analyzed and was significantly greater than 1.0 in 93% of the cohorts; it also found an increasing risk of all-cause mortality with decreasing GFR.11

The Healthy People 2010 initiative of the Centers for Disease Control and Prevention includes objectives intended to preserve renal function and slow CKD progression through early detection and intervention.12 The NKF has published numerous clinical practice guidelines addressing ESRD and CKD,2,13–15 with the goals of identifying CKD early, slowing its progression, and reducing associated morbidity and mortality.

Our objectives were to assess adherence to KDOQI recommendations for CKD care and subsequent associations between preventive care and incident atherosclerotic heart disease (ASHD) in the general Medicare population with evidence of CKD. Preventive care measures assessed include monitoring of serum creatinine, lipids, calcium-phosphorus, parathyroid hormone (PTH), and glycated hemoglobin (A1c) in diabetic patients; influenza vaccinations; and outpatient nephrologist office visits.
Subsequent ASHD outcomes studied include acute ischemic heart disease events, angina pectoris, cardiac arrest, coronary revascularization procedures, and all-cause death. Data were from the 5% Medicare 1999 to 2005 random sample limited data set standard analytic files.
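The 3-year rolling-cohort design described above (classify CKD/diabetes in year 1, measure preventive care in year 2, count ASHD outcomes in year 3) can be sketched as a simple windowing over the 1999 to 2005 data years:

```python
# Minimal sketch of the 3-year rolling-cohort layout: each cohort pairs
# a classification year, an exposure (preventive-care) year, and an
# outcome year. Function name is hypothetical, not from the study code.

def rolling_cohorts(first_year, last_year):
    """Yield (classify_year, exposure_year, outcome_year) triples."""
    for start in range(first_year, last_year - 1):
        yield (start, start + 1, start + 2)

cohorts = list(rolling_cohorts(1999, 2005))
print(cohorts[0])   # (1999, 2000, 2001)
print(cohorts[-1])  # (2003, 2004, 2005)
```

With seven data years (1999-2005) this yields five overlapping cohorts, which is why the same patient can contribute to several cohorts' exposure and outcome counts.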

3.

Background:

The relationship between cardiovascular disease (CVD) risk factors and dietary intake is unknown among individuals with spinal cord injury (SCI).

Objective:

To investigate the relationship between consumption of selected food groups (dairy, whole grains, fruits, vegetables, and meat) and CVD risk factors in individuals with chronic SCI.

Methods:

A cross-sectional substudy of individuals with SCI was conducted to assess CVD risk factors and dietary intake in comparison with age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Dietary history, blood pressure, waist circumference (WC), fasting blood glucose, high-sensitivity C-reactive protein (hs-CRP), lipids, glucose, and insulin data were collected from 100 SCI participants who were 38 to 55 years old with SCI >1 year and compared to 100 matched control participants from the CARDIA study.

Results:

Statistically significant differences between SCI and CARDIA participants were identified in WC (39.2 vs 36.2 in.; P < .001) and high-density lipoprotein cholesterol (HDL-C; 39.2 vs 47.5 mg/dL; P < .001). Blood pressure, total cholesterol, triglycerides, glucose, insulin, and hs-CRP were similar between SCI and CARDIA participants. No significant relation between CVD risk factors and selected food groups was seen in the SCI participants.

Conclusion:

SCI participants had adverse WC and HDL-C compared to controls. This study did not identify a relationship between consumption of selected food groups and CVD risk factors.

Key words: cardiovascular disease risk factors, dietary intake, spinal cord injury

Cardiovascular disease (CVD) is a leading cause of death in individuals with chronic spinal cord injuries (SCIs).1–5 This is partly because SCI is associated with several metabolic CVD risk factors, including dyslipidemia,6–10 glucose intolerance,6,11–14 and diabetes.15–17 In addition, persons with SCI exhibit elevated markers of inflammation18,19 and endothelial activation20 that are correlated with higher CVD prevalence.21–23 Obesity, and specifically central obesity, another CVD risk factor,24–26 is also common in this population.12,27–29

Dietary patterns with higher amounts of whole grains and fiber have been shown to improve lipid abnormalities,30 glucose intolerance, diabetes mellitus,31–34 hypertension,35 and markers of inflammation36 in the general population. These dietary patterns are also associated with lower levels of adiposity.31 Ludwig et al reported that the strong inverse associations between dietary fiber and multiple CVD risk factors – excessive weight gain, central adiposity, elevated blood pressure, hypertriglyceridemia, low high-density lipoprotein cholesterol (HDL-C), high low-density lipoprotein cholesterol (LDL-C), and high fibrinogen – were mediated, at least in part, by insulin levels.37 Whole-grain food intake is also inversely associated with fasting insulin, insulin resistance, and the development of type 2 diabetes.32,38,39

Studies in the general population have also shown a positive association between the development of metabolic syndrome as well as heart disease and consumption of a Western diet, a diet characterized by high intake of processed and red meat and low intake of fruit, vegetables, whole grains, and dairy.40,41 Red meat, which is high in saturated fat, has been shown to have an association with adverse levels of cholesterol and blood pressure and the development of obesity, metabolic syndrome, and diabetes.40,42,43

Numerous studies have shown that individuals with chronic SCI have poor diet quality.44–49 A Canadian study found that only 26.7% of their sample was adherent to the recommendations about the consumption of fruit, vegetables, and grains from the “Eating Well with Canada’s Food Guide.”44 Individuals with chronic SCI have also been found to have low fiber and high fat intakes when their diets were compared to dietary recommendations from the National Cholesterol Education Program,46 the 2000 Dietary Guidelines for Americans,49 and the recommended Dietary Reference Intakes and the Acceptable Macronutrient Distribution Range.47,48 However, unlike in the general population, the relationship between dietary intake and obesity and CVD risk factors is unknown in the chronic SCI population.
If a dietary pattern consisting of higher intake of whole grains and dietary fiber is favorably associated with obesity and CVD risk factors in individuals with chronic SCI, then trials of increased whole-grain and fiber intake could be conducted to document health benefits and inform recommendations. The purpose of this pilot study is to investigate the association between selected food group intake and CVD risk factors in individuals with chronic SCI as compared to age-, gender-, and race-matched able-bodied individuals enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Data will also be used to plan future studies in the relatively understudied field of CVD and nutrition in individuals with SCI.
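The comparison above rests on pairing each SCI participant with an age-, gender-, and race-matched CARDIA control. A hedged sketch of such matching is below; the data layout, field names, and age tolerance are assumptions for illustration, not the authors' actual matching procedure.

```python
# Illustrative greedy matching of a case to an unused control on
# gender, race, and an age band. All names and the 2-year tolerance
# are hypothetical.

def find_match(case, controls, age_tolerance=2):
    """Return the first unused control matching gender/race within the age band."""
    for ctrl in controls:
        if ctrl.get("used"):
            continue
        if (ctrl["gender"] == case["gender"]
                and ctrl["race"] == case["race"]
                and abs(ctrl["age"] - case["age"]) <= age_tolerance):
            ctrl["used"] = True  # prevent reuse for later cases
            return ctrl
    return None

case = {"age": 45, "gender": "F", "race": "black"}
pool = [{"age": 50, "gender": "F", "race": "black"},
        {"age": 46, "gender": "F", "race": "black"},
        {"age": 45, "gender": "M", "race": "black"}]
match = find_match(case, pool)
print(match["age"])  # 46: same gender/race, within 2 years of age
```

Greedy one-to-one matching like this is the simplest scheme; real studies often use tighter caliper or optimal matching, which the abstract does not specify.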

4.
People with ESRD are at increased risk for cancer, but it is uncertain when this increased risk begins in the spectrum of chronic kidney disease (CKD). The aim of our study was to determine whether moderate CKD increases the risk for cancer among older people. We linked the Blue Mountains Eye Study, a prospective population-based cohort study of 3654 residents aged 49 to 97 yr, and the New South Wales Cancer Registry. During a mean follow-up of 10.1 yr, 711 (19.5%) cancers occurred in 3654 participants. Men but not women with at least stage 3 CKD had a significantly increased risk for cancer (test of interaction for gender P = 0.004). For men, the excess risk began at an estimated GFR (eGFR) of 55 ml/min per 1.73 m2 (adjusted hazard ratio [HR] 1.39; 95% confidence interval [CI] 1.00 to 1.92) and increased linearly as GFR declined. For every 10-ml/min decrement in eGFR, the risk for cancer increased by 29% (adjusted HR 1.29; 95% CI 1.10 to 1.53), with the greatest risk at an eGFR <40 ml/min per 1.73 m2 (adjusted HR 3.01; 95% CI 1.72 to 5.27). The risk for lung and urinary tract cancers, but not prostate cancer, was higher among men with CKD. In conclusion, moderate CKD (stage 3) may be an independent risk factor for the development of cancer among older men but not women, and the effect of CKD on risk may vary for different types of cancer.

Chronic kidney disease (CKD) is common in older people. Among those aged ≥50 yr, the prevalence of moderate (stage 3) CKD or worse, defined as estimated GFR (eGFR) <60 ml/min per 1.73 m2, is >20% in the United States and Australia.1,2 CKD is associated with significant morbidity and premature death.
Cardiovascular complications and deaths are increased in the CKD population independent of traditional risk factors such as diabetes, hypertension, and dyslipidemia.3–5 Increased cancer risk is also well defined in the end-stage kidney disease (ESKD) and kidney transplant populations.6–8 The overall cancer incidence after transplantation is approximately three-fold greater than in the general population.

Observational studies have suggested an increased cancer risk in people with early-stage CKD, before requiring dialysis or transplantation.9,10 An excess risk of 1.2 times for all cancers was reported during the 5 yr before renal replacement therapy in a population-based cohort study of dialysis and transplant patients, but inclusion was limited to those who progressed to ESKD, and comorbidity data were limited.6 Recently, an association between elevated albumin-to-creatinine ratio and cancer incidence was reported in a longitudinal population-based study of older individuals.11 Previous studies have not evaluated the threshold of CKD that is associated with an increased risk for cancer, adjusted for measurement error in estimating the severity of CKD, or determined the independent effect of CKD after accounting for known risk factors for cancer. The aim of our study was to estimate the independent effect of mild to moderately reduced kidney function on the risk for incident cancers among older people and to identify the threshold at which any excess risk begins.
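The abstract above reports an adjusted HR of 1.29 per 10-ml/min decrement in eGFR. Under the usual log-linear proportional-hazards assumption, the HR for a larger decrement is obtained by exponentiation, as this small worked example shows:

```python
# HR extrapolation under an assumed log-linear dose-response:
# a per-10-unit HR of 1.29 implies 1.29 ** (decrement / 10) overall.

def hazard_ratio(decrement_ml_min, hr_per_10=1.29):
    """HR for a given eGFR decrement, assuming log-linearity in eGFR."""
    return hr_per_10 ** (decrement_ml_min / 10)

print(round(hazard_ratio(10), 2))  # 1.29, the reported per-10 HR
print(round(hazard_ratio(20), 2))  # 1.66, i.e. 1.29 squared
```

Note this extrapolation is only valid where the linear trend the authors describe holds; their own estimate for eGFR <40 (HR 3.01) is higher than a naive log-linear projection would give.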

5.

Background:

The high prevalence of pain and depression in persons with spinal cord injury (SCI) is well known. However, the link between pain intensity, interference, and depression, particularly in the acute period of injury, has not received sufficient attention in the literature.

Objective:

To investigate the relationship of depression, pain intensity, and pain interference in individuals undergoing acute inpatient rehabilitation for traumatic SCI.

Methods:

Participants completed a survey that included measures of depression (PHQ-9), pain intensity (“right now”), and pain interference (Brief Pain Inventory: general activity, mood, mobility, relations with others, sleep, and enjoyment of life). Demographic and injury characteristics and information about current use of antidepressants and pre-injury binge drinking also were collected. Hierarchical multiple regression was used to test depression models in 3 steps: (1) age, gender, days since injury, injury level, antidepressant use, and pre-injury binge drinking (controlling variables); (2) pain intensity; and (3) pain interference (each tested separately).
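The three-step hierarchical model described above amounts to fitting nested regressions and asking how much the explained variance (R²) in PHQ-9 scores rises when pain variables are added. A pure-Python sketch of that logic is below, on made-up toy data; the variable names echo the study's measures, but the numbers are illustrative only, not study data.

```python
# Hierarchical-regression sketch: fit y on a base predictor set, add a
# block, and compare R^2. OLS is solved via the normal equations.

def ols_r2(X, y):
    """R^2 of the least-squares fit of y on the columns of X (with intercept)."""
    n, rows = len(y), [[1.0] + list(x) for x in X]
    k = len(rows[0])
    # build normal equations A b = c
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(bi * ri for bi, ri in zip(b, r)) for r in rows]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# toy data: depression tracks pain interference more than intensity
intensity =    [2, 5, 3, 7, 4, 6, 1, 8]
interference = [1, 6, 2, 8, 3, 7, 1, 9]
phq9 =         [3, 13, 5, 17, 7, 15, 2, 19]

r2_step2 = ols_r2([[a] for a in intensity], phq9)             # intensity only
r2_step3 = ols_r2(list(zip(intensity, interference)), phq9)   # add interference
print(r2_step3 > r2_step2)  # adding interference raises R^2: True
```

The study's finding maps onto this structure: the step-2 increment (intensity) was tiny (0.2% to 1.2% of variance), while the step-3 increment (interference) was large (13% to 26%).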

Results:

With one exception, pain interference was the only statistically significant independent variable in each of the final models. Although pain intensity accounted for only 0.2% to 1.2% of the depression variance, pain interference accounted for 13% to 26% of the variance in depression.

Conclusion:

Our results suggest that pain intensity alone is insufficient for understanding the relationship of pain and depression in acute SCI. Instead, the ways in which pain interferes with daily life appear to have a much greater bearing on depression than pain intensity alone in the acute setting.

Key words: depression, pain, spinal cord injuries

The high incidence and prevalence of pain following spinal cord injury (SCI) is well established1–6 and associated with numerous poor health outcomes and low quality of life (QOL).1,7,8 Although much of the literature on pain in SCI focuses on pain intensity, there is emerging interest in the role of pain interference, or the extent to which pain interferes with daily activities of life.7,9 With prevalence as high as 77% in SCI, pain interference impacts life activities such as exercise, sleep, work, and household chores.2,7,10–13 Pain interference also has been associated with disease management self-efficacy in SCI.14 There is a significant relationship between pain intensity and interference in persons with SCI.7

Like pain, the high prevalence of depression after SCI is well established.15–17 Depression and pain often co-occur,18,19 and their overlap ranges from 30% to 60%.19 Pain is also associated with greater duration of depressed mood.20 Pain and depression share common biological pathways and neurotransmitter mechanisms,19 and pain has been shown to attenuate the response to depression treatment.21,22

Despite the interest in pain and depression after SCI and implications for the treatment of depression, their co-occurrence has received far less attention in the literature.23 Greater pain has been associated with higher levels of depression in persons with SCI,16,24 although this is not a consistent finding.25 Similarly, depression in persons with SCI who also have pain appears to be worse than for persons with non-SCI pain, suggesting that the link between pain and depression may be more intense in the context of SCI.26 In one of the few studies of pain intensity and depression in an acute SCI rehabilitation setting, Cairns et al27 found a co-occurrence of pain and depression in 22% to 35% of patients. This work also suggested an evolution of the relationship between pain and depression over the course of the inpatient stay, such that they become associated by discharge. Craig et al28 found that pain levels at discharge from acute rehabilitation predicted depression at 2-year follow-up. Pain interference also has been associated with emotional functioning and QOL in persons with SCI1,7,29,30 and appears to mediate the relationship between ambulation and depression.31

Studies of pain and depression in persons with SCI are often too methodologically limited to examine the independent contributions of pain intensity and interference to depression in an acute setting. For example, they include only pain intensity16,23,25,28,30; classify subjects by either pain plus depression23 or pain versus no pain8,28,30; use pain intensity and interference as predictor and outcome, respectively1; collapse pain interference domains into a single score1; or use only univariate tests (eg, correlations).7,8,25,30 In addition, the vast majority focus on the chronic period of injury. To fill a gap in knowledge, we examined the independent contributions of pain intensity and pain interference to depression, while accounting for injury and demographic characteristics, antidepressant treatment, and pre-injury binge drinking in a sample of persons with acute SCI. We hypothesized that when accounting for both pain intensity and interference in the model, interference would have an independent and significant relationship with depression, above and beyond pain intensity.

6.

Background:

Functional electrical stimulation (FES) therapy has been shown to be one of the most promising approaches for improving voluntary grasping function in individuals with subacute cervical spinal cord injury (SCI).

Objective:

To determine the effectiveness of FES therapy, as compared to conventional occupational therapy (COT), in improving voluntary hand function in individuals with chronic (≥24 months post injury), incomplete (American Spinal Injury Association Impairment Scale [AIS] B-D), C4 to C7 SCI.

Methods:

Eight participants were randomized to the intervention group (FES therapy; n = 5) or the control group (COT; n = 3). Both groups received 39 hours of therapy over 13 to 16 weeks. The primary outcome measure was the Toronto Rehabilitation Institute-Hand Function Test (TRI-HFT), and the secondary outcome measures were Graded Redefined Assessment of Strength Sensibility and Prehension (GRASSP), Functional Independence Measure (FIM) self-care subscore, and Spinal Cord Independence Measure (SCIM) self-care subscore. Outcome assessments were performed at baseline, after 39 sessions of therapy, and at 6 months following the baseline assessment.

Results:

After 39 sessions of therapy, the intervention group improved by 5.8 points on the TRI-HFT’s Object Manipulation Task, whereas the control group changed by only 1.17 points. Similarly, after 39 sessions of therapy, the intervention group improved by 4.6 points on the FIM self-care subscore, whereas the control group did not change at all.
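The TRI-HFT result above is naturally summarized as a between-group difference in mean change scores. A trivial sketch, using the reported group means as single values because individual-level data are not given in the abstract:

```python
# Between-group difference in mean change; group means stand in for
# individual data, which the abstract does not report.

def between_group_difference(change_intervention, change_control):
    """Difference in mean change score between the two arms."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(change_intervention) - mean(change_control)

print(round(between_group_difference([5.8], [1.17]), 2))  # 4.63 points
```

With n = 5 vs n = 3, such a difference is descriptive only, which is why the authors frame the result as pilot data justifying a larger trial rather than as evidence of efficacy.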

Conclusion:

The results of the pilot data justify a clinical trial to compare FES therapy and COT alone for improving voluntary hand function in individuals with chronic incomplete tetraplegia.

Key words: chronic patients, functional electrical stimulation, grasping, therapy, upper limb

In the United States and Canada, there is a steady rate of incidence and an increasing rate of prevalence of individuals living with spinal cord injury (SCI). For individuals with tetraplegia, hand function is essential for achieving a high level of independence in activities of daily living.1–5 For the majority of individuals with tetraplegia, the recovery of hand function has been rated as their highest priority.5

Traditionally, functional electrical stimulation (FES) has been used as a permanent neuroprosthesis to achieve this goal.6–14 More recently, researchers have worked toward development of surface FES technologies that are meant to be used as short-term therapies rather than permanent prostheses. This therapy is frequently called FES therapy or FET. Most of the studies published to date in which FES therapy was used to help improve upper limb function have been done in the subacute and chronic stroke populations,15–23 and 2 have been done in the subacute SCI population.1–3 With respect to the chronic SCI population, no studies to date have looked at the use of FES therapy for retraining upper limb function. In a review by Kloosterman et al,24 the authors discussed studies that used various combinations of therapies for improving upper extremity function in individuals with chronic SCI; however, they found that the only study showing significant before-and-after improvements was that published by Needham-Shropshire et al.25 That study examined the effectiveness of neuromuscular stimulation (NMS)–assisted arm ergometry for strengthening triceps brachii. In that study, electrical stimulation was used to facilitate arm ergometry, and it was not used in the context of retraining reaching, grasping, and/or object manipulation.

Since 2002, our team has been investigating whether FES therapy has the capacity to improve voluntary hand function in complete and incomplete subacute cervical SCI patients who are less than 180 days post injury at the time of recruitment.1–3 In randomized controlled trials (RCTs) conducted by our team, we found that FES therapy is able to restore voluntary reaching and grasping functions in individuals with subacute C4 to C7 incomplete SCI.1–3 The changes observed were transformational; individuals who were unable to grasp at all were able to do so after only 40 one-hour sessions of FES therapy, whereas the control group showed significantly less improvement. Inspired by these results, we decided to conduct a pilot RCT with chronic (≥24 months following injury) C4 to C7 SCI patients (American Spinal Injury Association Impairment Scale [AIS] B-D), which is presented in this article. The purpose of this pilot study was to determine whether FES therapy is able to restore voluntary hand function in individuals with chronic tetraplegia. Based on the results of our prior phase I1 and phase II2,3 RCTs in the subacute SCI population, we hypothesized that individuals with chronic tetraplegia who underwent FES therapy (intervention group) would have greater improvements in voluntary hand function, especially in their ability to grasp and manipulate objects, and perform activities of daily living, when compared to individuals who received a similar volume and duration of conventional occupational therapy (COT; control group).

7.

Background:

Understanding the related fates of muscle density and bone quality after chronic spinal cord injury (SCI) is an important initial step in determining endocrine-metabolic risk.

Objective:

To examine the associations between muscle density and indices of bone quality at the distal lower extremity of adults with chronic SCI.

Methods:

A secondary data analysis was conducted in 70 adults with chronic SCI (C2-T12; American Spinal Injury Association Impairment Scale [AIS] A-D; ≥2 years post injury). Muscle density and cross-sectional area (CSA) and bone quality indices (trabecular bone mineral density [TbBMD] at the distal tibia [4% site] and cortical thickness [CtTh], cortical area [CtAr], cortical BMD [CtBMD], and polar moment of inertia [PMI] at the tibial shaft [66% site]) were measured using peripheral quantitative computed tomography. Calf lower extremity motor score (cLEMS) was used as a clinical measure of muscle function. Multivariable linear regression analyses were performed to determine the strength of the muscle-bone associations after adjusting for confounding variables (sex, impairment severity [AIS A/B vs AIS C/D], duration of injury, and wheelchair use).

Results:

Muscle density was positively associated with TbBMD (b = 0.85 [0.04, 1.66]), CtTh (b = 0.02 [0.001, 0.034]), and CtBMD (b = 1.70 [0.71, 2.69]) (P < .05). Muscle CSA was most strongly associated with CtAr (b = 2.50 [0.12, 4.88]) and PMI (b = 731.8 [161.7, 1301.9]) (P < .05), whereas cLEMS was most strongly associated with TbBMD (b = 7.69 [4.63, 10.76]) (P < .001).
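The coefficients above are reported as b [95% CI]. A hedged sketch of how such a slope and interval arise from regression is below, using simple (single-predictor) least squares on toy values; the study itself used multivariable models adjusted for sex, impairment severity, duration of injury, and wheelchair use, and the numbers here are not study data.

```python
# Simple linear regression slope with an approximate 95% CI
# (normal critical value 1.96 instead of the exact t quantile).

def slope_with_ci(x, y, crit=1.96):
    """Least-squares slope of y on x and an approximate 95% CI."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return b, b - crit * se, b + crit * se

# toy calf muscle density (mg/cm^3) vs trabecular BMD values
density = [60, 65, 70, 75, 80, 85]
tbbmd = [110, 118, 122, 131, 135, 144]
b, lo, hi = slope_with_ci(density, tbbmd)
print(round(b, 2))  # 1.31: BMD units gained per unit of muscle density
```

Reading the study's b = 0.85 [0.04, 1.66] the same way: each unit of muscle density corresponds to 0.85 more units of TbBMD, and the interval excluding zero is what makes the association statistically significant at P < .05.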

Conclusion:

Muscle density and function were most strongly associated with TbBMD at the distal tibia in adults with chronic SCI, whereas muscle size was most strongly associated with bone size and geometry at the tibial shaft.Key words: bone mineral density, bone quality, muscle density, muscle size, osteoporosis, peripheral quantitative computed tomography, spinal cord injurySpinal cord injury (SCI) is associated with sublesional muscle atrophy,13 changes in muscle fiber type,4,5 reductions in hip and knee region bone mineral density (BMD),68 and increased central and regional adiposity after injury.9,10 Adverse changes in muscle and bone health in individuals with SCI contribute to an increased risk of osteoporosis,1113 fragility fractures,14 and endocrine-metabolic disease (eg, diabetes, dyslipidemia, heart disease).1517 Crosssectional studies have shown a higher prevalence of lower extremity fragility fractures among individuals with SCI ranging from 1% to 34%.1820 Fragility fractures are associated with negative health and functional outcomes, including an increased risk of morbidity and hospitalization,21,22 mobility limitations,23 and a reduced quality of life.24 Notably, individuals with SCI have a normal life expectancy, yet fracture rates increase annually from 1% per year in the first year to 4.6% per year in individuals greater than 20 years post injury.25,26Muscle and bone are thought to function as a muscle-bone unit, wherein muscle contractions impose loading forces on bone that produce changes in bone geometry and structure.27,28 A growing body of evidence has shown that individuals with SCI (predominantly those with motor complete injury) exhibit similar patterns of decline in muscle cross-sectional area (CSA) and BMD in the acute and subacute stages following injury.4,11,29 Prospective studies have exhibited a decrease in BMD of 1.1% to 47% per year6,7,30 and up to 73% in the 2 to 7 years following SCI.8,14,31,32 Decreases in muscle CSA have been 
well-documented following SCI, with greater disuse atrophy observed after complete SCI versus incomplete SCI, presumably due to the absence of voluntary muscle contractions and associated mobility limitations.1,2,16 Muscle quality is also compromised early after SCI, resulting in sublesional accumulation of adipose tissue in the chronic stage of injury3,33,34; the exact time course of this event has been poorly elucidated to date. Adipose tissue deposition within and between skeletal muscle is linked to an increase in noncontractile muscle tissue and a reduction in muscle force-generating capacity on bone.35,36 Skeletal muscle fat infiltration is up to 4 times more likely to occur in individuals with SCI,1,16,37 contributing to metabolic complications (eg, glucose intolerance),16 reduced muscle strength and function,38 and mobility limitations3 – all factors that may be associated with a deterioration in bone quality after SCI.The association between lean tissue mass and bone size (eg, BMD and bone mineral content) in individuals with SCI has been wellestablished using dual energy x-ray absorptiometry (DXA).9,10,29,34 However, DXA is unable to measure true volumetric BMD (vBMD), bone geometry, and bone structure. Peripheral quantitative computed tomography (pQCT) is an imaging technique that improves our capacity to measure indices of bone quality and muscle density and CSA at fracture-prone sites (eg, tibia).3,39 Recent evidence from cross-sectional pQCT studies has shown that muscle CSA and calf lower extremity motor score (cLEMS) were associated with indices of bone quality at the tibia in individuals with SCI.13,40 However, neither study measured muscle density (a surrogate of fatty infiltration when evaluating the functional muscle-bone unit). 
Fatty infiltration of muscle is common after SCI1,16,37 and may affect muscle function or the muscle-bone unit, but the association between muscle density and bone quality indices at the tibia in individuals with chronic SCI is unclear. Muscle density measured using pQCT may be an acceptable surrogate of muscle quality when paralysis makes it difficult to assess muscle strength.3,39 Additionally, investigating which muscle outcome (muscle density, CSA, or cLEMS) is most strongly associated with vBMD and bone structure may inform modifiable targets for improving bone quality and reducing fracture risk after chronic SCI.

The primary objective of this secondary analysis was to examine the associations between pQCT-derived calf muscle density and trabecular vBMD at the tibia among adults with chronic SCI. The secondary objective was to examine the associations between calf muscle density, CSA, and function and tibial vBMD, cortical CSA and thickness, and polar moment of inertia (PMI). First, we hypothesized that calf muscle density would be a positive correlate of trabecular and cortical vBMD, cortical CSA and thickness, and PMI at the tibia in individuals with chronic SCI. Second, we hypothesized that, of the key muscle variables (cLEMS, CSA, and density), calf muscle density and cLEMS would be most strongly associated with trabecular vBMD, whereas calf muscle CSA would be most strongly associated with cortical CSA and PMI.
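The secondary question above (which muscle outcome is most strongly associated with trabecular vBMD) comes down to comparing correlation coefficients. A minimal sketch with synthetic data; the variable ranges and the data-generating model are assumptions for illustration, not study values:

```python
import numpy as np

# Synthetic cohort (hypothetical values, not study data).
rng = np.random.default_rng(0)
n = 70
muscle_density = rng.normal(70, 8, n)          # mg/cm^3
muscle_csa = rng.normal(6000, 900, n)          # mm^2
clems = rng.integers(0, 26, n).astype(float)   # calf LEMS, 0-25
# Simulate trabecular vBMD driven mostly by muscle density.
tb_vbmd = 0.9 * muscle_density + 0.001 * muscle_csa + rng.normal(0, 5, n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables."""
    return float(np.corrcoef(x, y)[0, 1])

# Compare the strength of each muscle outcome's association with TbBMD.
correlations = {
    "density": pearson_r(muscle_density, tb_vbmd),
    "CSA": pearson_r(muscle_csa, tb_vbmd),
    "cLEMS": pearson_r(clems, tb_vbmd),
}
strongest = max(correlations, key=lambda k: abs(correlations[k]))
```

With data generated this way, `strongest` is `"density"`; on real study data the same comparison identifies the candidate modifiable target.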

8.

Background:

Chronic spinal cord injury (SCI) is associated with an increase in risk factors for cardiovascular disease (CVD). In the general population, atherosclerosis in women occurs later than in men and usually presents differently. Associations between risk factors and incidence of CVD have not been studied in women with SCI.

Objective:

To determine which risk factors for CVD are associated with increased carotid intima-media thickness (CIMT), a common indicator of atherosclerosis, in women with SCI.

Methods:

One hundred twenty-two women older than 18 years who had sustained a traumatic SCI at least 2 years before entering the study were evaluated. Participants were asymptomatic and without evidence of CVD. Exclusion criteria were acute illness, overt heart disease, diabetes, and treatment with cardiac drugs, lipid-lowering medication, or antidiabetic agents. Measures for all participants were age, race, smoking status, level and completeness of injury, duration of injury, body mass index, serum lipids, fasting glucose, hemoglobin A1c, and ultrasonographic measurements of CIMT. Hierarchical multiple linear regression was conducted to predict CIMT from demographic and physiologic variables.
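The hierarchical regression described in the Methods enters predictor blocks sequentially and examines the change in R². A hedged sketch with synthetic data; the block membership and effect sizes are assumptions for illustration only:

```python
import numpy as np

# Synthetic cohort of 122 (matching the sample size, but not the data).
rng = np.random.default_rng(1)
n = 122
age = rng.normal(45, 12, n)
glucose = rng.normal(95, 12, n)
hba1c = rng.normal(5.5, 0.5, n)
cimt = 0.4 + 0.004 * age + 0.0005 * glucose + rng.normal(0, 0.05, n)

def r_squared(X, y):
    """Fit OLS with an intercept by least squares and return R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Block 1: demographic variables; Block 2: add physiologic variables.
r2_block1 = r_squared(np.column_stack([age]), cimt)
r2_block2 = r_squared(np.column_stack([age, glucose, hba1c]), cimt)
delta_r2 = r2_block2 - r2_block1   # incremental variance explained
```

Because the block-2 model nests block 1, `delta_r2` is nonnegative; a negligible `delta_r2` mirrors the study's finding that only age remained significant once entered.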

Results:

Several variables, including glucose, hemoglobin A1c, age, and race/ethnicity, were significantly correlated with CIMT in univariate analyses, but only age remained significant in the hierarchical regression analysis.

Conclusions:

Our data indicate the importance of CVD in women with SCI.

Key words: age, cardiovascular disease, carotid intima-media thickness, hemoglobin A1c, risk factors, smoking

The secondary conditions of metabolic syndrome and cardiovascular disease (CVD) resulting from spinal cord injury (SCI) are not well understood. In particular, persons with SCI have an increase in metabolic risk factors for CVD,15 but researchers have not determined whether this increase is associated with an increased incidence of CVD. The association has not been shown in reports on mortality or prevalence rates for CVD in people with SCI612 or in the few studies that have appraised CVD in people with SCI using physiologic assessments.1318 Either the question was not addressed, or the evidence is insufficient due to small sample sizes and a lack of objective, prospective epidemiological studies assessing this question. Nevertheless, studies consistently show that metabolic syndrome is prevalent among individuals with SCI.15,12 Metabolic syndrome consists of multiple interrelated risk factors that increase the risk for atherosclerotic heart disease by 1.5- to 3-fold.19,20

Compounding the uncertainty about the association of metabolic risk factors with CVD in SCI are possible gender differences.2124 Findings from studies of men with SCI might not apply to women with SCI.
For example, the correlation between physical activity and high-density lipoprotein (HDL) levels in men with SCI is not found for women with SCI.25,26 Furthermore, able-bodied women develop atherosclerosis later than able-bodied men do, and they usually present differently.27 Some studies indicate that abnormal glucose metabolism may play a particularly important role in CVD in women27; data from our group suggest that this is the case in women with SCI as well.15 Although women constitute 18% to 20% of the SCI population, no studies have evaluated cardiovascular health in women with chronic SCI.

Carotid intima-media thickness (CIMT) is the most robust, highly tested, and often used noninvasive endpoint for assessing the progression of subclinical atherosclerosis in men and women of all ages.2846 For people with SCI, CIMT is a reliable surrogate measure of asymptomatic CVD.15,47 The incidence of asymptomatic CVD appears to increase with the duration of SCI,15 and duration of injury is a cardiac risk factor independent of age.17 Moreover, CIMT is greater in men with SCI than in matched able-bodied controls,48 indicating a subclinical and atypical presentation of CVD. A variety of studies have confirmed the usefulness of high-resolution B-mode ultrasound measurement of CIMT for quantitation of subclinical atherosclerosis.49

To better discern the association of risk factors with measures of subclinical atherosclerotic disease in women with SCI, we performed blood tests and ultrasonographic measurements of CIMT on 122 women with chronic SCI who were free of overt CVD. We tested for the 3 metabolic risk factors that are consistently identified in the varied definitions of metabolic syndrome: abnormal carbohydrate metabolism, abnormally high triglycerides, and abnormally low HDL cholesterol. We also tested for 4 other CVD risk factors: high levels of low-density lipoprotein (LDL), high total cholesterol, high body mass index (BMI), and a history of smoking.

9.
Primary vesicoureteral reflux (pVUR) is one of the most common causes of pediatric kidney failure. Linkage scans suggest that pVUR is genetically heterogeneous, with two loci on chromosomes 1p13 and 2q37 under autosomal dominant inheritance. Absence of pVUR in parents of affected individuals raises the possibility of a recessive contribution to pVUR. We performed a genome-wide linkage scan in 12 large families segregating pVUR, comprising 72 affected individuals. To avoid potential misspecification of the trait locus, we performed parametric linkage analysis under both dominant and recessive models. Analysis under the dominant model yielded no signals across the entire genome. In contrast, we identified a unique linkage peak under the recessive model on chromosome 12p11-q13 (D12S1048), which we confirmed by fine mapping. This interval achieved a peak heterogeneity LOD score of 3.6 with 60% of families linked. The heterogeneity LOD score improved to 4.5 with exclusion of two high-density pedigrees that failed to link anywhere in the genome. The linkage signal on chromosome 12p11-q13 originated from pedigrees of varying ethnicity, suggesting that recessive inheritance of a high-frequency risk allele occurs in pVUR kindreds from many different populations. In conclusion, this study identifies a major new locus for pVUR and suggests that, in addition to genetic heterogeneity, recessive contributions should be considered in all pVUR genome scans.

Vesicoureteral reflux (VUR; OMIM no. 193000) is the retrograde flow of urine from the bladder to the ureters and kidneys during micturition.
Uncorrected, VUR can lead to repeated urinary tract infections, renal scarring, and reflux nephropathy, accounting for up to 25% of pediatric end-stage renal disease.1,2 VUR is commonly seen as an isolated disorder (primary VUR; pVUR), but it can also present in association with complex congenital abnormalities of the kidney and urinary tract or with specific syndromic disorders, such as renal-coloboma and branchio-oto-renal syndromes.38

pVUR has a strong hereditary component, with monozygotic twin concordance rates of 80%.912 Sibling recurrence rates of 30% to 65% have suggested segregation of a single gene or oligogenes with large effects.9,1214 Interestingly, however, the three published genome-wide linkage scans of pVUR have strongly suggested multifactorial determination.1517 Two pVUR loci have been identified with genome-wide significance on chromosomes 1p13 and 2q37 under autosomal dominant transmission with locus heterogeneity.15,16 Multiple suggestive signals have also been reported, but remarkably, these studies show little overlap.1517 These data suggest that pVUR may be extremely heterogeneous, with mutations in different genes each accounting for a fraction of cases. The genes underlying pVUR loci have not yet been identified, but two recent studies have reported segregating mutations in the ROBO2 gene in up to 5% of pVUR families.18,19

Despite evidence for genetic heterogeneity and different subtypes of disease, genetic studies have all modeled pVUR as an autosomal dominant trait.1517,20 Recessive inheritance has generally not been considered because the absence of affected parents can be explained by spontaneous resolution of pVUR with older age.
However, many pVUR cohorts are composed of affected sibships or pedigrees compatible with autosomal recessive transmission, suggesting the potential for alternative modes of inheritance.912,16,17,2022 Systematic family screening to clarify the mode of inheritance is not feasible for pVUR because the standard diagnostic tool, the voiding cystourethrogram (VCUG), is invasive and would expose participants to radiation. Formal assessment of a recessive contribution in sporadic pVUR has also been difficult because studies have been conducted in populations with low consanguinity rates.912,16,17,2022 However, recent studies have identified an unexpected recessive contribution to several complex traits, such as patent ductus arteriosus and autism.23,24 Thus, in addition to genetic heterogeneity, genes with alternative modes of transmission may segregate among pVUR families, and misspecification of the inheritance model may complicate mapping studies of this trait.

Several approaches can be considered to address the difficulties imposed by complex inheritance, variable penetrance, and genetic heterogeneity. Studying large, well-characterized cohorts with newer single-nucleotide polymorphism (SNP)-based technologies can maximize inheritance information across the genome and increase the power of linkage studies.25 In addition, in the setting of locus heterogeneity and uncertainty about the mode of transmission, analysis under both a dominant and a recessive model has greater power than nonparametric methods and more often detects the correct mode of transmission without incurring a significant penalty for multiple testing.2629 We combined these approaches in this study and successfully localized a major gene for VUR, which unexpectedly demonstrates autosomal recessive transmission.
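The LOD and heterogeneity LOD (HLOD) statistics reported above are log10 likelihood ratios. A minimal sketch for the simplified case of fully informative, phase-known meioses (real pedigree likelihoods are considerably more involved):

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """log10 likelihood ratio for linkage at recombination fraction theta
    versus no linkage (theta = 0.5), for phase-known meioses."""
    n = recombinants + nonrecombinants
    likelihood_theta = theta**recombinants * (1 - theta)**nonrecombinants
    likelihood_null = 0.5**n
    return math.log10(likelihood_theta / likelihood_null)

def hlod(family_lods, alpha):
    """Heterogeneity LOD: only a fraction alpha of families are linked;
    each family's likelihood is an admixture of linked and unlinked."""
    return sum(math.log10(alpha * 10**lod + (1 - alpha)) for lod in family_lods)

# Ten nonrecombinant meioses at theta = 0 give LOD = 10 * log10(2) ≈ 3.01,
# the classic genome-wide significance threshold example.
example_lod = lod_score(0, 10, 0.0)
```

Maximizing `hlod` over `alpha` (and `theta`) yields the "peak heterogeneity LOD ... with 60% of families linked" style of result quoted in the abstract.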

10.

Background:

The predictors and patterns of upright mobility in children with a spinal cord injury (SCI) are poorly understood.

Objective:

The objective of this study was to develop a classification system that measures children’s ability to integrate ambulation into activities of daily living (ADLs) and to examine upright mobility patterns as a function of their score and classification on the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam.

Methods:

This is a cross-sectional, multicenter study that used a convenience sample of subjects who were participating in a larger study on the reliability of the ISNCSCI. A total of 183 patients between 5 and 21 years old were included in this study. Patients were asked if they had participated in upright mobility in the last month and, if so, in what environment and with what type of bracing. Patients were then categorized into 4 groups: primary ambulators (PrimA), unplanned ambulators (UnPA), planned ambulators (PlanA), and nonambulators.

Results:

Multivariate analyses found that only lower extremity strength predicted being a PrimA, whereas being an UnPA was predicted by both lower extremity strength and lack of preservation of S4-5 pinprick sensation. PlanA status was associated only with upper extremity strength.

Conclusions:

This study introduced a classification system based on the ability of children with SCI to integrate upright mobility into their ADLs. As in adults, lower extremity strength was a strong predictor of independent mobility (PrimA and UnPA). Lack of pinprick sensation predicted unplanned ambulation but not primary ambulation. Finally, upper extremity strength was a predictor of planned ambulation.

Key words: ambulation, ISNCSCI, pediatrics, spinal cord injury

After a spinal cord injury (SCI), learning to walk often becomes the focus of rehabilitation for children and their families.1,2 Although the majority of children with SCI do not return to full-time functional ambulation, those who accomplish some level of walking report positive outcomes such as feeling “normal” again, being eye-to-eye with peers, and having easier social interactions.3 Although not frequently reported by patients, there is some evidence of physiological benefits as well.39 Regardless of age, upright mobility has been positively associated with community participation and life satisfaction.1012 For children, upright mobility allows them to explore their physical environment, which facilitates independence and learning as part of the typical developmental process.13,14

With the use of standers, walkers, and other assistive devices, as well as a variety of lower extremity orthoses, it is reasonable to expect that some children with spinal injuries will achieve upright stance and mobility.7,9,1321 However, there are 2 main challenges for clinicians and patients: understanding the factors that either encourage or discourage upright activities, and identifying how best to determine whether upright mobility is successful and meaningful. The literature on adults suggests that upright mobility depends on physiological and psychosocial factors.
Physiological factors include the patient’s current age, neurological level, muscle strength, and comorbidities.14,2227 Psychosocial factors include satisfaction with the appearance of the gait pattern, cosmesis, social support for donning/doffing braces, and assistance with transfers and during ambulation.3,9,19,2832

The identification of outcome measures that provide a meaningful indication of successful upright mobility has been difficult. The World Health Organization (WHO) describes 2 constructs for considering outcomes – capacity and performance.33 Capacity refers to maximal capability in a laboratory setting. An example of a capacity measure is the Walking Index for Spinal Cord Injury (WISCI), an ordinal scale used to quantify walking capacity based on assistive device, type of orthosis, and amount of assistance required.34,35 Other capacity measures include the Timed Up and Go test and the 6-minute walk test.36,37 Performance, on the other hand, refers to actual activity during a patient’s daily routine in typical, real-life environments.33 For example, the FIM is an observation scale that scores the patient’s typical daily performance.36,3840 The FIM is considered a burden-of-care measure that determines the amount of actual assistance provided to a patient during typical routines and in typical environments, which may or may not reflect maximal ability or capacity. Performance measures provide an adequate clinical snapshot of a patient’s daily function (what they do), whereas capacity measures are better research tools, as they can detect subtle changes in ambulation (what they can do).

In children, no capacity outcome measures of ambulation have been tested for validity or reliability. Reliable and valid performance measures are also lacking. The WeeFIM is a performance measure for children, but it is not SCI specific.
It is scored on the child’s burden of care, that is, on the maximal assistance required, rather than on the child’s maximal independence or highest capacity of performance during a typical day. For children, another commonly used scale is the Hoffer Scale, which relies on the physician’s or therapist’s subjective determination of the purpose of the upright mobility activities (for function or for exercise).41,42 Because parents and school systems are encouraged to integrate “exercise” ambulation into daily activities, it may not be possible to distinguish between therapeutic and functional ambulation in the home, school, or community environments. In the schools, for example, a teacher or therapist might incorporate upright mobility into the classroom setting by donning a child’s braces and then having him or her ambulate a short distance to stand at an easel in art class or to stand upright when talking to friends during recess. In this situation, walking serves the dual purpose of being functional and therapeutic.

For this study, we decided not to rely on a subjective determination of therapeutic versus functional ambulation as the main outcome measure. Instead, we were interested in the children and adolescents who have successfully integrated independent mobility into their daily activities, regardless of frequency, distance, or purpose. Recent literature on children and adolescents suggests that spontaneity is important for participation in functional and social activities.
For example, a survey of patients using functional electrical stimulation for hand function found a reduction in dependence on others for donning splints, which facilitated independence with activities of daily living (ADLs) in adolescents.4345 In a more recent study, Mulcahey et al46 found that reduced spontaneity was a barrier to social activity for adolescents; during cognitive interviews, children reported not participating in sleepovers because of the planning their bowel/bladder programs required.

To date, there are no measures that integrate spontaneity of standing and/or upright mobility into the daily activities of children. Toward that aim, this study introduces a new scale that attempts to categorize children into 4 mutually exclusive groups: primary ambulators, unplanned ambulators, planned ambulators, and nonambulators. The purpose of this study was to examine ambulation patterns among children and adolescents with SCI as a function of neurological level, motor level, and injury severity, as defined by the motor, sensory, and anorectal examinations of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI). A secondary aim was to determine how performance on the ISNCSCI exam was associated with the ability of children to independently integrate ambulation into their daily routines.

11.

Background:

A large percentage of individuals with spinal cord injury (SCI) report shoulder pain that can limit independence and quality of life. The pain is likely related to the demands placed on the shoulder by transfers and propulsion. Shoulder pathology has been linked to altered scapular mechanics; however, current methods to evaluate scapular movement are invasive, require ionizing radiation, are subject to skin-based motion artifacts, or require static postures.

Objective:

To investigate the feasibility of applying 3-dimensional ultrasound methods, previously used to look at scapular position in static postures, to evaluate dynamic scapular movement.

Method:

This study evaluated the feasibility of the novel application of a method combining 2-dimensional ultrasound and a motion capture system to determine 3-dimensional scapular position during dynamic arm elevation in the scapular plane with and without loading.

Results:

Incremental increases in scapular rotations were noted for extracted angles of 30°, 45°, 60°, and 75° of humeral elevation. Group differences were evaluated between a group of 16 manual wheelchair users (MWUs) and a group of age- and gender-matched able-bodied controls. MWUs had greater scapular external rotation and baseline pathology on clinical exam. MWUs also had greater anterior tilting, with this difference further accentuated during loading. The relationship between demographics and scapular positioning was also investigated, revealing that increased age, pathology on clinical exam, years since injury, and body mass index were correlated with scapular rotations associated with impingement (internal rotation, downward rotation, and anterior tilting).

Conclusion:

Individuals with SCI, as well as other populations susceptible to shoulder pathology, may benefit from the application of this imaging modality to quantitatively evaluate scapular positioning and effectively target therapeutic interventions.

Key words: kinematics, scapula, ultrasound, wheelchair user

The shoulder is a common site of injury across many populations. Because it is the most mobile joint in the body, the high prevalence of disorders is not surprising. Individuals are at increased risk for shoulder pathology when exposed to high forces, sustained postures, and repetitive movements.1 Wheelchair users are exposed to all of these factors in activities of daily living. Among manual wheelchair users (MWUs), 35% to 67% report shoulder pain.27 In this population, the presence of shoulder dysfunction significantly affects function and decreases quality of life.8,9 Because altered scapular kinematics have been linked to a multitude of shoulder problems, the identification of changes in kinematics may allow earlier detection of pathology and targeting of appropriate interventions.1025 However, evaluation of dynamic scapular movement is a challenging task, as the scapula rotates about 3 axes while also gliding underneath overlying tissue. Direct visualization of the bone is ideal but is often limited by cost, availability, and exposure to radiation, and skin-based systems are prone to error.2633

The overall goal of this study was to investigate the feasibility of applying 3-dimensional ultrasound methods, previously used to assess scapular position in static postures, to evaluate dynamic scapular movement.34 The specific goals were as follows:
  1. Evaluate intermediate angles of functional elevation during dynamic movement (30°, 45°, 60°, and 75°). We hypothesize that we will see incremental increases in external rotation, upward rotation, and posterior tipping throughout the movement to maintain the distance between the acromion and humerus.
  2. Compare dynamic scapular movement between MWUs and able-bodied controls (ABs). We anticipate that the nature of wheelchair propulsion and demands of activities of daily living will elucidate differences between this population and ABs with comparably lower daily demands on the shoulder.
  3. Evaluate the effect of loading on scapular movement, as other studies have suggested that differences in kinematics are clearer in the presence of loading.10,35,36
  4. Investigate the relationship between shoulder pathology, age, years since injury, and body mass index (BMI) and scapular positioning.
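Goal 2 above is a two-group comparison (MWUs vs ABs); because group variances need not be equal, Welch's unequal-variance t-statistic is a reasonable tool. A sketch with synthetic placeholder angles, not study measurements:

```python
import numpy as np

# Hypothetical anterior-tilt angles (deg) at one extracted elevation angle.
rng = np.random.default_rng(2)
mwu = rng.normal(12.0, 4.0, 16)   # manual wheelchair users (n = 16)
ab = rng.normal(8.0, 4.0, 16)     # age- and gender-matched controls

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for unequal variances."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return float(t), float(df)

t_stat, df = welch_t(mwu, ab)
```

The resulting `t_stat` is compared against the t-distribution with `df` degrees of freedom to obtain a p-value; with equal group sizes and variances, `df` approaches its upper bound of n1 + n2 − 2.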

12.
13.
Proteinuria and increased renal reabsorption of NaCl characterize the nephrotic syndrome. Here, we show that protein-rich urine from nephrotic rats and from patients with nephrotic syndrome activates the epithelial sodium channel (ENaC) in cultured M-1 mouse collecting duct cells and in Xenopus laevis oocytes heterologously expressing ENaC. The activation depended on urinary serine protease activity. We identified plasmin as a urinary serine protease by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Purified plasmin activated ENaC currents, and inhibitors of plasmin abolished urinary protease activity and the ability to activate ENaC. In nephrotic syndrome, tubular urokinase-type plasminogen activator likely converts filtered plasminogen to plasmin. Consistent with this, the combined application of urokinase-type plasminogen activator and plasminogen stimulated amiloride-sensitive transepithelial sodium transport in M-1 cells and increased amiloride-sensitive whole-cell currents in Xenopus laevis oocytes heterologously expressing ENaC. Activation of ENaC by plasmin involved cleavage and release of an inhibitory peptide from the ENaC γ subunit ectodomain. These data suggest that a defective glomerular filtration barrier allows passage of proteolytic enzymes that have the ability to activate ENaC.

Nephrotic syndrome is characterized by proteinuria, sodium retention, and edema.
Increased renal sodium reabsorption occurs in the cortical collecting duct (CCD),1,2 where a rate-limiting step in transepithelial sodium transport is the epithelial sodium channel (ENaC), which is composed of three homologous subunits: α, β, and γ.3 ENaC activity is regulated by hormones such as aldosterone and vasopressin (AVP)4,5; however, adrenalectomized rats and AVP-deficient Brattleboro rats are capable of developing nephrotic syndrome,1,6 and nephrotic patients do not consistently display elevated levels of sodium-retaining hormones,7,8 suggesting that renal sodium hyper-reabsorption is independent of systemic factors. Consistent with this, sodium retention is confined to the proteinuric kidney in the unilateral puromycin aminonucleoside (PAN) nephrotic model.2,9,10

There is evidence that proteases contribute to ENaC activation by cleaving the extracellular loops of the α- and γ-subunits.1113 Proteolytic activation of ENaC by extracellular proteases critically involves cleavage of the γ subunit,1416 which probably leads to the release of a 43-residue inhibitory peptide from the ectodomain.17 Both cleaved and noncleaved channels are present in the plasma membrane,18,19 allowing proteases such as channel-activating protease 1 (CAP1/prostasin),20 trypsin,20 chymotrypsin,21 and neutrophil elastase22 to activate noncleaved channels from the extracellular side.23,24 We hypothesized that the defective glomerular filtration barrier in nephrotic syndrome allows the filtration of ENaC-activating proteins into the tubular fluid, leading to stimulation of ENaC. This hypothesis was tested in the PAN nephrotic model in rats and with urine from patients with nephrotic syndrome.
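For context on the amiloride-sensitive currents mentioned above: ENaC activity is conventionally quantified as the portion of the measured current abolished by amiloride. A trivial arithmetic sketch with illustrative numbers (not data from the study):

```python
# The amiloride-sensitive current is the difference between the whole-cell
# (or transepithelial) current before and after amiloride block.
def amiloride_sensitive_current(i_total_uA, i_post_amiloride_uA):
    """ENaC-mediated component of the measured current (µA)."""
    return i_total_uA - i_post_amiloride_uA

# Hypothetical currents before and after plasmin exposure (illustrative only).
baseline = amiloride_sensitive_current(12.0, 4.0)
after_plasmin = amiloride_sensitive_current(25.0, 4.5)
fold_activation = after_plasmin / baseline
```

A `fold_activation` greater than 1 would indicate protease-stimulated ENaC activity, which is the readout behind the plasmin experiments described above.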

14.
Administration of activated protein C (APC) protects from renal dysfunction, but the underlying mechanism is unknown. APC exerts both antithrombotic and cytoprotective properties, the latter via modulation of protease-activated receptor-1 (PAR-1) signaling. We generated APC variants to study the relative importance of the two functions of APC in a model of LPS-induced renal microvascular dysfunction. Compared with wild-type APC, the K193E variant exhibited impaired anticoagulant activity but retained the ability to mediate PAR-1-dependent signaling. In contrast, the L8W variant retained anticoagulant activity but lost its ability to modulate PAR-1. By administering wild-type APC or these mutants in a rat model of LPS-induced injury, we found that the PAR-1 agonism, but not the anticoagulant function of APC, reversed LPS-induced systemic hypotension. In contrast, both functions of APC played a role in reversing LPS-induced decreases in renal blood flow and volume, although the effects on PAR-1-dependent signaling were more potent. Regarding potential mechanisms for these findings, APC-mediated PAR-1 agonism suppressed LPS-induced increases in the vasoactive peptide adrenomedullin and infiltration of iNOS-positive leukocytes into renal tissue. However, the anticoagulant function of APC was responsible for suppressing LPS-induced stimulation of the proinflammatory mediators ACE-1, IL-6, and IL-18, perhaps accounting for its ability to modulate renal hemodynamics. Both variants reduced active caspase-3 and abrogated LPS-induced renal dysfunction and pathology. 
We conclude that although PAR-1 agonism is solely responsible for APC-mediated improvement in systemic hemodynamics, both functions of APC play distinct roles in attenuating the response to injury in the kidney.

Acute kidney injury (AKI) leading to renal failure is a devastating disorder,1 with a prevalence varying from 30 to 50% in the intensive care unit.2 AKI during sepsis results in significant morbidity and is an independent risk factor for mortality.3,4 In patients with severe sepsis or shock, the reported incidence ranges from 23 to 51%,57 with mortality as high as 70%, versus 45% among patients with AKI alone.1,8

The pathogenesis of AKI during sepsis involves hemodynamic alterations along with microvascular impairment.4 Although many factors change during sepsis, suppression of the plasma serine protease protein C (PC) has been shown to be predictive of early death in sepsis models,9 and clinically it has been associated with early death resulting from refractory shock and multiple organ failure in severe sepsis.10 Moreover, low levels of PC have been highly associated with renal dysfunction and pathology in models of AKI.11 During vascular insult, PC is activated by the endothelial thrombin-thrombomodulin complex, and the activated protein C (APC) exhibits both antithrombotic and cytoprotective properties.
We have previously demonstrated that APC administration protects from renal dysfunction during cecal ligation and puncture and after endotoxin challenge.11,12 In addition, recombinant human APC [drotrecogin alfa (activated)] has been shown to reduce mortality in patients with severe sepsis at high risk of death.13 Although the ability of APC to protect from organ injury in vivo is well documented,11,14,15 the precise mechanism mediating the response has not been ascertained.

APC exerts anticoagulant properties via feedback inhibition of thrombin through cleavage of factors Va and VIIIa.16 However, APC bound to the endothelial protein C receptor (EPCR) can also exhibit direct, potent cytoprotective properties by cleaving protease-activated receptor-1 (PAR-1).17 Various cell culture studies have demonstrated that direct modulation of PAR-1 by APC results in cytoprotection through several mechanisms, including suppression of apoptosis,18,19 leukocyte adhesion,19,20 inflammatory activation,21 and endothelial barrier disruption.22,23 In vivo, the importance of the antithrombotic activity of APC is well established in model systems24,25 and in humans.26 However, the importance of PAR-1-mediated effects of APC has also been clearly defined in protection from ischemic brain injury27 and in sepsis models.28 Hence, there has been significant debate about whether the in vivo efficacy of APC is attributable primarily to its anticoagulant (inhibition of thrombin generation) or its cytoprotective (PAR-1-mediated) properties.17,29

The same active site of APC is responsible both for inhibition of thrombin generation by cleavage of factor Va and for PAR-1 agonism. Therefore, we sought to generate point mutations that would not affect catalytic activity but would alter substrate recognition, to distinguish the two functions. Using these variants, we examined the relative roles of the two known functions of APC in a model of LPS-induced renal microvascular dysfunction.

15.
16.

Purpose:

To investigate the changes in quality of life (QOL) in persons with spinal cord injury (SCI) and their close persons during the first 2 years post injury.

Method:

Longitudinal, multiple-sample, multiple-wave panel design. Data included 292 patients recruited from Austrian, British, German, Irish, and Swiss specialist SCI rehabilitation centers, along with 55 of their close persons. Questionnaire booklets were administered to both samples at 6 weeks, 12 weeks, 1 year, and 2 years after injury.

Results:

Study 1 investigated the WHOQOL-BREF domains in individuals with SCI and found differences mostly in the physical domain, indicating that QOL increases for persons with SCI from onset. An effect of culture was observed in the psychological and environmental domains, with higher QOL scores in the German-speaking sample. Study 2 compared individuals with SCI to their close persons and found differences in the physical, environmental, and social domains over time. Scores on the psychological dimension did not differ significantly between the persons with SCI and their close persons over time.

Conclusion:

Measured by the WHOQOL-BREF, QOL changes during rehabilitation and after discharge. Apart from the physical dimension, persons with SCI and their close persons seem to experience a similar change in QOL. Further longitudinal research is suggested to clarify the mutual adjustment process of people with SCI and their close persons and to explore cultural differences in QOL between English- and German-speaking countries.
Key words: close persons, quality of life, rehabilitation, spinal cord injury
A spinal cord injury (SCI) is a highly disruptive event in the life of an individual and requires a considerable coping process. Shortly after the injury, all attention is put into stabilizing the patient; from that moment, the individual has to cope with challenges at the physical, social, environmental, and psychological levels. The institutionalized context of rehabilitation provides a largely standardized, supportive setting that helps the person with SCI become acquainted with the recently acquired disability. The health care professionals, the patients, and their close persons, that is, relatives or significant others, work together to prepare the transition back to everyday life. One expectation of rehabilitation is that the person with SCI will regain a satisfactory level of well-being and fulfill his or her aims in life. Many factors may facilitate the recovery of a good quality of life (QOL). Some aspects of SCI are permanent or susceptible only to small changes (eg, the paralysis and other irrevocable neurological problems related to the injury), but many others (eg, psychological and social aspects) can be more or less actively influenced by the person with SCI. A number of recent studies of QOL in SCI emphasize that QOL is not strongly affected by physical variables.1–4 Age3–7 and gender1,2–4,7 are also only weakly related to the QOL of persons with SCI.
Physical health aspects that can explain differences in QOL are pain1,6,8–11 or secondary conditions such as pressure sores and dysreflexia.4,6,11 Psychological resources, that is, personal traits and characteristics that might influence the way a person perceives and manages challenges, are strong predictors of life satisfaction and well-being. Positive affect,3,11 high self-efficacy,1,6,9,11 optimism,6 hope,3,11 and sense of coherence11,12 have proven to be positively associated with better QOL. More dynamic psychological processes, such as the appraisals or coping strategies used by persons with SCI, have also contributed significantly to predicting QOL over time.13 Studies reporting on how the environment affects perceived QOL in persons with SCI are less frequent, despite a probable relation between mobility impairments and the advantages that a well-designed, obstacle-free, secure, and friendly environment might convey to persons with SCI.14–16 A supportive familial environment6 and friends17 are also important. Close persons of individuals with SCI have to adjust to the new situation, for example, accepting a partner with altered needs and varying degrees of daily care. Close persons of persons with SCI reported that becoming a caregiver is difficult and often dramatically life changing.18 Weitzenkamp19 reports high levels of depression in spousal caregivers, sometimes higher than in the partners with SCI themselves. He also compared caregivers of persons with SCI with non-caregiving spouses and found that caregivers showed higher levels of depression and emotional and physical stress. Familial caregivers hold a central position in the life of persons with SCI. Their health and QOL affect the health and well-being of the persons with SCI; life with SCI involves a systemic adjustment process.
Ploypetch5 indicated that participants’ perceived QOL was more strongly affected if the caregiver was a member of the participants’ family rather than a health professional. Lucke20 showed that the level of QOL in persons with SCI and their caregiving spouses may vary significantly over time, with a drop at 3 months post injury and an increase in QOL over the following 6 months. As a consequence, QOL has to be seen as dynamic, reflecting the adjustment to changes occurring over time in individuals and their environment. Some longitudinal studies of persons with SCI and their QOL have been conducted,21–23 but longitudinal studies involving the close persons of individuals with SCI remain scarce in the literature.20 The present report describes the changes in QOL during the first 2 years post injury in individuals with SCI and their close persons from 5 European countries. The first aim is to describe the physical, social, environmental, and psychological QOL in persons with SCI in relation to injury and sociodemographic variables.
Hypothesis 1: It is expected that QOL in persons with SCI is quite high, especially after 2 years.
Second, the experienced QOL of the individuals with SCI and their close persons is compared at 6 weeks, 12 weeks, 1 year, and 2 years post injury.
Hypothesis 2: We expected that scores on the physical QOL domain would be lower in the persons with SCI than in the noninjured close persons. No significant differences were expected in the psychological domain.

17.

Objective:

To identify and classify tools for assessing the influence of spasticity on quality of life (QOL) after spinal cord injury (SCI).

Methods:

Electronic databases (MEDLINE/PubMed, CINAHL, and PsycInfo) were searched for studies published between 1975 and 2012. Dijkers’s theoretical framework on QOL was used to classify tools as either objective or subjective measures of QOL.

Results:

Sixteen studies met the inclusion criteria. Identified objective measures used to assess the influence of spasticity on QOL included the Short Form-36 (SF-36), the Sickness Impact Profile (SIP), and the Health Utilities Index-III (HUI-III). Subjective measures included the Quality of Life Index–SCI Version III (QLI-SCI), the Life Situation Questionnaire–Revised (LSQ-R), the Reciprocal Support Scale (RSS), the Profile of Mood States (POMS), the Spinal Cord Injury Spasticity Evaluation Tool (SCI-SET), and the Patient Reported Impact of Spasticity Measure (PRISM). A number of tools proved either insensitive to the presence of spasticity (QLI-SCI) or yielded mixed (SF-36) or weak (RSS, LSQ-R) results. Tools that were sensitive to spasticity had limited psychometric data for use in the SCI population (HUI-III, SIP, POMS), although 2 were developed specifically for assessing the impact of spasticity on daily life post SCI (SCI-SET, PRISM).

Conclusions:

Two condition-specific subjective measures, the SCI-SET and the PRISM, emerged as the most promising tools for assessing the impact of spasticity on QOL after SCI. Further research should focus on establishing the psychometric properties of these measures for use in the SCI population.
Key words: outcome measurement, quality of life, spasticity, spinal cord injury
Spasticity of the upper limbs, trunk, or lower limbs is typically experienced by individuals with an upper motor neuron spinal cord injury (SCI) following spinal shock, and the resulting spasms often negatively impact quality of life (QOL).1 Although there is great variability in definitions of spasticity, the most commonly cited definition is by Lance2(p485): “Spasticity is a motor disorder characterized by a velocity-dependent increase in tonic stretch reflexes (muscle tone) with exaggerated tendon jerks, resulting from hyperexcitability of the stretch reflex, as one component of the upper motor neuron syndrome.” A wider definition of spasticity includes increased exteroceptive reflexes, as well as loss of motor function (ie, muscle power and coordination).3 The notion is that muscle weakness and impaired coordination are not part of the spasticity syndrome itself but rather are associated with spasticity.3–6 Spasticity following SCI is prevalent, with 65% to 78% of persons more than 1 year post injury reporting its occurrence.7,8 The decision to treat spasticity largely depends on the frequency, severity, and impact of the spasms on a person’s daily life.9,10 Treatment may include conservative physical therapy,11 possibly in combination with other modalities,1 including pharmacological treatments (eg, diazepam,12,13 baclofen,14 clonidine,14,15 tizanidine,12,13,16 dantrolene sodium,12,17 cyproheptadine,14,18 and cannabis17).
Persons who do not respond to oral administration of medications may be surgically implanted with a pump for intrathecal administration of baclofen19–21 or receive injections of chemodenervation agents (phenol and ethanol22 or botulinum toxin16,23). Severe recalcitrant cases require surgical intervention, including dorsal rhizotomy and cordotomy.24 Continued improvements in the definition and management of spasticity are hampered by the lack of valid and reliable tools for assessing spasticity impact.1 Relatedly, valid and reliable assessment of QOL post SCI is itself a challenge,25 although it is an important outcome for understanding the additional burden of specific secondary health conditions that emerge post SCI and for gauging the success of rehabilitation interventions in minimizing their frequency and severity. Symptoms of spasticity may have a profound influence on an individual’s QOL,7,26 including lifestyle and sense of well-being,27 by limiting workplace participation, adding to the cost of medication, and increasing attendant care requirements.8,21,28 Despite these findings, there are problems with assessing the influence of spasticity on QOL, related to the multidimensionality and breadth of spasticity definitions, the fluctuating nature of associated symptoms, and their clinical impact. It is therefore essential that health care professionals be made aware of available tools designed to assess the influence of spasticity on QOL after SCI. Furthermore, investigators should possess a broader understanding of the different conceptualizations of QOL and which tools correspond to each of these. This will help ensure that the objectives of a study are well aligned with the selected QOL tool.
Gaining prominence in the field is the notion that QOL can be measured from an objective or a subjective perspective.29 Objective measures are based on the assumption that there is widespread agreement about what constitutes QOL.25 Such measures focus on external conditions and contain items that can be defined and quantified to reflect societal standards. Conversely, subjective measures are designed with the assumption that QOL can be judged only by the individuals experiencing it.30 Although there are advantages and disadvantages inherent in each measurement type,29 subjective measures give patients a means of providing health professionals with a greater understanding of QOL and its connection to their health and well-being following SCI, whereas objective measures can be used to inform decision makers on how to allocate funds and resources for various interventions. To date, no systematic reviews on the influence of spasticity on QOL, or on the appropriateness of QOL measures for assessing spasticity, have been conducted. Given the substantial influence spasticity has on QOL, there is a need to improve conceptual understanding of QOL to ensure that investigators employ appropriate research designs as well as suitable outcome measures to assess this prevalent secondary health condition. Hence, the purpose of this systematic literature review was to classify and evaluate outcome measures used to assess the influence of spasticity on QOL following SCI.

18.
Arteriovenous (AV) access failure resulting from venous neointimal hyperplasia is a major cause of morbidity in patients with ESRD. To understand the role of chronic kidney disease (CKD) in the development of neointimal hyperplasia, we created AV fistulae (common carotid artery to jugular vein in an end-to-side anastomosis) in mice with or without CKD (renal ablation or sham operation). At 2 and 3 wk after operation, neointimal hyperplasia at the site of the AV anastomosis increased 2-fold in animals with CKD compared with controls, but cellular proliferation in the neointimal hyperplastic lesions did not differ significantly between the groups, suggesting that the enhanced neointimal hyperplasia in the setting of CKD may be secondary to a migratory phenotype of vascular smooth muscle cells (VSMC). In ex vivo migration assays, aortic VSMC harvested from mice with CKD migrated significantly more than VSMC harvested from control mice. Moreover, animals with CKD had higher serum levels of osteopontin, which stimulates VSMC migration. When we treated animals with bone morphogenetic protein-7, which promotes VSMC differentiation, before creation of the AV anastomosis, the effect of CKD on the development of neointimal hyperplasia was eliminated.
In summary, CKD accelerates development of neointimal hyperplasia at the anastomotic site of an AV fistula, and administration of bone morphogenetic protein-7 neutralizes this effect.
Arteriovenous (AV) access dysfunction, such as stenosis and thrombosis, constitutes a major cause of morbidity for patients on chronic hemodialysis for end-stage kidney disease.1 While AV fistulae constructed with native vessels are the best vascular access available, owing to a lower incidence of stenosis, thrombosis, and infection compared with vascular grafts or central venous catheters, their failure rate of up to 66% at 2 yr2 remains unacceptably high; hemodialysis access-related hospitalizations are on the rise, and their cost is well over one billion dollars per annum in the United States alone.3 The cause of failure is predominantly occlusive neointimal hyperplastic (NH) lesion formation at the anastomosis and/or the outflow veins, followed by in situ thrombosis.4–7 Unlike the restenosis seen with preocclusive atherosclerotic arteries after angioplasty and stenting, neointimal (new intimal) hyperplasia is seen at the anastomosis involving an artery or a synthetic graft (e.g., expanded polytetrafluoroethylene [ePTFE] or Dacron) and a vein in the upper extremities. Although these blood vessels are predisposed to calcification, pre-existing NH, and needle stick injury, they are usually free of atherosclerotic plaque. Therefore, directional migration of vascular smooth muscle cells (VSMCs) into the luminal surface is critical to anastomotic NH lesion formation.8,9 Several animal models with native or synthetic graft accesses have been used to gain insight into the pathologic mechanisms of NH lesion development.10,11 However, these studies lacked the critical component of chronic kidney disease (CKD), and whether CKD plays a role in NH lesion formation remains unknown.
CKD, along with a host of other factors such as hemodynamic forces, inflammatory mediators, platelet activation, the coagulation cascade, and metabolic derangements, has been implicated in the development of atherosclerosis.12,13 In this study, we used a murine model of CKD, modified from Gagnon and Gallimore,14 to assess the effect of CKD on NH formation after AV fistula creation.

19.
In male patients with Fabry disease, an X-linked disorder of glycosphingolipid metabolism caused by deficient activity of the lysosomal enzyme α-galactosidase A, kidney dysfunction becomes apparent by the third decade of life and invariably progresses to ESRD without treatment. Here, we summarize the effects of agalsidase alfa on kidney function from three prospective, randomized, placebo-controlled trials and their open-label extension studies involving 108 adult male patients. The mean baseline GFR among 54 nonhyperfiltrating patients (measured GFR <135 ml/min per 1.73 m2) treated with placebo was 85.4 ± 29.6 ml/min per 1.73 m2; during 6 mo of placebo, the mean annualized rate of change in GFR was −7.0 ± 32.9 ml/min per 1.73 m2. Among 85 nonhyperfiltrating patients treated with agalsidase alfa, the annualized rate of change was −2.9 ± 8.7 ml/min per 1.73 m2. Treatment with agalsidase alfa did not affect proteinuria. Multivariate analysis revealed that GFR and proteinuria category (<1 or ≥1 g/d) at baseline significantly predicted the rate of decline of GFR during treatment. This summary represents the largest group of male patients who had Fabry disease and for whom the effects of enzyme replacement therapy on kidney function have been studied. 
These data suggest that agalsidase alfa may stabilize kidney function in these patients.
Fabry disease is an X-linked disorder of glycosphingolipid metabolism caused by deficient activity of the lysosomal enzyme α-galactosidase A (α-Gal A),1 resulting from one of many mutations of the gene GLA located at Xq22.1.2 The disease occurs with an incidence of approximately 1 in 117,000 live male births,3 although recent surveys suggest that the incidence may be much higher.4 Fabry disease primarily affects male individuals; female heterozygotes are reported to experience all of the signs and symptoms of Fabry disease but with a later onset and a more variable phenotype than is seen in men.5,6 The signs and symptoms of Fabry disease are thought to be due to progressive accumulation of globotriaosylceramide (Gb3) within tissues and organs. Among other signs and symptoms, progressive kidney dysfunction is nearly universal in male individuals with Fabry disease. The initial sign of decline in kidney function is proteinuria or microalbuminuria, which has been reported in affected male individuals as young as 16 yr7 and is present in half of male individuals by age 35 yr.8 Gb3 accumulation within the glomeruli results in mesangial widening and glomerular sclerosis, with a resultant loss of filtering capacity.9 Chronic renal insufficiency (defined as serum creatinine levels ≥1.5 mg/dl) has its onset in the third decade of life and progresses rapidly to ESRD, with a reported average rate of decline in filtering capacity of 12.2 ml/min per yr (range 3.3 to 33.7 ml/min per yr) once chronic renal insufficiency has been reached.8 The average age at initiation of dialysis for ESRD in male patients with Fabry disease ranged between 36.7 and 42.0 yr.5,10 Kidney dysfunction in female patients with Fabry disease is less prevalent than and usually not as severe as that in male patients but does progress to ESRD in some cases.6,10 Enzyme replacement therapy (ERT) with human α-Gal A
was approved for treatment of Fabry disease in 2001 and has been reported to alleviate neuropathic pain,11 result in regression of hypertrophic cardiomyopathy,12 improve sweat function,13 reduce plasma and urine sediment Gb3 levels,11,14 and reduce microvascular endothelial Gb3 deposits.11,14 In one long-term (up to 4.5 yr) study of 25 male individuals with Fabry disease, agalsidase alfa, an α-Gal A produced by gene activation in a human cell line, was reported to stabilize kidney function in patients with stage 1 or 2 chronic kidney disease15 at baseline and to slow the progression of renal dysfunction in adult male patients with stage 3 chronic kidney disease compared with historical control subjects.16 Observational studies of the patients enrolled in the Fabry Outcome Survey (FOS) suggested a similar renoprotective effect of agalsidase alfa.17–19 The results of a recent, double-blind, placebo-controlled trial suggested that agalsidase beta, a recombinant form of α-Gal A produced in Chinese hamster ovary cells, slowed the progression of major clinical events in patients with Fabry disease and mild to moderate kidney disease, with the benefit being greater in patients with estimated GFR (eGFR) >55 ml/min per 1.73 m2 at baseline than in those with baseline eGFR ≤55 ml/min per 1.73 m2.20 In this report, we present a summary of the effects of agalsidase alfa on kidney function in all of the adult male patients who were enrolled in prospective, randomized, placebo-controlled clinical studies of agalsidase alfa and their open-label extension studies and who were treated for at least 12 mo.
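The trials summarized above report kidney outcomes as an annualized rate of change in GFR. As a purely illustrative sketch (not the trials' actual statistical method, and using invented measurement values), a per-patient annualized slope can be estimated by ordinary least squares over serial GFR measurements:

```python
def annualized_gfr_slope(years, gfr):
    """Ordinary least-squares slope of GFR (ml/min per 1.73 m2) versus time (years).

    A negative value corresponds to an annualized rate of GFR decline.
    """
    n = len(years)
    mean_t = sum(years) / n
    mean_g = sum(gfr) / n
    # Slope = covariance(time, GFR) / variance(time)
    num = sum((t - mean_t) * (g - mean_g) for t, g in zip(years, gfr))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Hypothetical serial measurements over 2 years of follow-up
slope = annualized_gfr_slope([0.0, 0.5, 1.0, 1.5, 2.0], [85, 82, 78, 76, 73])
print(round(slope, 1))  # -6.0 ml/min per 1.73 m2 per year
```

A trial would then compare the distribution of such per-patient slopes (or a mixed-model equivalent) between treatment and placebo arms.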

20.
Chromogranin A (CHGA), a protein released from secretory granules of chromaffin cells and sympathetic nerves, triggers endothelin-1 release from endothelial cells. CHGA polymorphisms associate with an increased risk for ESRD, but whether altered CHGA–endothelium interactions may explain this association is unknown. Here, CHGA led to the release of endothelin-1 and Weibel–Palade body exocytosis in cultured human umbilical vein endothelial cells. In addition, CHGA triggered secretion of endothelin-1 from glomerular endothelial cells and of TGF-β1 from mesangial cells cocultured with glomerular endothelial cells. In humans, plasma CHGA correlated positively with endothelin-1 and negatively with GFR. GFR was highly heritable in twin pairs, and common promoter haplotypes of CHGA predicted GFR. In patients with progressive hypertensive renal disease, a CHGA haplotype predicted the rate of GFR decline. In conclusion, these data suggest that CHGA acts through the glomerular endothelium to regulate renal function.
Chromogranin A (CHGA) is the major soluble protein released from secretory granules of chromaffin cells and sympathetic nerves,1 in which it is costored and coreleased with catecholamines. Increased serum CHGA has been detected not only in patients with essential hypertension2 but also in hypertensive consequences such as cardiac3 or renal4 failure. In hypertensive ESRD, we recently reported that genetic polymorphism at CHGA influences disease risk/susceptibility.5 We also found a correlation between endothelin-1 (EDN1) and CHGA secretion; we then determined that the CHGA locus is a trans-quantitative trait locus for EDN1 secretion and found that CHGA itself can trigger EDN1 release from endothelial cells.6 EDN1 may play multiple roles in chronic kidney disease, such as elevating intraglomerular pressure7,8 and triggering renal interstitial fibroblasts to proliferate with increased extracellular matrix production.9
Could EDN1 secretion provide a mechanistic link between CHGA and renal dysfunction? Here, we explored the mechanism of CHGA action on EDN1 secretion from endothelial cells and investigated the effects of the association of CHGA and EDN1 on renal traits. Our results suggest a glomerular pathway whereby CHGA may alter renal function.
