Similar Literature
1.

BACKGROUND

Rebound pain refers to the pattern, seen on a linear graph of visual analogue scale (VAS) scores, of pain worsening between 2 days and 2 weeks after selective nerve root block (SNRB).

PURPOSE

The purpose of this study was to determine whether sodium hyaluronate and carboxymethyl cellulose solution (HA-CMC sol) injection could reduce the occurrence of rebound pain at 3 days to 2 weeks after SNRB in patients with radiculopathy, compared with injection of corticosteroids and local anesthetics alone.

STUDY DESIGN/SETTING

Double-blinded, randomized controlled clinical trial.

PATIENT SAMPLE

A total of 44 patients who completed follow-up (23 of 24 in the Guardix group and 21 of 24 in the control group) were included in this study.

OUTCOME MEASUREMENT

Patients were asked to write down their average VAS pain scores daily for 12 weeks. Functional outcomes were assessed by the Oswestry Disability Index, Roland Morris Disability Questionnaire, and Short Form-36.

METHOD

A cocktail of corticosteroids, 1% lidocaine, 0.5% bupivacaine, and 1 mL of normal saline was used for the control group, whereas a cocktail of corticosteroids, 1% lidocaine, 0.5% bupivacaine, and 1 mL of HA-CMC solution was used for the Guardix group. Study participants were randomized into one of the two treatment regimens and followed up for 3 months.

RESULTS

VAS score at 2 weeks after the procedure was 4.19±1.32 in the control group, significantly higher (p<.05) than 2.43±1.24 in the Guardix group. VAS score at 6 weeks was 4.00±1.23 in the control group and 3.22±1.45 in the Guardix group, a difference that was not significant (p=.077). There were no significant differences in functional outcomes at 6 or 12 weeks after the procedure.

CONCLUSIONS

Compared with the conventional cocktail used for SNRB, the addition of HA-CMC sol showed effective control of rebound pain at 3 days to 2 weeks after the procedure.

2.

Background Context

Posterior lumbar fusion (PLF) is a commonly performed procedure. The evolution of bundled payment plans is beginning to require physicians to more closely consider patient outcomes up to 90 days after an operation. Current quality metrics and other databases often consider only 30 postoperative days. The relatively new Healthcare Cost and Utilization Project Nationwide Readmissions Database (HCUP-NRD) tracks patient-linked hospital admissions data for up to one calendar year.

Purpose

To identify readmission rates within 90 days of discharge following PLF and to put these in the context of 30-day readmission and baseline readmission rates.

Study Design

Retrospective study of patients in the HCUP-NRD.

Patient Sample

All patients undergoing PLF in the first 9 months of 2013 were identified in the HCUP-NRD.

Outcome Measures

Readmission patterns up to a full calendar year after discharge.

Methods

PLFs performed in the first 9 months of 2013 were identified in the HCUP-NRD. Patient demographics and readmissions were tracked for 90 days after discharge. To estimate the average admission rate in an untreated population, the average daily admission rate in the last quarter of the year was calculated for a subset of PLF patients who had their operation in the first quarter of the year. This study was deemed exempt by the institution's Human Investigation Committee.

Results

Of 26,727 PLFs, 1,580 patients (5.91%) were readmitted within 30 days of discharge and 2,603 patients (9.74%) were readmitted within 90 days of discharge. Of all readmissions within 90 days, 54.56% occurred in the first 30 days. However, counting only readmissions above the baseline admission rate of a matched population from the fourth quarter of the year (0.08% of the population per day), 89.78% of 90-day readmissions occurred within the first 30 days.
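As a rough illustration of this baseline-adjustment logic, the sketch below subtracts a constant daily baseline admission rate from synthetic daily readmission counts. The cohort size and decay curve are invented for illustration; only the 0.08%-of-population-per-day baseline figure comes from the abstract.

```python
# Illustrative sketch, not the study's actual code: given daily readmission
# counts and a constant baseline admission rate, estimate what share of
# baseline-adjusted ("excess") 90-day readmissions falls in the first 30 days.

def excess_within_window(daily_counts, cohort_size, baseline_rate, window):
    """Sum of excess readmissions (observed minus baseline-expected)
    over the first `window` days."""
    expected_per_day = cohort_size * baseline_rate
    return sum(max(0.0, c - expected_per_day) for c in daily_counts[:window])

cohort = 10_000
baseline = 0.0008                       # 0.08% of the population per day
# Synthetic daily counts: an early spike decaying back toward baseline.
daily = [cohort * baseline + 60 * (0.9 ** d) for d in range(90)]

excess_30 = excess_within_window(daily, cohort, baseline, 30)
excess_90 = excess_within_window(daily, cohort, baseline, 90)
print(f"share of excess 90-day readmissions in first 30 days: "
      f"{excess_30 / excess_90:.1%}")
```

With these synthetic numbers most of the excess falls in the first month, mirroring the qualitative pattern the study reports; the exact percentage depends entirely on the assumed decay.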

Conclusions

The current study delineates readmission rates after PLF and puts this in the context of 30-day readmission rates and baseline readmission rates for those undergoing PLF. These results are important for patient counseling, planning, and preparing for potential bundled payments in spine surgery.

3.

BACKGROUND CONTEXT

Pedicle screw loosening is common after spinal fusion and can be associated with pseudoarthrosis and pain. When screw loosening is suspected on standard radiographs, CT is currently considered the advanced imaging modality of choice. MRI with new metal artifact reduction techniques holds potential to be sensitive in the detection of screw loosening. However, the sensitivity and specificity of these imaging modalities are not yet clear.

PURPOSE

To evaluate the sensitivity and specificity of three different image modalities (standard radiographs, CT, and MRI) for detection of pedicle screw loosening.

STUDY DESIGN/SETTING

Cross-sectional diagnostic study.

PATIENT SAMPLE

Forty-one patients (159 pedicle screws) undergoing revision surgeries after lumbar spinal fusion between August 2014 and April 2017 with preoperative radiographs, CT, and MRI with spinal metal artifact reduction (STIR WARP and TSE high bandwidth sequences).

OUTCOME MEASURES

Sensitivity and specificity in detection of screw loosening for each imaging modality.

METHODS

Screw torque force was measured intraoperatively and compared with preoperative screw loosening signs such as peri-screw edema in MRI and peri-screw osteolysis in CT and radiographs. A torque force of less than 60 Ncm was used to define a screw as loosened.

RESULTS

Sensitivity and specificity in the detection of screw loosening were 43.9% and 92.1% for MRI, 64.8% and 96.7% for CT, and 54.2% and 83.5% for standard radiographs, respectively.
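Accuracy figures of this kind follow mechanically from per-screw data. The sketch below shows the computation on invented torque values and imaging reads; only the 60 Ncm threshold is taken from the study's definition, everything else is hypothetical.

```python
# Minimal sketch of the accuracy metrics: each screw has an intraoperative
# torque (ground truth: < 60 Ncm = loosened, per the study's definition)
# and a binary imaging read. All screw data below are hypothetical.

def sensitivity_specificity(truth, reads):
    tp = sum(t and r for t, r in zip(truth, reads))          # true positives
    tn = sum(not t and not r for t, r in zip(truth, reads))  # true negatives
    fp = sum(not t and r for t, r in zip(truth, reads))      # false positives
    fn = sum(t and not r for t, r in zip(truth, reads))      # false negatives
    return tp / (tp + fn), tn / (tn + fp)

torques = [30, 45, 80, 120, 55, 90, 20, 150]   # Ncm, hypothetical
loosened = [t < 60 for t in torques]           # intraoperative ground truth
mri_read = [True, False, False, False, True, True, True, False]

sens, spec = sensitivity_specificity(loosened, mri_read)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```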

CONCLUSIONS

Despite the improvements achieved with metal artifact reduction MRI techniques, CT remains the modality of choice. Even so, CT fails to detect all loosened pedicle screws.

4.

BACKGROUND CONTEXT

Data on the long-term outcome after fusion for isthmic spondylolisthesis are scarce.

PURPOSE

To study patient-reported outcomes and adjacent segment degeneration (ASD) after fusion for isthmic spondylolisthesis and to compare patient-reported outcomes with a control group.

STUDY DESIGN/SETTING

A prospective study including a cross-sectional control group.

PATIENT SAMPLE

Patients with isthmic spondylolisthesis underwent posterior lumbar interbody fusion (PLIF) (n=86) or posterolateral fusion (PLF) (n=77). Patient-reported outcome data were available for 73 patients in the PLIF group and 71 in the PLF group at a mean of 11 (range 5–16) years after baseline. Seventy-seven patients in the PLIF group and 54 in the PLF group had radiographs at a mean of 14 (range 9–19) years after baseline. One hundred thirty-six randomly selected persons from the population served as controls for the patient-reported outcomes.

OUTCOME MEASURES

Patient-reported outcomes included the following: global outcome, Oswestry Disability Index, Disability Rating Index, and Short Form 36. ASD was determined from radiographs using the University of California Los Angeles (UCLA) grading scale.

METHODS

The chi-square test or analysis of covariance (ANCOVA) was used for group comparisons. The ANCOVA was adjusted for follow-up time, smoking, Meyerding slippage grade, teetotaler status (yes/no) and, if available, the baseline level of the dependent variable.

RESULTS

There were no significant patient-reported outcome differences between the PLIF group and the PLF group. The prevalence of ASD was 42% (32/77) in the PLIF group and 26% (14/54) in the PLF group (p=.98). The patient-reported outcome data indicated lower physical function and more pain in individuals with surgically treated isthmic spondylolisthesis compared to the controls.

CONCLUSIONS

PLIF and PLF groups had similar long-term patient-reported and radiological outcomes. Individuals with isthmic spondylolisthesis have lower physical function and more pain several years after surgery when compared to the general population.

5.

BACKGROUND CONTEXT

Degenerative lumbar scoliosis (DLS) is an increasingly common spinal disorder of which current management is characterized by a substantial variety in treatment advice. To improve evidence-based clinical decision-making and increase uniformity and transparency of care, the Scoliosis Research Society established appropriateness criteria for surgery for DLS. In these criteria, however, the patient perspective was not formally incorporated. Since patient perspective is an increasingly important consideration in informed decision-making, embedding patient-reported outcome measures (PROMs) in the appropriateness criteria would allow for an objective and transparent patient-centered approach.

PURPOSE

To evaluate the extent to which patient perspective is integrated into the appropriateness criteria of surgery for DLS.

STUDY DESIGN

Single center, retrospective, cohort study.

PATIENT SAMPLE

150 patients with symptomatic degenerative lumbar scoliosis.

OUTCOME MEASURES

The association between appropriateness for surgery and various PROMs [Visual Analogue Scale for pain, Short Form 36 (SF-36), Pain Catastrophizing Scale (PCS), Hospital Anxiety Depression Scale (HADS), and Oswestry Disability Index (ODI)].

METHODS

Medical records of all patients with symptomatic DLS were reviewed and scored according to the appropriateness criteria. To assess the association between the appropriateness criteria and the validated PROMs, analysis of variance was used to test for differences in PROMS for each of the three categories resulting from the appropriateness criteria. To assess how well PROMs can discriminate between appropriate and inappropriate, we used a logistic regression analysis. Discriminative ability was subsequently determined by computing the area under the curve (AUC), resulting from the logistic regression analysis. Spearman rank analysis was used to establish a correlation pattern between the PROMs used and the appropriateness criteria.
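The discrimination analysis described above can be illustrated with the Mann-Whitney formulation of the AUC: the probability that a randomly chosen "appropriate for surgery" case scores higher than an "inappropriate" one. A minimal sketch, using invented PROM scores rather than the study's data:

```python
# Mann-Whitney formulation of the AUC. The combined PROM scores below are
# invented for illustration; they are not the study's data.

def auc(pos_scores, neg_scores):
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    # Ties count as half a win, matching the trapezoidal ROC AUC.
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

appropriate   = [7.1, 6.4, 8.0, 5.9, 7.7]   # hypothetical combined PROM scores
inappropriate = [4.2, 5.1, 3.8, 6.0, 4.9]

print(f"AUC = {auc(appropriate, inappropriate):.2f}")
```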

RESULTS

There was a significant association between the appropriateness of surgery and the PROMs. The discriminative ability of the PROMs as a group for appropriateness of surgery was strong (AUC of 0.83). However, when considered in isolation, the predictive power of any individual PROM was poor. The different categories of the appropriateness criteria were significantly associated with the PROMs used.

CONCLUSION

There is a statistically significant association between the appropriateness criteria of surgery for DLS and PROMs. Implementation of PROMs into the appropriateness criteria may lead to more transparent, quantifiable, and uniform clinical decision-making for DLS.

6.

BACKGROUND CONTEXT

Postdischarge care is a significant source of cost variability after posterior lumbar fusion surgery. However, there remains limited evidence associating postdischarge inpatient services and improved postoperative outcomes, despite the high cost of these services.

PURPOSE

To determine the association between posthospital discharge to inpatient care facilities and postoperative complications.

STUDY DESIGN

A retrospective review of all 1- to 3-level primary posterior lumbar fusion cases in the 2010-2014 National Surgical Quality Improvement Program registry was conducted. Propensity scores for discharge destination were determined based on observable baseline patient characteristics. Multivariable propensity-adjusted logistic regressions were performed to determine associations between discharge destination and postdischarge complications, with adjusted odds ratios (OR) and 95% confidence intervals (CI).

RESULTS

A total of 18,652 posterior lumbar fusion cases were identified; 15,234 (82%) were discharged home and 3,418 (18%) were discharged to continued inpatient care. Multivariable propensity-adjusted analysis demonstrated that discharge to inpatient facilities was independently associated with higher risk of thromboembolic complications (OR [95% CI]: 1.79 [1.13–2.85]), urinary complications (1.79 [1.27–2.51]), and unplanned readmissions (1.43 [1.22–1.68]).

CONCLUSIONS

Discharge to continued inpatient care versus home after primary posterior lumbar fusion is independently associated with higher odds of certain major complications. To optimize clinical outcomes as well as cost savings in an era of value-based reimbursements, clinicians and hospitals should carefully investigate which patients might be better served by home discharge after surgery.

7.

BACKGROUND CONTEXT

Adjacent segment disease (ASD) is a well-known complication after lumbar fusion. Lumbar lateral interbody fusion (LLIF) may provide an alternative method of treatment for ASD while avoiding the morbidity associated with revision surgery through a traditional posterior approach. This is the first biomechanical study to evaluate the stability of lateral-based constructs for treating ASD in an existing multilevel fusion model.

PURPOSE

We aimed to evaluate the biomechanical stability of anterior column reconstruction through less invasive lateral-based interbody techniques compared with traditional posterior spinal fusion for the treatment of ASD in an existing multilevel fusion.

STUDY DESIGN/SETTING

Cadaveric biomechanical study of laterally based interbody strategies for treating ASD.

METHODS

Eighteen fresh-frozen cadaveric specimens were nondestructively loaded in flexion, extension, and lateral bending. The specimens were randomized into three different groups according to planned posterior spinal instrumented fusion (PSF): group 1: L5–S1, group 2: L4–S1, and group 3: L3–S1. In each group, ASD was considered the level cranial to the upper-instrumented vertebrae (UIV). After testing the intact spine, each specimen underwent PSF representing prior fusion in the ASD model. The adjacent segment for each specimen then underwent (1) stand-alone LLIF, (2) LLIF + plate, (3) LLIF + single screw rod (SSR) anterior instrumentation, and (4) LLIF + traditional posterior extension of PSF. In all conditions, three-dimensional kinematics were tracked, and range of motion (ROM) was calculated for the comparisons.

RESULTS

ROM results were expressed as a percentage of intact-spine ROM. LLIF effectively reduced ROM in all planes. Supplementing LLIF with a plate or SSR provided further stability compared with stand-alone LLIF. Expansion of the posterior instrumentation provided the most substantial stability in all planes of ROM (p<.05). All constructs demonstrated a consistent trend of ROM reduction across all groups and bending motions.

CONCLUSIONS

This biomechanical study suggests potential promise in exploring LLIF as an alternative treatment of ASD but reinforces previous studies' findings that traditional expansion of posterior instrumentation provides the most biomechanically stable construct.

8.

BACKGROUND

Incidental durotomy (ID) is one of the most common intraoperative complications seen in spine surgery. Conflicting evidence has been presented regarding whether or not outcomes are affected by the presence of an ID.

PURPOSE

To evaluate whether outcomes following degenerative spine surgery are affected by ID, and to determine the incidence of ID across different diagnoses and surgical procedures.

MATERIALS

By using SweSpine, the national Swedish Spine Surgery Register, preoperative, surgical and postoperative 1-year follow-up data were obtained for 64,431 surgeries. All patients were surgically treated due to lumbar spinal stenosis (LSS) without or with concomitant degenerative spondylolisthesis (DS) or lumbar disc herniation (LDH) between 2000 and 2015. Gender, age, smoking habits, walking distance, consumption of analgesics, back and leg pain (Visual Analogue Scale [VAS]), quality of life (EuroQol [EQ5D] and Short Form 36 [SF-36]), and disability (Oswestry Disability Index [ODI]) were recorded.

RESULTS

Overall, the incidence of ID during the study period was 5.0%. For the LDH, LSS, and DS subgroups, it was 2.8%, 6.5%, and 6.5%, respectively. Laminectomy was associated with a higher incidence of ID than discectomy (p<.001). In all three subgroups, ID was more common in patients who had previously undergone spine surgery and with increasing patient age (p<.001). LDH patients with an ID reported a higher degree of residual leg pain, inferior mental quality of life (SF-36 MCS), and higher disability (ODI) than LDH patients without ID (all p<.001) 1 year after surgery. LSS patients with an ID reported inferior SF-36 MCS (p<.001), and DS patients with an ID had inferior SF-36 MCS and higher ODI compared with patients with the same diagnosis but without an ID (p<.001). However, these numerical differences are well below the minimal clinically important difference (MCID) for all three subgroups. ID was associated with a higher frequency of patients being dissatisfied with the surgical outcome at 1-year follow-up. ID was less common in patients whose back and leg pain did not improve after surgery (delta value) than in patients who reported improvement.

CONCLUSIONS

The overall occurrence of ID in the present study was 5%, with higher figures in LSS and DS and lower figures in LDH. Higher patient age and previous surgery were associated with higher frequencies of ID. The outcome at 1 year following surgery was not affected to a clinically relevant extent when an ID occurred. However, ID was associated with a higher degree of patient dissatisfaction and a longer hospital length of stay.

9.

BACKGROUND CONTEXT

Health literacy, defined as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions,” has been demonstrated to affect access to care and appropriate healthcare utilization.

PURPOSE

To determine the impact of health literacy in the evaluation and management of patients with chronic low back pain.

STUDY DESIGN

Cross-sectional.

PATIENT SAMPLE

Patients seen at a multisurgeon spine specialty clinic.

OUTCOME MEASURES

Oswestry Disability Index, EQ-5D, and Numeric Rating Scales (0–10) for back and leg pain.

METHODS

The Newest Vital Sign (NVS) and Health Literacy Survey, Oswestry Disability Index, EQ-5D and pain scales were administered to patients undergoing evaluation and treatment for lumbar degenerative disease in the outpatient setting. Patients were surveyed regarding their use of medication, therapy, and pain management modalities.

RESULTS

Of 201 patients approached for participation, 186 completed the health literacy surveys. Thirty (17%) were assessed as having limited literacy, 52 (28%) as possibly having limited literacy, and 104 (56%) as having adequate literacy based on their NVS scores. The cohort with low NVS scores also had low Health Literacy Survey scores. Patients with limited literacy had worse back and leg pain scores compared with patients with possibly limited literacy and adequate literacy. Patients with adequate health literacy were more likely to use medications (80% vs. 53%, p=.017) and more likely to see a specialist (34% vs. 17%) compared with those with limited literacy. Patients with limited health literacy were not more likely to see a chiropractor (7% vs. 7%), but reported more visits (19 vs. 8).

CONCLUSIONS

Patients with lower health literacy reported worse back and leg pain scores, indicating either more severe disease or a fundamental difference in their responses to standard health-related quality of life measures. This study also suggests that patients with limited health literacy may underutilize some resources and overutilize other resources. Further study is needed to clarify these patterns, and to examine their impact on health status and clinical outcomes.

10.

BACKGROUND

In modern clinical research, the accepted minimum follow-up for patient-reported outcome measures (PROMs) after lumbar spine surgery is 24 months, particularly after fusion. Recently, this minimum requirement has been called into question.

PURPOSE

We aim to quantify the concordance of 1- and 2-year PROMs to evaluate the importance of long-term follow-up after elective lumbar spine surgery.

STUDY DESIGN

Retrospective analysis of data from a prospective registry.

PATIENT SAMPLE

We identified all patients in our prospective institutional registry who underwent degenerative lumbar spine surgery with complete baseline, 12-month, and 24-month follow-up for ODI and numeric rating scales for back and leg pain (NRS-BP and NRS-LP).

OUTCOME MEASURES

Oswestry Disability Index (ODI) and NRS-BP and NRS-LP at 1 year and at 2 years.

METHODS

We evaluated concordance of 1- and 2-year change scores by means of Pearson's product-moment correlation and performed logistic regression to assess if achieving the minimum clinically important difference (MCID) at 12 months predicted 24-month MCID. Odds ratios (OR) and their 95% confidence intervals (CI), as well as model areas-under-the-curve were obtained.

RESULTS

A total of 210 patients were included. We observed excellent correlation between 12- and 24-month ODI (r=0.88), NRS-LP (r=0.76), and NRS-BP (r=0.72, all p<.001). Similar results were obtained when stratifying for discectomy, decompression, or fusion. Patients achieving 12-month MCID were likely to achieve 24-month MCID for ODI (OR: 3.3, 95% CI: 2.4–4.1), NRS-LP (OR: 2.99, 95% CI: 2.2–4.2), and NRS-BP (OR: 3.4, 95% CI: 2.7–4.2, all p<.001), with excellent area-under-the-curve values of 0.81, 0.77, and 0.84, respectively. Concordance rates between MCID at both follow-ups were 87.2%, 83.8%, and 84.2%. A post-hoc power analysis demonstrated sufficient statistical power.

CONCLUSIONS

Irrespective of the surgical procedure, 12-month PROMs for functional disability and pain severity accurately reflect those at 24 months. In support of previous literature, our results suggest that 12 months of follow-up may be sufficient for evaluating spinal patient care in clinical practice as well as in research.

11.

BACKGROUND CONTEXT

Spinal epidural lipomatosis (SEL) is a condition in which excess lumbar epidural fat (EF) deposition often leads to compression of the cauda equina or nerve root. Although SEL is often observed in obese adults, no systematic research investigating the potential association between SEL and metabolic syndrome has been conducted.

PURPOSE

To elucidate potential association between SEL and metabolic syndrome.

STUDY DESIGN

An observational study using data from medical checkups.

PATIENT SAMPLE

We retrospectively reviewed data from consecutive subjects undergoing medical checkups. A total of 324 subjects (174 men and 150 women) were enrolled in this study.

OUTCOME MEASURES

The correlation of EF accumulation with demographic data and metabolic-related factors was evaluated.

METHODS

The degree of EF accumulation was evaluated based on the axial views of lumbar magnetic resonance imaging. Visceral and subcutaneous fat areas were measured at the navel level using abdominal computed tomography. Metabolic syndrome was diagnosed according to the criteria of the Japanese Society of Internal Medicine. The correlation of SEL with metabolic syndrome and metabolic-related conditions was statistically evaluated.

RESULTS

The degree of EF accumulation demonstrated a significant correlation with body mass index, abdominal circumference, and visceral fat area. However, age, body fat percentage, and subcutaneous fat area showed no correlation with the degree of EF accumulation. Logistic regression analysis revealed that metabolic syndrome (odds ratio [OR]=3.8, 95% confidence interval [CI]=1.5–9.6) was significantly associated with SEL. Among the diagnostic criteria for metabolic syndrome, visceral fat area ≥100 cm2 (OR=4.8, 95% CI=1.5–15.3) and hypertension (OR=3.5, 95% CI=1.1–11.8) were independently associated with SEL.

CONCLUSION

This is the first study to demonstrate that metabolic syndrome is associated with SEL in a relatively large, unbiased population. Our data suggest that metabolic-related conditions are potentially related to EF deposition and that SEL could be a previously unrecognized manifestation of metabolic syndrome.

12.

BACKGROUND CONTEXT

Transforaminal lumbar interbody fusion (TLIF) is a widely accepted surgical procedure, but cage migration (CM) and cage retropulsion (CR) are associated with poor outcomes.

PURPOSE

This study seeks to identify risk factors associated with these serious events.

STUDY DESIGN

A prospective observational longitudinal study.

PATIENT SAMPLE

Over a 5-year period, 881 lumbar levels in 784 patients were treated using TLIF at three spinal surgery centers.

OUTCOME MEASURES

We evaluated the odds ratio of the risk factors for CM with and without subsidence and CR in multivariate analysis.

METHODS

Our study classified CM into two subgroups: CM without subsidence and CM with subsidence. Cases of spinal canal and/or foramen intrusion of the cage were defined separately as CR. Patient records, operative notes, and radiographs were analyzed for factors potentially related to CM with subsidence, CM without subsidence, and CR.

RESULTS

Of 881 lumbar levels treated with TLIFs, CM without subsidence was observed in 20 (2.3%) and CM with subsidence was observed in 36 (4.1%) patients. Among the CM cases, CR was observed in 17 (17/56, 30.4%). The risk factors of CM without subsidence were osteoporosis (OR 8.73, p < .001) and use of a unilateral single cage (OR 3.57, p < .001). Osteoporosis (OR 5.77, p < .001) and endplate injury (OR 26.87, p < .001) were found to be significant risk factors for CM with subsidence. Risk factors of CR were osteoporosis (OR 7.86, p < .001), pear-shaped disc (OR 8.28, p = .001), endplate injury (OR 18.70, p < .001), unilateral single cage use (OR 4.40, p = .03), and posterior cage position (OR 6.45, p = .04). A difference in overall fusion rates was identified, with a rate of 97.1% (801 of 825) for no CM, 55.0% (11 of 20) for CM without subsidence, 41.7% (15 of 36) for CM with subsidence, and 17.6% (3 of 17) for CR at 1.5 years postoperatively.
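The ORs above come from multivariate models; as the basic building block behind such figures, the sketch below computes an unadjusted OR with a Wald 95% CI from a hypothetical 2×2 table. The counts are invented, not the study's data.

```python
# Unadjusted odds ratio with Wald 95% CI from a 2x2 table (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: exposed with/without event; c, d: unexposed with/without event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: cage migration among osteoporotic vs non-osteoporotic levels.
or_, lo, hi = odds_ratio_ci(a=18, b=82, c=20, d=760)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A multivariate analysis like the study's would additionally adjust each OR for the other candidate risk factors, which a single 2×2 table cannot do.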

CONCLUSIONS

Our results suggest that osteoporosis is a significant risk factor for both CM and CR. In addition, a pear-shaped disc, posterior positioning of the cage, the presence of endplate injury, and the use of a single cage were correlated with CM (with and without subsidence) and CR.

13.

BACKGROUND CONTEXT

Although facet dislocations account for only 6% of cervical trauma, the consequences are often devastating. Cervical facet dislocations are associated with a disproportionate amount of spinal cord injuries; however, neurologic examination of patients is often difficult, as patients commonly present with reduced levels of consciousness. There are limited studies that have investigated the impact of spinal canal diameter and translation on neurologic injury following facet dislocations.

PURPOSE

To review a consecutive series of patients with facet dislocations and assess the impact of sagittal canal diameter and translation on spinal cord injury (SCI).

STUDY DESIGN

Retrospective review at a level I trauma center identified 97 patients with facet dislocations.

METHODS

Between 2004 and 2014, a retrospective review at a level I trauma center identified patients with traumatic facet dislocation. Demographic data, neurologic exams, and radiographic findings were reviewed. We assessed sagittal diameter at the injury level, as well as above and below, and translation. This study has no funding source and its authors have no potential conflicts of interest-associated biases.

RESULTS

Ninety-seven patients presented with facet dislocations. Fifty-nine (61%) presented with a SCI. Those with ASIA A averaged 8.0 mm of injury level canal diameter, and ASIA E averaged 12.6 mm (p < .001). Additionally, those with ASIA A averaged 8.0 mm of translation, and ASIA E averaged 4.2 mm (p < .001). Two groups were created based on their general motor function. Those with ASIA A–C averaged 8.4 mm of injury level canal diameter, and ASIA D–E averaged 12.3 mm (p < .001). Those with ASIA A–C averaged 7.8 mm of translation, and ASIA D–E averaged 4.4 mm (p < .001). Receiver operating characteristic (ROC) curves demonstrated that translation was a good predictor of ASIA A–C and canal diameter was an almost perfect predictor of ASIA D–E.

CONCLUSIONS

Our data indicate that patients with greater translation and/or a smaller canal diameter at the injury level have a higher rate of SCI. Adjacent canal diameter did not correlate with neurologic injury.

14.

BACKGROUND CONTEXT

Models for predicting recovery in traumatic spinal cord injury (tSCI) patients have been developed to optimize care. Several models predicting tSCI recovery have been previously validated, yet recent findings question their accuracy, particularly in patients whose prognoses are the least predictable.

PURPOSE

To compare independent ambulatory outcomes in AIS (ASIA [American Spinal Injury Association] Impairment Scale) A, B, C, and D patients, as well as in AIS B+C and AIS A+D patients by applying two existing logistic regression prediction models.

STUDY DESIGN

A prospective cohort study.

PARTICIPANT SAMPLE

Individuals with tSCI enrolled in the pan-Canadian Rick Hansen SCI Registry (RHSCIR) between 2004 and 2016 with complete neurologic examination and Functional Independence Measure (FIM) outcome data.

OUTCOME MEASURES

The FIM locomotor score was used to assess independent walking ability at 1-year follow-up.

METHODS

Two validated prediction models were evaluated for their ability to predict walking 1-year postinjury. Relative prognostic performance was compared with the area under the receiver operating curve (AUC).

RESULTS

In total, 675 tSCI patients were identified for analysis. In model 1, predictive accuracies for 675 AIS A, B, C, and D patients as measured by AUC were 0.730 (95% confidence interval [CI] 0.622–0.838), 0.691 (0.533–0.849), 0.850 (0.771–0.928), and 0.516 (0.320–0.711), respectively. In 160 AIS B+C patients, model 1 generated an AUC of 0.833 (95% CI 0.771–0.895), whereas model 2 generated an AUC of 0.821 (95% CI 0.754–0.887). The AUC for 515 AIS A+D patients was 0.954 (95% CI 0.933–0.975) with model 1 and 0.950 (0.928–0.971) with model 2. The difference in prediction accuracy between the AIS B+C cohort and the AIS A+D cohort was statistically significant using both models (p=.00034; p=.00038). The models were not statistically different in individual or subgroup analyses.

CONCLUSIONS

Previously tested prediction models demonstrated a lower predictive accuracy for AIS B+C than AIS A+D patients. These models were unable to effectively prognosticate AIS A+D patients separately, a failure that was masked when amalgamating the two patient populations. This suggests that former prediction models achieved strong prognostic accuracy by combining AIS classifications coupled with a disproportionately high proportion of AIS A+D patients.

15.

BACKGROUND CONTEXT

Fungal spinal epidural abscess (FSEA) is a rare entity with high morbidity and mortality. Reports describing the clinical features, diagnosis, treatment, and outcomes of FSEA are scarce in the literature.

PURPOSE

This study aimed to describe the clinical features, diagnosis, treatment, and outcomes of FSEA.

STUDY DESIGN

This study is designed as a retrospective clinical case series.

PATIENT SAMPLE

A consecutive series of patients with the diagnosis of FSEA who presented at our institution from 1993 to 2016.

METHODS

We reviewed the electronic medical records of patients with SEA who were treated within our hospital system from 1993 to 2016. We only included SEA cases that were due to fungi. We also reviewed FSEA cases in the English language literature from 1952 to 2017 to analyze the features of FSEA.

RESULTS

From a database of 1,053 SEA patients, we identified 9 patients with FSEA. Aspergillus fumigatus was isolated from 2 (22%) patients, and Candida species were isolated from 7 (78%). Focal spine pain, neurologic deficit, and fever were present in 89%, 50%, and 44% of FSEA cases, respectively. Five of nine cases involved the thoracic spine, and eight were located anterior to the thecal sac. Three cases had fungemia, six had a long symptom duration (>2 weeks) prior to presentation, seven had concurrent immunosuppression, and eight had vertebral osteomyelitis. Additionally, one case had a residual motor deficit at last follow-up, one had S1 sensory radicular symptoms, two suffered recurrent FSEA, two died during hospitalization, and two died within 90 days after discharge.

CONCLUSIONS

In summary, the classic diagnostic triad (focal spine pain, neurologic deficit, and fever) has limited clinical utility for FSEA. Biopsy, intraoperative tissue culture, and blood culture can be used to confirm the diagnosis. The most common pathogens of FSEA are Aspergillus and Candida species; empiric treatment should therefore cover these species while definitive identification is pending. FSEA typically occurs in patients with poor baseline health status, which largely accounts for its high mortality.

17.

BACKGROUND CONTEXT

Both open surgical resection (OSR) and radiofrequency ablation (RFA) have been reported for spinal osteoid osteoma (OO).

PURPOSE

To compare the clinical safety and efficacy of RFA with those of OSR in treating spinal OO.

STUDY DESIGN

Retrospective cohort study.

PATIENT SAMPLE

Twenty-eight consecutive patients with spinal OO who underwent either RFA or OSR in our institute between September 2006 and December 2016.

OUTCOME MEASURES

Age, gender, lesion distribution, surgical time, estimated blood loss, complications, local recurrence, visual analogue scale (VAS) score, and modified Frankel grade were documented.

METHODS

We retrospectively reviewed 28 patients with spinal OO who had been treated in our hospital from September 2006 to December 2016. Patients were followed at 3, 6, 12, and 24 months after the index surgery. The minimum follow-up period was 12 months. This study was funded by Peking University Third Hospital (Y71508-01) (¥ 400,000).

RESULTS

Twelve patients were treated with CT-guided percutaneous RFA and 16 with OSR. Spinal OO locations were cervical in 4, thoracic in 4, lumbar in 3, and sacral in 1 in the RFA group, and cervical in 12, thoracic in 1, and lumbar in 3 in the OSR group. RFA showed a shorter operating time, less blood loss, and a shorter in-hospital stay than open surgery [105.0 ± 33.8 minutes vs. 186.4 ± 53.5 minutes (p < .001), 1 (0–5) ml vs. 125 (30–1200) ml (p < .001), and 1 (1–3) days vs. 6 (3–10) days (p < .001), respectively]. At last follow-up, one patient had undergone a secondary RFA for recurrence. VAS improvement was 7.5 (3–10) in the RFA group and 6.5 (4–9) in the OSR group (p = .945). The overall complication rate was 8.3% (1/12) in the RFA group and 18.8% (3/16) in the OSR group.

CONCLUSIONS

If there is a sufficient cerebrospinal fluid buffer (more than 1 mm) between the spinal OO lesion and the spinal cord/nerve root, RFA is an effective and safe treatment for well-selected spinal OO, with reduced operating time, blood loss, in-hospital stay, and complications compared with OSR. However, OSR is still recommended in cases with spinal cord/nerve root compression.

18.

BACKGROUND CONTEXT

Patients with pyogenic vertebral osteomyelitis (PVO) are expected to have an increased risk of bone loss; early bisphosphonate therapy might therefore be clinically effective in PVO patients with osteoporosis.

PURPOSE

This study aimed to investigate the effect of bisphosphonate on clinical outcomes of PVO patients with osteoporosis.

STUDY DESIGN/SETTING

A retrospective comparative study.

PATIENT SAMPLE

PVO patients with osteoporosis.

OUTCOME MEASURES

Four events of interest were modeled with Cox proportional hazards regression: surgical treatment, recurrence of infection, subsequent fracture of adjacent vertebral bodies, and death.

METHODS

PVO patients were divided into three groups: group A (initiation of bisphosphonate within 6 weeks after PVO diagnosis), group B (initiation of bisphosphonate between 6 weeks and 3 months after PVO diagnosis), and group C (no treatment for osteoporosis). A Cox proportional hazards model was used for each of the four events of interest.
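To illustrate the machinery behind the Cox model used here, the sketch below fits a one-covariate Cox proportional hazards model by maximizing the Breslow partial likelihood. The data and function name are hypothetical, for illustration only; the study itself presumably used standard statistical software.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cox_beta(time, event, x):
    """Fit a single-covariate Cox model (Breslow ties) by maximizing the
    partial likelihood; returns beta, the log hazard ratio for x."""
    time, event, x = map(np.asarray, (time, event, x))

    def neg_log_pl(beta):
        ll = 0.0
        # Each observed event contributes its covariate effect minus the
        # log-sum of exp(beta * x) over subjects still at risk at that time.
        for t, x_i in zip(time[event == 1], x[event == 1]):
            at_risk = time >= t
            ll += beta * x_i - np.log(np.exp(beta * x[at_risk]).sum())
        return -ll

    return minimize_scalar(neg_log_pl, bounds=(-5, 5), method="bounded").x
```

A positive beta (hazard ratio exp(beta) > 1) indicates the covariate raises the event hazard; the lower hazard ratios reported below for groups A and B correspond to beta < 0 relative to group C.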

RESULTS

A total of 360 PVO patients with osteoporosis were investigated for the four events of interest. Group A had a significantly lower hazard ratio for undergoing later (>6 weeks after diagnosis) surgery than group C (p=.014), despite a similar occurrence of overall surgery. A significant difference was also observed in the occurrence of subsequent fractures at adjacent vertebral bodies (p=.001 for model 1 and p=.002 for model 2). Groups A and B had significantly lower hazard ratios for subsequent fracture than group C. No significant differences were observed in the hazard ratios for recurrence and death among the three groups.

CONCLUSIONS

Early bisphosphonate treatment in PVO patients with osteoporosis was associated with significantly lower occurrences of subsequent fracture at adjacent vertebral bodies and of subsequent surgery.

19.

BACKGROUND CONTEXT

Quantitative computed tomography (QCT) of the lumbar spine is used as an alternative to dual-energy X-ray absorptiometry in assessing bone mineral density (BMD). The average BMD of L1-L2 is the standard reportable metric used for diagnostic purposes according to current recommendations. The density of L1 and L2 has also been proposed as a reference value for the remaining lumbosacral vertebrae and is commonly used as a surrogate marker for overall bone health. Since regional BMD differences within the spine have been proposed, it is unclear if the L1-L2 average correlates with the remainder of the lumbosacral spine.

PURPOSE

The aim of this study was to determine possible BMD variations throughout the lumbosacral spine in patients undergoing lumbar fusion and to assess the correlation between the clinically used L1-L2 average and the remaining lumbosacral vertebral levels.

STUDY DESIGN/SETTING

This is a retrospective case series.

PATIENT SAMPLE

Patients undergoing posterior lumbar spinal fusion from 2014 to 2017 at a single, academic institution with available preoperative CT imaging were included in this study.

OUTCOME MEASURES

The outcome measure was BMD measured by QCT.

METHODS

Standard QCT measurements at the L1 and L2 vertebra and additional experimental measurements of L3, L4, L5, and S1 were performed. Subjects with missing preoperative lumbar spine CT imaging were excluded. The correlations between the L1-L2 average and the other vertebral bodies of the lumbosacral spine (L3, L4, L5, S1) were evaluated.

RESULTS

In total, 296 consecutive patients (55.4% female, mean age 63.1 years) with available preoperative CT were included. Vertebral BMD showed a gradual decrease from L1 to L3 and then a gradual increase through S1 (L1=118.8 mg/cm3, L2=116.6 mg/cm3, L3=112.5 mg/cm3, L4=122.4 mg/cm3, L5=135.3 mg/cm3, S1=157.4 mg/cm3). There was a strong correlation between the L1-L2 average and the average of the other lumbosacral vertebrae (L3-S1), with a Pearson correlation coefficient of r=0.85. We also analyzed the correlation between the L1-L2 average and each individual lumbosacral vertebra; similar relationships were observed (r=0.67–0.87), with the strongest correlation between the L1-L2 average and L3 (r=0.87).
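The Pearson coefficients quoted above measure linear association between per-level BMD values across patients. A minimal sketch of the computation, using fabricated per-patient values (mg/cm3) purely for illustration:

```python
import numpy as np

# Hypothetical per-patient BMD values; illustrative only, not study data.
l1_l2_avg = np.array([118.0, 95.5, 130.2, 102.8, 144.1])  # mean of L1 and L2
l3 = np.array([112.0, 90.1, 125.7, 99.3, 138.0])          # L3 at same patients

# Pearson r: covariance of the two series normalized by both SDs.
r = np.corrcoef(l1_l2_avg, l3)[0, 1]
print(round(r, 2))
```

Values of r near 1 (as in the 0.85 and 0.87 reported above) indicate that the L1-L2 average tracks the other levels almost linearly, even when their absolute BMD differs.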

CONCLUSIONS

Our data demonstrate regional BMD differences throughout the lumbosacral spine. Nevertheless, the clinically used L1-L2 average correlates highly with the BMD values of the other lumbosacral vertebrae. We therefore conclude that the standard clinically used L1-L2 BMD average is a useful measure of bone quantity for the entire lumbosacral spine in patients undergoing lumbar spinal fusion.

20.

PURPOSE

To characterize the gross, histologic, and systemic changes caused by implantation of metal fragments commonly used in commercial bullets into the intervertebral disc.

BACKGROUND CONTEXT

Long-term complications of retained bullet fragments in the spine have been documented in the literature; however, the impact of different metal projectiles on the intervertebral disc has not been described. This study was performed to assess the local effects of the metallic bullet fragments on the intervertebral disc and their systemic effects regarding metal ion concentrations in serum and solid organs.

STUDY DESIGN

Animal model study.

METHODS

Funding for this project was provided by the Cervical Spine Research Society in the amount of $10,000. Copper, lead, and aluminum alloys from commercially available bullets were surgically implanted into sequential intervertebral discs in the lumbar spine of six canines. Kirschner wire implantation and a sham operation served as controls. Radiographs were obtained to confirm the location of the bullets. Animals were sacrificed at 4, 6, and 9 months postimplantation. Whole blood, plasma, cerebrospinal fluid, kidney tissue, and liver tissue samples were analyzed for copper and lead concentrations. Histologic and gross samples were examined at the time of sacrifice.

RESULTS

Significant tissue reactions were noted in the discs exposed to copper and lead. Copper produced significantly more severe disc degeneration than either the lead or the aluminum alloy. Over the short follow-up interval of this study, no statistically significant trends were observed in metal concentrations in whole blood, plasma, cerebrospinal fluid, or solid tissue.

CONCLUSION

This study demonstrates that the canine intervertebral disc is differentially susceptible to metallic fragments depending on their composition. Trends toward increasing lead and copper levels in liver tissue samples were noted, although statistical significance was not reached because of the short follow-up interval and small sample size. The metallic composition of retained fragments can be a determining factor when deciding on surgical intervention.
