1.
2.
Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weights of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes (<1000 cubic mm, 2-3 isocenters) to several tens of hours for large volumes (>30,000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal to or better than those of the manual plans. In conclusion, data-mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer-optimized to give good plans automatically.
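A minimal sketch of the simplex-refinement step described above, assuming a toy Gaussian "shot" dose model, a 50% prescription isodose, and SciPy's Nelder-Mead implementation; the dose engine, grid, and starting parameters are illustrative placeholders, not the authors' clinical implementation.

```python
# Sketch: refine isocenter coordinates/weights with Nelder-Mead to maximize
# conformity (illustrative toy dose model, not the paper's planning engine).
import numpy as np
from scipy.optimize import minimize

# Target volume: binary mask on a coarse grid (here, a 10 mm radius sphere).
grid = np.stack(np.meshgrid(*[np.linspace(-20, 20, 41)] * 3, indexing="ij"), -1)
target = np.linalg.norm(grid, axis=-1) <= 10.0

def dose(params, sigma=4.0):
    """Toy dose model: sum of isotropic Gaussian 'shots' (x, y, z, weight)."""
    d = np.zeros(target.shape)
    for x, y, z, w in params.reshape(-1, 4):
        r2 = np.sum((grid - [x, y, z]) ** 2, axis=-1)
        d += max(w, 0.0) * np.exp(-r2 / (2 * sigma ** 2))
    return d

def neg_conformation_number(params):
    """Conformation number CN = (TV_PIV/TV) * (TV_PIV/PIV) at the 50% isodose."""
    d = dose(params)
    piv = d >= 0.5 * d.max()                 # prescription isodose volume
    tv_piv = np.count_nonzero(piv & target)  # covered target volume
    cn = (tv_piv / target.sum()) * (tv_piv / max(piv.sum(), 1))
    return -cn

# Starting point, e.g. taken from the closest database match (here: 3 shots).
start = np.array([[-5, 0, 0, 1.0], [5, 0, 0, 1.0], [0, 5, 0, 1.0]]).ravel()
res = minimize(neg_conformation_number, start, method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 0.1, "fatol": 1e-4})
print("conformation number:", -res.fun)
```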
3.
The purpose of this study is to develop a simple independent dose calculation method to verify treatment plans for Leksell Gamma Knife radiosurgery. Our approach uses the total integral dose within the skull as an end point for comparison. The total integral dose is computed with a spreadsheet and compared to that obtained from Leksell GammaPlan. It is calculated as the sum of the integral doses of 201 beams, each passing through a cylindrical volume. The average length of the cylinders is estimated from the Skull-Scaler measurement data taken before treatment. Correction factors are applied to the length of the cylinder depending on the location of a shot in the skull. The radius of the cylinder corresponds to the collimator aperture of the helmet, with a correction factor for the beam penumbra and scattering. We have tested our simple spreadsheet program using treatment plans of 40 patients treated with the Gamma Knife in our center. These patients differ in geometry, size, lesion location, collimator helmet, and treatment complexity. Results show that differences between our calculations and treatment planning results are typically within +/-3%, with a maximum difference of +/-3.8%. We demonstrate that our spreadsheet program is a convenient and effective independent method to verify treatment planning irradiation times prior to implementation of Gamma Knife radiosurgery.
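A hedged sketch of the end-point calculation, total integral dose as a sum of per-beam cylinder contributions; the mean doses, path lengths, and effective radius below are placeholders rather than the paper's correction factors.

```python
# Sketch of the independent check: total integral dose as a sum over the 201
# cobalt beams, each depositing energy in a cylinder through the skull.
# Correction factors and per-beam doses are placeholders, not the paper's values.
import math

def integral_dose(mean_dose_per_beam_gy, path_lengths_mm, effective_radius_mm,
                  density_g_per_mm3=1.0e-3):
    """Integral dose (J) = sum over beams of mean dose * mass of the cylinder."""
    total = 0.0
    for d_gy, length_mm in zip(mean_dose_per_beam_gy, path_lengths_mm):
        volume = math.pi * effective_radius_mm ** 2 * length_mm   # mm^3
        mass_kg = volume * density_g_per_mm3 / 1000.0             # g -> kg
        total += d_gy * mass_kg                                    # Gy * kg = J
    return total

# Example: 201 beams, 4 mm collimator with a penumbra/scatter-corrected radius.
doses = [0.02] * 201     # mean dose along each beam path (Gy), placeholder
lengths = [150.0] * 201  # skull path lengths from Skull-Scaler data (mm), placeholder
print(f"total integral dose ~ {integral_dose(doses, lengths, 2.6):.4f} J")
```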
4.
Image distortion in MRI-based polymer gel dosimetry of gamma knife stereotactic radiosurgery systems
We have studied effects of MR (magnetic resonance) image distortion on polymer gel dosimetry of Gamma Knife stereotactic radiosurgery systems. MR images of BANG polymer gel phantoms were acquired by using a Hahn spin-echo sequence and a fast 3D imaging GRASS sequence. Image artifacts were studied by varying the directions of frequency encoding and the receiver bandwidth. The phantoms were also CT (computed tomography) scanned. The studies showed that the measured dose distributions are shifted by 1.8+/-0.5 mm (2 pixels) in the frequency encoding direction. The magnitude of the shift is inversely proportional to the receiver bandwidth in agreement with theory. Comparison of MRI with CT showed the same image shift. We concluded that the discrepancy is caused by MR image distortion due to a difference in susceptibility effects between the phantom and the fiducial markers of the Leksell localization box.
5.
Objective: By measuring and analyzing target-point position errors in Gamma Knife stereotactic radiosurgery, to propose recommendations for assuring the quality of Gamma Knife treatments. Materials and methods: The overall accuracy of the target position in the Z-axis plane and in the X-axis plane was measured with a film method. Results: The overall positional accuracy of the target in the Z-axis and X-axis planes was (0.9 +/- 0.3) mm and (0.8 +/- 0.2) mm, respectively. Conclusion: The total error in target position arises mainly from localization error and the mechanical error of the treatment unit. Regular testing of MRI spatial geometric distortion, and combining CT with MRI for stereotactic target localization, are essential.
6.
Chaves A Lopes MC Alves CC Oliveira C Peralta L Rodrigues P Trindade A 《Medical physics》2004,31(8):2192-2204
Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD while also reducing computation time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for the virtual sources representing the accelerator head components, whereas for the virtual sources representing the additional collimators they were recorded in histograms. All these histograms were included in the MC code DPM and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20,000 voxels of 1 x 1 x 5 mm3. The model accurately calculates dose distributions in the water phantom for all the additional collimators: for depth dose curves, associated errors at 2 sigma were lower than 2.5% down to a depth of 202.5 mm for all the additional collimators, and for profiles at various depths, deviations between measured and calculated values were less than 2.5% or 1 mm.
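A small sketch of the PSD reconstruction idea, assuming illustrative source weights and energy histograms: pick a virtual source according to its relative weight, then sample from that source's stored histogram. The actual MCNP4C-derived histograms and the DPM integration are not reproduced here.

```python
# Sketch of the reconstruction step of a multiple-source model: choose a
# virtual source by weight, then sample energy from its histogram (uniform
# within the chosen bin). Weights, bin edges and counts are placeholders.
import numpy as np

rng = np.random.default_rng(5)

def sample_from_histogram(counts, edges, rng):
    """Draw one value from a 1D histogram (uniform within the chosen bin)."""
    p = np.asarray(counts, float) / np.sum(counts)
    i = rng.choice(len(counts), p=p)
    return rng.uniform(edges[i], edges[i + 1])

# Two illustrative virtual sources with relative weights and energy histograms.
sources = [
    {"weight": 0.8, "counts": [5, 30, 40, 20, 5], "edges": np.linspace(0.2, 6.0, 6)},
    {"weight": 0.2, "counts": [20, 50, 25, 4, 1], "edges": np.linspace(0.2, 6.0, 6)},
]
weights = np.array([s["weight"] for s in sources])

def sample_phase_space_photon():
    s = sources[rng.choice(len(sources), p=weights / weights.sum())]
    return sample_from_histogram(s["counts"], s["edges"], rng)  # energy (MeV)

print([round(sample_phase_space_photon(), 2) for _ in range(5)])
```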
7.
Measurement of the radiation isocenter is a fundamental part of commissioning and quality assurance (QA) for a linear accelerator (linac). In this work we present an automated procedure for the analysis of the star shots employed in radiation isocenter determination. Once the star-shot film has been developed and digitized, the resulting image is analyzed by scanning concentric circles centered on the intersection of the lasers that had previously been marked on the film. The center and the radius of the minimum circle intersecting the central rays are determined with an accuracy and precision better than 1% of the pixel size. The procedure is applied to the determination of the position and size of the radiation isocenter by means of the analysis of star shots placed in different planes with respect to the gantry, couch and collimator rotation axes.
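A minimal sketch of the final geometric step, assuming the central rays have already been fitted from the digitized film: find the smallest circle touching every ray by minimizing the maximum point-to-line distance (SciPy's Nelder-Mead is used here purely for convenience, not because the paper uses it).

```python
# Sketch: given fitted central rays (a point on each ray plus a unit direction),
# find the smallest circle intersecting all rays, i.e. minimize the maximum
# point-to-line distance over candidate centers.
import numpy as np
from scipy.optimize import minimize

def point_line_distance(p, a, u):
    """Distance from point p to the 2D line through a with unit direction u."""
    d = p - a
    return abs(d[0] * u[1] - d[1] * u[0])   # |2D cross product|

def min_circle(rays, start=(0.0, 0.0)):
    """Return (center, radius) of the smallest circle touching all rays."""
    def radius(c):
        return max(point_line_distance(np.asarray(c), a, u) for a, u in rays)
    res = minimize(radius, start, method="Nelder-Mead", options={"xatol": 1e-4})
    return res.x, radius(res.x)

# Example: three spokes at 0, 60 and 120 degrees, slightly offset from the origin.
angles = np.deg2rad([0, 60, 120])
rays = [(np.array([0.1, -0.05]), np.array([np.cos(t), np.sin(t)])) for t in angles]
center, radius = min_circle(rays)
print("isocenter:", center, "radius:", radius)
```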
8.
Double checking of the monitor units (MU) is an important step in the quality assurance (QA) process in radiosurgery. In this paper we propose the use of an independent algorithm constructed using the ellipsoid that best fits the measurements taken with the bubble head frame. The monitor units calculated by this independent algorithm and by the commercial planning system were compared in 40 patients treated with radiosurgery (57 isocenters, 320 arcs). The average relative difference was -0.2% +/- 2.1 (k=1). The results have a smaller variance, -0.4% +/- 1.8 (k=1), when all the depths of the bubble head frame are measured and no arcs are calculated by extrapolation, or when only one of these conditions holds. If there are missing values in the bubble head frame measurements and the model is extrapolated, the variance of the results is greater, 0.4% +/- 3.9 (k=1). The algorithm is reliable as a QA tool for linac radiosurgery.
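One ingredient of such an independent check can be sketched as follows, assuming an axis-aligned best-fit ellipsoid centred at the frame origin: the depth from the patient surface to the isocenter along a beam axis, obtained by intersecting the beam line with the ellipsoid. The MU formula itself (output factors, TMR tables) and the actual ellipsoid fit are omitted; all numbers are illustrative.

```python
# Sketch: depth from the surface (modelled as the best-fit ellipsoid) to the
# isocenter along a beam axis, by solving the line-ellipsoid intersection.
import numpy as np

def depth_along_beam(iso, direction, semi_axes):
    """Distance from the isocenter to the ellipsoid surface, moving against
    the beam direction (i.e. toward the beam entry point)."""
    p = np.asarray(iso, float)
    d = -np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    axes2 = np.asarray(semi_axes, float) ** 2
    # Solve |(p + t d) / semi_axes|^2 = 1 for t > 0 (quadratic in t).
    A = np.sum(d * d / axes2)
    B = 2.0 * np.sum(p * d / axes2)
    C = np.sum(p * p / axes2) - 1.0
    disc = B * B - 4.0 * A * C
    if disc < 0:
        raise ValueError("beam axis does not intersect the ellipsoid surface")
    return (-B + np.sqrt(disc)) / (2.0 * A)   # positive root = entry-to-isocenter depth

semi_axes = [90.0, 110.0, 80.0]               # mm, illustrative fitted semi-axes
print(depth_along_beam(iso=[10.0, -20.0, 5.0], direction=[0, 0, -1],
                       semi_axes=semi_axes))
```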
9.
10.
Wendling M Zijp LJ McDermott LN Smit EJ Sonke JJ Mijnheer BJ van Herk M 《Medical physics》2007,34(5):1647-1654
The gamma-evaluation method is a tool by which dose distributions can be compared in a quantitative manner combining dose-difference and distance-to-agreement criteria. Since its introduction, the gamma evaluation has been used in many studies and is on the verge of becoming the preferred dose distribution comparison method, particularly for intensity-modulated radiation therapy (IMRT) verification. One major disadvantage, however, is its long computation time, which especially applies to the comparison of three-dimensional (3D) dose distributions. We present a fast algorithm for a full 3D gamma evaluation at high resolution. Both the reference and evaluated dose distributions are first resampled on the same grid. For each point of the reference dose distribution, the algorithm searches for the best point of agreement according to the gamma method in the evaluated dose distribution, which can be done at a subvoxel resolution. Speed, computer memory efficiency, and high spatial resolution are achieved by searching around each reference point with increasing distance in a sphere, which has a radius of a chosen maximum search distance and is interpolated "on-the-fly" at a chosen sample step size. The smaller the sample step size and the larger the differences between the dose distributions, the longer the gamma evaluation takes. With decreasing sample step size, statistical measures of the 3D gamma distribution converge. Two clinical examples were investigated using 3% of the prescribed dose as dose-difference and 0.3 cm as distance-to-agreement criteria. For 0.2 cm grid spacing, the change in gamma indices was negligible below a sample step size of 0.02 cm. Comparing the full 3D gamma evaluation and slice-by-slice 2D gamma evaluations ("2.5D") for these clinical examples, the gamma indices improved by searching in full 3D space, with the average gamma index decreasing by at least 8%.
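A brute-force sketch of the gamma comparison on a common grid, to make the dose-difference/distance-to-agreement combination concrete; the paper's interpolated, growing-sphere search and its memory optimizations are not reproduced, and the synthetic dose cubes are placeholders.

```python
# Sketch of a (brute-force) 3D gamma evaluation on a common grid: for each
# reference voxel, take the minimum of dist^2/DTA^2 + dose_diff^2/DD^2 over
# all evaluated voxels within a search sphere, then take the square root.
import numpy as np

def gamma_3d(ref, evl, spacing_mm, dd=0.03, dta_mm=3.0, search_mm=6.0):
    dose_crit = dd * ref.max()                       # global dose criterion
    r = int(np.ceil(search_mm / spacing_mm))
    offsets = [(i, j, k)
               for i in range(-r, r + 1)
               for j in range(-r, r + 1)
               for k in range(-r, r + 1)
               if (i * i + j * j + k * k) * spacing_mm ** 2 <= search_mm ** 2]
    gamma2 = np.full(ref.shape, np.inf)
    for i, j, k in offsets:
        # np.roll wraps at the edges; a real implementation handles borders.
        shifted = np.roll(evl, (i, j, k), axis=(0, 1, 2))
        dist2 = (i * i + j * j + k * k) * spacing_mm ** 2
        g2 = dist2 / dta_mm ** 2 + (shifted - ref) ** 2 / dose_crit ** 2
        gamma2 = np.minimum(gamma2, g2)
    return np.sqrt(gamma2)

# Example: two synthetic dose cubes on a 2 mm grid.
rng = np.random.default_rng(0)
ref = rng.random((20, 20, 20))
evl = ref + 0.01 * rng.standard_normal(ref.shape)
g = gamma_3d(ref, evl, spacing_mm=2.0)
print("gamma pass rate (<=1):", np.mean(g <= 1.0))
```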
11.
A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging
Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast, which provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevational motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized on a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms and in vivo tissue data, suggest that high-contrast strain images can be consistently obtained at frame rates (10 frames s^-1) that exceed those of our previous methods.
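A simplified sketch of the neighbour-guided, column-based search, assuming normalized cross-correlation as the similarity measure and synthetic RF data; the parallel propagation from the ROI centre and the optimized C++ implementation are not modelled.

```python
# Sketch: for each block along one A-line (column), search the axial
# displacement only within a small window around the estimate from the
# previously tracked neighbouring column, using normalized cross-correlation.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def track_column(pre, post, col, guess, block=16, search=3):
    """Axial displacement per block along one column, searched within
    guess +/- search samples (guess: estimates from the adjacent column)."""
    n = pre.shape[0]
    out = np.zeros(len(guess), dtype=int)
    for b, g in enumerate(guess):
        top = b * block
        ref = pre[top:top + block, col]
        best, best_d = -2.0, g
        for d in range(g - search, g + search + 1):
            lo = top + d
            if lo < 0 or lo + block > n:
                continue
            score = ncc(ref, post[lo:lo + block, col])
            if score > best:
                best, best_d = score, d
        out[b] = best_d
    return out

# Example: a synthetic RF frame pair with a uniform 2-sample axial shift.
rng = np.random.default_rng(1)
pre = rng.standard_normal((256, 64))
post = np.roll(pre, 2, axis=0)
print(track_column(pre, post, col=10, guess=[0] * 16))
```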
12.
Eisa F Brauweiler R Peetz A Hupfer M Nowak T Kalender WA 《Physics in medicine and biology》2012,57(10):N173-N182
One of the biggest challenges in dynamic contrast-enhanced CT is the optimal synchronization of scan start and duration with contrast medium administration in order to optimize image contrast and to reduce the amount of contrast medium. We present a new optically based approach, which was developed to investigate and optimize bolus timing and shape. The time-concentration curve of an intravenously injected test bolus of a dye is measured in peripheral vessels with an optical sensor prior to the diagnostic CT scan. The curves can be used to assess bolus shapes as a function of injection protocols and to determine contrast medium arrival times. Preliminary results from phantom and animal experiments showed the expected linear relation between dye concentration and absorption. The kinetics of the dye was compared to that of iodinated contrast medium and was found to be in good agreement. The contrast enhancement curves were reliably detected in three mice, with individual bolus shapes and delay times of 2.1, 3.5 and 6.1 s, respectively. The optical sensor appears to be a promising approach to optimize injection protocols and contrast enhancement timing and is applicable to all modalities without adding any radiation dose. Clinical tests are still necessary.
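A small sketch of how an arrival time might be read from such a test-bolus curve, assuming a simple baseline correction and a fractional-peak threshold; the synthetic curve and the threshold value are illustrative only.

```python
# Sketch: contrast arrival time as the first threshold crossing of the
# baseline-corrected test-bolus signal (synthetic gamma-variate-like curve).
import numpy as np

def arrival_time(t, signal, fraction=0.1):
    s = signal - np.median(signal[:5])            # crude baseline correction
    idx = np.argmax(s >= fraction * s.max())      # first threshold crossing
    return t[idx]

t = np.linspace(0, 20, 401)                                          # s
bolus = np.where(t > 3, (t - 3) ** 2 * np.exp(-(t - 3) / 1.5), 0.0)  # placeholder curve
print(f"test-bolus arrival ~ {arrival_time(t, bolus):.2f} s after injection")
```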
13.
Quantitative analysis of biological image data usually involves the detection of pixel points. Fluorescence microscopy images of active mitochondria have a very low signal-to-noise ratio, which makes mitochondria detection and tracking difficult; in particular, when the observed motion combines the mitochondria's own autonomous movement with perturbations from the neuronal axon, it becomes hard to obtain mitochondrial motion curves. This paper proposes a tracking algorithm for active mitochondria. First, inter-frame registration is applied to the mitochondria image sequence so that the outer contour of the axon is aligned between frames, with edge corner points selected as mitochondrial feature points; then, with the axon contour accurately aligned, inter-frame displacement vectors are used to track the mitochondrial particles. The algorithm has been successfully applied to dynamic image sequences in which the neuronal axon and the mitochondria move simultaneously. The whole pipeline requires no manual extraction of feature points, saving time, and provides a new method and reference for medical image processing and biological research.
14.
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality and speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite-fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
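A sketch of the "no motion discretization" idea: each particle history draws a random phase of a continuous breathing trajectory and shifts the isocenter accordingly, so the motion PDF emerges from the sampling itself. The cos^4 waveform below is a common illustrative breathing model, not the patients' measured trajectories.

```python
# Sketch: per-history isocenter sampling from a continuous breathing
# trajectory, instead of precomputing a fixed set of motion states.
import numpy as np

def sample_isocenter(base_iso, amplitude_mm=20.0, rng=np.random.default_rng()):
    """Draw one isocenter position from a continuous 1D breathing trajectory."""
    phase = rng.uniform(0.0, 1.0)                    # uniform over the cycle
    z = -amplitude_mm * np.cos(np.pi * phase) ** 4   # cos^4 SI displacement model
    return np.asarray(base_iso, float) + np.array([0.0, 0.0, z])

# Averaging many sampled histories reproduces the motion PDF implicitly.
rng = np.random.default_rng(2)
samples = np.array([sample_isocenter([0, 0, 0], rng=rng) for _ in range(10000)])
print("mean SI displacement (mm):", samples[:, 2].mean())
```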
15.
Rausch T Thomas A Camp NJ Cannon-Albright LA Facelli JC 《Computers in biology and medicine》2008,38(7):826-836
This paper describes a novel algorithm to analyze genetic linkage data using pattern recognition techniques and genetic algorithms (GA). The method allows a search for regions of the chromosome that may contain genetic variations that jointly predispose individuals for a particular disease. The method uses correlation analysis, filtering theory and genetic algorithms to achieve this goal. Because current genome scans use from hundreds to hundreds of thousands of markers, two versions of the method have been implemented. The first is an exhaustive analysis version that can be used to visualize, explore, and analyze small genetic data sets for two-marker correlations; the second is a GA version, which uses a parallel implementation allowing searches of higher-order correlations in large data sets. Results on simulated data sets indicate that the method can be informative in the identification of major disease loci and gene-gene interactions in genome-wide linkage data and that further exploration of these techniques is justified. The results presented for both variants of the method show that it can help genetic epidemiologists to identify promising combinations of genetic factors that might predispose to complex disorders. In particular, the correlation analysis of IBD expression patterns might hint at possible gene-gene interactions, and the filtering might be a fruitful approach to distinguish true correlation signals from noise.
16.
17.
G Gastl W Aulitzky H Tilg K Nachbaur J Troppmair R Flener C Huber 《Immunobiology》1986,172(3-5):262-268
Progress with the clinical application of interferons to neoplastic diseases has been slow and complicated by the need for attention to a new spectrum of therapeutic and toxic effects manifested by the interferons. In this report, we present a new approach to define clinically effective but atoxic doses of interferon-alpha for the treatment of hairy cell leukemia. In order to find in vivo biologically active interferon doses, the biochemical marker neopterin was selected as a means to assess a cellular interferon response in vivo. Subcutaneous administration of minimal doses of recombinant interferon-alpha-2 (5-8 x 10^5 U/day), which induced maximum neopterin release in serum and urine, proved to be clinically effective: eight of nine patients responded to this dose regimen. This response rate was comparable to that of a conventional dose schedule (3 x 10^6 U/m^2/day), which was also applied to nine patients (eight responders). Whereas no difference in clinical efficacy between the two therapeutic strategies could be established, toxicity was clearly confined to the conventional dose regimen. These preliminary results suggest that, at least in hairy cell leukemia, the therapeutic dose range of interferon can be separated from the toxic range.
18.
Zhen Ma Renato Natal Jorge T. Mascarenhas João Manuel R.S. Tavares 《Medical engineering & physics》2013,35(12):1819-1824
The urinary bladder can be visualized from different views by imaging modalities such as computerized tomography and magnetic resonance imaging. Multi-view imaging can present more details of this pelvic organ and contribute to a more reliable reconstruction. Based on the information from multi-view planes, a level set based algorithm is proposed to reconstruct the 3D shape of the bladder using the cross-sectional boundaries. The algorithm provides a flexible solution to handle the discrepancies from different view planes and can obtain an accurate bladder surface with more geometric details.
19.
Holmes TW 《Physics in medicine and biology》2001,46(1):11-27
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with a constant step size and a least-squares error objective. The method was implemented in the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery with a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when the exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in a 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
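A minimal sketch of the underlying intensity optimization: steepest descent with a constant step on a least-squares dose objective over non-negative beamlet weights. The concurrent leaf sequencing and the leakage/head-scatter corrections applied at each iteration are not modelled, and the deposition matrix is synthetic.

```python
# Sketch: steepest descent with constant step size minimizing
# ||D w - d_presc||^2 over non-negative beamlet weights w.
import numpy as np

def optimize_intensity(D, d_presc, step=1e-3, iters=500):
    """D: (voxels x beamlets) dose-deposition matrix; returns beamlet weights."""
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ w - d_presc)   # gradient of the squared error
        w = np.maximum(w - step * grad, 0.0)   # constant step + non-negativity
    return w

# Tiny synthetic example: 50 voxels, 8 beamlets.
rng = np.random.default_rng(3)
D = rng.random((50, 8))
d_presc = D @ np.array([1.0, 0.5, 0, 0.8, 0.2, 0, 1.2, 0.4])
w = optimize_intensity(D, d_presc)
print("residual:", np.linalg.norm(D @ w - d_presc))
```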
20.
A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning
Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
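A minimal sketch of the PSO loop for angle selection, with a stand-in fitness function; in BASPSO the fitness of a beam configuration would come from a conjugate-gradient fluence optimization and a physical objective, which is replaced here by a toy angular-spread criterion.

```python
# Sketch: standard PSO over beam-angle sets; the fitness below is a toy
# stand-in (it favours evenly spread gantry angles), not a dose objective.
import numpy as np

def toy_fitness(angles):
    """Stand-in objective: variance of the gaps between sorted angles (lower is better)."""
    a = np.sort(np.mod(angles, 360.0))
    gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))
    return np.var(gaps)

def pso_beam_angles(n_beams=5, n_particles=20, iters=100,
                    w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(4)):
    x = rng.uniform(0.0, 360.0, (n_particles, n_beams))   # particle positions (angles)
    v = rng.uniform(-10.0, 10.0, (n_particles, n_beams))  # particle velocities
    pbest, pbest_f = x.copy(), np.array([toy_fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.mod(x + v, 360.0)
        f = np.array([toy_fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return np.sort(np.mod(gbest, 360.0))

print("selected gantry angles:", pso_beam_angles())
```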