Similar Documents
20 similar documents found.
1.
Objective. For a follow‐up prostate biopsy procedure, it is useful to know the previous biopsy locations in anatomic relation to the current transrectal ultrasound (TRUS) scan. The goal of this study was to validate the performance of a 3‐dimensional TRUS‐guided prostate biopsy system that can accurately relocate previous biopsy sites. Methods. To correlate biopsy locations from a sequence of visits by a patient, the prostate surface data obtained from a previous visit needs to be registered to the follow‐up visits. Two interpolation methods, thin‐plate spline (TPS) and elastic warping (EW), were tested for registration of the TRUS prostate image to follow‐up scans. We validated our biopsy system using a custom‐built phantom. Beads were embedded inside the phantom and were located in each TRUS scan. We recorded the locations of the beads before and after pressures were applied to the phantom and then compared them with computer‐estimated positions to measure performance. Results. In our experiments, before system processing, the mean target registration error (TRE) ± SD was 6.4 ± 4.5 mm (range, 3–13 mm). After registration and TPS interpolation, the TRE was 5.0 ± 1.03 mm (range, 2–8 mm). After registration and EW interpolation, the TRE was 2.7 ± 0.99 mm (range, 1–4 mm). Elastic warping was significantly better than the TPS in most cases (P < .0011). For clinical applications, EW can be implemented on a graphics processing unit with an execution time of less than 2.5 seconds. Conclusions. Elastic warping interpolation yields more accurate results than the TPS for registration of TRUS prostate images. Experimental results indicate potential for clinical application of this method.
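A minimal sketch of the idea behind TPS-based relocation and the TRE measurement, not the study's implementation: surface correspondences drive a thin-plate-spline interpolant (here via scipy.interpolate.RBFInterpolator, SciPy ≥ 1.7) that maps previously recorded bead coordinates into the follow-up scan; the point sets and the synthetic deformation are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_pts, dst_pts, query_pts):
    """Fit a 3D thin-plate-spline mapping src_pts -> dst_pts and apply it to query_pts."""
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
    return tps(query_pts)

def target_registration_error(estimated, truth):
    """Per-target Euclidean distance (mm) between estimated and true positions."""
    return np.linalg.norm(estimated - truth, axis=1)

# Illustrative data: prostate surface points from the previous visit, their
# corresponding locations in the follow-up scan, and bead "targets".
rng = np.random.default_rng(0)
deform = lambda p: p @ np.diag([1.05, 0.97, 1.02]) + np.array([1.5, -1.0, 0.8])  # smooth stand-in deformation
surface_prev = rng.uniform(0.0, 40.0, size=(60, 3))                   # mm
surface_follow = deform(surface_prev) + rng.normal(0, 0.3, (60, 3))   # localization noise
beads_prev = rng.uniform(5.0, 35.0, size=(8, 3))
beads_true_follow = deform(beads_prev)

beads_estimated = tps_warp(surface_prev, surface_follow, beads_prev)
tre = target_registration_error(beads_estimated, beads_true_follow)
print(f"TRE: {tre.mean():.2f} ± {tre.std():.2f} mm")
```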

2.
3.

Purpose  

Needle biopsy of the prostate is guided by transrectal ultrasound (TRUS) imaging. TRUS images do not provide reliable spatial localization of malignant tissue, because TRUS has poor sensitivity for visualizing early malignancy. Magnetic resonance imaging (MRI) has been shown to be sensitive for the detection of early-stage malignancy, and a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has therefore been proposed.

4.
The accuracy of multiparametric MRI has greatly improved the ability to localize tumor foci of prostate cancer. This property can be used to perform TRUS–MR image registration, a new technological advance that allows an MRI to be overlaid onto a TRUS image to target a prostate biopsy toward a suspicious area. Three types of registration have been developed: cognitive-based, sensor-based, and organ-based registration. Cognitive registration consists of aiming at a suspicious area during biopsy with the knowledge of the lesion location identified on multiparametric MRI. Sensor-based registration consists of tracking the TRUS probe in real time with a magnetic device, achieving a global positioning system that overlays the prostate image on both modalities in real time. Its main limitation is that it does not take into account prostate and patient motion during biopsy. Two systems (Artemis and Uronav) have been developed to partially circumvent this drawback. Organ-based registration (Koelis) does not track the TRUS probe but the prostate itself, computing the TRUS prostate shape from a 3D acquisition and registering it with the corresponding 3D MRI shape. This system is not limited by prostate/patient motion and allows for deformation of the organ during registration. The pros and cons of each technique and the rationale for a targeted-biopsy-only policy are discussed.

5.
Purpose  This paper presents the preliminary results of a semi-automatic method for prostate segmentation of magnetic resonance images (MRI), which is intended to be incorporated in a navigation system for prostate brachytherapy. Methods  The method is based on the registration of an anatomical atlas, computed from a population of 18 MRI exams, onto a patient image. A hybrid registration framework that couples an intensity-based registration with a robust point-matching algorithm is used for both atlas building and atlas registration. Results  The method was validated, using the leave-one-out method, on the same dataset as the one used to construct the atlas. The results give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect to expert segmentations. Conclusions  We think that this segmentation tool may be a very valuable help to the clinician for routine quantitative image exploitation.

6.
A deformable registration method is described that enables automatic alignment of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland. The method employs a novel "model-to-image" registration approach in which a deformable model of the gland surface, derived from an MR image, is registered automatically to a TRUS volume by maximising the likelihood of a particular model shape given a voxel-intensity-based feature that represents an estimate of surface normal vectors at the boundary of the gland. The deformation of the surface model is constrained by a patient-specific statistical model of gland deformation, which is trained using data provided by biomechanical simulations. Each simulation predicts the motion of a volumetric finite element mesh due to the random placement of a TRUS probe in the rectum. The use of biomechanical modelling in this way also allows a dense displacement field to be calculated within the prostate, which is then used to non-rigidly warp the MR image to match the TRUS image. Using data acquired from eight patients, and anatomical landmarks to quantify the registration accuracy, the median final RMS target registration error after performing 100 MR-TRUS registrations for each patient was 2.40 mm.
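A small sketch, not the authors' code, of one way to compute a voxel-intensity-based surface-normal feature of the kind mentioned above: normalized gradients of a smoothed volume point across the gland boundary. The synthetic ellipsoid volume and the smoothing sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normal_feature(volume, sigma=2.0, eps=1e-6):
    """Unit gradient vectors (approximate surface normals) and gradient magnitude."""
    smoothed = gaussian_filter(volume.astype(float), sigma)
    grads = np.stack(np.gradient(smoothed), axis=-1)      # shape (Z, Y, X, 3)
    mag = np.linalg.norm(grads, axis=-1, keepdims=True)
    normals = grads / np.maximum(mag, eps)
    return normals, mag[..., 0]

# Synthetic "gland": a bright ellipsoid in a noisy volume.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
vol = ((x - 32) ** 2 / 20.0 ** 2 + (y - 32) ** 2 / 15.0 ** 2 + (z - 32) ** 2 / 12.0 ** 2 < 1).astype(float)
vol += 0.1 * np.random.default_rng(1).standard_normal(vol.shape)

normals, magnitude = normal_feature(vol)
print(normals.shape, float(magnitude.max()))
```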

7.
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation and a point-cloud based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients’ datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD) and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). The Jacobian determinant and strain tensors of the predicted deformation field were calculated to analyze the physical fidelity of the deformation field. On average, the mean and standard deviation were 0.94±0.02, 0.90±0.23 mm, 2.96±1.00 mm and 1.57±0.77 mm for DSC, MSD, HD and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrated that the proposed method can rapidly perform MR-TRUS image registration with good registration accuracy and robustness.
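A hedged sketch of the three shape-alignment metrics quoted above (DSC, MSD, HD), computed from two binary prostate masks; the toy spheres and unit voxel spacing are placeholders for the study's data.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask, spacing):
    """Physical coordinates (mm) of voxels on the boundary of a binary mask."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary) * np.asarray(spacing, float)

def surface_distances(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    pa, pb = surface_points(mask_a, spacing), surface_points(mask_b, spacing)
    d_ab, _ = cKDTree(pb).query(pa)    # each A-surface point to its nearest B-surface point
    d_ba, _ = cKDTree(pa).query(pb)
    msd = 0.5 * (d_ab.mean() + d_ba.mean())   # symmetric mean surface distance
    hd = max(d_ab.max(), d_ba.max())          # Hausdorff distance
    return msd, hd

# Toy example: two overlapping spheres standing in for segmented prostates.
z, y, x = np.mgrid[0:48, 0:48, 0:48]
m1 = (x - 24) ** 2 + (y - 24) ** 2 + (z - 24) ** 2 < 15 ** 2
m2 = (x - 26) ** 2 + (y - 24) ** 2 + (z - 23) ** 2 < 15 ** 2
msd, hd = surface_distances(m1, m2)
print(f"DSC={dice(m1, m2):.3f}  MSD={msd:.2f} mm  HD={hd:.2f} mm")
```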

8.
Purpose  To measure and compare the clinical localization and registration errors in image-guided neurosurgery, with the purpose of revising current assumptions. Materials and methods  Twelve patients who underwent brain surgeries with a navigation system were randomly selected. A neurosurgeon localized and correlated the landmarks on preoperative MRI images and on the intraoperative physical anatomy with a tracked pointer. In the laboratory, we generated 612 scenarios in which one landmark pair was defined as the target and the remaining ones were used to compute the registration transformation. Four errors were measured: (1) fiducial localization error (FLE); (2) target registration error (TRE); (3) fiducial registration error (FRE); (4) Fitzpatrick’s target registration error estimation (F-TRE). We compared the different errors and computed their correlation. Results  The image and physical FLE ranges were 0.5–2.0 and 1.6–3.0 mm, respectively. The measured TRE, FRE and F-TRE were 4.1 ± 1.6, 3.9 ± 1.2, and 3.7 ± 2.2 mm, respectively. Low correlations of 0.19 and 0.37 were observed between the FRE and TRE and between the F-TRE and the TRE, respectively. The differences of the FRE and F-TRE from the TRE were 1.3 ± 1.0 mm (max = 5.5 mm) and 1.3 ± 1.2 mm (max = 7.3 mm), respectively. Conclusion  Contrary to common belief, the FLE presents significant variations. Moreover, both the FRE and the F-TRE are poor indicators of the TRE in image-to-patient registration.
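The distinction between FRE and TRE can be made concrete with a paired-point rigid registration sketch (assumed, not the study's software): landmarks are registered with a least-squares rotation/translation, FRE is measured on the fitted pairs, and TRE on a held-out target pair. All coordinates and noise levels are illustrative.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares R, t such that R @ moving_i + t ≈ fixed_i (Kabsch/Arun method)."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

def rms(fixed, moving, R, t):
    return np.sqrt(np.mean(np.sum((fixed - (moving @ R.T + t)) ** 2, axis=1)))

rng = np.random.default_rng(3)
theta = np.deg2rad(10.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
image_pts = rng.uniform(0.0, 100.0, (6, 3))                          # landmarks in image space
patient_pts = (image_pts @ true_R.T + np.array([5.0, -2.0, 1.0])
               + rng.normal(0.0, 1.0, (6, 3)))                       # FLE-like localization noise

target = 0                                   # hold out one pair as the "target"
fid = np.arange(6) != target
R, t = rigid_register(patient_pts[fid], image_pts[fid])
fre = rms(patient_pts[fid], image_pts[fid], R, t)
tre = np.linalg.norm(patient_pts[target] - (R @ image_pts[target] + t))
print(f"FRE = {fre:.2f} mm   TRE = {tre:.2f} mm")
```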

9.
In many cases, radio-frequency catheter ablation of the pulmonary veins attached to the left atrium still involves fluoroscopic image guidance. Two-dimensional X-ray navigation may also take advantage of overlay images derived from static pre-operative 3D volumetric data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of static overlay images for catheter navigation. We developed a novel approach for image-based 3D motion estimation and compensation as a solution to this problem. It is based on 3D catheter tracking which, in turn, relies on 2D/3D registration. To this end, a bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3D based on bi-plane fluoroscopy. Phantom data and clinical data were used to assess model-based catheter tracking. Our phantom experiments yielded an average 2D tracking error of 1.4 mm and an average 3D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2D tracking error of 1.0 ± 0.4 mm and an average 3D tracking error of 0.8 ± 0.5 mm. These results demonstrate that model-based motion-compensation based on 2D/3D registration is both feasible and accurate.

10.
The combination/fusion of quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS)/optical coherence tomography (OCT) depends to a great extent on the co-registration of X-ray angiography (XA) and IVUS/OCT. In this work, a new and robust three-dimensional (3D) segmentation and registration approach is presented and validated. The approach starts with standard QCA of the vessel of interest in the two angiographic views (either biplane or two monoplane views). Next, the vessel of interest is reconstructed in 3D and registered with the corresponding IVUS/OCT pullback series by a distance mapping algorithm. The accuracy of the registration was retrospectively evaluated on 12 silicone phantoms with coronary stents implanted, and on 24 patients who underwent both coronary angiography and IVUS examinations of the left anterior descending artery. Stent borders or sidebranches were used as markers for the validation. While the most proximal marker was set as the baseline position for the distance mapping algorithm, the subsequent markers were used to evaluate the registration error. The correlation between the registration error and the distance from the evaluated marker to the baseline position was analyzed. The XA-IVUS registration error for the 12 phantoms was 0.03 ± 0.32 mm (P = 0.75). One OCT pullback series was excluded from the phantom study, since it did not cover the distal stent border. The XA-OCT registration error for the remaining 11 phantoms was 0.05 ± 0.25 mm (P = 0.49). For the in vivo validation, two patients were excluded due to insufficient image quality for the analysis. In total, 78 sidebranches were identified from the remaining 22 patients and the registration error was evaluated on 56 markers. The registration error was 0.03 ± 0.45 mm (P = 0.67). The error was not correlated with the distance between the evaluated marker and the baseline position (P = 0.73). In conclusion, the new XA-IVUS/OCT co-registration approach is a straightforward and reliable solution for combining X-ray angiography and IVUS/OCT imaging to assess the extent of coronary artery disease. It provides the interventional cardiologist with detailed information about vessel size and plaque size at every position along the vessel of interest, making it a suitable tool during the actual intervention.
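An illustrative sketch, not the published algorithm, of the distance-mapping idea: positions along the 3D-reconstructed centerline are indexed by cumulative arc length from the baseline marker, so a frame acquired at a known pullback distance maps to a 3D point on the angiographic reconstruction. The toy centerline is an assumption.

```python
import numpy as np

def cumulative_arclength(centerline):
    """Arc length (mm) from the first centerline point to every other point."""
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(seg)])

def locate_on_centerline(centerline, pullback_distance):
    """Centerline point whose arc length from the baseline best matches the pullback distance."""
    s = cumulative_arclength(centerline)
    return centerline[np.argmin(np.abs(s - pullback_distance))]

# Toy centerline: a gentle 3D curve sampled every ~0.1 mm.
t = np.linspace(0.0, 40.0, 400)
centerline = np.stack([t, 5.0 * np.sin(t / 10.0), 2.0 * np.cos(t / 15.0)], axis=1)

# A sidebranch seen 12.5 mm into the pullback maps to this 3D position:
print(locate_on_centerline(centerline, 12.5))
```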

11.
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
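A small sketch of the PSNR figure quoted above, assuming the usual 20·log10(peak/RMSE) formulation between a reconstructed and a reference volume; the arrays are synthetic stand-ins for the 3D MRI volumes.

```python
import numpy as np

def psnr(reference, reconstructed, peak=None):
    """Peak signal-to-noise ratio in dB between two volumes of the same shape."""
    peak = reference.max() if peak is None else peak
    rmse = np.sqrt(np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2))
    return 20.0 * np.log10(peak / rmse)

rng = np.random.default_rng(5)
ref = rng.uniform(0.0, 1.0, (64, 64, 32))          # stand-in reference 3D MRI
recon = ref + rng.normal(0.0, 0.02, ref.shape)     # stand-in reconstruction
print(f"PSNR = {psnr(ref, recon):.2f} dB")
```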

12.
Dynamic chest radiography (2D x-ray video) is a low-dose and cost-effective functional imaging method with high temporal resolution. While the analysis of rib-cage motion has been shown to be effective for evaluating respiratory function, it has been limited to 2D. We aim at 3D rib-motion analysis with high temporal resolution while keeping the radiation dose at a level comparable to that of a conventional examination. To achieve this, we developed a method for automatically recovering 3D rib motion based on 2D-3D registration of x-ray video and single-time-phase computed tomography. We introduce the following two novel components into the conventional intensity-based 2D-3D registration pipeline: (1) a rib-motion model based on a uniaxial joint to constrain the search space and (2) local contrast normalization (LCN) as a pre-processing step for the x-ray video to improve the cost-function landscape over the optimization parameters. The effects of each component on the registration results were quantitatively evaluated through experiments using simulated images and real patients’ x-ray videos obtained in a clinical setting. The rotation-angle error of the rib and the mean projection contour distance (mPCD) were used as the error metrics. The simulation experiments indicate that the proposed uniaxial joint model improved registration accuracy. By searching for the rotation axis along with the rotation angle of the ribs, the rotation-angle error and mPCD decreased significantly, from 2.246 ± 1.839° and 1.148 ± 0.743 mm to 1.495 ± 0.993° and 0.742 ± 0.281 mm, compared to simply applying De Troyer’s model. The real-image experiments with eight patients demonstrated that LCN improved the cost-function space and thus the robustness of the optimization, resulting in an average mPCD of 1.255 ± 0.615 mm. We demonstrated that an anatomical-knowledge-based constraint and an intensity normalization, LCN, significantly improved robustness and accuracy in rib-motion reconstruction from chest x-ray video.
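A hedged sketch of local contrast normalization (LCN) as a pre-processing step: subtract a local mean and divide by a local standard deviation, both estimated with Gaussian filtering. The window size (sigma) is an assumption, not the paper's setting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(image, sigma=8.0, eps=1e-6):
    """Remove local brightness and contrast variation from a 2D frame."""
    image = image.astype(float)
    local_mean = gaussian_filter(image, sigma)
    centered = image - local_mean
    local_std = np.sqrt(gaussian_filter(centered ** 2, sigma))
    return centered / (local_std + eps)

frame = np.random.default_rng(7).uniform(0.0, 1.0, (256, 256))  # stand-in x-ray frame
print(local_contrast_normalize(frame).std())
```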

13.

Purpose

Using existing stereoelectroencephalography (SEEG) electrode implantation surgical robot systems, it is difficult to intuitively validate registration accuracy and display the electrode entry points (EPs) and the anatomical structure around the electrode trajectories in the patient space to the surgeon. This paper proposes a prototype system that can realize video see-through augmented reality (VAR) and spatial augmented reality (SAR) for SEEG implantation. The system helps the surgeon quickly and intuitively confirm the registration accuracy, locate EPs and visualize the internal anatomical structure in the image space and patient space.

Methods

We designed and developed a projector-camera system (PCS) attached to the distal flange of a robot arm. First, system calibration is performed. Second, the PCS is used to obtain the point clouds of the surface of the patient’s head, which are utilized for patient-to-image registration. Finally, VAR is produced by merging the real-time video of the patient and the preoperative three-dimensional (3D) operational planning model. In addition, SAR is implemented by projecting the planning electrode trajectories and local anatomical structure onto the patient’s scalp.

Results

The registration error, the electrode EP error, and the target point (TP) error are evaluated on a phantom. The fiducial registration error is 0.25 ± 0.23 mm (max 1.22 mm), and the target registration error is 0.62 ± 0.28 mm (max 1.18 mm). The projection overlay error is 0.75 ± 0.52 mm, and the TP error after the pre-warped projection is 0.82 ± 0.23 mm. The TP error caused by a surgeon’s viewpoint deviation is also evaluated.

Conclusion

The presented system can help surgeons quickly verify registration accuracy during SEEG procedures and can provide accurate EP locations and internal structural information to the surgeon. With more intuitive surgical information, the surgeon may have more confidence and be able to perform surgeries with better outcomes.
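Relating to the projection step described in the Methods above, a generic pinhole-projection sketch (an assumed model, not the system's calibration code): 3D points of a planned electrode trajectory, expressed in the device frame, are mapped to projector/camera pixels with intrinsics K and extrinsics [R | t]; all numeric values are illustrative.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 points to Nx2 pixel coordinates with a pinhole model."""
    cam = points_3d @ R.T + t                 # world/device frame -> camera/projector frame
    uv = cam @ K.T                            # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]             # perspective division

K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])               # illustrative intrinsics (pixels)
R, t = np.eye(3), np.array([0.0, 0.0, 400.0])  # device ~40 cm in front of the head

# Planned trajectory: entry point to target point, sampled along the path (mm).
entry, target = np.array([10.0, -5.0, 60.0]), np.array([25.0, 0.0, 20.0])
trajectory = entry + np.linspace(0.0, 1.0, 20)[:, None] * (target - entry)
print(project_points(trajectory, K, R, t)[:3])
```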

14.

Purpose

To evaluate the targeting accuracy of a small-profile MRI-compatible pneumatic robot for needle placement that can angulate a needle insertion path into a large accessible target volume.

Methods

We extended our MRI-compatible pneumatic robot for needle placement to utilize its four degrees-of-freedom (4-DOF) mechanism with two parallel triangular structures and support transperineal prostate biopsies in a closed-bore magnetic resonance imaging (MRI) scanner. The robot is designed to guide a needle toward a lesion so that a radiologist can manually insert it in the bore. The robot is integrated with navigation software that allows an operator to plan an angulated needle insertion by selecting a target and an entry point. The targeting error was evaluated, after sterilizing and draping the device, while the angle between the needle insertion path and the static magnetic field was between −5.7° and 5.7° horizontally and between −5.7° and 4.3° vertically in the MRI scanner.

Results

The robot positioned the needle for angulated insertion as specified in the navigation software, with an overall targeting error of 0.8 ± 0.5 mm along the horizontal axis and 0.8 ± 0.8 mm along the vertical axis. The two-dimensional root-mean-square targeting error on the axial slices containing the targets was 1.4 mm.

Conclusions

Our preclinical evaluation demonstrated that the MRI-compatible pneumatic robot for needle placement with the capability to angulate the needle insertion path provides targeting accuracy feasible for clinical MRI-guided prostate interventions. The clinical feasibility has to be established in a clinical study.
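A small, assumption-level sketch of two quantities reported above: the angle between a planned needle path and the scanner's B0 axis, and the in-plane root-mean-square targeting error over a set of targets. The coordinates are illustrative, not the study's measurements.

```python
import numpy as np

def path_angle_to_b0(entry, target, b0_axis=(0.0, 0.0, 1.0)):
    """Angle (degrees) between the needle insertion path and the B0 direction."""
    v = np.asarray(target, float) - np.asarray(entry, float)
    b = np.asarray(b0_axis, float)
    cosang = np.dot(v, b) / (np.linalg.norm(v) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rms_2d_error(planned_xy, reached_xy):
    """Root-mean-square in-plane (axial) targeting error in mm."""
    d = np.asarray(planned_xy, float) - np.asarray(reached_xy, float)
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))

print(f"path angle to B0: {path_angle_to_b0([0, 0, 0], [5, 0, 80]):.1f} deg")
planned = np.array([[10.0, 12.0], [8.0, 15.0], [11.0, 9.0]])
reached = planned + np.array([[0.6, -0.4], [0.9, 0.7], [-0.5, 0.8]])
print(f"2D RMS error = {rms_2d_error(planned, reached):.2f} mm")
```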

15.
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying volume-of-interest overlaps under the PI-RADS parcellation standard, together with tests using clinical landmark data, demonstrate that our SDM-based registration, with a median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
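A minimal sketch, with assumptions throughout, of how a statistical deformation model (SDM) can be built: training displacement fields are flattened, a PCA basis is extracted via SVD, and a deformation is then described by a low-dimensional coefficient vector of the kind optimised during SDM-constrained registration. The training matrix here is random stand-in data.

```python
import numpy as np

def build_sdm(training_fields, n_modes=5):
    """training_fields: (n_samples, n_points * 3) flattened displacement fields."""
    mean = training_fields.mean(axis=0)
    U, S, Vt = np.linalg.svd(training_fields - mean, full_matrices=False)
    modes = Vt[:n_modes]                                   # principal deformation modes
    std = S[:n_modes] / np.sqrt(len(training_fields) - 1)  # per-mode standard deviation
    return mean, modes, std

def synthesize(mean, modes, std, coeffs):
    """Deformation for a coefficient vector expressed in standard deviations per mode."""
    return mean + (np.asarray(coeffs, float) * std) @ modes

rng = np.random.default_rng(11)
train = rng.normal(0.0, 1.0, (30, 500 * 3))   # 30 simulated fields over 500 surface points
mean, modes, std = build_sdm(train, n_modes=5)
field = synthesize(mean, modes, std, coeffs=[1.0, -0.5, 0.0, 0.2, 0.0])
print(field.shape)
```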

16.

Objective: To assess whether previous training in surgery influences performance on the da Vinci Skills Simulator and the da Vinci robot.

Material and methods: In this prospective study, thirty-seven participants (11 medical students, 17 residents, and 9 attending surgeons) without previous experience in laparoscopy or robotic surgery performed 26 exercises on the da Vinci Skills Simulator. Thirty-five of them then performed a suture using a da Vinci robot.

Results: The overall scores on the exercises at the da Vinci Skills Simulator showed similar performance among the groups, with no statistically significant pair-wise differences (p < .05). The quality of the suturing, rated from the unedited videos of the test run, was similar for the intermediate (7 (4, 10)) and expert (6.5 (4.5, 10)) groups and poorer for the untrained group (5 (3.5, 9)), without a statistically significant difference (p < .05).

Conclusion: This study showed, for subjects new to laparoscopy and robotic surgery, no significant differences in the scores on the da Vinci Skills Simulator or on the da Vinci robot with inanimate models.

17.
We present a robotically assisted prostate brachytherapy system and test results in training phantoms and Phase-I clinical trials. The system consists of a transrectal ultrasound (TRUS) and a spatially co-registered robot, fully integrated with an FDA-approved commercial treatment planning system. The salient feature of the system is a small parallel robot affixed to the mounting posts of the template. The robot replaces the template interchangeably, using the same coordinate system. Established clinical hardware, workflow and calibration remain intact. In all phantom experiments, we recorded the first insertion attempt without adjustment. All clinically relevant locations in the prostate were reached. Non-parallel needle trajectories were achieved. The pre-insertion transverse and rotational errors (measured with a Polaris optical tracker relative to the template's coordinate frame) were 0.25 mm (STD = 0.17 mm) and 0.75 degrees (STD = 0.37 degrees). In phantoms, needle tip placement errors measured in TRUS were 1.04 mm (STD = 0.50 mm). A Phase-I clinical feasibility and safety trial has been successfully completed with the system. We encountered needle tip positioning errors of a magnitude greater than 4 mm in only 2 of 179 robotically guided needles, in contrast to manual template guidance, where errors of this magnitude are much more common. Further clinical trials are necessary to determine whether the apparent benefits of the robotic assistant will lead to improvements in clinical efficacy and outcomes.

18.
Objective: To establish a large-deformation image registration network (LDIRnet) model in an unsupervised manner, based on multiple cascaded deep convolutional neural networks (CNNs), and to evaluate its performance in registering brain MRI and lung CT images. Methods: Multiple deep CNNs with identical architecture but different parameters were cascaded to learn, in an end-to-end manner, several small deformation fields between the images to be registered; the large deformation field between the images was then computed by composing these small deformation fields, thereby achieving large-deformation image registration. Results: Registering 3D...
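An illustrative sketch, not the LDIRnet code, of the composition step described in the Methods: two small displacement fields are chained so the total displacement is d(x) = d1(x) + d2(x + d1(x)), which is how a cascade can accumulate a large deformation from several small ones. 2D fields are used for brevity.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(d1, d2):
    """Compose two displacement fields of shape (2, H, W): apply d1 first, then d2."""
    _, h, w = d1.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij")).astype(float)
    warped_pos = grid + d1                      # where each pixel lands after d1
    d2_at_warped = np.stack([
        map_coordinates(d2[c], warped_pos, order=1, mode="nearest")  # sample d2 at warped positions
        for c in range(2)
    ])
    return d1 + d2_at_warped

rng = np.random.default_rng(13)
d1 = rng.normal(0.0, 0.5, (2, 64, 64))          # stand-in small deformation from stage 1
d2 = rng.normal(0.0, 0.5, (2, 64, 64))          # stand-in small deformation from stage 2
total = compose(d1, d2)
print(total.shape)
```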

19.
20.
This paper quantifies the registration and fusion display errors of augmented reality-based nasal endoscopic surgery (ARNES). We comparatively investigated the spatial calibration process for front-end endoscopy and redefined the accuracy level of a calibrated endoscope by using a calibration tool with improved structural reliability. We also studied how registration accuracy varies with the number and distribution of the deployed fiducial points (FPs) used for positioning and with the measured registration time. A physically integrated ARNES prototype was custom-configured for performance evaluation in skull base tumor resection surgery with an innovative approach of dynamic endoscopic vision expansion. As advised by surgical experts in otolaryngology, we proposed a hierarchical rendering scheme to properly adapt the fused images to the required visual sensation. By constraining the rendered sight to a known depth and radius, the visual focus of the surgeon can be directed only to the anticipated critical anatomies and vessel structures to avoid misguidance. Furthermore, error analysis was conducted to examine the feasibility of hybrid optical tracking based on point clouds, which was proposed in our previous work as an in-surgery registration solution. Measured results indicated that the target registration error for ARNES can be reduced to 0.77 ± 0.07 mm. For initial registration, our results suggest that a trade-off for a new minimal registration time can be reached when a distribution of five FPs is considered. For in-surgery registration, our findings reveal that the intrinsic registration error is a major cause of performance loss. Rigid-model and cadaver experiments confirmed that the scenic integration and display fluency of ARNES are smooth, as demonstrated by three clinical trials that surpassed practicality testing.
