Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
All freehand 3-D ultrasound systems have some latency between the acquisition of an image and its associated position. Previously, latency has been estimated by tracking a phantom in a sequence of images and correlating its motion with that recorded by the position sensor. However, tracking-based temporal calibration assumes that latency is constant between scans. This paper presents a new method for temporal calibration that avoids this assumption. Temporal calibration is performed on the scan data itself by finding the latency at which the 2-D images are best aligned within the reconstructed volume. The mean voxel intensity variance is used as a global measure of alignment quality within the volume and is minimized with respect to latency for each scan. The new method is compared with previous methods using an ultrasound phantom. Finally, integration of temporal calibration with existing spatial calibration methods is discussed.
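The variance-minimization search described above can be sketched as a toy in Python. Everything here (the sinusoidal probe trajectory, the 1-D "voxel" bins, the intensity model) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def mean_voxel_variance(positions, intensities, n_bins=100):
    """Global alignment measure: mean intensity variance over the voxels
    (1-D bins in this toy) that receive at least two samples."""
    edges = np.linspace(positions.min(), positions.max(), n_bins + 1)
    bins = np.digitize(positions, edges)
    per_voxel = [intensities[bins == b].var() for b in np.unique(bins)
                 if (bins == b).sum() >= 2]
    return float(np.mean(per_voxel))

def estimate_latency(times, intensities, pos_fn, candidates):
    """Grid-search the latency whose position assignment minimizes the
    mean voxel intensity variance of the reconstruction."""
    costs = [mean_voxel_variance(pos_fn(times - lag), intensities)
             for lag in candidates]
    return candidates[int(np.argmin(costs))]

# Toy sweep: the probe oscillates (position = sin t), so the phantom is
# scanned twice and only the correct latency aligns the two passes.
true_latency = 0.07
t = np.linspace(0.0, 2.0 * np.pi, 4000)
intens = np.sin(t - true_latency) ** 2   # intensity seen at the true position
cands = np.arange(0.0, 0.2, 0.01)
print(round(float(estimate_latency(t, intens, np.sin, cands)), 2))  # -> 0.07
```

The oscillating sweep matters: with a monotonic sweep a constant latency merely shifts every frame by the same amount and the cost is flat, whereas here only the true latency makes the two passes agree voxel by voxel.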

2.
High-speed laryngeal endoscopic systems record vocal fold vibrations during phonation in real time. Quantitative analysis of vocal fold dynamics requires a metrical scale to recover absolute laryngeal dimensions from the recorded image sequence, but to date no automated and stable calibration procedure exists for clinical use. A calibration method is presented that consists of a laser projection device and the corresponding image processing for automated detection of the laser calibration marks. The laser projection device is clipped to the endoscope and projects two parallel laser lines, a known distance apart, onto the vocal folds as calibration information. Image processing methods automatically identify the pixels belonging to the projected laser lines in the image data. The line detection is based on a Radon transform approach and is a two-stage process that successively uses temporal and spatial characteristics of the projected laser lines in the high-speed image sequence. The robustness and applicability are demonstrated on clinical endoscopic image sequences. The combination of the laser projection device and the image processing enables calibration of laryngeal endoscopic images within the vocal fold plane and thus provides quantitative metrical data of vocal fold dynamics.

3.
We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.
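The geometric principle behind an N-fiducial can be illustrated with a short sketch (the names and the toy geometry are hypothetical): the ratio of in-plane distances between the three collinear wire cross-sections locates the middle intersection along the known diagonal wire, giving a 3-D phantom point that can be paired with its image position for calibration.

```python
import numpy as np

def n_fiducial_point(p1, p2, p3, diag_start, diag_end):
    """Recover the 3-D phantom coordinates of the middle wire intersection
    of an N-fiducial from the three collinear cross-sections in the image.

    p1, p3 are the side-wire cross-sections, p2 the diagonal-wire one;
    diag_start/diag_end are the diagonal wire's known 3-D endpoints."""
    r = np.linalg.norm(p2 - p1) / np.linalg.norm(p3 - p1)
    return diag_start + r * (diag_end - diag_start)

# Toy check: the scan plane cuts the diagonal 30% of the way along it.
a = np.array([0.0, 0.0, 10.0])    # diagonal wire endpoints, phantom frame
b = np.array([40.0, 0.0, 10.0])
p1, p3 = np.array([5.0, 20.0]), np.array([45.0, 20.0])
p2 = p1 + 0.3 * (p3 - p1)
print(n_fiducial_point(p1, p2, p3, a, b))  # -> [12.  0. 10.]
```

Each imaged N-fiducial yields one such image-point/phantom-point pair, which is why capturing around ten of them in a single image suffices to solve for the calibration.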

4.
Calibration is essential in freehand 3-D ultrasound to find the spatial transformation from the image coordinates to the sensor coordinate system. Calibration accuracy has significant impact on image-guided interventions. We introduce a new mathematical framework that uses differential measurements to achieve high calibration accuracy. Accurate measurements of axial differences in ultrasound images of a multi-wedge phantom are used to calculate the calibration matrix with a closed-form solution. The multi-wedge phantom has been designed based on the proposed differential framework and can be mass-produced inexpensively using a 3-D printer. The proposed method enables easy, fast and highly accurate ultrasound calibration, which is essential for most current ultrasound-guided applications and also widens the range of new applications. The precision of the method using only a single image of the phantom is comparable to that of the standard N-wire method using 50 images. The method can also directly take advantage of the fine sampling rate of radiofrequency ultrasound data to achieve very high calibration accuracy. With 100 radiofrequency ultrasound images, the method achieves a point reconstruction error of 0.09 ± 0.39 mm.

6.
A review of calibration techniques for freehand 3-D ultrasound systems   (Cited by: 7; self-citations: 0, cited by others: 7)
Three-dimensional (3-D) ultrasound (US) is an emerging new technology with numerous clinical applications. Ultrasound probe calibration is an obligatory step to build 3-D volumes from 2-D images acquired in a freehand US system. The role of calibration is to find the mathematical transformation that converts the 2-D coordinates of pixels in the US image into 3-D coordinates in the frame of reference of a position sensor attached to the US probe. This article is a comprehensive review of what has been published in the field of US probe calibration for 3-D US. The article covers the topics of tracking technologies, US image acquisition, phantom design, speed of sound issues, feature extraction, least-squares minimization, temporal calibration, calibration evaluation techniques and phantom comparisons. The calibration phantoms and methods have also been classified in tables to give a better overview of the existing methods.
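The transformation this review is about, from image pixels to tracker space, is a chain of homogeneous transforms. A minimal sketch (the scale factors and matrix names are illustrative, not from the review):

```python
import numpy as np

def pixel_to_world(u, v, sx, sy, T_image_to_sensor, T_sensor_to_world):
    """Map an image pixel (u, v) to 3-D world coordinates.

    sx, sy            : pixel sizes in mm (found by calibration)
    T_image_to_sensor : 4x4 rigid transform found by calibration
    T_sensor_to_world : 4x4 pose reported by the tracker for this frame"""
    p_image = np.array([sx * u, sy * v, 0.0, 1.0])  # image plane is z = 0
    return (T_sensor_to_world @ T_image_to_sensor @ p_image)[:3]

T_is = np.eye(4)                  # identity calibration for the toy
T_sw = np.eye(4)
T_sw[:3, 3] = [10.0, 0.0, 0.0]    # tracker pose: sensor 10 mm along x
print(pixel_to_world(100, 50, 0.1, 0.1, T_is, T_sw))  # -> [20.  5.  0.]
```

Calibration supplies `sx`, `sy` and `T_image_to_sensor` once per probe-sensor rig; the tracker supplies `T_sensor_to_world` anew for every frame.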

7.
Intra-operative ultrasound (US) is a popular imaging modality because of its non-radiative and real-time advantages. However, it is still challenging to perform an interventional procedure under two-dimensional (2-D) US image guidance. Accordingly, the trend has been to perform three-dimensional (3-D) US image guidance by equipping the US probe with a spatial position tracking device, which requires accurate probe calibration to determine the spatial relationship between the B-scan image and the tracked probe. In this report, we propose a novel probe spatial calibration method based on a calibration phantom combined with a tracking stylus. The calibration phantom is custom-designed to simplify alignment between the stylus tip and the B-scan image plane. The spatial position of the stylus tip is tracked in real time, and its 2-D image pixel location is extracted and collected simultaneously. A Gaussian distribution is used to model the spatial position of the stylus tip, and an iterative closest point-based optimization algorithm is used to estimate the spatial transformation that matches the two point sets. Once the probe is calibrated, its trajectory and the B-scan images are collected and used for volume reconstruction in our freehand 3-D US imaging system. Experimental results demonstrate that the probe calibration approach achieves a mean point reconstruction error of less than 1 mm. An inexperienced user can complete the probe calibration procedure in less than 5 min with minimal training. A mockup test shows that the 3-D images are geometrically correct, with 0.28° angular accuracy and 0.40-mm distance accuracy.
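The point-set matching step can be sketched with the closed-form SVD (Arun-style) rigid fit that ICP iterates internally. This toy assumes known correspondences and noise-free points, which real ICP does not have:

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src -> dst.
    This is the inner step ICP repeats after re-pairing closest points;
    correspondences are assumed known here."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
R, t = rigid_fit(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```

Full ICP alternates this fit with re-assigning each stylus-tip sample to its closest counterpart until the transform stops changing.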

8.
In this paper, a new positioning system is proposed for 3-D ultrasound (US). The system combines an image registration technique with a speckle decorrelation algorithm to accurately position sequential ultrasonic images without any additional positioning hardware. The speckle decorrelation algorithm estimates the relative distance between two neighboring frames, while the image registration technique determines the extent of the whole 3-D ultrasonic data set and slightly refines each frame's position. The image registration technique is based on a reference image that is perpendicular to the 3-D ultrasonic data set and therefore intersects each frame of the data set in a line. For each frame, this intersection line is first found, and its location in the reference image is then used to estimate the position of the frame. The system uses a data set of consecutive freehand-scanned 2-D US B-mode images to construct the 3-D US volume data, and it can be integrated into a 3-D US volume rendering system.
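Speckle-decorrelation distance estimation can be sketched as follows, assuming a hypothetical pre-calibrated Gaussian decorrelation curve; real curves are measured per transducer and depth and are not Gaussian in general:

```python
import numpy as np

def distance_from_correlation(f1, f2, sigma=1.0):
    """Elevational distance between neighbouring frames from speckle
    decorrelation, assuming the (hypothetical) calibrated model
    rho(d) = exp(-d**2 / (2 * sigma**2))."""
    rho = np.corrcoef(f1.ravel(), f2.ravel())[0, 1]
    return sigma * np.sqrt(-2.0 * np.log(rho))

# Toy frames whose correlation is known by construction.
rng = np.random.default_rng(1)
sigma, d_true = 1.0, 0.5
rho_true = np.exp(-d_true**2 / (2 * sigma**2))
f1 = rng.normal(size=(200, 500))
f2 = rho_true * f1 + np.sqrt(1 - rho_true**2) * rng.normal(size=f1.shape)
print(abs(distance_from_correlation(f1, f2, sigma) - d_true) < 0.05)  # -> True
```

Inverting a monotone decorrelation curve like this is what turns a correlation coefficient into a frame-to-frame spacing estimate.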

9.
The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus, such as an optical tracking system, to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method for localizing the endoscope orientation in the intraoperative image using standard image processing algorithms. Second, we highlight that the axis of the endoscope requires a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results demonstrating the feasibility and clinical potential of our approach.

10.

“Natural Orifice Translumenal Endoscopic Surgery” (NOTES) is assumed to offer significant benefits to patients, such as reduced trauma and reduced collateral damage. But the potential advantages of this new technology can only be achieved through safe and standardized operation methods. Several barriers identified during clinical practice in flexible intra-abdominal endoscopy can only be overcome with computer-assisted surgical (CAS) systems. To assist the surgeon during the intervention and enhance his visual possibilities, some of these CAS systems require 3-D information about the intervention site; for others, 3-D information is even mandatory. These requirements must therefore be strongly considered in the definition and design of new CAS technologies. A 3-D endoscope, called the “Multisensor-Time-of-Flight” (MUSTOF) endoscope, is currently being developed, in which an optical 3-D time-of-flight (TOF) sensor is attached to the proximal end of a common endoscope. The 3-D depth information obtained by this enhanced endoscope can furthermore be registered with preoperatively acquired 3-D volumetric datasets such as CT or MRI. These augmented 3-D data volumes could then be used to find the transgastric or transcolonic entry point to the abdomen. Such endoscopic depth data can also provide better orientation within the abdomen, prevent intra-operative collisions, and provide an optimized field of view with the possibility of off-axis viewing. Furthermore, providing a stable horizon on video-endoscopic images, especially in non-rigid endoscopic surgery scenarios (particularly NOTES), remains an open issue. Hence, our recently presented “endorientation” approach for automated image orientation rectification could turn out to be an important contribution. It works with a tiny micro-electro-mechanical systems (MEMS) tri-axial inertial sensor placed on the distal tip of an endoscope. By measuring the impact of gravity on each of the three orthogonal axes, the rotation angle can be estimated from the three acceleration values and used to automatically rectify the endoscopic images with image processing methods. With the progressive endoscopic system extensions proposed in this article, translumenal surgery could in future be performed in a safer and more feasible manner.
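The endorientation idea reduces to reading the gravity vector off the accelerometer axes lying in the image plane. A minimal sketch; the axis convention (x to the right, y downwards) is an assumption:

```python
import numpy as np

def horizon_angle(ax, ay):
    """Roll of the endoscope about its viewing axis, from the gravity
    components the MEMS accelerometer measures in the image plane
    (x right, y down -- an assumed convention). Rotating the image
    by -angle rectifies the horizon."""
    return float(np.degrees(np.arctan2(ax, ay)))

print(horizon_angle(0.0, 9.81))  # -> 0.0  (gravity straight 'down' in the image)
print(horizon_angle(9.81, 0.0))  # -> 90.0 (scope rolled a quarter turn)
```

The third axis (along the viewing direction) mainly indicates how reliable this estimate is: when the scope points nearly straight up or down, the in-plane gravity components shrink and the angle becomes ill-conditioned.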

11.
Robust registration procedures for endoscopic imaging   (Cited by: 1; self-citations: 0, cited by others: 1)
This paper presents a robust algorithm for calibration and system registration of endoscopic imaging devices. The system registration allows us to accurately map each point in the world coordinate system into the endoscope image and, vice versa, to obtain the world line of sight for each image pixel. The key point of our system is a robust linear algorithm based on singular value decomposition (SVD) for simultaneously estimating two unknown coordinate transformations. We show that our algorithm is superior in robustness and computing efficiency to iterative procedures based on Levenberg-Marquardt optimization or on quaternion approaches. The algorithm does not require the calibration pattern to be tracked. Experimental results and simulations verify the robustness and usefulness of our approach, giving an accuracy of less than 0.7 mm and a success rate >99%. We apply the calibrated endoscope to the neurosurgically relevant case of red-out, where, in spite of the complete loss of vision, the surgeon receives visual aids in the endoscope image at the current position, allowing him or her to manoeuvre a coagulation fibre into the right position. Finally, we outline how our registration algorithm can also be used for standard registration applications (establishing the mapping between two sets of points). We propose our algorithm as a linear, non-iterative algorithm also for projective transformations and for 2D-3D mappings. Thus, it can be seen as a generalization of the well-known Umeyama registration algorithm.

12.
This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximizes a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image that is most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used for restricting the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
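The two similarity measures named above are standard and easy to state; a minimal sketch:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Normalised cross-correlation: 1 means identical up to brightness
    and contrast changes."""
    ac, bc = a - a.mean(), b - b.mean()
    return float((ac * bc).sum() / np.sqrt((ac**2).sum() * (bc**2).sum()))

def mean_squared_difference(a, b):
    """Mean squared intensity difference: 0 means identical."""
    return float(((a - b) ** 2).mean())

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(round(correlation_coefficient(img, 2 * img + 1), 6))  # -> 1.0
print(mean_squared_difference(img, img))                    # -> 0.0
```

The registration search maximizes the first (or minimizes the second) over the virtual-endoscope viewing parameters; the correlation coefficient's invariance to brightness and contrast is what makes it robust to rendering differences between VE and RE images.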

13.
In contrast to 2-D ultrasound (US) for uniaxial plane imaging, a 3-D US imaging system can visualize a volume along three axial planes. This allows for a full view of the anatomy, which is useful for gynecological (GYN) and obstetrical (OB) applications. Unfortunately, 3-D US has an inherent limitation in resolution compared with 2-D US. In the case of 3-D US with a 3-D mechanical probe, for example, the image quality is comparable along the beam direction, but significant deterioration in image quality is often observed in the other two axial image planes. To address this, we propose a novel unsupervised deep learning approach to improve 3-D US image quality. In particular, using unmatched high-quality 2-D US images as a reference, we trained a recently proposed switchable CycleGAN architecture so that every mapping plane in 3-D US can learn the image quality of 2-D US images. Thanks to the switchable architecture, our network can also provide real-time control of the image enhancement level based on user preference, which is ideal for a user-centric scanner setup. Extensive experiments with clinical evaluation confirm that our method offers significantly improved image quality as well as user-friendly flexibility.

14.

Purpose

In endoscopic surgery, surgeons must manipulate an endoscope inside the body cavity to observe a large field-of-view while estimating the distance between surgical instruments and the affected area by reference to the size or motion of the surgical instruments in 2-D endoscopic images on a monitor. There is therefore a risk of the endoscope or surgical instruments physically damaging body tissues. To overcome this problem, we developed a Ø7-mm 3-D endoscope that can switch between providing front and front-diagonal view 3-D images by simply rotating its sleeves.

Methods

This 3-D endoscope consists of a conventional 3-D endoscope and an outer and inner sleeve with a beam splitter and polarization plates. The beam splitter was used for visualizing both the front and front-diagonal view and was set at 25° to the outer sleeve’s distal end in order to eliminate a blind spot common to both views. Polarization plates were used to avoid overlap of the two views. We measured signal-to-noise ratio (SNR), sharpness, chromatic aberration (CA), and viewing angle of this 3-D endoscope and evaluated its feasibility in vivo.

Results

Compared with the conventional 3-D endoscope, the SNR and sharpness of this 3-D endoscope decreased by 20% and 7%, respectively. No significant difference was found in CA. The viewing angle for both the front and front-diagonal views was about 50°. In the in vivo experiment, this 3-D endoscope provided clear 3-D images of both views by simple rotation of its inner sleeve.

Conclusions

The developed 3-D endoscope can provide the front and front-diagonal views by simply rotating the inner sleeve; the risk of damage to fragile body tissues can therefore be significantly decreased.

15.
We propose a selective measurement method for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required by image-guided treatment and therapy systems. In recent years, ultra-tiny electromagnetic sensors have become commercially available, and many image-guided treatment or therapy systems use them for tracking the camera position and orientation. However, because of space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Continuous image registration between real and virtual bronchoscopic images therefore becomes an efficient tool for tracking the bronchoscope. Usually, image registration is performed by calculating the image similarity between real and virtual bronchoscopic images. Global similarity measures, such as mutual information, squared gray-level difference, or cross-correlation, average differences in intensity values over an entire region and therefore fail to track scenes in which few characteristic structures can be observed. The proposed method divides the entire image into a set of small subblocks and selects only those in which characteristic shapes are observed; image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method using squared gray-level differences over entire images.
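The subblock-selective similarity can be sketched as below, using plain intensity variance as a stand-in for the paper's feature values (an assumption made purely for illustration):

```python
import numpy as np

def selective_similarity(real, virtual, block=16, keep_frac=0.25):
    """Dissimilarity computed only on the most 'characteristic' subblocks.
    Intensity variance is used as a stand-in feature score here; the
    paper's actual feature values differ. Lower value = more similar."""
    h, w = real.shape
    scores, errors = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            r = real[i:i + block, j:j + block]
            v = virtual[i:i + block, j:j + block]
            scores.append(r.var())                 # how characteristic?
            errors.append(((r - v) ** 2).mean())   # local similarity term
    keep = max(1, int(len(scores) * keep_frac))
    chosen = np.argsort(scores)[::-1][:keep]       # most structured blocks
    return float(np.mean(np.asarray(errors)[chosen]))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
print(selective_similarity(img, img))  # -> 0.0 (identical images)
```

Restricting the sum to structured subblocks keeps featureless regions (which dominate a global average) from washing out the signal that actually constrains the camera pose.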

16.
Tracking the position and orientation of a 3-D ultrasound transducer has many clinical applications. Tracking requires calibration to find the transformation between the tracking sensor and the ultrasound coordinates. Typically the set of image slice data are scan converted to a Cartesian volume using assumed motor geometry and a single transformation to the sensor. We propose, instead, the calibration of individual slices using a 2-D calibration technique. A best fit to a subset of slices is performed to decrease data collection time compared with that for calibration of all slices, and to reduce the influence of random errors in individual calibrations. We compare our technique with four scan conversion-based techniques: 2-D N-wire on the center slice, N-wire using a 3-D volume, N-wire using a 3-D volume including the edge points and a new closed-form planar method using a 3-D volume. The proposed multi-slice technique produced the smallest point reconstruction error (0.82 mm using a tracked stylus).

17.
A phantom has been developed to quickly calibrate a freehand 3-D ultrasound (US) imaging system. Calibration defines the spatial relationship between the US image plane and an external tracking device attached to the scanhead. The phantom consists of a planar array of strings and beads, and a set of out-of-plane strings that guide the user to proper scanhead orientation for imaging. When an US image plane is coincident with the plane defined by the strings, the calibration parameters are calculated by matching of homologous points in the image and phantom. The resulting precision and accuracy of the 3-D imaging system are similar to those achieved with a more complex calibration procedure. The 3-D reconstruction performance of the calibrated system is demonstrated with a magnetic tracking system, but the method could be applied to other tracking devices.

18.
Ultrasound images may be difficult to interpret because of the lack of an image orientation display. To resolve this problem, a three-dimensional (3-D) ultrasound scanner has been constructed that provides spatial registration and display of the position and orientation of real-time images while allowing unconstrained movement of the scanning transducer. It consists of a conventional sector scanner, a 3-D acoustical spatial locater, and a personal computer. It displays within each image the line of intersection of the two image planes (a reference image and a live image), which guides and documents image positioning. The scanner's 3-D data also provide the potential for computer-graphic modeling of organs, the ability to calculate volumes using nonparallel, nonintersecting image planes, and the capability for 3-D color flow mapping and measurement of the Doppler angle.

19.
This paper describes a new, robust and fully automatic method for calibration of three-dimensional (3D) freehand ultrasound called Confhusius (CalibratiON for FreeHand UltraSound Imaging USage). 3D freehand ultrasound consists of mounting a position sensor on a standard probe, so that the echographic B-scans can be localized in 3D and compounded into a volume. However, especially for quantitative use, this process critically depends on a calibration procedure, which determines its accuracy and usefulness. Calibration aims at determining the transformation (translations, rotations, scaling) between the coordinate system of the echographic images and the coordinate system of the localization system. To calibrate, we acquire images of a phantom whose 3D geometrical properties are known. We propose a robust and fully automatic calibration method based on the Hough transform and robust estimators. Experiments were carried out on synthetic and real sequences, and the calibration method is shown to be easy to perform, accurate, automatic and fast enough for clinical use.

20.
To create a freehand three-dimensional (3-D) ultrasound (US) system for image-guided surgical procedures, a US beam calibration process must be performed. The calibration method presented in this work does not use a phantom to define the pixel locations in the beam in 3-D space. Rather, the described method is based on the spatial relationship between an optically tracked pointer and a similarly tracked US transducer. The pointer tip was placed in the US beam, and US images, the physical coordinates of the pointer, and the transducer location were recorded simultaneously. US image coordinates of the pointer were mapped to the physical points using two different registration methods. Two sensitivity studies were performed to determine the location and number of points needed to calibrate the beam accurately. Results showed that the beam is most efficiently calibrated with approximately 20 points collected from throughout the beam. This method of beam calibration proved to be highly accurate, yielding registration errors of approximately 0.4 mm.
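The pointer-based mapping from image coordinates to physical points can be sketched as a least-squares affine fit, one plausible form for a registration of this kind (the abstract does not specify its two methods, and the matrix values below are synthetic):

```python
import numpy as np

def fit_plane_mapping(pixels, points3d):
    """Least-squares affine map from in-plane pixel coordinates (u, v) to
    3-D physical pointer-tip positions: p = A @ [u, v, 1]."""
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    A, *_ = np.linalg.lstsq(uv1, points3d, rcond=None)
    return A.T  # 3x3: first two columns orient/scale the image axes, last is the origin

rng = np.random.default_rng(3)
A_true = np.array([[0.2, 0.0, 10.0],
                   [0.0, 0.2, -5.0],
                   [0.1, 0.0,  3.0]])
px = rng.uniform(0, 500, size=(20, 2))   # ~20 pointer placements, as in the study
p3d = (A_true @ np.hstack([px, np.ones((20, 1))]).T).T
A_est = fit_plane_mapping(px, p3d)
print(np.allclose(A_est, A_true))  # -> True
```

With noise-free synthetic points the fit recovers the mapping exactly; with real tracked-pointer data, the residual of this fit is essentially the registration error the abstract reports (about 0.4 mm).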


Copyright©北京勤云科技发展有限公司  京ICP备09084417号