Similar articles
20 similar articles found
1.
Most medical images have a poorer signal-to-noise ratio than scenes taken with a digital camera, which often leads to incorrect diagnosis. Speckle suppression in ultrasound images is one of the most important concerns in computer-aided diagnosis. This article proposes two novel, robust and efficient ultrasound image denoising techniques. The first is the enhanced ultrasound image denoising (EUID) technique, which automatically estimates the amount of speckle noise in an ultrasound image by estimating the filter's key input parameters and then denoises the image with a sigma filter. The second is ultrasound image denoising using a neural network (UIDNN), which is based on the second-order difference of pixels with an adaptive threshold in order to identify random-valued speckles and achieve highly efficient image restoration. The performance of the proposed techniques is analyzed and compared with that of other image denoising techniques. The experimental results show that the proposed techniques are valuable tools for speckle suppression: they are accurate, less tedious, and prevent the typical human errors associated with manual tasks, in addition to preserving image edges. The EUID algorithm has nearly the same peak signal-to-noise ratio (PSNR) as the Frost and speckle-reducing anisotropic diffusion (SRAD) filters, while achieving, on average, a 0.4 dB higher PSNR than the Lee, Kuan, and anisotropic diffusion filters. The UIDNN technique outperforms all the others because it identifies the noisy pixels and filters those pixels only. In general, at relatively high noise levels the proposed algorithms outperform the conventional filters.
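A minimal Python sketch of the sigma-filter stage that EUID builds on, assuming the speckle standard deviation `sigma` has already been estimated (the paper's automatic parameter estimation is not reproduced here):

```python
import numpy as np

def sigma_filter(img, sigma, radius=2, k=2.0):
    """Classic Lee sigma filter: average only those neighbors whose
    intensity lies within +/- k*sigma of the center pixel."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            win = img[i0:i1, j0:j1]
            mask = np.abs(win - img[i, j]) <= k * sigma
            out[i, j] = win[mask].mean()  # mask always includes the center
    return out
```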

2.
Kernel regression is a non-parametric estimation technique that has recently been applied successfully to image denoising and enhancement. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications: the three-dimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted in a zeroth-order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small feature vectors associated with each pixel. In turn, these features come from a second-order 3D kernel regression estimate of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, the Rician noise level is estimated automatically without any human intervention, i.e. our method is fully automated. Experimental results on synthetic and real images demonstrate that our proposal performs well relative to the other MRI denoising filters compared.
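A rough Python sketch of the zeroth-order weighted-average step, with raw local patches standing in for the paper's second-order regression features (boundary handling via wraparound and the parameter values are illustrative, not the paper's choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def kr0_denoise(vol, search=2, patch=3, h=0.05):
    """Zeroth-order 3D kernel regression sketch: each voxel becomes a
    weighted average over a (2*search+1)**3 window, with weights from
    the similarity of small per-voxel features.  Here the features are
    raw local patches compared through a box-filtered squared
    difference; the paper instead derives them from a second-order
    kernel regression of image values and gradients."""
    v = vol.astype(np.float64)
    num = np.zeros_like(v)
    den = np.zeros_like(v)
    offsets = range(-search, search + 1)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                shifted = np.roll(v, (dz, dy, dx), axis=(0, 1, 2))
                d2 = uniform_filter((v - shifted) ** 2, size=patch)
                w = np.exp(-d2 / (h * h))  # similarity weight per voxel
                num += w * shifted
                den += w
    return num / den
```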

3.
Purpose  Many shrinkage functions have been introduced and applied to the wavelet shrinkage denoising of computed tomography (CT) images. However, these functions have continuity problems and cause “shrinkage artifacts”. We therefore designed a new, smooth shrinkage function based on the noise distribution. Methods  The proposed shrinkage function was designed under four conditions: (1) use of the noise distribution, (2) shrunken coefficients covering the full range of amplitudes, (3) continuity of the function, and (4) controllability through two parameters. The designed function was applied to phantom and chest CT images, and its denoising performance was compared with that of other functions. Results  With the proposed method, edges and pixel values were better maintained than with previous functions, shrinkage artifacts occurred less often, and high-quality denoised images were obtained. Conclusions  The proposed shrinkage function is effective for low-dose, noisy CT images when its parameters are selected accurately.
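The paper's exact function is not given in the abstract; the Python sketch below shows one smooth, two-parameter shrinkage rule that satisfies the stated design conditions (continuity, full output amplitude range, two-parameter control), applied over a standard wavelet decomposition:

```python
import numpy as np
import pywt

def smooth_shrink(w, t, p=2.0):
    """Smooth shrinkage rule s(w) = w * (1 - exp(-(|w|/t)**p)):
    continuous everywhere, attenuates |w| << t, passes |w| >> t,
    and is tuned by two parameters (scale t, sharpness p)."""
    return w * (1.0 - np.exp(-(np.abs(w) / t) ** p))

def denoise_ct_slice(img, t, p=2.0, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]  # keep the approximation band untouched
    for detail in coeffs[1:]:
        out.append(tuple(smooth_shrink(d, t, p) for d in detail))
    return pywt.waverec2(out, wavelet)
```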

4.
MRI denoising using non-local means
Magnetic resonance (MR) images are affected by random noise, which limits the accuracy of any quantitative measurements made from the data. In the present work, a recently proposed filter for random noise removal is analyzed and adapted to reduce this noise in MR magnitude images. This parametric filter, called non-local means (NLM), is highly dependent on the setting of its parameters. The aim of this paper is to find the optimal parameter selection for MR magnitude image denoising. For this purpose, experiments were conducted to find the optimal parameters for different noise levels. In addition, the filter was adapted to the specific characteristics of the noise in MR magnitude images (i.e. Rician noise). From the results on synthetic and real images, we conclude that this filter can be successfully used for automatic MR denoising.
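One common way to adapt NLM to Rician magnitude noise, sketched here with scikit-image's NLM implementation: filter the squared magnitude, then remove the 2σ² Rician bias. The parameter values are illustrative, not the paper's optima:

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def nlm_rician(mag, sigma, patch_size=5, patch_distance=6, h_factor=1.2):
    """NLM for MR magnitude images: denoise the *squared* magnitude,
    then subtract the 2*sigma**2 Rician bias before taking the root."""
    sq = denoise_nl_means(mag.astype(float) ** 2,
                          patch_size=patch_size,
                          patch_distance=patch_distance,
                          h=h_factor * sigma ** 2)
    return np.sqrt(np.maximum(sq - 2.0 * sigma ** 2, 0.0))
```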

5.
The signal-to-noise ratio of high-speed fluorescence microscopy is heavily influenced by photon-counting noise and sensor noise, owing to the expectedly low photon budget. Denoising algorithms are developed to reduce these noise fluctuations in microscopy data by incorporating additional knowledge or assumptions about the imaging system or the biological specimen. One question arises: is there a theoretical precision limit on the performance of a microscopy denoising algorithm? In this paper, combining the Cramér-Rao lower bound with constraints and the low-pass-filter property of microscope systems, we develop a method to calculate a theoretical lower bound on the variance of microscopy image denoising. We show that this lower bound is influenced by photon count, readout noise, detection wavelength, effective pixel size, and the numerical aperture of the microscope system. We demonstrate our development by comparing multiple state-of-the-art denoising algorithms against this bound. The method establishes a framework for generating a theoretical performance limit, under specific prior knowledge or assumptions, as a reference benchmark for microscopy denoising algorithms.
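For intuition about the unconstrained single-pixel case (the paper's bound additionally folds in the band-limiting optical transfer function, wavelength, pixel size, and numerical aperture, which is not reproduced here): for a pixel whose photon count k is Poisson with mean N, the Fisher information gives

```latex
\[
  \mathcal{I}(N)
    = \mathbb{E}\!\left[\Big(\tfrac{\partial}{\partial N}\log p(k \mid N)\Big)^{2}\right]
    = \frac{1}{N}
  \qquad\Longrightarrow\qquad
  \operatorname{Var}\big(\hat{N}\big) \;\ge\; \mathcal{I}(N)^{-1} = N .
\]
```

Adding independent Gaussian readout noise of variance σ_r² raises the raw single-pixel measurement variance to N + σ_r², which matches the abstract's statement that the bound depends on both photon count and readout noise; the constrained, system-aware bound in the paper is structurally richer but follows the same logic.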

6.
MEG and EEG data contain additive correlated noise generated by environmental and physiological sources. To suppress this type of spatially coloured noise, source estimation is often performed with spatial whitening based on a measured or estimated noise covariance matrix. However, artifacts that span relatively small noise subspaces, such as cardiac, ocular, and muscle artifacts, are often explicitly removed by a variety of denoising methods (e.g., signal space projection) before source imaging. Here, we introduce a new approach, the spectral signal space projection (S³P) algorithm, in which time-frequency (TF)-specific spatial projectors are designed and applied to the noisy TF-transformed data, and whitened source estimation is performed in the TF domain. The approach can be used to derive spectral variants of all linear time-domain whitened source estimation algorithms. The denoised sensor and source time series are obtained by the corresponding inverse TF transform. The method is evaluated and compared with existing subspace projection and signal separation techniques using experimental data. Altogether, S³P provides an expanded framework for MEG/EEG data denoising and whitened source imaging in both the time and frequency/scale domains.
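A Python sketch of the per-bin projection step only (the whitened TF-domain source estimation is not shown; `artifact_tf` is a hypothetical input holding per-frequency artifact topographies):

```python
import numpy as np

def ssp_projector(patterns):
    """Orthogonal projector that removes the spatial subspace spanned
    by the artifact topographies in `patterns` (channels x n_patterns)."""
    U, _, _ = np.linalg.svd(patterns, full_matrices=False)
    return np.eye(U.shape[0]) - U @ U.conj().T

def s3p_denoise(tf_data, artifact_tf):
    """Apply a *frequency-specific* projector to each bin of the
    TF-transformed data (channels x freqs x times); artifact_tf[f]
    holds the artifact patterns estimated for frequency bin f."""
    out = np.empty_like(tf_data)
    for f in range(tf_data.shape[1]):
        out[:, f, :] = ssp_projector(artifact_tf[f]) @ tf_data[:, f, :]
    return out
```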

7.
Objective  To compare image quality and dose reduction between low-dose contrast-enhanced abdominal and pelvic CT reconstructed with adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR) and conventional-dose CT reconstructed with traditional filtered back projection (FBP). Methods  Thirty-one patients underwent conventional-dose contrast-enhanced abdominal and pelvic CT with FBP reconstruction; at follow-up, they underwent low-dose enhanced scans reconstructed with 40% ASIR and with MBIR. Two radiologists scored the images for sharpness, noise, artifacts, and diagnostic acceptability; noise and CT values were measured, and SNR was calculated. The dose-length product (DLP) and CT dose index (CTDIvol) of each examination were recorded, and the dose reduction rate was calculated. Results  DLP and CTDIvol for the low-dose scans were 328.95±206.35 mGy·cm and 7.96±4.30 mGy, versus 689.27±339.63 mGy·cm and 16.81±7.19 mGy for conventional-dose FBP. For abdominal and pelvic organs, low-dose MBIR images had lower noise and higher SNR than both low-dose 40% ASIR and conventional-dose FBP images (all P<0.0167). Objective results were similar between low-dose 40% ASIR and conventional-dose FBP images (P>0.0167). Both MBIR and 40% ASIR improved density resolution and reduced beam-hardening artifacts; MBIR images showed lower noise and fewer artifacts than 40% ASIR and received better subjective scores. Conclusion  With image quality preserved, low-dose scanning with MBIR or ASIR markedly reduces radiation dose compared with conventional-dose FBP reconstruction; compared with ASIR, MBIR provides better image quality and has the potential for further dose reduction.

8.
To fully delineate the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use multimodal paired registered images as inputs for segmentation tasks. However, such paired images are difficult to obtain in some cases. Furthermore, CNNs trained on one specific modality may fail on others when images are acquired with different imaging protocols and scanners. Developing a unified model that can segment the target objects from unpaired multiple modalities is therefore significant for many clinical applications. In this work, we propose a 3D unified generative adversarial network that unifies any-to-any modality translation and multimodal segmentation in a single network. Since the anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and implicitly generate additional training data. To fully utilize the segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation indicate that our method outperforms existing unified methods.

9.
Computed tomography (CT) has seen a rapid increase in use in recent years, and radiation from CT accounts for a significant proportion of total medical radiation. Given the known harmful effects of radiation exposure on the human body, this growing use of CT has prompted efforts to reduce the radiation dose delivered during the procedure, and low-dose CT has attracted major attention in radiology. Reducing the CT radiation dose, however, compromises the signal-to-noise ratio, which degrades image quality and diagnostic performance. Several denoising methods have therefore been developed and applied in image processing to reduce image noise. Recently, deep learning applications that improve image quality by reducing noise and artifacts have become commercially available for diagnostic imaging. Deep learning image reconstruction shows great potential as an advanced reconstruction method for improving the quality of clinical CT images. These improvements can provide significant benefit to patients regardless of their disease, and further advances are expected in the near future.

10.

Purpose

   Image noise in computed tomography (CT) images may vary significantly from location to location due to tissue properties, dose, and the position of the X-ray source. We developed and tested an automated tissue-based estimator (TBE) method for estimating local noise in CT images.

Method

   An automated TBE method for estimating the local noise in a CT image was developed in three steps: (1) partition the image into homogeneous and transition regions; (2) for each pixel in the homogeneous regions, compute the standard deviation in a 15 × 15 × 1 voxel local region using only pixels from the same homogeneous region; and (3) interpolate the noise estimate from the homogeneous regions into the transition regions. Noise-aware fat segmentation was implemented. Experiments were conducted on an anthropomorphic phantom and on in vivo low-dose chest CT scans to validate the TBE, characterize the magnitude of local noise variation, and determine the sensitivity of the noise estimates to the size of the region in which noise is computed. The TBE was tested on all scans from the Early Lung Cancer Action Program public database and was evaluated quantitatively on the phantom data and qualitatively on the in vivo data.
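A Python sketch of steps (1)-(2), assuming the homogeneous/transition partition is already available as a label image (step (3), interpolation into transition regions, is omitted):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_noise(img, labels, size=15):
    """Per-pixel noise as the local standard deviation computed only
    over pixels of the same homogeneous region (labels > 0); pixels in
    transition regions (label 0) are left as NaN to be filled by
    interpolation afterwards."""
    noise = np.full(img.shape, np.nan)
    img = img.astype(float)
    for lab in np.unique(labels):
        if lab == 0:
            continue  # transition region
        m = (labels == lab).astype(float)
        # masked local mean and mean of squares over a size x size box
        cnt = uniform_filter(m, size)
        s1 = uniform_filter(img * m, size)
        s2 = uniform_filter(img * img * m, size)
        with np.errstate(invalid='ignore', divide='ignore'):
            mu = s1 / cnt
            var = s2 / cnt - mu ** 2
        sd = np.sqrt(np.maximum(var, 0))
        noise[labels == lab] = sd[labels == lab]
    return noise
```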

Results

   The results show that noise can vary locally by more than 200 Hounsfield units on low-dose in vivo chest CT scans and that the TBE can characterize these noise variations to within 5%. The new fat segmentation algorithm improved segmentation on all 50 scans tested.

Conclusion

   The TBE provides a means of estimating noise for image quality monitoring, optimization of denoising algorithms, and improvement of segmentation algorithms. The TBE was shown to accurately characterize the large local noise variations that occur due to changes in material, dose, and X-ray source position.

11.
Knowledge of the noise distribution in magnitude diffusion MRI images is the centerpiece of quantifying the uncertainty arising from the acquisition process. The use of parallel imaging methods, the number of receiver coils, and the imaging filters applied by the scanner, amongst other factors, dictate the resulting signal distribution. Accurate estimation beyond textbook Rician or noncentral chi distributions often requires information about the acquisition process (e.g., coil sensitivity maps or reconstruction coefficients), which is usually not available. We introduce two new automated methods that use the moment and maximum-likelihood equations of the Gamma distribution to estimate noise distributions; because these equations explicitly depend on the number of coils, all unknown parameters can be estimated from the magnitude data alone. A rejection step makes the framework automatic and robust to artifacts. Simulations with stationary and spatially varying noncentral chi noise distributions were created for two diffusion weightings with SENSE or GRAPPA reconstruction and 8, 12, or 32 receiver coils. Furthermore, MRI data of a water phantom with different combinations of parallel imaging were acquired on a 3T Philips scanner along with noise-only measurements. Finally, experiments on freely available datasets from a single subject acquired on a 3T GE scanner were used to assess reproducibility when limited information about the acquisition protocol is available. Additionally, we demonstrated the applicability of the proposed methods to a bias correction and denoising task on an in vivo dataset acquired on a 3T Siemens scanner. A generalized version of the bias correction framework for non-integer degrees of freedom is also introduced. The proposed framework is compared with three other algorithms on datasets from three vendors employing different reconstruction methods. Simulations showed that assuming a Rician distribution can lead to misestimation of the noise distribution in parallel imaging. Results on the acquired datasets showed that signal leakage in multiband acquisitions can also lead to misestimation of the noise distribution. Repeated acquisitions of in vivo datasets show that the estimated parameters are stable and have lower variability than those of the compared methods. Results for the bias correction and denoising task show that the proposed methods reduce the appearance of noise at high b-values. The proposed algorithms can estimate both parameters of the noise distribution automatically, are robust to signal-leakage artifacts, and perform best when used on acquired noise maps.
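A sketch of the moment-based estimator: in signal-free background, the squared sum-of-squares magnitude of an N-coil image follows a Gamma(k = N, θ = 2σ²) distribution, so the first two sample moments determine both parameters. The paper's artifact-rejection step and maximum-likelihood variant are omitted here:

```python
import numpy as np

def gamma_noise_fit(background_mag):
    """Method-of-moments fit for Gamma(k, theta): k = mean^2/var,
    theta = var/mean.  The shape recovers the effective number of
    coils N and the scale recovers 2*sigma**2."""
    t = np.asarray(background_mag, dtype=float) ** 2
    m, v = t.mean(), t.var()
    k = m * m / v            # shape -> effective number of coils
    theta = v / m            # scale -> 2 * sigma**2
    return k, np.sqrt(theta / 2.0)
```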

12.
We present a novel integrated wavelet-domain framework (w-ICA) for 3-D denoising of functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional denoising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal, and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as the expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This yields a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, more accurate source estimates can be obtained using ICA in the wavelet domain. The contributions of this work comprise two modules. First, in the analysis module, we combine a new 3-D wavelet denoising approach with the signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to the other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. The proposed framework preserved the actual activation shape consistently even at very high noise levels, in addition to significantly reducing false-positive voxels.

13.
Diffusion tensor magnetic resonance imaging (DT-MRI) is becoming a promising imaging technique in clinical applications because of its potential for in vivo and non-invasive characterization of tissue organization. However, the acquisition of diffusion-weighted images (DWIs) is often corrupted by noise and artifacts, and the intensity of diffusion-weighted signals is weaker than that of classical magnetic resonance signals. In this paper, we propose a new denoising method for DT-MRI, called structure-adaptive sparse denoising (SASD), which exploits self-similarity in DWIs. We define a similarity measure based on the local mean and on a modified structure-similarity index to find sets of similar patches that are arranged into three-dimensional arrays, and we propose a simple and efficient structure-adaptive window pursuit method to achieve sparse representation of these arrays. The noise component of the resulting structure-adaptive arrays is attenuated by Wiener shrinkage in a transform domain defined by two-dimensional principal component decomposition and the Haar transform. Experiments on both synthetic and real cardiac DT-MRI data show that the proposed SASD algorithm outperforms state-of-the-art methods for denoising images with structural redundancy. Moreover, SASD achieves a good trade-off between image contrast and image smoothness, and our experiments on synthetic data demonstrate that it produces more accurate tensor fields from which biologically relevant metrics can then be computed.
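The Wiener shrinkage rule at the core of the attenuation step, sketched in isolation (in the paper it is applied to the 2D-PCA + Haar coefficients of the structure-adaptive patch arrays, not to raw coefficients as here):

```python
import numpy as np

def wiener_shrink(coeffs, sigma_n):
    """Empirical Wiener attenuation: scale each transform coefficient
    by s2 / (s2 + sigma_n**2), where s2 is the estimated signal energy
    (here max(c**2 - sigma_n**2, 0))."""
    s2 = np.maximum(coeffs ** 2 - sigma_n ** 2, 0.0)
    return coeffs * s2 / (s2 + sigma_n ** 2 + 1e-12)
```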

14.
Clinical evidence has shown that rib-suppressed chest X-rays (CXRs) can improve the reliability of pulmonary disease diagnosis. However, previous approaches to generating rib-suppressed CXRs face challenges in preserving details and eliminating rib residues. We hereby propose a GAN-based disentanglement learning framework called Rib Suppression GAN, or RSGAN, which performs rib suppression by utilizing the anatomical knowledge embedded in unpaired computed tomography (CT) images. In this approach, we employ a residual map to characterize the intensity difference between a CXR and the corresponding rib-suppressed result. To predict the residual map in the CXR domain, we disentangle the image into structure- and contrast-specific features and transfer rib structural priors from digitally reconstructed radiographs (DRRs) computed from CT. Furthermore, we employ an additional adaptive loss to suppress rib residue and preserve more details. We conduct extensive experiments based on 1673 CT volumes and four benchmark CXR datasets, totaling over 120K images, to demonstrate that (i) our proposed RSGAN achieves superior image quality compared to state-of-the-art rib suppression methods, and (ii) combining CXRs with our rib-suppressed results leads to better performance in lung disease classification and tuberculosis area detection.

15.
The availability of a large amount of annotated data is critical for many medical image analysis applications, in particular those relying on deep learning methods, which are known to be data-hungry. However, annotated medical data, especially multimodal data, are often scarce and costly to obtain. In this paper, we address the problem of synthesizing multi-parameter magnetic resonance imaging (mp-MRI) data, which typically consist of apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, containing clinically significant (CS) prostate cancer (PCa), via semi-supervised learning and adversarial learning. Specifically, our synthesizer generates mp-MRI data in a sequential manner: first a decoder generates an ADC map from a 128-d latent vector, and then the ADC map is translated to the T2w image via a U-Net. The synthesizer is trained in a semi-supervised manner. In the supervised training process, a limited amount of paired ADC-T2w images and the corresponding ADC encodings are provided, and the synthesizer learns the paired relationship by explicitly minimizing the reconstruction losses between synthetic and real images. To avoid overfitting to the limited ADC encodings, an unlimited amount of random latent vectors and unpaired ADC-T2w images are utilized in the unsupervised training process to learn the marginal image distributions of the real images. To improve the robustness of synthesizer training, we decompose the difficult task of generating full-size images into several simpler tasks that generate sub-images only. A StitchLayer is then employed to seamlessly fuse the sub-images together, in an interlaced manner, into a full-size image. In addition, to enforce that the synthetic images indeed contain distinguishable CS PCa lesions, we propose to also maximize an auxiliary Jensen-Shannon divergence (JSD) distance between CS and non-CS images. Experimental results show that our method can effectively synthesize a large variety of mp-MRI images which contain meaningful CS PCa lesions, display good visual quality, and have the correct paired relationship between the two modalities. Compared to state-of-the-art methods based on adversarial learning (Liu and Tuzel, 2016; Costa et al., 2017), our method achieves a significant improvement in terms of both visual quality and several popular quantitative evaluation metrics.

16.
This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion, and we denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and on real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts.

17.
Objective  To compare the effects of adaptive statistical iterative reconstruction (ASIR), conventional model-based iterative reconstruction (MBIRc), and the lung-specific settings (MBIRRP20 and MBIRNR40) of next-generation model-based iterative reconstruction (MBIRn) on the image quality of sub-mSv chest CT. Methods  Thirty subjects who underwent two unenhanced chest CT scans were enrolled. The initial scan used a conventional dose (noise index = 14) with ASIR reconstruction. The follow-up scan used a low-dose protocol (noise index = 28) and was reconstructed with standard- and lung-kernel ASIR, MBIRc, MBIRRP20, and MBIRNR40 at a slice thickness of 0.625 mm. On standard-kernel ASIR, MBIRc, and MBIRNR40 images, CT values and noise (SD) were measured in matched ROIs in back muscle and subcutaneous fat at the thoracic inlet, subcarinal, and hepatic hilum levels, SNR was calculated, and SD and SNR were compared across reconstruction algorithms with one-way ANOVA. Subjective 5-point scores for noise and for sharpness of fine structures were assigned on lung-window ASIR, MBIRc, and MBIRRP20 images and on mediastinal-window standard-kernel ASIR, MBIRc, and MBIRNR40 images, and analyzed with the Wilcoxon signed-rank test. Results  The effective dose was 3.01±1.89 mSv for the initial scan and 0.88±0.83 mSv at follow-up, a reduction of about 70.76%. Image noise of MBIRNR40 was significantly lower, and the absolute SNR significantly higher, than those of conventional-dose ASIR, low-dose ASIR, and MBIRc (all P<0.05). Subjective noise scores for MBIRNR40 were lower than those for conventional-dose ASIR and MBIRc (all P<0.05); MBIRn displayed fine structures of the lung, mediastinum, and upper abdomen more clearly, scoring higher than MBIRc and ASIR (P<0.05). Conclusion  For unenhanced chest CT, the lung-specific MBIRNR40 setting of MBIRn markedly reduces image noise and improves SNR relative to ASIR and MBIRc, permitting a dose reduction of about 70%; under low-dose conditions, MBIRRP20 better depicts intrapulmonary detail, while MBIRNR40 better depicts mediastinal and upper-abdominal fine structures.

18.
Photon shot noise is the main noise source in optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform (DWT) based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the DWT has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual-tree complex wavelet transform is used in our proposed denoising algorithm, which is based on the assumption that, in the Poisson noise case, threshold values for the wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with a state-of-the-art denoising algorithm and obtained better results in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low-light-intensity conditions. OCIS codes: (100.0100) Image processing, (100.7410) Wavelets, (100.3020) Image reconstruction-restoration
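A Python sketch of the threshold-from-approximation idea, with a plain orthonormal Haar DWT standing in for the dual-tree complex wavelet transform (the constant `k` and the level count are illustrative, not the paper's choices):

```python
import numpy as np
import pywt

def poisson_wavelet_denoise(img, levels=3, k=3.0):
    """For Poisson noise the variance equals the mean, so the
    approximation band (~ 2**level * local mean for orthonormal Haar)
    yields a local detail-coefficient threshold at every level.
    Assumes image sides divisible by 2**levels."""
    a = np.asarray(img, dtype=float)
    details = []
    for lvl in range(1, levels + 1):
        a, (h, v, d) = pywt.dwt2(a, 'haar')
        # k * local Poisson std, estimated from the approximation band
        t = k * np.sqrt(np.maximum(a, 0) / 2 ** lvl)
        soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
        details.append((soft(h), soft(v), soft(d)))
    for lvl in range(levels, 0, -1):
        a = pywt.idwt2((a, details[lvl - 1]), 'haar')
    return a
```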

19.
Image registration techniques that require image interpolation are widely used in neuroimaging research. We show that the signal variance in interpolated images differs significantly from the signal variance of the original images in native space. We describe a simple approach to computing the signal variance in registered images based on the signal variance and covariance of the original images, the spatial transformations computed by the registration procedure, and the interpolation or approximation kernel chosen. The method is general and can handle various sources of signal variability, such as thermal noise and physiological noise, provided that their effects can be assessed in the original images. Our approach is applied to diffusion tensor (DT) MRI data, assuming only thermal noise as the source of variability in the data. We show that incorrect noise variance estimates in registered diffusion-weighted images can affect DT parameters as well as indices of goodness of fit such as chi-square maps. Beyond DT-MRI, we believe this methodology would be useful whenever parameter extraction methods are applied to registered or interpolated data, such as in relaxometry and functional MRI studies.
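The core variance-propagation identity is easy to state: an interpolated value is a weighted sum of source voxels, so its variance follows from the kernel weights and the source covariance. A minimal sketch:

```python
import numpy as np

def interp_variance(weights, cov):
    """Variance of an interpolated value y = w @ x, for source voxels x
    with covariance matrix cov: Var(y) = w^T cov w.  With independent,
    equal-variance thermal noise this reduces to sigma**2 * sum(w**2),
    which is below sigma**2 for smoothing kernels -- interpolation
    lowers the apparent noise level, the effect the paper corrects for."""
    w = np.asarray(weights, dtype=float)
    return w @ np.asarray(cov, dtype=float) @ w

# e.g. midpoint linear interpolation of two iid unit-variance voxels:
# interp_variance([0.5, 0.5], np.eye(2)) -> 0.5
```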

20.
Tohka J, Foerde K, Aron AR, Tom SM, Toga AW, Poldrack RA. NeuroImage 2008;39(3):1227-1245
Blood oxygenation level dependent (BOLD) signals in functional magnetic resonance imaging (fMRI) are often small compared to the level of noise in the data. The sources of noise are numerous, including various kinds of motion artifacts and physiological noise with complex patterns, which complicates the statistical analysis of fMRI data. In this study, we propose an automatic method to reduce fMRI artifacts based on independent component analysis (ICA). We trained a supervised classifier to distinguish between independent components relating to a potentially task-related signal and independent components clearly relating to structured noise. After the components had been classified as either signal or noise, a denoised fMRI time series was reconstructed from only the independent components classified as potentially task-related. The classifier was a novel global (fixed-structure) decision tree trained in a Neyman-Pearson (NP) framework, which allowed the shape of the decision regions to be controlled effectively; additionally, the conservativeness of the classifier could be tuned by modifying the NP threshold. The classifier was tested against component classifications by an expert, using data from a category learning task; both the test set and the expert differed from the data used for classifier training and from the expert who labeled the training set. The misclassification rate was between 0.2 and 0.3 for both the event-related and blocked designs, and it was consistent across a variety of NP thresholds. The effects of denoising on the group-level statistical analyses were as expected: denoising generally decreased Z-scores in the white matter, where extreme Z-values can be expected to reflect artifacts, and a similar but weaker decrease in Z-scores was observed on average in the gray matter. These two observations suggest that denoising was likely to reduce artifacts in the gray matter and could be useful for improving the detection of activations. We conclude that automatic ICA-based denoising offers a potentially useful approach to improve the quality of fMRI data and, consequently, the accuracy of their statistical analysis.
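A minimal sketch of the decompose-classify-reconstruct pipeline using scikit-learn's FastICA; the `classify` callback stands in for the paper's Neyman-Pearson decision tree, and the component count is an illustrative choice:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_denoise(X, classify, n_components=20):
    """ICA-based fMRI denoising sketch: decompose a voxels x time
    matrix into independent components, keep only those the classifier
    labels as potentially task-related, and rebuild the time series.
    `classify` maps (spatial map, time course) -> bool."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    S = ica.fit_transform(X)        # spatial maps: voxels x components
    A = ica.mixing_                 # time courses: time x components
    keep = np.array([classify(S[:, k], A[:, k]) for k in range(S.shape[1])])
    return S[:, keep] @ A[:, keep].T + ica.mean_
```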
