Similar Documents
20 similar documents found (search time: 15 ms).
1.
Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves both tumor segmentation and classification over single-task learning counterparts.
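As a rough illustration of the joint training idea described above, the following sketch wires a shared encoder to a segmentation head and a classification head and sums their losses. It is a minimal stand-in, not the paper's architecture: the layer sizes, the 0.5 loss weight, and the `JointSegCls` name are all assumptions.

```python
# Minimal sketch of joint segmentation/classification multi-task training.
# The actual sub-networks in the paper (encoder-decoder + light-weight
# multi-scale classifier) differ; this only shows the shared-encoder pattern.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSegCls(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)       # pixel-wise tumor mask
        self.cls_head = nn.Linear(32, n_classes)  # e.g., benign vs. malignant

    def forward(self, x):
        feat = self.encoder(x)
        seg_logits = self.seg_head(feat)
        cls_logits = self.cls_head(feat.mean(dim=(2, 3)))  # global avg pool
        return seg_logits, cls_logits

model = JointSegCls()
x = torch.randn(2, 1, 64, 64)
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
label = torch.randint(0, 2, (2,))
seg_logits, cls_logits = model(x)
# Joint loss: segmentation term plus a weighted classification term.
loss = F.binary_cross_entropy_with_logits(seg_logits, mask) \
     + 0.5 * F.cross_entropy(cls_logits, label)
loss.backward()
```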

2.
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the restricted image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, particularly under limited training data conditions. With this approach, we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance relative to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This yields high-quality segmentation of larger structures such as the placenta, which extend beyond the field-of-view of a single probe, with reduced image artifacts.

3.
With the recent development of deep learning, classification and segmentation for computer-aided diagnosis (CAD) of intracranial hemorrhage (ICH) on non-contrast head computed tomography (NCCT) have become popular in emergency medical care. However, several challenges remain, such as the difficulty of training due to the heterogeneity of ICH, the requirement for high performance in both sensitivity and specificity, the excessive cost of patient-level predictions, and vulnerability to real-world external data. In this study, we propose a supervised multi-task aiding representation transfer learning network (SMART-Net) for ICH to overcome these challenges. The proposed framework consists of upstream and downstream components. In the upstream, a weight-shared encoder is trained as a robust feature extractor that captures global features by performing slice-level multi-pretext tasks (classification, segmentation, and reconstruction). Adding a consistency loss to regularize discrepancies between the classification and segmentation heads significantly improves representation quality and transferability. In the downstream, transfer learning is conducted with the pre-trained encoder and a 3D operator (classifier or segmenter) for volume-level tasks. Extensive ablation studies were conducted, and SMART-Net was developed with the optimal multi-pretext task combination and 3D operator. Experimental results on four test sets (one internal and two external test sets that reflect the natural incidence of ICH, and one public test set with a relatively small number of ICH cases) indicate that SMART-Net is more robust and performs better at volume-level ICH classification and segmentation than previous methods. All code is available at https://github.com/babbu3682/SMART-Net.
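The head-consistency idea above could look roughly like the following sketch, which penalizes disagreement between the image-level classification probability and the max-pooled segmentation probability. The max-pooling aggregation and MSE penalty are assumptions; SMART-Net's actual consistency loss may be formulated differently.

```python
# Hedged sketch of a classification/segmentation head-consistency regularizer:
# the image-level ICH probability should agree with the strongest pixel
# evidence in the segmentation map.
import torch
import torch.nn.functional as F

def consistency_loss(cls_logit, seg_logits):
    """cls_logit: (B,) image-level logits; seg_logits: (B,1,H,W) pixel logits."""
    p_cls = torch.sigmoid(cls_logit)
    # Aggregate pixel probabilities to an image-level score via max pooling.
    p_seg = torch.sigmoid(seg_logits).amax(dim=(1, 2, 3))
    return F.mse_loss(p_cls, p_seg)

cls_logit = torch.randn(4, requires_grad=True)
seg_logits = torch.randn(4, 1, 32, 32, requires_grad=True)
loss = consistency_loss(cls_logit, seg_logits)
loss.backward()
```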

4.
Pixel-wise error correction of initial segmentation results provides an effective way to improve quality. An additional error segmentation network learns to distinguish correct predictions from incorrect ones, and its performance directly affects accuracy on the test set and the subsequent self-training with error-corrected pseudo labels. In this paper, we propose a novel label rectification method based on error correction, namely ECLR, which can be added directly after a fully-supervised segmentation framework. Moreover, it can be used to guide the semi-supervised learning (SSL) process, constituting an error-correction-guided SSL framework, called ECGSSL. Specifically, we analyze the types and causes of segmentation errors and divide them into intra-class errors and inter-class errors, caused respectively by intra-class inconsistency and inter-class similarity in segmentation. Further, we propose a collaborative multi-task discriminative error prediction network (DEP-Net) to highlight the two error types. For better training of DEP-Net, we propose specific mask degradation methods that represent typical segmentation errors. Under the fully-supervised regime, the pre-trained DEP-Net directly rectifies the initial segmentation results of the test set. Under the semi-supervised regime, a dual error correction method is proposed for unlabeled data to enable more reliable network re-training. Our method is easy to apply to different segmentation models. Extensive experiments on gland segmentation verify that ECLR yields substantial improvements over initial segmentation predictions. ECGSSL shows consistent improvements over a supervised baseline learned only from labeled data and achieves competitive performance compared with other popular semi-supervised methods.
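A plausible sketch of the mask degradation idea is below: erosion/dilation mimics inter-class boundary errors, hole-punching mimics intra-class inconsistency, and the difference to the clean mask becomes the error-map supervision. The specific operations and parameters are assumptions, not ECLR's published recipe.

```python
# Hedged sketch of mask degradation for training an error-prediction network.
import numpy as np
from scipy import ndimage

def degrade_mask(mask, rng, max_iter=3, hole_size=8):
    """mask: binary (H, W) array. Returns a corrupted copy."""
    out = mask.copy().astype(bool)
    # Inter-class style error: randomly erode or dilate the boundary.
    it = int(rng.integers(1, max_iter + 1))
    if rng.random() < 0.5:
        out = ndimage.binary_erosion(out, iterations=it)
    else:
        out = ndimage.binary_dilation(out, iterations=it)
    # Intra-class style error: punch a random hole inside the object.
    ys, xs = np.nonzero(out)
    if len(ys) > 0:
        i = rng.integers(len(ys))
        y, x = ys[i], xs[i]
        out[max(0, y - hole_size):y + hole_size,
            max(0, x - hole_size):x + hole_size] = False
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
mask = np.zeros((64, 64), np.uint8); mask[16:48, 16:48] = 1
corrupted = degrade_mask(mask, rng)
error_map = (corrupted != mask).astype(np.uint8)  # supervision for DEP-Net
```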

5.
Automatic segmentation of organs at risk is crucial to aid diagnosis and remains a challenging task in the medical image analysis domain. We use multi-task learning (MTL) to accurately delineate the contours of organs at risk in CT images, training an encoder-decoder network on two tasks in parallel. The main task is organ segmentation, a pixel-level classification of the CT images; the auxiliary task is multi-label organ classification, an image-level multi-label classification of the CT images. To boost the performance of the multi-label classification, we propose a weighted mean cross-entropy loss function for network training, where the weights are the global conditional probabilities between pairs of organs. Based on MTL, we optimize the false positive filtering (FPF) algorithm to decrease the number of falsely segmented organ pixels in the CT images. Specifically, we propose a dynamic threshold selection (DTS) strategy to prevent true positive rates from decreasing when the FPF algorithm is used. We validate these methods on the public ISBI 2019 segmentation of thoracic organs at risk (SegTHOR) challenge dataset and a private medical organ dataset. The experimental results show that networks using our proposed methods outperform basic encoder-decoder networks without increasing training time complexity.
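One way such a co-occurrence-weighted loss could be realized is sketched below, with per-class weights derived from conditional probabilities estimated on the training labels. The exact weighting formula in the paper may differ; `cooccurrence_weights` and its normalization are illustrative assumptions.

```python
# Hedged sketch of a weighted multi-label cross-entropy, with class weights
# derived from organ co-occurrence statistics over the training set.
import torch
import torch.nn.functional as F

def cooccurrence_weights(labels):
    """labels: (N, C) binary multi-label matrix over the training set.
    Returns (C,) weights from the mean conditional probability P(c | other)."""
    labels = labels.float()
    counts = labels.sum(0).clamp(min=1.0)   # (C,) per-organ frequency
    co = labels.t() @ labels                # (C, C) co-occurrence counts
    cond = co / counts.unsqueeze(0)         # cond[i, j] ~ P(organ i | organ j)
    cond.fill_diagonal_(0)
    w = cond.mean(dim=1)                    # average conditional evidence
    return w / w.sum() * labels.shape[1]    # normalize to mean ~1

train_labels = torch.randint(0, 2, (100, 5))
w = cooccurrence_weights(train_labels)
logits = torch.randn(8, 5, requires_grad=True)
target = torch.randint(0, 2, (8, 5)).float()
loss = F.binary_cross_entropy_with_logits(logits, target, weight=w)
loss.backward()
```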

6.
Clinical diagnosis of the pediatric musculoskeletal system relies on the analysis of medical imaging examinations. In the medical image processing pipeline, semantic segmentation using deep learning algorithms enables the automatic generation of patient-specific three-dimensional anatomical models, which are crucial for morphological evaluation. However, the scarcity of pediatric imaging resources may result in reduced accuracy and generalization performance of individual deep segmentation models. In this study, we propose a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over the union of multiple datasets arising from distinct parts of the anatomy. Unlike previous approaches, we simultaneously consider multiple intensity domains and segmentation tasks to overcome the inherent scarcity of pediatric data while leveraging features shared between imaging datasets. To further improve generalization, we employ a transfer learning scheme from natural image classification, along with a multi-scale contrastive regularization aimed at promoting domain-specific clusters in the shared representations, and multi-joint anatomical priors to enforce anatomically consistent predictions. We evaluate our contributions on bone segmentation using three scarce pediatric imaging datasets of the ankle, knee, and shoulder joints. Our results demonstrate that the proposed approach outperforms individual, transfer, and shared segmentation schemes in the Dice metric by statistically significant margins. The proposed model brings new perspectives towards intelligent use of imaging resources and better management of pediatric musculoskeletal disorders.

7.
Fully convolutional networks (FCNs), including UNet and VNet, are widely used architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with the cross-entropy or Dice loss, which calculates the error between predictions and ground-truth labels for each pixel independently. This often results in non-smooth neighborhoods in the predicted segmentation, a problem that becomes more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional learning with the cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
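The online voxel-wise tuple sampling could be sketched as follows: anchors and positives are drawn from prostate voxels and negatives from background voxels of the intermediate feature map, then fed to a standard triplet margin loss. The sampling size and uniform-random strategy are assumptions; the paper's sampler is more elaborate.

```python
# Hedged sketch of online voxel-wise triplet sampling from a feature map,
# supervised by the segmentation mask.
import torch
import torch.nn.functional as F

def voxel_triplet_loss(feat, mask, n=128, margin=1.0):
    """feat: (C, H, W) intermediate features; mask: (H, W) binary mask."""
    c = feat.shape[0]
    flat = feat.reshape(c, -1).t()           # (H*W, C) voxel embeddings
    labels = mask.reshape(-1)
    pos_idx = torch.nonzero(labels == 1).squeeze(1)
    neg_idx = torch.nonzero(labels == 0).squeeze(1)
    if len(pos_idx) < 2 or len(neg_idx) < 1:
        return feat.sum() * 0.0              # degenerate mask: no loss
    a = pos_idx[torch.randint(len(pos_idx), (n,))]   # anchors (foreground)
    p = pos_idx[torch.randint(len(pos_idx), (n,))]   # positives (foreground)
    ng = neg_idx[torch.randint(len(neg_idx), (n,))]  # negatives (background)
    return F.triplet_margin_loss(flat[a], flat[p], flat[ng], margin=margin)

feat = torch.randn(32, 64, 64, requires_grad=True)
mask = (torch.rand(64, 64) > 0.7).long()
loss = voxel_triplet_loss(feat, mask)
loss.backward()
```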

8.
Ultrasonography is regarded as an effective technique for the detection, diagnosis, and monitoring of thyroid nodules, and segmentation of thyroid nodules on ultrasound images is important in clinical practice. However, because the boundary between thyroid nodules and surrounding tissues is unclear in ultrasound images, segmentation accuracy remains a challenge. Although deep learning models provide an accurate and convenient approach to thyroid nodule segmentation, existing models remain unsatisfactory at segmenting nodule margins. In this study, we developed the boundary attention transformer net (BTNet), a novel segmentation network with a boundary attention mechanism that combines the advantages of convolutional neural networks and transformers and can fuse features of both long and short ranges. The boundary attention module is designed to focus on learning boundary information, enhancing the network's ability to segment boundaries. For features at different scales, we also incorporate a deep supervision mechanism that blends the outputs of different levels to enhance segmentation. Because BTNet combines long range-short range connectivity with boundary-region cooperation, it achieves excellent performance in thyroid nodule segmentation. BTNet was developed using a dataset from Shanghai Jiao Tong University School of Medicine Affiliated Sixth People's Hospital together with a public dataset. BTNet achieved good performance in the segmentation of thyroid nodules, with an intersection-over-union of 0.810 and a Dice coefficient of 0.892. Moreover, our work revealed great improvement in boundary metrics: for example, the boundary distance was 7.308, the boundary overlap 0.201, and the boundary Dice 0.194, all with p values < 0.05.

9.
We present a novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground-truth annotations for both the primary and auxiliary tasks. In contrast, we propose to generate pseudo-labels for the auxiliary task in an unsupervised manner. To generate the pseudo-labels, we leverage Histograms of Oriented Gradients (HOG), one of the most widely used and powerful hand-crafted features for detection. Together with the ground-truth semantic segmentation masks for the primary task and the pseudo-labels for the auxiliary task, we learn the parameters of the deep network by jointly minimizing the losses of both the primary and auxiliary tasks. We apply our method to two powerful and widely used semantic segmentation networks, UNet and U2Net, trained in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation datasets. The extensive quantitative and qualitative results show that our method consistently improves performance over its counterpart. Moreover, our method won the FetReg EndoVis sub-challenge on semantic segmentation organized in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
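Generating the HOG pseudo-labels might look like the sketch below, which uses scikit-image's `hog` with the block layout preserved so the responses form a dense auxiliary target map. The cell size, orientation count, and reshaping are configuration assumptions, not the paper's exact settings.

```python
# Hedged sketch of HOG-based pseudo-label generation for the auxiliary task.
import numpy as np
from skimage.feature import hog
from skimage import data

image = data.camera().astype(np.float32) / 255.0  # stand-in grayscale image

# feature_vector=False keeps the spatial block layout so the HOG response
# can be reshaped into a dense auxiliary target map.
hog_map = hog(image, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(1, 1), feature_vector=False)

# (n_blocks_row, n_blocks_col, 1, 1, 9) -> (H/8, W/8, 9) pseudo-label map,
# which a downsampled auxiliary head could regress alongside segmentation.
pseudo_label = hog_map.reshape(hog_map.shape[0], hog_map.shape[1], -1)
print(pseudo_label.shape)
```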

10.
Deep learning techniques for 3D brain vessel image segmentation have not been as successful as for the segmentation of other organs and tissues. This can be explained by two factors. First, deep learning techniques tend to perform poorly when segmenting objects that are small relative to the full image. Second, due to the complexity of vascular trees and the small size of vessels, it is challenging to obtain the amount of annotated training data typically needed by deep learning methods. To address these problems, we propose a novel annotation-efficient deep learning framework for vessel segmentation. The framework avoids pixel-wise annotations, requiring only weak patch-level labels that discriminate between vessel and non-vessel 2D patches in the training set, in a setup similar to the CAPTCHAs used to differentiate humans from bots in web applications. The user-provided weak annotations are used for two tasks: (1) to synthesize pixel-wise pseudo-labels for vessels and background in each patch, which are used to train a segmentation network, and (2) to train a classifier network. The classifier network allows additional weak patch labels to be generated, further reducing the annotation burden, and acts as a second opinion for poor-quality images. We use this framework to segment the cerebrovascular tree in Time-of-Flight angiography (TOF) and Susceptibility-Weighted Images (SWI). The results show that the framework achieves state-of-the-art accuracy while reducing annotation time by 77% compared with learning-based segmentation methods that use pixel-wise labels for training.

11.
This paper presents a new deep regression model, DeepDistance, for cell detection in images acquired with inverted microscopy. The model casts cell detection as the task of finding the most probable locations that suggest cell centers in an image, representing this main task as a regression problem of learning an inner distance metric. However, unlike previously reported regression-based methods, DeepDistance approaches this learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem and defines its learning as complementary to the main cell detection task. To learn these two complementary tasks more effectively, DeepDistance designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently. For further performance improvement on the main task, this paper also presents an extended version of DeepDistance that includes an auxiliary classification task, learned in parallel with the two regression tasks while also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, this paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, DeepDistance and its extended version, successfully identify the locations of cells and delineate their boundaries, even for a cell line not used in training, and improve on the results of their counterparts.
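Distance-map regression targets of this flavor can be derived from an instance mask with Euclidean distance transforms, as in the sketch below; the per-instance normalization is one plausible reading of the "normalized" distance and may not match DeepDistance's precise definitions.

```python
# Hedged sketch of building distance-map regression targets from an
# instance-labeled mask; illustrative only.
import numpy as np
from scipy import ndimage

instances = np.zeros((64, 64), np.int32)
instances[10:25, 10:25] = 1   # toy cell 1
instances[35:55, 30:50] = 2   # toy cell 2

fg = instances > 0
# Distance to the nearest background pixel: peaks near cell centers.
inner = ndimage.distance_transform_edt(fg)

# Normalize per instance so every cell peaks at 1 regardless of its size.
norm = np.zeros_like(inner)
for lbl in np.unique(instances[fg]):
    region = instances == lbl
    norm[region] = inner[region] / inner[region].max()

# Cell centers can then be detected as local maxima of the predicted map.
print(norm.max(), norm[fg].min())
```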

12.
Deep learning models for semantic segmentation can learn powerful representations for pixel-wise prediction, but are sensitive to noise at test time and may produce implausible topologies. Image registration models, on the other hand, can warp known topologies to target images as a means of segmentation, but typically require large amounts of training data and have not been widely benchmarked against pixel-wise segmentation models. We propose the Atlas Image-and-Spatial Transformer Network (Atlas-ISTN), a framework that jointly learns segmentation and registration on 2D and 3D image data and constructs a population-derived atlas in the process. Atlas-ISTN learns to segment multiple structures of interest and to register the constructed atlas labelmap to an intermediate pixel-wise segmentation. Additionally, Atlas-ISTN allows test-time refinement of the model's parameters to optimize the alignment of the atlas labelmap to this intermediate segmentation. This process both mitigates noise in the target image that can cause spurious pixel-wise predictions and improves upon the model's one-pass prediction. The benefits of the Atlas-ISTN framework are demonstrated qualitatively and quantitatively on 2D synthetic data and on 3D cardiac computed tomography and brain magnetic resonance image data, outperforming both segmentation and registration baseline models. Atlas-ISTN also provides inter-subject correspondence of the structures of interest.
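The core registration operation, warping an atlas labelmap with a learned displacement field and refining that field at test time, can be sketched with `grid_sample` as below. The dense-displacement parameterization and MSE alignment objective are simplifying assumptions relative to the full Atlas-ISTN model.

```python
# Hedged sketch of spatial-transformer style atlas warping with test-time
# refinement of the displacement field.
import torch
import torch.nn.functional as F

def warp(labelmap, disp):
    """labelmap: (B, C, H, W) one-hot atlas; disp: (B, 2, H, W) displacements
    in normalized [-1, 1] coordinates."""
    b, _, h, w = labelmap.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    grid = identity + disp.permute(0, 2, 3, 1)
    return F.grid_sample(labelmap, grid, mode="bilinear", align_corners=True)

atlas = torch.zeros(1, 2, 32, 32); atlas[:, 1, 8:24, 8:24] = 1
disp = torch.zeros(1, 2, 32, 32, requires_grad=True)
warped = warp(atlas, disp)

# Test-time refinement: optimize disp to align the warped atlas with an
# intermediate pixel-wise prediction (dummy target here).
target = torch.zeros_like(atlas); target[:, 1, 10:26, 10:26] = 1
loss = F.mse_loss(warped, target)
loss.backward()  # gradients on disp drive the refinement step
```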

13.
Due to the difficulty of accessing large amounts of labeled data, semi-supervised learning is becoming an attractive solution in medical image segmentation. To make use of unlabeled data, current popular semi-supervised methods (e.g., temporal ensembling, mean teacher) mainly impose data-level and model-level consistency on unlabeled data. In this paper, we argue that, in addition to these strategies, we can further utilize auxiliary tasks and consider task-level consistency to better extract effective representations from unlabeled data for segmentation. Specifically, we introduce two auxiliary tasks, i.e., a foreground and background reconstruction task for capturing semantic information and a signed distance field (SDF) prediction task for imposing a shape constraint, and explore the mutual promotion effect between the two auxiliary tasks and the segmentation task based on a mean teacher architecture. Moreover, to handle the potential bias of the teacher model caused by annotation scarcity, we develop a tripled-uncertainty guided framework to encourage the three tasks in the student model to learn more reliable knowledge from the teacher. When calculating uncertainty, we propose an uncertainty weighted integration (UWI) strategy to yield the teacher's segmentation predictions. In addition, following advances in unsupervised learning for leveraging unlabeled data, we incorporate a contrastive learning based constraint to help the encoders extract more distinct representations and promote medical image segmentation performance. Extensive experiments on the public 2017 ACDC dataset and the PROMISE12 dataset demonstrate the effectiveness of our method.
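The SDF targets for the shape-constraint task can be computed from binary masks with distance transforms, as in the following sketch; the sign convention (negative inside) and [-1, 1] normalization are common choices and assumed here rather than taken from the paper.

```python
# Minimal sketch of computing a signed distance field (SDF) target from a
# binary mask for the shape-constraint auxiliary task.
import numpy as np
from scipy import ndimage

def mask_to_sdf(mask):
    """mask: binary (H, W) array -> SDF, negative inside the object."""
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(1 - mask)
    sdf = outside - inside
    return sdf / max(np.abs(sdf).max(), 1e-8)  # scale to [-1, 1]

mask = np.zeros((64, 64), np.uint8); mask[20:44, 20:44] = 1
sdf = mask_to_sdf(mask)
print(sdf.min(), sdf.max())  # ~-1 at the object center, ~+1 far outside
```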

14.
The recent surge in performance in image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt them to an increasing number of different tasks. Furthermore, supervised deep learning models are very data-hungry and rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for the segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier to the development of explainable models for computational pathology.

15.
Objective: To investigate the performance of deep learning methods for thyroid nodule segmentation in ultrasound images and their clinical application value. Methods: A total of 1044 ultrasound images from 166 patients with thyroid nodules, collected at Beijing Tiantan Hospital, Capital Medical University between August 2018 and October 2020, were used. The segmentation performance of a UNet deep learning model with an improved self-attention mechanism and that of the baseline UNet were assessed on a test dataset. Taking the manual annotations of sonographers with many years of clinical experience as the reference standard, the two models were compared using the intersection-over-union (IoU), the Dice similarity coefficient, and the closeness of the automatic segmentations to the sonographers' manual delineations of the thyroid nodules. Results: The UNet model with the improved self-attention mechanism achieved an IoU of 0.815 and a Dice coefficient of 0.839 for thyroid nodule segmentation, higher than those of the baseline UNet (IoU 0.788, Dice 0.817). The segmented images show that the improved self-attention UNet captures both the overall nodule and its boundary details better than the baseline UNet and is closer to the sonographers' manual delineations. Conclusion: The self-attention-based UNet deep learning model performs well in thyroid nodule segmentation, can improve diagnostic efficiency, and has clinical application value.
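For reference, the IoU and Dice similarity coefficient reported above can be computed from binary masks as in this minimal sketch.

```python
# Minimal sketch of the IoU and Dice coefficient used to score predicted
# thyroid-nodule masks against sonographer annotations.
import numpy as np

def iou_and_dice(pred, gt, eps=1e-8):
    """pred, gt: binary (H, W) arrays."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool); gt[12:42, 12:42] = True
print(iou_and_dice(pred, gt))
```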

16.
Convolutional neural networks have achieved prominent success on a variety of medical imaging tasks when a large amount of labeled training data is available. However, the acquisition of expert annotations for medical data is usually expensive and time-consuming, which poses a great challenge for supervised learning approaches. In this work, we propose a novel semi-supervised deep learning method, deep virtual adversarial self-training with consistency regularization, for large-scale medical image classification. To effectively exploit useful information from unlabeled data, we leverage self-training and consistency regularization to harness the underlying knowledge, which helps improve the discrimination capability of trained models. More concretely, the model first uses its own prediction on a weakly-augmented input image for pseudo-labeling; a pseudo-label is kept only if the corresponding class probability is highly confident. The model's prediction is then encouraged to be consistent with the strongly-augmented version of the same input image. To improve the robustness of the network against virtual adversarial perturbations of the input, we incorporate virtual adversarial training (VAT) on both labeled and unlabeled data into the course of training. Hence, the network is trained by minimizing a combination of three losses: a standard supervised loss on labeled data, a consistency regularization loss on unlabeled data, and a VAT loss on both labeled and unlabeled data. We extensively evaluate the proposed method on two challenging medical image classification tasks: breast cancer screening from ultrasound images and multi-class ophthalmic disease classification from optical coherence tomography B-scan images. Experimental results demonstrate that the proposed method outperforms both the supervised baseline and other state-of-the-art methods by a large margin on all tasks.
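The confidence-thresholded pseudo-labeling with weak/strong consistency reads like the FixMatch recipe; a minimal sketch is below, with the 0.95 threshold, the toy model, and the noise-based "strong" view all stand-in assumptions, and the VAT term omitted.

```python
# Hedged sketch of confidence-thresholded pseudo-labeling with weak/strong
# consistency on unlabeled data (FixMatch-style); VAT is omitted.
import torch
import torch.nn.functional as F

def unlabeled_consistency_loss(model, x_weak, x_strong, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold         # keep only confident pseudo-labels
    logits_strong = model(x_strong)
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * keep.float()).mean()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x_weak = torch.randn(16, 1, 28, 28)                  # weakly-augmented view
x_strong = x_weak + 0.3 * torch.randn_like(x_weak)   # stand-in strong view
loss = unlabeled_consistency_loss(model, x_weak, x_strong)
loss.backward()
```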

17.
Semantic instance segmentation is crucial for many medical image analysis applications, including computational pathology and automated radiation therapy. Existing methods for this task can be roughly classified into two categories: (1) proposal-based methods and (2) proposal-free methods. However, in medical images, irregular shape variations and crowded instances (e.g., nuclei and cells) make it hard for proposal-based methods to achieve robust instance localization, while ambiguous boundaries caused by the low-contrast nature of medical images (e.g., CT images) challenge the accuracy of proposal-free methods. To tackle these issues, we propose a proposal-free segmentation network with discriminative deep supervision (DDS), which at the same time gains the strengths of proposal-based methods. The DDS module is interleaved with a carefully designed proposal-free segmentation backbone in our network. Consequently, the features learned by the backbone become more sensitive to instance localization. With the proposed DDS module, robust pixel-wise instance-level cues (especially structural information) are also introduced for semantic segmentation. Extensive experiments on three datasets, i.e., a nuclei dataset, a pelvic CT image dataset, and a synthetic dataset, demonstrate the superior performance of the proposed algorithm compared to previous works.

18.
Detection of the early stages of Alzheimer's disease (AD), i.e., mild cognitive impairment (MCI), is important to maximize the chances of delaying or preventing progression to AD. Brain connectivity networks inferred from medical imaging data have commonly been used to distinguish MCI patients from normal controls (NC). However, existing methods still suffer from limited performance, and classification remains based mainly on single-modality data. This paper proposes a new model for automatically diagnosing MCI (early MCI (EMCI) and late MCI (LMCI)) and its earlier stage, significant memory concern (SMC), by combining low-rank self-calibrated functional brain networks and structural brain networks for joint multi-task learning. Specifically, we first develop a new functional brain network estimation method that introduces data quality indicators for self-calibration, improving data quality while completing brain network estimation, and performs correlation analysis combined with low-rank structure. Second, functional and structural connectivity patterns are integrated into our multi-task learning model to select discriminative and informative features for fine-grained MCI analysis. Different modalities are best suited to distinct classification tasks, and the similarities and differences among multiple tasks are best determined through joint learning of the most discriminative features. The learning process is completed with a non-convex regularizer, which effectively reduces the penalty bias of the trace norm and better approximates the original rank minimization problem. Finally, the most relevant disease features are classified using a support vector machine (SVM) for MCI identification. Experimental results show that our method achieves promising performance, with high classification accuracy, and can effectively discriminate between different sub-stages of MCI.

19.
Simultaneous, automatic segmentation of the blood pool and myocardium is an important precondition for early diagnosis and pre-operative planning in patients with complex congenital heart disease. However, due to the high diversity of cardiovascular structures and the changes in mechanical properties caused by cardiac defects, the segmentation task still faces great challenges. To overcome these challenges, we propose an integrated multi-task deep learning framework based on a dilated residual and hybrid pyramid pooling network (DRHPPN) for joint segmentation of the blood pool and myocardium. The framework consists of three closely connected, progressive sub-networks. An inception module realizes the initial multi-level feature representation of the cardiovascular images. A dilated residual network (DRN), as the main body for feature extraction and pixel classification, preliminarily predicts the segmentation regions. A hybrid pyramid pooling network (HPPN) is designed to facilitate the aggregation of local information into global information, complementing the DRN. Extensive experiments on three-dimensional cardiovascular magnetic resonance (CMR) images (the available dataset of the MICCAI 2016 HVSMR challenge) demonstrate that our approach can accurately segment the blood pool and myocardium and achieves competitive performance compared with state-of-the-art segmentation methods.
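A dilated residual block, the building unit of the DRN described above, might be implemented as in this sketch; channel counts and dilation rates are illustrative assumptions rather than the DRHPPN configuration.

```python
# Minimal sketch of a dilated residual block.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Dilation enlarges the receptive field without downsampling,
        # preserving spatial detail for pixel classification.
        return self.relu(x + self.body(x))

block = DilatedResBlock(32, dilation=2)
out = block(torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```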

20.
Tissue/region segmentation of pathology images is essential for quantitative analysis in digital pathology. Previous studies usually require full supervision (e.g., pixel-level annotation), which is challenging to acquire. In this paper, we propose a weakly-supervised model using joint Fully convolutional and Graph convolutional Networks (FGNet) for the automated segmentation of pathology images. Instead of using pixel-wise annotations as supervision, we employ an image-level label (i.e., the foreground proportion) as weakly-supervised information for training a unified convolutional model. FGNet consists of a feature extraction module (a fully convolutional network) and a classification module (a graph convolutional network), connected via a dynamic superpixel operation that makes joint training possible. To achieve robust segmentation performance, we propose using mutable numbers of superpixels for both training and inference. Moreover, to achieve strict supervision, we employ an uncertainty range constraint in FGNet to reduce the negative effect of inaccurate image-level annotations. Compared with fully-supervised methods, the proposed FGNet achieves competitive segmentation results on three pathology image datasets (i.e., HER2, KI67, and H&E) for cancer region segmentation, suggesting the effectiveness of our method. The code is publicly available at https://github.com/zhangjun001/FGNet.
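The superpixel link between pixel features and the image-level foreground-proportion label could be sketched as below, using scikit-image's SLIC and an area-weighted average of per-superpixel scores; the mutable superpixel count follows the paper's idea, while the dummy scores and aggregation details are assumptions.

```python
# Hedged sketch of the superpixel step linking pixel-level predictions to an
# image-level foreground-proportion label; FGNet's dynamic superpixel
# operation and graph network are not reproduced here.
import numpy as np
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()  # stand-in RGB image
# Mutable superpixel granularity, as advocated above for robustness.
n_segments = np.random.choice([100, 200, 400])
segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)

# Given per-superpixel foreground scores (dummy here), the predicted
# image-level foreground proportion is their area-weighted average, which
# can be supervised directly by the weak label.
scores = np.random.rand(segments.max() + 1)
areas = np.bincount(segments.ravel())
proportion = (scores * areas).sum() / areas.sum()
print(n_segments, round(float(proportion), 3))
```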
