Similar literature
20 similar records found (search time: 31 ms)
1.
To overcome the multifactorial complexity associated with the analysis and interpretation of capillary electrophoresis results from forensic mixture samples, probabilistic genotyping methods have been developed and implemented as software, based on either qualitative or quantitative models. The former considers the electropherograms' qualitative information (detected alleles), whilst the latter also takes into account the associated quantitative information (allele peak heights). Both models then quantify the genetic evidence through the computation of a likelihood ratio (LR), comparing the probabilities of the observations given two alternative and mutually exclusive hypotheses. In this study, the results obtained through the qualitative software LRmix Studio (v.2.1.3) and the quantitative tools STRmix™ (v.2.7) and EuroForMix (v.3.4.0) were compared using real casework samples. A set of 156 irreversibly anonymized sample pairs (GeneMapper files), obtained from former cases of the Portuguese Scientific Police Laboratory, Judiciary Police (LPC-PJ), was independently analyzed with each software package. Each sample pair was composed of (i) a mixture profile with either two or three estimated contributors and (ii) an associated single-contributor profile. In most cases, information on 21 autosomal short tandem repeat (STR) markers was considered, and the majority of the single-source samples could not be excluded a priori as belonging to a contributor to the paired mixture sample. This inter-software analysis shows the differences between the probative values obtained through different qualitative and quantitative tools for the same input samples. LR values computed in this work by the quantitative tools were generally higher than those obtained by the qualitative one. Although the differences between the LR values computed by the two quantitative software packages were much smaller, STRmix™-generated LRs were generally higher than those from EuroForMix.
As expected, mixtures with three estimated contributors generally showed lower LR values than mixtures with two estimated contributors. Different software products are based on different approaches and mathematical or statistical models, which necessarily result in the computation of different LR values. It is therefore crucial that forensic experts understand these models and the differences among the available software. The better the expert understands the methodology, the better he or she will be able to support and explain the results in court or in any other arena of scrutiny.
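The LR computation shared by these tools can be illustrated with a deliberately simple, hypothetical single-locus example (this is a textbook sketch, not the model of any of the software packages above): under Hp the suspect is the donor, under Hd an unrelated person is, and the LR reduces to the inverse of the genotype frequency.

```python
# Hypothetical single-locus sketch of a likelihood ratio (LR), assuming
# Hardy-Weinberg equilibrium; the allele frequencies below are invented.
def single_locus_lr(p_a, p_b):
    """LR for a matching heterozygous a/b profile.

    P(E | Hp) = 1 (the suspect is the donor and matches);
    P(E | Hd) = 2 * p_a * p_b (genotype frequency for a random person).
    """
    return 1.0 / (2 * p_a * p_b)

print(single_locus_lr(0.10, 0.05))  # 100.0: evidence 100x more probable under Hp
```

Real mixture models multiply such per-locus terms across markers and sum over the genotype combinations consistent with the mixture, which is where the qualitative/quantitative distinction enters.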

2.
BACKGROUND AND PURPOSE: The most accurate method of clinical MR spectroscopy (MRS) interpretation remains an open question. We sought to construct a logistic regression (LR) pattern recognition model for the discrimination of neoplastic from nonneoplastic brain lesions with MR imaging-guided single-voxel proton MRS data. We compared the LR sensitivity, specificity, and receiver operator characteristic (ROC) curve area (Az) with the sensitivity and specificity of blinded and unblinded qualitative MRS interpretations and a choline (Cho)/N-acetylaspartate (NAA) amplitude ratio criterion. METHODS: Consecutive patients with suspected brain neoplasms or recurrent neoplasia referred for MRS were enrolled once final diagnoses were established by histopathologic examination or serial neurologic examinations, laboratory data, and imaging studies. Control spectra from healthy adult volunteers were included. An LR model was constructed with 10 input variables, including seven metabolite resonance amplitudes, unsuppressed brain water content, water line width, and the final diagnosis (neoplasm versus nonneoplasm). The LR model output was the probability of tumor, for which a cutoff value was chosen to obtain comparable sensitivity and specificity. The LR sensitivity and specificity were compared with those of qualitative blinded interpretations from two readers (designated A and B), qualitative unblinded interpretations (in aggregate) from a group of five staff neuroradiologists and a spectroscopist, and a quantitative Cho/NAA amplitude ratio > 1 threshold for tumor. Sensitivities and specificities for each method were compared with McNemar's chi square analysis for binary tests and matched data with a significance level of 5%. ROC analyses were performed where possible, and Az values were compared with Metz's method (CORROC2) with a 5% significance level. RESULTS: Of the 99 cases enrolled, 86 had neoplasms and 13 had nonneoplastic diagnoses. 
The discrimination of neoplastic from control spectra was trivial with the LR, reflecting high homogeneity among the control spectra. An LR cutoff probability for tumor of 0.8 yielded a specificity of 87%, a comparable sensitivity of 85%, and an area under the ROC curve of 0.96. Sensitivities, specificities, and ROC areas (where available) for the other methods were, on average, 82%, 74%, and 0.82, respectively, for readers A and B; 89% (sensitivity) and 92% (specificity) for the group of unblinded readers; and 79% (sensitivity), 77% (specificity), and 0.84 (Az) for the Cho/NAA > 1 criterion. McNemar's analysis yielded significant differences in sensitivity (n approximately 86 neoplasms) between the LR and reader A, and between the LR and the Cho/NAA > 1 criterion. The differences in specificity between the LR and all other methods were not significant (n approximately 13 nonneoplasms). Metz's analysis revealed a significant difference in Az between the LR and the Cho/NAA ratio criterion.
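How a probability cutoff (here 0.8, as in the study) trades sensitivity against specificity can be sketched as follows; the predicted probabilities and labels below are invented toy data, not the study's.

```python
def sens_spec(probs, labels, cutoff):
    """Sensitivity and specificity of thresholding predicted probabilities."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < cutoff and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Invented predicted tumor probabilities and true labels (1 = neoplasm).
probs  = [0.95, 0.9, 0.85, 0.7, 0.6, 0.3, 0.2, 0.1]
labels = [1,    1,   1,    1,   0,   0,   1,   0]
print(sens_spec(probs, labels, cutoff=0.8))  # (0.6, 1.0)
```

Sweeping the cutoff from 0 to 1 and plotting the resulting (1 - specificity, sensitivity) pairs traces the ROC curve whose area (Az) is reported above.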

3.
The interpretation of mixed DNA profiles obtained from low-template DNA samples has proven to be a particularly difficult task in forensic casework. Newly developed likelihood ratio (LR) models that account for PCR-related stochastic effects, such as allelic drop-out, drop-in and stutters, have enabled the analysis of complex cases that would otherwise have been reported as inconclusive. In such samples, there are uncertainties about the number of contributors and the correct sets of propositions to consider. Using experimental samples, where the genotypes of the donors are known, we evaluated the feasibility and the relevance of interpreting high-order mixtures of three, four and five donors. The relative risks of analyzing high-order mixtures of three, four and five donors were established by comparing a ‘gold standard’ LR to the LR that would be obtained in casework. The ‘gold standard’ LR is the ideal LR: since the genotypes and number of contributors are known, the parameters needed to compute the LR can be determined per contributor. The ‘casework’ LR was calculated as in standard practice, where unknown donors are assumed and the parameters are estimated from the available data. Both LRs were calculated using the basic standard model, also termed the drop-out/drop-in model, implemented in the LRmix module of the R package Forensim. We show how our results further the understanding of the relevance of analyzing high-order mixtures in a forensic context. Limitations are highlighted, and we illustrate how this study serves as a guide to implementing likelihood-ratio interpretation of complex DNA profiles in forensic casework.

4.
A series of two- and three-person mixtures of varying dilutions was prepared and analysed with Life Technologies’ HID-Ion AmpliSeq™ Identity Panel v2.2 using the Ion PGM™ massively parallel sequencing (MPS) system. From this panel we used 134 autosomal SNPs. Using the reference samples of the three donors, we evaluated the strength of evidence with likelihood ratio (LR) calculations using the open-source quantitative EuroForMix program and compared the results with a previous study using qualitative software (LRmix). SNP analysis can be treated as a special case of STR analysis, restricted to a maximum of two alleles per locus. We showed that simple two-person mixtures can be readily analysed with both LRmix and EuroForMix, but LRmix generally performs poorly on mixtures of three or more persons. Taking account of “peak height” information, by substituting ‘sequence read’ coverage values from the MPS data for each SNP allele, greatly improves the discrimination between true contributors and non-contributors. The higher the mixture proportion (Mx) of the person of interest, the higher the LR. Simulation experiments (up to six contributors) showed that the strength of the evidence depends upon Mx but is relatively insensitive to the number of contributors. If a greater number of loci were multiplexed, the analysis of mixtures would be much improved, because the extra information would enable lower Mx values to be evaluated. In summary, incorporating the ‘sequence read’ (coverage) into the quantitative model shows a significant benefit over the qualitative approach. Calculations are fast (six seconds for three contributors).

5.
Several methods exist for weight-of-evidence calculations on DNA mixtures. Especially if dropout is a possibility, it may be difficult to estimate the mixture-specific parameters needed for the evaluation. For semi-continuous models, the LR for a person to have contributed to a mixture depends on the specified number of contributors and the probability of dropout for each. We show here that, for the semi-continuous model we consider, the weight of evidence can be accurately obtained by applying the standard statistical technique of integrating the likelihood ratio against the parameter likelihoods obtained from the mixture data. This method takes into account the likelihood ratios belonging to every choice of parameters, but LRs belonging to parameters that better explain the mixture data carry more weight in the final result. We thereby avoid having to estimate the number of contributors or their probabilities of dropout, and let the whole evaluation depend on the mixture data and the allele frequencies, which is a practical advantage as well as a gain in objectivity. Using simulated mixtures, we compare the LR obtained in this way with the best-informed LR, i.e., the LR using the parameters that were used to generate the data, and show that results obtained by integrating the LR closely approximate these ideal values. We investigate both contributors and non-contributors for mixtures with various numbers of contributors. For contributors we always obtain a result close to the best-informed LR, whereas non-contributors are excluded more strongly if a smaller dropout probability is imposed for them. The results therefore naturally lead us to reconsider what we mean by a contributor, or by the number of contributors.
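The integration idea can be sketched numerically: average the LR over a grid of dropout probabilities, weighting each value by its (unnormalised) likelihood given the mixture data. The two functions passed in below are invented stand-ins, not the paper's actual model.

```python
import numpy as np

def integrated_lr(lr_given_d, lik_given_d, grid=None):
    """Integrate LR(d) against the parameter likelihood L(d) over dropout d."""
    d = np.linspace(0.0, 0.99, 500) if grid is None else grid
    w = lik_given_d(d)                       # weight for each dropout value
    return np.trapz(lr_given_d(d) * w, d) / np.trapz(w, d)

# Sanity check: if the LR does not depend on d, integration returns it unchanged.
const = integrated_lr(lambda d: np.full_like(d, 50.0), lambda d: np.exp(-5 * d))
print(const)  # 50.0 (up to numerical error)
```

Parameters that explain the data well contribute large weights `w`, so the integrated LR is dominated by plausible dropout values without any of them being fixed in advance.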

6.
DNA mixture analysis is a current topic of discussion in the forensics literature. Of particular interest is how to approach mixtures where allelic drop-out and/or drop-in may have occurred. The Office of Chief Medical Examiner (OCME) of The City of New York has developed and validated the Forensic Statistical Tool (FST), a software tool for likelihood ratio analysis of forensic DNA samples that allows for allelic drop-out and drop-in. FST can be used for single-source samples and for mixtures of DNA from two or three contributors, with or without known contributors. Drop-out and drop-in probabilities were estimated empirically through analysis of over 2000 amplifications of more than 700 mixtures and single-source samples. Drop-out rates used by FST are a function of the Identifiler® locus, the quantity of template DNA amplified, the number of amplification cycles, the number of contributors to the sample, and the approximate mixture ratio (either unequal or approximately equal). Drop-out rates were estimated separately for heterozygous and homozygous genotypes. Drop-in rates used by FST are a function of the number of amplification cycles only. FST was validated using 454 mock evidence samples generated from DNA mixtures and from items handled by one to four persons. For each sample, likelihood ratios (LRs) were computed for each true contributor and for each profile in a database of over 1200 non-contributors. A wide range of LRs for true contributors was obtained, as true contributors’ alleles may be labeled at some or all of the tested loci. However, the LRs were consistent with OCME's qualitative assessments of the results. The second set of data was used to evaluate FST LR results when the test sample in the prosecution hypothesis of the LR is not a contributor to the mixture.
With this validation, we demonstrate that LRs generated using FST are consistent with, but more informative than, OCME's qualitative sample assessments and that LRs for non-contributors are appropriately assigned.

7.
Evaluation of FDG PET imaging for the diagnosis of solitary pulmonary nodules: a meta-analysis
Objective: To comprehensively evaluate, by meta-analysis, the value of ^18F-fluorodeoxyglucose (FDG) PET imaging in the differential diagnosis of benign versus malignant solitary pulmonary nodules (SPNs). Methods: The Medline database was searched for studies using ^18F-FDG PET imaging to diagnose SPNs, and the relevant data were extracted. Methodological quality was assessed according to the diagnostic-test criteria recommended by the Cochrane working group. For data analysis, a summary ROC (SROC) curve was plotted with Sigmaplot, and meta-analysis was performed with the Cochrane working group's Revman software to obtain pooled odds ratios (ORs); an unpaired t test was used to compare the value of visual and semi-quantitative analysis in the differential diagnosis of SPNs. Results: Fourteen studies were retrieved, all of quality level a or b. An SROC curve could be plotted only for the semi-quantitative method; the area under the curve was 0.91. The pooled ORs for visual and semi-quantitative analysis were 39.05 and 26.97, respectively, a significant difference (P = 0.042); the sensitivities and specificities were 96% and 81% (visual) versus 90% and 86% (semi-quantitative), with no significant differences (P = 0.051 and 0.738, respectively). Conclusion: The included studies were of relatively high quality. The SROC curve showed that semi-quantitative analysis diagnoses SPNs with high accuracy. ^18F-FDG PET imaging favours the detection of positive cases in the diagnosis of SPNs.
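The pooled "OR" in such diagnostic meta-analyses is an odds-ratio summary of test performance; for a single study it can be computed from sensitivity and specificity as sketched below. This illustrates only the formula, not the study-weighted pooling that Revman performs.

```python
# Diagnostic odds ratio (DOR) from sensitivity and specificity:
# DOR = [sens / (1 - sens)] / [(1 - spec) / spec].
def diagnostic_odds_ratio(sens, spec):
    return (sens / (1.0 - sens)) / ((1.0 - spec) / spec)

# Using the visual-analysis summary estimates reported above (96%, 81%).
# Note a single-study DOR computed this way differs from a pooled OR,
# which weights the individual studies.
print(round(diagnostic_odds_ratio(0.96, 0.81), 1))
```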

8.
To date there is no generally accepted method to test the validity of algorithms used to compute likelihood ratios (LR) evaluating forensic DNA profiles from low-template and/or degraded samples. An upper bound on the LR is provided by the inverse of the match probability, which is the usual measure of weight of evidence for standard DNA profiles not subject to the stochastic effects that are the hallmark of low-template profiles. However, even for low-template profiles the LR in favour of a true prosecution hypothesis should approach this bound as the number of profiling replicates increases, provided that the queried contributor is the major contributor. Moreover, for sufficiently many replicates the standard LR for mixtures is often surpassed by the low-template LR. It follows that multiple LTDNA replicates can provide stronger evidence for a contributor to a mixture than a standard analysis of a good-quality profile. Here, we examine the performance of the likeLTD software for up to eight replicate profiling runs. We consider simulated and laboratory-generated replicates as well as resampling replicates from a real crime case. We show that LRs generated by likeLTD usually do exceed the mixture LR given sufficient replicates, are bounded above by the inverse match probability and do approach this bound closely when this is expected. We also show good performance of likeLTD even when a large majority of alleles are designated as uncertain, and suggest that there can be advantages to using different profiling sensitivities for different replicates. Overall, our results support both the validity of the underlying mathematical model and its correct implementation in the likeLTD software.

9.
We present analytical data, obtained with the 15-STR (Identifiler) typing kit, on heterozygote balance in experimental DNA samples containing DNA from one or two persons. Surprisingly, allelic imbalance was observed even in single-source samples with adequate DNA for the standard protocol. The variance of heterozygote balance was larger in two-person mixtures than in single-person samples. It is therefore not suitable to use allelic peak heights/areas to estimate the genotypes of the contributors, as in quantitative analysis. We also re-evaluated the effectiveness of qualitative analysis by simulation, i.e. by considering the probability of all possible genotype combinations given the typing results of a mixed DNA sample. As demonstrated, qualitative analysis using 15 STR loci remains extremely effective even in a mixture from two or three individuals.
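The qualitative enumeration described above can be sketched for one locus: list every ordered pair of genotypes from two hypothetical contributors whose alleles jointly account for exactly the alleles detected in the mixture (the allele labels are invented, and peak heights are deliberately ignored).

```python
from itertools import combinations_with_replacement

def two_person_combinations(mixture_alleles):
    """Ordered genotype pairs (g1, g2) whose union equals the detected alleles."""
    target = set(mixture_alleles)
    genotypes = list(combinations_with_replacement(sorted(target), 2))
    return [(g1, g2)
            for g1 in genotypes
            for g2 in genotypes
            if set(g1) | set(g2) == target]

# A three-allele locus admits 12 ordered two-person genotype combinations.
print(len(two_person_combinations(["a", "b", "c"])))  # 12
```

A qualitative LR model weights each such combination by its genotype probabilities under the competing hypotheses, rather than filtering combinations by peak height.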

10.
The recovery of trace DNA from fired cartridge cases has recently gained increased interest throughout the literature, with a variety of methods currently being explored. Soaking fired cartridge cases in a lysis buffer holds potential for producing meaningful DNA profiles; however, chemical interactions between the lysis buffer and brass cartridge cases may limit the efficacy of this method. This preliminary study examines the effects of soaking on the microscopic striation detail of brass and nickel 9 mm Parabellum (9 mmP) calibre and .22 Long Rifle (.22LR) calibre fired cartridge cases. Headstamp and coarse striation patterns on 9 mmP fired cartridge cases, and finer striation patterns along the outer wall of .22LR fired cartridge cases, were microscopically examined prior to and following soaking. Soaking was performed by submerging the fired cartridge cases in 380 µl of ATL buffer (Qiagen, Germany) for 20 minutes. Microscopic analysis of brass and nickel 9 mmP and .22LR fired cartridge cases showed that coarse and fine striation detail remained unaffected following soaking. These results indicate that comparative ballistics examinations may be performed following DNA recovery using the soaking method.

11.
Although likelihood ratio (LR) based methods for analysing complex mixtures of two or more individuals that exhibit the twin phenomena of drop-out and drop-in have been in the public domain for more than a decade, progress towards widespread implementation in casework has been slow. The aim of this paper is to establish an LR-based framework using the principles of the basic model recommended by the ISFG DNA commission. We use tools in the form of open-source software (LRmix) in the Forensim package for R. A generalised set of guidelines has been prepared that can be used to evaluate any complex mixture. In addition, a validation framework has been proposed in order to evaluate LRs that are generated on a case-specific basis. This process is facilitated by replacing the reference profile of interest (typically the suspect's profile) with a simulated random man, using Monte Carlo simulations, and comparing the resulting distributions with the estimated LR. Validation is best carried out by comparison with a standard. Because LRmix is open source, we propose that it is ideally positioned to be adopted as a standard basic model for complex DNA profile tests. This should not be confused with ‘the best model’, since it is clear that improvements could be made over time. Nevertheless, it is highly desirable to have a methodology in place that can show whether an improvement has been achieved should additional parameters, such as allele peak heights, be incorporated into the model. To facilitate comparative studies, we provide all of the necessary data for three test examples, presented as standard tests.
We envisage that this resource of standard test examples will be expanded over the coming years to include a range of different case types, in order to improve the efficacy of models, to understand their advantages and limitations, and to provide training material.

12.
Objective: To perform dynamic stress CT myocardial perfusion imaging (CTP) with third-generation dual-source CT, to explore the diagnostic value of qualitative versus quantitative assessment of myocardial ischaemia in patients at intermediate-to-high risk of coronary artery disease, and to analyse the clinical effectiveness of this examination. Methods: From December 2016 to April 2018, patients with stable angina symptoms and a clinically assessed intermediate-to-high risk of coronary artery disease were prospectively and consecutively enrolled at Peking Union Medical College Hospital; third-generation dual-source CT was used to perform dynamic stress…

13.
The performance of any model used to analyse DNA profile evidence should be tested using simulation, large-scale validation studies based on ground-truth cases, or alignment with trends predicted by theory. We investigate a number of diagnostics to assess the performance of the model using Hd true tests. Of particular focus in this work is the proportion of comparisons to non-contributors that yield a likelihood ratio (LR) higher than or equal to the likelihood ratio of a known contributor (LRPOI), designated as p, and the average LR for Hd true tests. Theory predicts that p should always be less than or equal to 1/LRPOI, and hence the observation of this in any particular case is of limited use. A better diagnostic is the average LR for Hd true tests, which should be near 1. We test the performance of a continuous interpretation model on nine DNA profiles of varying quality and complexity and verify the theoretical expectations.
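The "average LR for Hd true tests should be near 1" diagnostic can be checked in the simplest possible setting: a single locus where a random non-contributor matches with probability q (the genotype frequency) and then receives LR = 1/q, and is otherwise excluded (LR = 0). The expectation is q · (1/q) = 1 exactly; the simulation below, with an invented q, verifies this numerically.

```python
import random

def mean_hd_true_lr(q, n_sims, seed=0):
    """Average LR over simulated non-contributor (Hd true) comparisons."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        if rng.random() < q:     # non-contributor happens to match
            total += 1.0 / q     # rare, but rewarded with a large LR
    return total / n_sims        # expected value is exactly 1

print(mean_hd_true_lr(0.01, 200_000))  # close to 1.0
```

The same cancellation underlies the p ≤ 1/LRPOI bound: large non-contributor LRs must be correspondingly rare for the average to stay at 1.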

14.

Purpose:

To investigate the relationships among highly constrained back projection (HYPR‐LR), projection reconstruction focal underdetermined system solver (PR‐FOCUSS), and k‐t FOCUSS by showing how each method relates to a generalized reference image reconstruction method. The generalized series model employs a fixed reference image and multiplicative corrections; that model is extended here to consider reference images more broadly, both in image space and in transform spaces (x‐t and x‐f), and to allow the reference to evolve with iteration.

Materials and Methods:

Theoretical relationships between the methods were derived. Computer simulations were done to compare HYPR‐LR to one iteration of PR‐FOCUSS. The generalized reference approaches applied in the x‐t or x‐f domain were compared using computer simulation, five cardiac cine imaging datasets, and six myocardial perfusion datasets.

Results:

PR‐FOCUSS and HYPR‐LR gave comparable errors, with PR‐FOCUSS slightly outperforming HYPR‐LR. The baseline image is important to the performance of k‐t FOCUSS and x‐t FOCUSS, as demonstrated by results from cardiac cine imaging. For cardiac perfusion reconstructions with the use of a temporal average image as the baseline image, k‐t FOCUSS gave lower errors than x‐t FOCUSS.

Conclusion:

HYPR‐LR and PR‐FOCUSS are closely related: both work for radial sampling and use reference images in the x‐t domain; HYPR‐LR is an approximate implementation of the generalized reference framework, while PR‐FOCUSS is a conjugate gradient implementation of it. The superiority of generalized reference approaches applied in the x‐t or x‐f domain was sensitive to the characteristics of the acquired data and to the baseline image used. J. Magn. Reson. Imaging 2011. © 2011 Wiley‐Liss, Inc.

15.
Unprecedented fidelity and specificity have afforded DNA testing its long-reigning status as the gold standard for establishing personal identification. While the method itself is flawless, forensic experts have undoubtedly encountered challenging cases in which no reference samples for an unknown person (UP) are available for comparison. In such cases, experts often must resort to an assortment of kinship analyses, primarily those involving alleged parents or children of a UP, to establish personal identification. The present study derives likelihood ratio (LR) distributions from an extensive series of kinship simulations and subjects actual data, obtained from 120 cases in which personal identification of a UP was established via kinship analysis, to a comprehensive comparison in order to evaluate the efficacy of kinship assessments in establishing personal identification. The commercially available AmpFlSTR Identifiler kit was used to obtain DNA profiles. UP DNA was extracted and isolated from fingernail (n=87), cardiac blood (24), carpal bone (7) and tooth (2) samples. Buccal cells were procured from alleged kin (AK) for the kinship analyses. In 72 cases, 1-3 alleged children were available for comparison; in 46 cases, one or both alleged parents were available; and in the final 2 cases (involving a pair of bodies discovered together in a dwelling), their alleged children were typed for comparison. For each case an LR was calculated based on the DNA typing results. Interestingly, we found that the median LRs observed in the actual cases virtually mirrored those of the simulations. With the exception of 2 cases in which a silent allele was observed at D19S433, biological relatives showed an LR greater than 100, and in these cases kinship between the UP and AK was further supported by additional forms of evidence.
We show here that in the vast majority of identification cases where direct reference samples are unavailable for a UP, kinship analysis referring to alleged parents/children and using 15 standard loci is more than capable of establishing the identification of a UP. However, discretion is advised regarding silent alleles, which, albeit rare, are known to occur at loci such as D19S433, along with other mutations that could yield a deceptively reduced LR.

16.
The vast majority of radiography research is subject to critique and evaluation from peers in order to justify the method and the outcome of the study. Within the quantitative domain, into which the majority of medical imaging publications tend to fall, there are prescribed methods for establishing scientific rigour and quality in order to critique a study. However, researchers within the qualitative paradigm, which is a developing area of radiography research, are often unclear about the most appropriate methods for measuring the rigour (standards and quality) of a research study. This article considers the issues related to rigour, reliability and validity within qualitative research. The concepts of reliability and validity are briefly discussed within traditional positivism, and then the attempts to use these terms as measures of quality within qualitative research are explored. Alternative methods for research rigour in interpretive research (meanings and emotions) are suggested in order to complement the existing radiography framework for qualitative studies. The authors propose the use of an established model, adapted to reflect the iterative process of qualitative research. Although a mechanistic approach to establishing rigour is rejected by many qualitative researchers, it is argued that a guide for novice researchers within a developing research base such as radiography is appropriate in order to establish the credibility and trustworthiness of a qualitative study.

17.
Probabilistic genotyping software based on continuous models is effective for interpreting DNA profiles derived from DNA mixtures and small DNA samples. In this study, we updated our previously developed Kongoh software (to ver. 3.0.1) to interpret DNA profiles typed using the GlobalFiler™ PCR Amplification Kit. Highly sensitive typing systems such as GlobalFiler have recently facilitated the detection of forward, double-back, and minus-2-nt stutters; we therefore implemented statistical models for these stutters in Kongoh. In addition, we validated the new version of Kongoh using 2–4-person mixtures and degraded DNA profiles in the GlobalFiler system. The likelihood ratios (LRs) for true contributors and non-contributors were well separated as the information increased (i.e., larger peak heights and fewer contributors), and these LRs tended toward neutrality as the information decreased. These trends were observed even in profiles with DNA degradation. The LR values were highly reproducible, and the accuracy of the calculation was also confirmed. Therefore, Kongoh ver. 3.0.1 is useful for interpreting DNA mixtures and degraded DNA samples in the GlobalFiler system.

18.
Monte Carlo-based treatment planning algorithms are advancing rapidly and will certainly be implemented as part of conventional treatment planning systems in the near future. This paper was designed as a basic tutorial on the Monte Carlo method as applied to radiotherapy treatment planning. The tutorial addresses the basic transport differences between photon and electron transport, as well as the sampling distributions. The implementation of a virtual linac source model, and the conversion from the Monte Carlo source modeling reference plane into the treatment reference plane, are discussed. The implementation of a thresholding algorithm for converting CT electron density to patient-specific materials is also presented. A 6-field prostate boost treatment is used to compare a conventional treatment planning algorithm (pencil beam model) with a Monte Carlo simulation algorithm. The agreement between the two calculation methods is good, based upon a qualitative comparison of the isodose distributions and the dose-volume histograms for the prostate and the rectum. The effects of statistical uncertainty on the Monte Carlo calculation are also presented.
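The photon-transport sampling the tutorial covers can be illustrated with a minimal Monte Carlo sketch: free path lengths follow the exponential attenuation law, so transmission through a uniform slab can be estimated by sampling s = -ln(ξ)/μ. The attenuation coefficient and thickness below are arbitrary illustration values, and scattering and secondary particles are ignored.

```python
import math
import random

def transmission_mc(mu, thickness, n, seed=1):
    """Fraction of photons whose sampled free path exceeds the slab thickness."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n)
                 if -math.log(rng.random()) / mu > thickness)
    return passed / n

# The Monte Carlo estimate should be close to the analytic answer exp(-mu*t).
est = transmission_mc(mu=0.2, thickness=5.0, n=100_000)
print(est, math.exp(-0.2 * 5.0))
```

The gap between the estimate and exp(-μt) shrinks as ~1/√n, which is the statistical uncertainty the paper discusses for full dose calculations.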

19.
It has become widely accepted in forensics that, owing to a lack of sensible priors, the evidential value of matching DNA profiles in trace-donor identification or kinship analysis is most sensibly communicated in the form of a likelihood ratio (LR). This restraint does not alter the fact that the posterior odds (PO) would be the preferred basis for returning a verdict. A completely different situation holds for Forensic DNA Phenotyping (FDP), which aims to predict externally visible characteristics (EVCs) of a trace donor from DNA left behind at the crime scene. FDP is intended to provide leads that help police investigations find unknown trace donors who are unidentifiable by DNA profiling. The statistical models underlying FDP typically yield posterior odds (PO) for an individual possessing a certain EVC. This apparent discrepancy has led to confusion as to when the LR or the PO is the appropriate outcome of forensic DNA analysis to be communicated to the investigating authorities. We therefore set out to clarify the distinction between LR and PO in the context of forensic DNA profiling and FDP from a statistical point of view. In so doing, we also addressed the influence of population affiliation on the LR and the PO. In contrast to the well-known population dependency of the LR in DNA profiling, the PO as obtained in FDP may be widely population-independent. The actual degree of independence, however, is a matter of (i) how much of the causality of the respective EVC is captured by the genetic markers used for FDP and (ii) the extent to which non-genetic (e.g., environmental) causal factors of the same EVC are distributed equally across populations. The fact that an LR should be communicated in cases of DNA profiling whereas the PO is suitable for FDP does not conflict with theory, but rather reflects the inherent differences between these two forensic applications of DNA information.
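The LR/PO relationship at issue is just Bayes' theorem in odds form, PO = LR × prior odds; a minimal numeric sketch (all numbers invented):

```python
# Posterior odds from a likelihood ratio and a prior probability:
# PO = LR * [prior / (1 - prior)].
def posterior_odds(lr, prior_prob):
    return lr * prior_prob / (1.0 - prior_prob)

# Even a large LR yields modest posterior odds under a small prior, which is
# why DNA profiling reports the LR and leaves the prior to the fact-finder.
print(posterior_odds(lr=1000.0, prior_prob=0.001))  # ~1.001
```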
