Similar Literature
 20 similar documents found (search time: 78 ms)
1.
2.
Prior studies report no sex differences in cocaine consumption during maintenance of self-administration. We find female rats show poorer lever discrimination during acquisition of self-administration. Here, we test whether female rats show greater non-reinforced or ineffective responding (presses during infusion and time-out periods as well as inactive lever presses) than male rats during maintenance of cocaine self-administration (0.0625–1.0 mg/kg/infusion) in Experiment 1. Persistence of responding during extinction, when saline replaced cocaine, was also examined. Whether response differences reflect sex differences in movements under a non-drug condition was tested in Experiment 2. Because cocaine may affect lever press rates differentially between sexes, we examined the effects of cocaine (0.3–30 mg/kg; IP) on responding for food in Experiment 3. Cocaine consumption does not differ between female and male rats. However, females respond more during infusion and time-out periods and during extinction than males. There is no sex difference in movements, and high cocaine doses decrease responding for food more in female than in male rats. That females engage in more ineffective responding may represent heightened “craving” and cannot be explained by increased movements or cocaine-stimulated increases in lever pressing. In contrast, responding for cocaine in males appears to be driven by drug delivery.

3.
4.
Most teenage fears subside with age, a change that may reflect brain maturation in the service of refined fear learning. Whereas adults clearly demarcate safe situations from real dangers, attenuating fear to the former but not the latter, adolescents' immaturity in prefrontal cortex function may limit their ability to form clear-cut threat categories, allowing pervasive fears to manifest. Here we developed a discrimination learning paradigm that assesses the ability to categorize threat from safety cues to test these hypotheses on age differences in neurodevelopment. In experiment 1, we first demonstrated the capacity of this paradigm to generate threat/safety discrimination learning in both adolescents and adults. Next, in experiment 2, we used this paradigm to compare the behavioral and neural correlates of threat/safety discrimination learning in adolescents and adults using functional MRI. This second experiment yielded three sets of findings. First, when labeling threats online, adolescents reported less discrimination between threat and safety cues than adults. Second, adolescents were more likely than adults to engage early-maturing subcortical structures during threat/safety discrimination learning. Third, adults' but not adolescents' engagement of late-maturing prefrontal cortex regions correlated positively with fear ratings during threat/safety discrimination learning. These data are consistent with the role of dorsolateral regions during category learning, particularly when differences between stimuli are subtle [Miller EK, Cohen JD (2001) Annu Rev Neurosci 24:167-202]. These findings suggest that maturational differences in subcortical and prefrontal regions between adolescent and adult brains may relate to age-related differences in threat/safety discrimination.

5.
AIMS: This study presented and tested a model of behavior change in long-term substance use disorder recovery, the acceptance and relationship context (ARC) model. The model specifies that acceptance-based behavior and constructive social relationships lead to recovery, and that treatment programs with supportive, involved relationships facilitate the development of these factors. DESIGN: This study used a prospective longitudinal naturalistic design and controlled for baseline levels of study variables. SETTING AND PARTICIPANTS: The model was tested on a sample of 2549 patients in 15 residential substance use disorder treatment programs. MEASUREMENTS: Acceptance-based responding (ABR), social relationship quality (SRQ), treatment program alliance (TPA) and substance use-related impairment were assessed using interviews and self-report questionnaires. FINDINGS: TPA predicted ABR and SRQ and, in turn, ABR predicted better 2-year and 5-year treatment outcomes. The baseline-controlled model accounted for 41% of the variance in outcome at 2-year follow-up and 28% of the variance in outcome at 5-year follow-up. CONCLUSIONS: Patients from treatment programs with an affiliative relationship network are more likely to respond adaptively to internal states previously associated with substance use, develop constructive social relationships and achieve long-term treatment benefits.

6.
7.
The development of antibodies to factor VIII (FVIII) in severely affected haemophilia A patients is a serious complication associated with increased morbidity and mortality. Bypassing agents are used to treat acute bleeding episodes; however, elimination of the inhibitors can be achieved only with immune tolerance therapy (ITT), which succeeds in 60-80% of cases. High-responding (HR) inhibitors are more likely to respond to ITT if the titre decreases to <5 BU over time or, in selected cases, after the administration of immunosuppressive drugs, plasmapheresis or immunoadsorption, techniques that are difficult to apply in children. Anti-CD20 (rituximab), a monoclonal antibody, was given as an alternative treatment in two haemophilic children with HR inhibitors and impaired quality of life due to recurrent haemarthrosis. Rituximab was given at a dose of 375 mg/m², once weekly for four consecutive weeks. Both patients showed a partial response to rituximab, with the inhibitor titre falling to <5 BU, thus facilitating ITT initiation; however, only the older patient eradicated the inhibitor, within 21 days of starting ITT. The second patient, despite depletion of B cells, did not respond to ITT. No long-term side effects have been observed in either patient over follow-up periods of 20 and 18 months, respectively. In conclusion, rituximab appears to be an effective alternative therapy to rapidly reduce or eliminate the inhibitor in selected cases of severely affected haemophiliacs before proceeding to ITT. However, the dose and appropriate schedule, as well as long-term side effects, need further investigation.

8.
A particularly interesting perspective on “big data” in diabetes management lies in the integration of environmental information with data gathered for clinical and administrative purposes, to increase our capability to understand spatial and temporal patterns of disease. Within the MOSAIC project, funded by the European Union with the goal of designing new diabetes analytics, we have jointly analyzed a clinical-administrative dataset of nearly 1,000 type 2 diabetes patients together with environmental information derived from air quality maps acquired from remote sensing (satellite) data. Within this context we adopted a general analysis framework able to deal with a large variety of temporal, geo-localized data. By exploiting time series analysis and satellite image processing, we studied whether glycemic control showed seasonal variations and whether these variations were spatiotemporally correlated with air pollution maps. We observed a link between the seasonal trends of glycated hemoglobin and air pollution in some of the considered geographic areas. These findings will require further investigation for confirmation. This work shows that it is possible to deal successfully with big data by implementing new analytics, and how their exploration may provide new scenarios for better understanding clinical phenomena.
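
To make the seasonal analysis concrete, here is a minimal sketch, not the MOSAIC pipeline: it aggregates hypothetical HbA1c results and satellite-derived PM10 values from one geographic area by month and reports their correlation. The dataframes, the column names, and the choice of PM10 as the pollution metric are assumptions for illustration only.

```python
# Minimal sketch of a seasonal HbA1c vs. air-pollution comparison.
# Not the MOSAIC analysis: the dataframes, column names, and the pollution
# metric (PM10) are hypothetical stand-ins for illustration only.
import pandas as pd

def monthly_seasonal_correlation(hba1c: pd.DataFrame, pollution: pd.DataFrame) -> float:
    """Correlate monthly mean HbA1c with monthly mean pollution in one area.

    hba1c:     columns ['date', 'hba1c_pct']  (per-patient lab results)
    pollution: columns ['date', 'pm10']       (satellite-derived map values)
    """
    hba1c_monthly = (
        hba1c.set_index(pd.to_datetime(hba1c["date"]))["hba1c_pct"]
        .resample("MS").mean()
    )
    pm_monthly = (
        pollution.set_index(pd.to_datetime(pollution["date"]))["pm10"]
        .resample("MS").mean()
    )
    # Align the two monthly series and compute their Pearson correlation.
    joined = pd.concat([hba1c_monthly, pm_monthly], axis=1).dropna()
    return joined["hba1c_pct"].corr(joined["pm10"])
```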

9.
10.
Blood collection and supply institutions are non-profit public health organizations that collect and supply blood for clinical use. Through publicity and donor recruitment they collect whole blood and blood components, which are processed and tested before being released for clinical use. Before 1998, both civilian and military medical and blood collection institutions could collect blood on their own. With the promulgation of the Blood Donation Law in 1998, blood collection was placed under the unified management of blood collection and supply institutions. Since then, health administration departments and these institutions have gradually introduced and implemented standardized management of the blood collection and supply process, in particular after the Ministry of Health issued the "one measure, two standards" in 2006 and began continuous blood station quality…

11.

BACKGROUND:

Many studies have relied on administrative data to identify patients with heart failure (HF).

OBJECTIVE:

To systematically review studies that assessed the validity of administrative data for recording HF.

METHODS:

English peer-reviewed articles (1990 to 2008) validating International Classification of Diseases (ICD)-8, -9 and -10 codes from administrative data were included. An expert panel determined which ICD codes should be included to define HF. Frequencies of ICD codes for HF were calculated using up to the 16 diagnostic coding fields available in the Canadian hospital discharge abstract during fiscal years 2000/2001 and 2005/2006.

RESULTS:

Between 1992 and 2008, more than 70 different ICD codes for defining HF were used in 25 published studies. Twenty-one studies validated hospital discharge abstract data; three studies validated physician claims and two studies validated ambulatory care data. Eighteen studies reported sensitivity (range 29% to 89%). Specificity and negative predictive value were greater than 70% across 17 studies. Nineteen studies reported positive predictive values (range 12% to 100%). Ten studies reported kappa values (range 0.39 to 0.84). For Canadian hospital discharge data, ICD-9 and -10 codes 428 and I50 identified HF in 5.50% and 4.80% of discharge records, respectively. Additional HF-related ICD-9 and -10 codes did not impact HF prevalence.

CONCLUSION:

The ICD-9 and -10 codes 428 and I50 were the most commonly used to define HF in hospital discharge data. Validity of administrative data in recording HF varied across the studies and data sources that were assessed.
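
To make this case definition concrete, here is a minimal sketch (illustration only; the record layout is an assumption, not any reviewed study's actual dataset) that flags HF in a discharge abstract when any of its diagnostic coding fields carries an ICD-9 428.x or ICD-10 I50.x code.

```python
# Minimal sketch: flag heart failure (HF) in a discharge abstract using the
# most common case definition from the review (ICD-9 428.x, ICD-10 I50.x).
# The layout of up to 16 diagnostic coding fields is a hypothetical stand-in.
from typing import Sequence

HF_PREFIXES = ("428", "I50")  # ICD-9 and ICD-10 heart failure code families

def has_heart_failure(diagnosis_codes: Sequence[str]) -> bool:
    """Return True if any diagnostic coding field carries an HF code."""
    return any(
        code.strip().upper().replace(".", "").startswith(HF_PREFIXES)
        for code in diagnosis_codes
        if code
    )

# Example: one abstract with 16 diagnostic coding fields (most empty).
diagnoses = ["E11.9", "I50.0"] + [""] * 14
print(has_heart_failure(diagnoses))  # True
```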

12.
Although open databases are an important resource in the current deep learning (DL) era, they are sometimes used “off label”: Data published for one task are used to train algorithms for a different one. This work aims to highlight that this common practice may lead to biased, overly optimistic results. We demonstrate this phenomenon for inverse problem solvers and show how their biased performance stems from hidden data-processing pipelines. We describe two processing pipelines typical of open-access databases and study their effects on three well-established algorithms developed for MRI reconstruction: compressed sensing, dictionary learning, and DL. Our results demonstrate that all these algorithms yield systematically biased results when they are naively trained on seemingly appropriate data: The normalized rms error improves consistently with the extent of data processing, showing an artificial improvement of 25 to 48% in some cases. Because this phenomenon is not widely known, biased results sometimes are published as state of the art; we refer to that as implicit “data crimes.” This work hence aims to raise awareness regarding naive off-label usage of big data and reveal the vulnerability of modern inverse problem solvers to the resulting bias.

Public databases are an important driving force in the current deep learning (DL) revolution; ImageNet (1) is a well-known example. However, due to the growing availability of open-access data and the general hype around artificial intelligence, databases are sometimes used in an “off-label” manner: Data published for one task are used for different ones. Here we aim to show that such naive and seemingly appropriate usage of open-access data could lead to biased, overly optimistic results.

Biased performance of machine-learning models due to faulty construction of data cohorts or research pipelines recently has been identified for various tasks, including gender classification (2), COVID-19 prediction (3), and natural language processing (4). However, to the best of our knowledge, it has not been studied for inverse problem solvers. We address this gap by highlighting scenarios that lead to biased performance of algorithms developed for image reconstruction from undersampled MRI measurements; the latter is a real-world example of an inverse problem and a current frontier of DL research (5–13).

The MRI measurements are fundamentally acquired in the Fourier domain, which is known as k-space. Sub-Nyquist sampling is commonly applied to shorten the traditionally lengthy MRI scan time, and image reconstruction algorithms are used to recover images from the undersampled data (14–17). Therefore, the development of such algorithms ideally should be done using raw k-space data. However, the development of DL methods requires thousands of examples, and databases containing raw k-space data are scarce. To date, only a few databases offer such data (for example, refs. 18–22), whereas many more offer reconstructed and processed magnetic resonance (MR) images (for example, refs. 23–30). The latter offer images for postreconstruction tasks, such as segmentation and biomarker discovery. Nevertheless, due to their availability, they often are downloaded and used to synthesize “raw” k-space data using the forward Fourier transform; the synthesized data are then used for the development of reconstruction algorithms. We identified that this common approach could lead to undesirable consequences; the underlying cause is that the nonraw MR images are commonly processed using hidden pipelines. These pipelines, which are implemented by commercial scanner software or during database storage, include a full set or a subset of the following steps: image reconstruction, filtering, storage of magnitude data only (i.e., loss of the MRI complex values), lossy compression, and conversion to Digital Imaging and Communications in Medicine (DICOM) or Neuroimaging Informatics Technology Initiative (NIFTI) formats. These reduce the data entropy. We aim to highlight that when modern algorithms are trained and evaluated using such data, they benefit from the data processing and, hence, tend to exhibit overly optimistic results compared to performance on raw, unprocessed data. Because this phenomenon is largely unknown, such biased results are sometimes published as state of the art without reporting the processing pipelines or addressing their effects. To raise community awareness of this growing problem, we coin the term “data crimes” to describe such publications, in reference to the more obvious “inverse crime” scenario (31) described next.

Bias stemming from the underlying data has been recognized previously in a few scenarios related to inverse problems. The term inverse crime describes a scenario in which an algorithm is tested using simulated data, and the simulation resonates with the algorithm such that it leads to improved results (31–35). Specifically, the authors of ref. 34 described an inverse crime as a situation where the same discrete model is used for simulating k-space measurements and reconstructing an MR image from them. They showed that compared with reconstruction from raw or analytically computed measurements, this leads to reduced ringing artifacts. A second example is evaluation of MRI reconstruction algorithms on real-valued magnitude images. In this case, k-space exhibits conjugate symmetry; hence, it is sufficient to use only about half of it for full image recovery. This symmetry often is leveraged in partial Fourier methods such as Homodyne (15) and projection onto convex sets (36), where additional steps are applied for recovery of the full complex data. However, neglecting the fact that the data are complex valued results in better conditioning due to the lower dimensionality of the inverse problem. This may lead to an obvious advantage when evaluating algorithms on such data as opposed to raw k-space data. However, to the best of our knowledge, inverse crimes have not been studied yet in the context of machine learning or public data usage.

Here we report on two subtle forms of algorithmic bias that have not been described in the literature yet and that are relevant to the current DL era. We show how they arise from two hidden data-processing pipelines that affect many open-access MRI databases: a commercial scanner pipeline and a JPEG data storage pipeline. To demonstrate these scenarios, we took raw MRI data and “spoiled” them with carefully controlled processing steps. We then used the processed datasets for training and evaluation of algorithms from three well-established MRI reconstruction frameworks: compressed sensing (CS) with a wavelet transform (37), dictionary learning (DictL) (38), and DL (39). Our experiments demonstrate that these algorithms yield overly optimistic results when trained and evaluated on processed data.

The main contributions of this work are fivefold. First, we reveal scenarios in which algorithmic bias of inverse problem solvers may arise from off-label usage of open-access databases and analyze them through large-scale statistics. Second, we find that CS, DictL, and DL algorithms are all prone to this form of subtle bias. While recent studies identified stability issues of MRI reconstruction algorithms (5, 40), here we identify a common vulnerability of canonical algorithms to data-related bias. Third, we demonstrate the potentially harmful impact of data crimes by showing that methods trained on processed data but applied to unprocessed data yield lower-quality image reconstruction in real-world scenarios. Fourth, our experiments reveal limited generalization ability of the studied algorithms. Finally, by introducing the concept of data crimes, we hope to raise community awareness of the growing problem of bias stemming from off-label usage of open-access data.
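
The conjugate-symmetry effect described above can be made concrete in a few lines of NumPy. The sketch below is an illustration only (random arrays stand in for a stored magnitude image and for raw complex scanner data; it is not the paper's code): it measures how far "k-space" synthesized with the forward FFT departs from Hermitian symmetry in each case, showing that magnitude-only data leave roughly half of k-space redundant.

```python
# Illustration (not the paper's code): "k-space" synthesized by applying the
# forward FFT to a real-valued magnitude image is conjugate (Hermitian)
# symmetric, so about half of its samples determine the rest -- an easier
# inverse problem than recovering raw, complex-valued k-space.
import numpy as np

rng = np.random.default_rng(0)

def conjugate_symmetry_error(image: np.ndarray) -> float:
    """Relative deviation of the image's 2D DFT from Hermitian symmetry."""
    k = np.fft.fft2(image)
    # For real-valued input, k[m, n] == conj(k[-m mod M, -n mod N]).
    k_mirror = np.conj(np.roll(np.flip(k), 1, axis=(0, 1)))
    return float(np.mean(np.abs(k - k_mirror)) / np.mean(np.abs(k)))

# Stand-in for a magnitude image downloaded from a DICOM/NIFTI database.
magnitude_image = np.abs(rng.standard_normal((128, 128)))
# Stand-in for raw scanner data, which is complex-valued.
complex_image = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))

print(conjugate_symmetry_error(magnitude_image))  # ~1e-16: half of k-space is redundant
print(conjugate_symmetry_error(complex_image))    # order 1: no such redundancy
```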

13.
Data mining is the process of selecting, exploring, and modeling large amounts of data to discover unknown patterns or relationships useful to the data analyst. This article describes applications of data mining for the analysis of blood glucose and diabetes mellitus data. The diabetes management context is particularly well suited to a data mining approach. The availability of electronic health records and monitoring facilities, including telemedicine programs, is leading to the accumulation of huge data sets that are accessible to physicians, practitioners, and health care decision makers. Moreover, because diabetes is a lifelong disease, even data available for an individual patient may be massive and difficult to interpret. Finally, the capability of interpreting blood glucose readings is important not only in diabetes monitoring but also when monitoring patients in intensive care units. This article describes and illustrates work that has been carried out in our institutions in two areas in which data mining has significant potential utility to researchers and clinical practitioners: analysis of (i) blood glucose home monitoring data of diabetes mellitus patients and (ii) blood glucose monitoring data from hospitalized intensive care unit patients.
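
As a toy example of the kind of pattern extraction described above (not the authors' methods; the reading format and the 70 mg/dL hypoglycaemia threshold are assumptions), the sketch below summarizes self-monitored blood glucose readings by time of day and counts low readings.

```python
# Toy illustration of mining self-monitored blood glucose (SMBG) data:
# summarize readings by time of day and count hypoglycaemic readings.
# Not the article's method; the record format and the 70 mg/dL threshold
# are assumptions chosen for illustration.
from collections import defaultdict
from datetime import datetime
from statistics import mean

readings = [  # (timestamp, glucose in mg/dL) -- hypothetical data
    (datetime(2024, 1, 1, 7, 30), 95),
    (datetime(2024, 1, 1, 12, 45), 160),
    (datetime(2024, 1, 1, 22, 10), 64),
    (datetime(2024, 1, 2, 7, 20), 102),
]

by_period = defaultdict(list)
for ts, glucose in readings:
    period = "morning" if ts.hour < 12 else "afternoon" if ts.hour < 18 else "evening"
    by_period[period].append(glucose)

for period, values in by_period.items():
    lows = sum(v < 70 for v in values)  # hypoglycaemia flag
    print(f"{period}: mean {mean(values):.0f} mg/dL, {lows} hypoglycaemic reading(s)")
```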

14.
The Inter-Sectoral Impact Model Intercomparison Project offers a framework to compare climate impact projections in different sectors and at different scales. Consistent climate and socio-economic input data provide the basis for a cross-sectoral integration of impact projections. The project is designed to enable quantitative synthesis of climate change impacts at different levels of global warming. This report briefly outlines the objectives and framework of the first, fast-tracked phase of the Inter-Sectoral Impact Model Intercomparison Project, based on global impact models, and provides an overview of the participating models, input data, and scenario set-up.

15.
European Union member states must have national haemovigilance reporting of serious adverse reactions and events. We sent national competent authorities an email questionnaire about data validation. Responses were received from 23 of 27 countries. Nine previously had no national haemovigilance system. In 13 (57%), the serious adverse reactions and events can be verified. Coverage of blood establishments is documented in 20 systems (87%) and of hospitals in 15 systems (65%). Although all member states have implemented haemovigilance systems, there are currently wide variations in data quality assurance, which preclude comparisons between countries.

16.
This technical note describes a new, simple design for photographing hemodynamic data on cineangiocardiographic film. The advantages are the convenience of reviewing the cardiac catheterization data and the storage of all data in one place.

17.
Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade‐offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks, rather than conventional statistical methods, resulting in systems that over time capture insights implicit in data but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval.

18.

Objective

The study aimed to construct and manage an acute respiratory distress syndrome (ARDS)/sepsis registry that can be used for data warehousing and clinical research.

Methods

The workflow methodology and software solution of Research Electronic Data Capture (REDCap) were used to construct the ARDS/sepsis registry. Clinical data from ARDS and sepsis patients registered to the intensive care unit (ICU) of our hospital formed the registry. These data were converted by trained medical staff to the electronic case report form (eCRF) format used in REDCap. Data validation, quality control, and database management were conducted to ensure data integrity (a toy example of such validation checks follows this abstract).

Results

The clinical data of 67 patients registered to the ICU between June 2013 and December 2013 were analyzed. Of the 67 patients, 45 (67.2%) were classified as sepsis, 14 (20.9%) as ARDS, and eight (11.9%) as sepsis-associated ARDS. The patients’ information, comprising demographic characteristics, medical history, clinical interventions, daily assessment, clinical outcome, and follow-up data, was properly managed and safely stored in the ARDS/sepsis registry. Efficient data handling was ensured by performing data collection twice weekly and data entry every two weeks.

Conclusions

The ARDS/sepsis database that we constructed and manage with REDCap in the ICU can provide a solid foundation for translational research on the clinical data of interest, and a model for development of other medical registries in the future.
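
As a toy example of the validation step referenced in the Methods (not the study's actual REDCap rules; the field names and plausible ranges are assumptions), the sketch below screens an eCRF record for missing required fields and out-of-range values before it enters the registry.

```python
# Toy illustration of eCRF validation checks before records enter a registry.
# Field names, required fields, and plausible ranges are assumptions for
# illustration only, not the study's actual quality-control rules.
from typing import Any, Dict, List

REQUIRED_FIELDS = ("record_id", "diagnosis", "icu_admission_date")
PLAUSIBLE_RANGES = {"age": (18, 110), "apache_ii": (0, 71)}  # hypothetical

def validate_record(record: Dict[str, Any]) -> List[str]:
    """Return a list of data-quality problems found in one eCRF record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing required field: {field}")
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            problems.append(f"{field}={value} outside plausible range {low}-{high}")
    return problems

record = {"record_id": "ARDS-0001", "diagnosis": "sepsis", "age": 154}
print(validate_record(record))
# ['missing required field: icu_admission_date', 'age=154 outside plausible range 18-110']
```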

19.
20.