Similar Articles (20 results)
1.
The spatial distribution of synaptic inputs on the dendritic tree of a neuron can have significant influence on neuronal function. Consequently, accurate anatomical reconstructions of neuron morphology and synaptic localization are critical when modeling and predicting physiological responses of individual neurons. Historically, generation of three-dimensional (3D) neuronal reconstructions together with comprehensive mapping of synaptic inputs has been an extensive task requiring manual identification of putative synaptic contacts directly from tissue samples or digital images. Recent developments in neuronal tracing software applications have improved the speed and accuracy of 3D reconstructions, but localization of synaptic sites through the use of pre- and/or post-synaptic markers has remained largely a manual process. To address this problem, we have developed an algorithm, based on 3D distance measurements between putative pre-synaptic terminals and the post-synaptic dendrite, to automate synaptic contact detection on dendrites of individually labeled neurons from 3D immunofluorescence image sets. In this study, the algorithm is implemented with custom routines in Matlab, and its effectiveness is evaluated through analysis of primary sensory afferent terminals on motor neurons. Optimization of algorithm parameters enabled automated identification of synaptic contacts that matched those identified by manual inspection with low incidence of error. Substantial time savings and the elimination of variability in contact detection introduced by different users are significant advantages of this method.
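The core of such a distance-based detector can be sketched in a few lines. The helper below is a hypothetical illustration (not the authors' Matlab routines): it flags a presynaptic terminal as a putative contact when its nearest dendrite point lies within a distance threshold.

```python
import math

def detect_contacts(terminals, dendrite_points, threshold=0.5):
    """Flag putative synaptic contacts: a terminal counts as a contact
    when its 3D distance to the nearest dendrite point is <= threshold.
    Names and the threshold value are illustrative; units follow the
    voxel scale of the image set."""
    contacts = []
    for i, t in enumerate(terminals):
        nearest = min(math.dist(t, p) for p in dendrite_points)
        if nearest <= threshold:
            contacts.append((i, nearest))
    return contacts
```

In practice the threshold would be tuned against manually identified contacts, as the study describes.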

2.
Neuroanatomical analysis, such as classification of cell types, depends on reliable reconstruction of large numbers of complete 3D dendrite and axon morphologies. At present, the majority of neuron reconstructions are obtained from preparations in a single tissue slice in vitro, thus suffering from cut off dendrites and, more dramatically, cut off axons. In general, axons can innervate volumes of several cubic millimeters and may reach path lengths of tens of centimeters. Thus, their complete reconstruction requires in vivo labeling, histological sectioning and imaging of large fields of view. Unfortunately, anisotropic background conditions across such large tissue volumes, as well as faintly labeled thin neurites, result in incomplete or erroneous automated tracings and even lead experts to make annotation errors during manual reconstructions. Consequently, tracing reliability remains the major bottleneck for reconstructing complete 3D neuron morphologies. Here, we present a novel set of tools, integrated into a software environment named ‘Filament Editor’, for creating reliable neuron tracings from sparsely labeled in vivo datasets. The Filament Editor allows for simultaneous visualization of complex neuronal tracings and image data in a 3D viewer, proof-editing of neuronal tracings, alignment and interconnection across sections, and morphometric analysis in relation to 3D anatomical reference structures. We illustrate the functionality of the Filament Editor on the example of in vivo labeled axons and demonstrate that for the exemplary dataset the final tracing results after proof-editing are independent of the expertise of the human operator.

3.
4.
The comprehensive characterization of neuronal morphology requires tracing extensive axonal and dendritic arbors imaged with light microscopy into digital reconstructions. Considerable effort is ongoing to automate this highly labor-intensive and currently rate-determining process. Experimental data in the form of manually traced digital reconstructions and corresponding image stacks play a vital role in developing increasingly more powerful reconstruction algorithms. The DIADEM challenge (short for DIgital reconstruction of Axonal and DEndritic Morphology) successfully stimulated progress in this area by utilizing six data set collections from different animal species, brain regions, neuron types, and visualization methods. The original research projects that provided these data are representative of the diverse scientific questions addressed in this field. At the same time, these data provide a benchmark for the types of demands automated software must meet to achieve the quality of manual reconstructions while minimizing human involvement. The DIADEM data underwent extensive curation, including quality control, metadata annotation, and format standardization, to focus the challenge on the most substantial technical obstacles. This data set package is now freely released (http://diademchallenge.org) to train, test, and aid development of automated reconstruction algorithms.

5.
Digital reconstructions of neuronal morphology are used to study neuron function, development, and responses to various conditions. Although many measures exist to analyze differences between neurons, none is particularly suitable to compare the same arborizing structure over time (morphological change) or reconstructed by different people and/or software (morphological error). The metric introduced for the DIADEM (DIgital reconstruction of Axonal and DEndritic Morphology) Challenge quantifies the similarity between two reconstructions of the same neuron by matching the locations of bifurcations and terminations as well as their topology between the two reconstructed arbors. The DIADEM metric was specifically designed to capture the most critical aspects in automating neuronal reconstructions, and can function in feedback loops during algorithm development. During the Challenge, the metric scored the automated reconstructions of best-performing algorithms against manually traced gold standards over a representative data set collection. The metric was compared with direct quality assessments by neuronal reconstruction experts and with clocked human tracing time saved by automation. The results indicate that relevant morphological features were properly quantified in spite of subjectivity in the underlying image data and varying research goals. The DIADEM metric is freely released open source (http://diademchallenge.org) as a flexible instrument to measure morphological error or change in high-throughput reconstruction projects.
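A stripped-down version of the matching idea, without the topology checks the real DIADEM metric performs, might look like this (all names and the tolerance value are illustrative):

```python
import math

def match_fraction(gold_nodes, test_nodes, tol=2.0):
    """Greedily match gold-standard bifurcation/termination locations to
    nodes of a test reconstruction within a distance tolerance, and
    return the matched fraction. The real metric additionally requires
    that matched nodes agree topologically."""
    unmatched = list(test_nodes)
    matched = 0
    for g in gold_nodes:
        if not unmatched:
            break
        best = min(unmatched, key=lambda t: math.dist(g, t))
        if math.dist(g, best) <= tol:
            unmatched.remove(best)
            matched += 1
    return matched / len(gold_nodes)
```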

6.
Digital reconstruction of neuronal morphology is a powerful technique for investigating the nervous system. This process consists of tracing the axonal and dendritic arbors of neurons imaged by optical microscopy into a geometrical format suitable for quantitative analysis and computational modeling. Algorithmic automation of neuronal tracing promises to increase the speed, accuracy, and reproducibility of morphological reconstructions. Together with recent breakthroughs in cellular imaging and accelerating progress in optical microscopy, automated reconstruction of neuronal morphology will play a central role in the development of high throughput screening and the acquisition of connectomic data. Yet, despite continuous advances in image processing algorithms, to date manual tracing remains the overwhelming choice for digitizing neuronal morphology. We summarize the issues involved in automated reconstruction, overview the available techniques, and provide a realistic assessment of future perspectives.

7.
Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. J. Comp. Neurol. 524:23–38, 2016. © 2015 Wiley Periodicals, Inc.

8.
The digital reconstruction of single neurons from 3D confocal microscopic images is an important tool for understanding neuron morphology and function. However, accurate automatic neuron reconstruction remains a challenging task due to varying image quality and the complexity of neuronal arborisation. Targeting the common challenges of neuron tracing, we propose a novel automatic 3D neuron reconstruction algorithm, named Rivulet, which is based on multi-stencils fast-marching and iterative back-tracking. The proposed Rivulet algorithm is capable of tracing discontinuous areas without being interrupted by densely distributed noise. By evaluating the proposed pipeline with the data provided by the DIADEM challenge and the recent BigNeuron project, Rivulet is shown to be robust to challenging microscopic image stacks. We discuss the algorithm design in technical detail, including the relationships between the proposed algorithm and other state-of-the-art neuron tracing algorithms.
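Rivulet's second stage, iterative back-tracking, can be illustrated on a precomputed travel-time map: starting from a far point, repeatedly step to the neighbor with the smallest fast-marching time until the source is reached. The grid-descent sketch below is a toy stand-in that assumes a monotone 2D time map with a single source at time 0.

```python
def back_track(time_map, start):
    """Descend a fast-marching travel-time map (2D list) from `start`
    to the source (time 0) by always stepping to the 4-neighbor with
    the smallest time. Assumes the map is monotone with one source."""
    h, w = len(time_map), len(time_map[0])
    path = [start]
    y, x = start
    while time_map[y][x] > 0:
        # collect in-bounds 4-neighbors of the current pixel
        nbrs = [(y + dy, x + dx)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w]
        y, x = min(nbrs, key=lambda p: time_map[p[0]][p[1]])
        path.append((y, x))
    return path
```

The full algorithm additionally erases the traced region from the image and repeats from the next far point, which is how multiple branches are recovered.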

9.
We have developed a computer system which enters and aligns serial sections and displays the completed reconstructions at different rotations in space. The system uses commercially available hardware, including a Hewlett-Packard 9845T microcomputer and an H-P 9874A digitizer. The software for the system is written in the BASIC language. The system consists of two programs, one for section digitization, the other for rotation and display of the reconstructions. Sections are digitized directly from micrographs or back-projected slides. The outlines of cells or other structures are traced from these media using a hand-held cursor on the digitizer. The positions of elements (inputs) which contact the structure and fiducials are also digitized. The sections are aligned by simultaneously displaying two consecutive sections on the graphics CRT screen. The sections are coarsely superimposed by centering around screen center using a centering algorithm. They are precisely aligned by rotating and translating the images with a reference cursor. Special functions for inserting and deleting sections and rapid section scanning are available for editing. The aligned sections are stored using a linked-list file structure on either floppy disks or tape cartridges. The rotation program replots the completed reconstructions on the graphics CRT or digital plotter. The program will reproduce the reconstructions at any scale and at any rotation in the x-, y- or z-planes. A hidden line algorithm removes hidden lines to give a 3-dimensional (3-D) perspective to the reconstructions. The positions of inputs and fiducials are represented by symbols. We use the system to reconstruct cells and neural processes. The 3-D reconstructions allow us to: (a) examine the spatial distribution and density of synaptic contacts on neurons; (b) study complex neuronal shapes; (c) examine the vectors of neural processes. The computer reconstruction system, which is moderately priced, should also prove useful for reconstructing many other types of biological profiles.

10.
Automating the process of neural circuit reconstruction on a large-scale is one of the foremost challenges in the field of neuroscience. In this study we examine the methodology for circuit reconstruction from three-dimensional light microscopy (LM) stacks of images. We show how the minimal error-rate of an ideal reconstruction procedure depends on the density of labeled neurites, giving rise to the fundamental limitation of an LM based approach for neural circuit research. Circuit reconstruction procedures typically involve steps related to neuron labeling and imaging, and subsequent image pre-processing and tracing of neurites. In this study, we focus on the last step: detection of traces of neurites from already pre-processed stacks of images. Our automated tracing algorithm, implemented as part of the Neural Circuit Tracer software package, consists of the following main steps. First, the image stack is filtered to enhance labeled neurites. Second, the centerlines of the neurites are detected and optimized. Finally, individual branches of the optimal trace are merged into trees based on a cost minimization approach. The cost function accounts for branch orientations, distances between their end-points, curvature of the merged structure, and its intensity. The algorithm is capable of connecting branches which appear broken due to imperfect labeling and can resolve situations where branches appear to be fused due to the limited resolution of light microscopy. The Neural Circuit Tracer software is designed to automatically incorporate ImageJ plug-ins and functions written in MATLAB and provides roughly a 10-fold increase in speed in comparison to manual tracing.
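The branch-merging cost described above can be caricatured as a weighted sum. The weights, signs, and function names below are invented for illustration and are not the package's actual values:

```python
import math

def merge_cost(end_a, end_b, dir_a, dir_b, intensity, w=(1.0, 1.0, 1.0)):
    """Toy merge cost: penalize the gap between branch end-points and
    misaligned branch orientations, and reward high image intensity
    along the candidate connection. dir_a/dir_b are unit direction
    vectors; curvature of the merged structure is omitted for brevity."""
    gap = math.dist(end_a, end_b)
    cos_angle = sum(a * b for a, b in zip(dir_a, dir_b))
    misalignment = 1.0 - cos_angle  # 0 when the branches are collinear
    return w[0] * gap + w[1] * misalignment - w[2] * intensity
```

Merging then amounts to repeatedly connecting the branch pair with the lowest cost, which is the cost-minimization idea the abstract describes.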

11.
OBJECTIVE: A fully automated method for reducing EOG artifacts is presented and validated. METHODS: The correction method is based on regression analysis and was applied to 18 recordings with 22 channels and approximately 6 min each. Two independent experts scored the original and corrected EEG in a blinded evaluation. RESULTS: The expert scorers identified EOG artifacts in 5.9% of the raw data; 4.7% were corrected. After applying the EOG correction, the expert scorers identified EOG artifacts in another 1.9% of the data that had not been recognized in the uncorrected data. CONCLUSIONS: The advantage of a fully automated reduction of EOG artifacts justifies the small additional effort of the proposed method, which is a viable option for reducing EOG artifacts. The method has been implemented for offline and online analysis and is available through BioSig, an open source software library for biomedical signal processing. SIGNIFICANCE: Visual identification and rejection of EOG-contaminated EEG segments can miss many EOG artifacts and is therefore not sufficient for removing them. The proposed method was able to reduce EOG artifacts by 80%.
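Regression-based EOG correction of this kind amounts to estimating a propagation coefficient and subtracting the scaled EOG channel from the EEG. A single-channel least-squares sketch (the study's actual BioSig implementation handles multiple EEG and EOG channels):

```python
def correct_eog(eeg, eog):
    """Estimate the EOG-to-EEG propagation coefficient b by least
    squares and return the corrected signal eeg - b * eog.
    Single-channel sketch; assumes zero-mean signals."""
    b = sum(x * y for x, y in zip(eog, eeg)) / sum(x * x for x in eog)
    return [y - b * x for x, y in zip(eog, eeg)]
```

When the neural signal is uncorrelated with the EOG, the estimated coefficient recovers the true propagation factor and the correction leaves the neural part intact.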

12.
This paper presents a new algorithm for extracting the centerlines of the axons from a 3D data stack collected by a confocal laser scanning microscope. Recovery of neuronal structures from such datasets is critical for quantitatively addressing a range of neurobiological questions, such as the manner in which the branching pattern of motor neurons changes during synapse elimination. Unfortunately, the data acquired using fluorescence microscopy contains many imaging artifacts, such as blurry boundaries and non-uniform intensities of fluorescent radiation. This makes the centerline extraction difficult. We propose a robust segmentation method based on probabilistic region merging to extract the centerlines of individual axons with minimal user interaction. The 3D model of the extracted axon centerlines in three datasets is presented in this paper. The results are validated with the manual tracking results, while the robustness of the algorithm is compared with the published repulsive snake algorithm.

13.
We describe an approach for automation of the process of reconstruction of neural tissue from serial section transmission electron micrographs. Such reconstructions require 3D segmentation of individual neuronal processes (axons and dendrites) performed in densely packed neuropil. We first detect neuronal cell profiles in each image in a stack of serial micrographs with a multi-scale ridge detector. Short breaks in detected boundaries are interpolated using anisotropic contour completion formulated in a fuzzy-logic framework. Detected profiles from adjacent sections are linked together based on cues such as shape similarity and image texture. The 3D segmentation thus obtained is validated by human operators in a computer-guided proofreading process. Our approach makes possible reconstructions of neural tissue at a final rate of about 5 μm³ per man-hour, as determined primarily by the speed of proofreading. To date we have applied this approach to reconstruct a few blocks of neural tissue from different regions of rat brain totaling over 1,000 μm³, and used these to evaluate reconstruction speed, quality, error rates, and the presence of ambiguous locations in neuropil ssTEM imaging data.
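The linking step, matching detected profiles between adjacent sections, can be illustrated with a simple overlap cue. The paper uses shape similarity and image texture; the intersection-over-union stand-in below and all names in it are invented for the sketch:

```python
def link_profiles(section_a, section_b, min_overlap=0.3):
    """Greedily link each profile in section_a (a set of pixel
    coordinates) to its best-overlapping profile in section_b,
    using intersection-over-union as the similarity cue."""
    def iou(p, q):
        union = p | q
        return len(p & q) / len(union) if union else 0.0

    links, taken = [], set()
    for i, p in enumerate(section_a):
        scored = [(iou(p, q), j) for j, q in enumerate(section_b)
                  if j not in taken]
        if scored:
            score, j = max(scored)
            if score >= min_overlap:
                links.append((i, j))
                taken.add(j)
    return links
```

Chaining such links across the whole stack yields the 3D segmentation that is then handed to proofreaders.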

14.
This paper presents a fully automated pipeline for thickness profile evaluation and analysis of the human corpus callosum (CC) in 3D structural T1-weighted magnetic resonance images. The pipeline performs the following sequence of steps: midsagittal plane extraction, CC segmentation algorithm, quality control tool, thickness profile generation, statistical analysis and results figure generator. The CC segmentation algorithm is a novel technique that is based on a template-based initialisation with refinement using mathematical morphology operations. The algorithm is demonstrated to have high segmentation accuracy when compared to manual segmentations on two large, publicly available datasets. Additionally, the resultant thickness profiles generated from the automated segmentations are shown to be highly correlated to those generated from the ground truth segmentations. The manual editing tool provides a user-friendly environment for correction of errors and quality control. Statistical analysis and a novel figure generator are provided to facilitate group-wise morphological analysis of the CC.

15.
Removing power line noise and other frequency‐specific artifacts from electrophysiological data without affecting neural signals remains a challenging task. Recently, an approach was introduced that combines spectral and spatial filtering to effectively remove line noise: Zapline. This algorithm, however, requires manual selection of the noise frequency and the number of spatial components to remove during spatial filtering. Moreover, it assumes that noise frequency and spatial topography are stable over time, which is often not warranted. To overcome these issues, we introduce Zapline‐plus, which allows adaptive and automatic removal of frequency‐specific noise artifacts from magneto-/electroencephalography (M/EEG) and LFP data. To achieve this, our extension first segments the data into periods (chunks) in which the noise is spatially stable. Then, for each chunk, it searches for peaks in the power spectrum, and finally applies Zapline. The exact noise frequency around the found target frequency is also determined separately for every chunk to allow fluctuations of the peak noise frequency over time. The number of components to be removed by Zapline is determined automatically using an outlier detection algorithm. Finally, the frequency spectrum after cleaning is analyzed for suboptimal cleaning, and parameters are adapted accordingly if necessary before re‐running the process. The software creates a detailed plot for monitoring the cleaning. We highlight the efficacy of the different features of our algorithm by applying it to four openly available data sets: two EEG sets containing both stationary and mobile task conditions, and two magnetoencephalography sets containing strong line noise.
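The per-chunk peak search can be sketched as follows: find the dominant frequency within a candidate band of the power spectrum. The band edges and function names here are illustrative, not Zapline-plus's actual parameters:

```python
def find_noise_peak(freqs, power, lo=45.0, hi=55.0):
    """Return the frequency with maximum power inside [lo, hi] Hz,
    a stand-in for the per-chunk line-noise frequency search."""
    band = [(p, f) for f, p in zip(freqs, power) if lo <= f <= hi]
    if not band:
        raise ValueError("no spectrum samples inside the search band")
    return max(band)[1]
```

Running this separately on each chunk is what lets the method follow a line-noise frequency that drifts over the recording.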

16.
A telemetry system for recording neuronal discharges from behaving monkeys seated in a primate chair is described. Using an FM telemetry system, single-unit neuronal activity was recorded from the monkey brain through five Teflon-coated platinum-iridium microwire electrodes (25 μm in diameter). This system permits stable long-term recording of neuronal discharges during feeding behavior with minimal artifacts, even during mastication, and has a broadcasting range of more than 30 m. The frequency response and S/N ratio obtained from this system are almost comparable with those of the usual direct-wire procedures, and the elimination of artifacts is superior. This report describes the procedure for constructing the microelectrode assembly and the FM telemetry system.

17.
OBJECTIVE: To propose a noise reduction procedure for magnetoencephalography (MEG) signals introducing an automatic detection system for artifactual components (ICs) separated by an independent component analysis (ICA) algorithm, and a control cycle on reconstructed cleaned data to recover part of the non-artifactual signal possibly lost by the blind mechanism. METHODS: The procedure consisted of three main steps: (1) ICA for blind source separation (BSS); (2) an automatic detection method for artifactual components, based on statistical and spectral IC characteristics; (3) a control cycle on 'discrepancy,' i.e. on the difference between original data and those reconstructed using only the ICs automatically retained. Simulated data were generated as representative mixtures of some common brain frequencies, a source of internal Gaussian noise, power line interference, and two real artifacts (electrocardiogram=ECG, electrooculogram=EOG), with the adjunction of a matrix of Gaussian external noise. Three real data samples were chosen as representative of spontaneous noisy MEG data. RESULTS: In simulated data the proposed set of markers selected three components corresponding to the ECG, the EOG and the Gaussian internal noise; in the real-data examples, the automatic detection system showed a satisfactory performance in detecting artifactual ICs. The 'discrepancy' control cycle was redundant in simulated data, as expected, but it provided a significant improvement in two of the three real-data cases. CONCLUSIONS: The proposed automatic detection approach represents a suitable strengthening and simplification of pre-processing data analyses. The proposed 'discrepancy' evaluation, after automatic pruning, seems to be a suitable way to render negligible the risk of losing non-artifactual activity when applying BSS methods to real data. SIGNIFICANCE: The present noise reduction procedure, including the ICA separation phase, automatic artifactual IC selection and the 'discrepancy' control cycle, showed good performance on both simulated and real MEG data. Moreover, application to real signals suggests the procedure is able to separate different cerebral activity sources, even if characterized by very similar frequency contents.
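One typical statistical IC marker of the kind described can be illustrated with excess kurtosis: heartbeat and blink components have strongly peaked, non-Gaussian amplitude distributions, so high kurtosis flags them. The threshold and function names below are invented for the sketch:

```python
def excess_kurtosis(x):
    """Excess kurtosis of a sample (0 for a Gaussian)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * var ** 2) - 3.0

def flag_artifact_ics(components, k_thresh=5.0):
    """Return indices of components whose amplitude distribution is
    strongly peaked (high excess kurtosis), a common statistical
    marker for ECG/EOG-like independent components."""
    return [i for i, c in enumerate(components)
            if excess_kurtosis(c) > k_thresh]
```

A full detector would combine several such statistical and spectral markers, as the abstract describes, rather than rely on kurtosis alone.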

18.
BACKGROUND: It is not possible to reconstruct the inner structure of the spinal cord, such as gray matter and spinal tracts, from the Visual Human Project database or CT and MRI databases, due to low image resolution and contrast in macrosection images. OBJECTIVE: To explore a semi-automatic computerized three-dimensional (3D) reconstruction of human spinal cord based on histological serial sections, in order to solve issues such as low contrast. DESIGN, TIME AND SETTING: An experimental study combining serial section techniques and 3D reconstruction, performed in the laboratory of Human Anatomy and Histoembryology at the Medical School of Nantong University during January to April 2008. SETTING: Department of Anatomy, Institute of Neurobiology, Jiangsu Province Key Laboratory of Neural Regeneration, Laboratory of Image Engineering. MATERIALS: A human lumbar spinal cord segment from fresh autopsy material of an adult male. METHODS: After 4% paraformaldehyde fixation for three days, serial sections of the lumbar spinal cord were cut on a Leica cryostat and mounted on slides in sequence, with eight sections aligned separately on each slide. All sections were stained with Luxol Fast Blue to reveal myelin sheaths. After gradient dehydration and clearing, the stained slides were coverslipped. Sections were observed and images recorded under a light microscope using a digital camera. Six images were acquired at ×25 magnification and automatically stitched into a complete section image. After all serial images were obtained, 96 complete serial images of the human lumbar cord segment were automatically processed with "Curves", "Autocontrast", "Gray scale 8 bit", "Invert" and "Image resize to 50%" steps using Photoshop 7.0 software. All images were added in order into 3D-DOCTOR 4.0 software as a stack, where serial images were automatically realigned with neighboring images and semi-automatically segmented for white matter and gray matter. Finally, simple surface and volume reconstruction were completed on a personal computer. The reconstructed human lumbar spinal cord segment was interactively observed, cut, and measured. MAIN OUTCOME MEASURES: The reconstructed human lumbar spinal cord segment. RESULTS: Compared with serial images obtained from other image modalities, such as CT, MRI, and macrosections from the Visual Human Project database, the Luxol Fast Blue stained histological serial section images exhibited higher resolution and contrast between gray and white matter. Image processing and 3D reconstruction steps were semi-automatically performed with related software. The 3D reconstructed human lumbar cord segment was observed, cut, and measured on a PC. CONCLUSION: A semi-automatic computerized method, based on histological serial sections, is an effective way to 3D-reconstruct the human spinal cord.


20.
An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant metadata. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting, with equivalent accuracy and improved efficiency and consistency. Development of such an automated data analysis pipeline represents a significant step forward to achieve accurate and reproducible quantification of neuronal phenotypes in large scale or high-throughput zebrafish imaging studies.
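Neuron-loss quantification of this kind typically reduces to thresholding and connected-component counting. A toy 4-connected labeling pass, not the pipeline's actual implementation (which the abstract does not detail):

```python
def count_cells(image, thresh):
    """Count connected bright blobs (putative cell bodies) in a 2D
    intensity image: threshold, then flood-fill 4-connected components."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= thresh and not seen[y][x]:
                count += 1  # new component found; flood-fill it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and not seen[cy][cx] and image[cy][cx] >= thresh):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count
```

Comparing such automated counts against manual counts across conditions is exactly the accuracy/consistency check the study performs.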
