Similar Literature
20 similar documents found
1.
Robotic surgery, a minimally invasive surgical technique, has won broad acceptance among surgeons for its high precision. One of its greatest limitations, however, is that the surgeon loses the operative "feel" (i.e., haptic feedback), which increases the uncertainty and risk of procedures and thus constrains the further development of surgical robots. This article reviews haptic feedback systems for surgical robots in terms of their composition, key technologies and current state of research. Haptic feedback comprises force feedback and tactile feedback, and its realization depends on haptic sensing and rendering; the sensors, rendering devices and sensation-simulation techniques in common use are introduced, and the strengths and weaknesses of each are analysed. Progress on haptic feedback systems in surgical robots is surveyed and, building on previous work, an outlook on related research is offered.

2.
Development and Application of the BY-1 Acoustic Stimulator
The BY-1 acoustic stimulator is a sound-stimulus generator designed specifically for measuring auditory evoked potentials (electrocochleogram, brainstem potentials, cortical auditory evoked potentials, etc.). It can produce tone bursts, single-cycle tones, continuous pure tones, clicks, filtered clicks, shaped tone bursts and other signals. Tone-burst duration: 1-999 ms; interval: 1-9999 ms; rise/fall time: 0.5-15 ms; frequencies: 0.25, 0.5, 1, 2, 4 and 8 kHz; maximum output: 110 dB below 4 kHz and 100 dB at 8 kHz, delivered through a 20 W loudspeaker or through earphones. The instrument includes an attenuator in 5 dB steps and an automatic phase-inversion device. Using its signals, we measured auditory evoked potentials in normal subjects, in 143 patients, and in guinea pigs.

3.
Decomposition and Classification of Medical Virtual Reality Systems
吴一风  刘祖碧 《医学信息》2004,17(9):541-544
This article decomposes the technical content of medical virtual reality systems and classifies their applications. By mode of presentation, medical virtual reality divides into parameterized virtual reality and augmented reality; by mode of use, into interactive visual-scene virtual systems and interactive immersive virtual systems. The elements, components and processes of such systems are briefly described and analysed, and research on virtual medicine abroad is reviewed, covering the field's basic concepts, theories and progress. The emergence of virtual medical systems and the rise of associated theory have given form to virtual medicine, systematizing its theory and standardizing its hardware and software.

4.
A Microcomputer Analysis System for the Voices of Patients with Laryngeal Disease
The system comprises a PC/386 microcomputer, a purpose-built signal input unit and a 24-pin printer. The software consists of six functional modules: operating guidance, sampling, data management, spectral analysis, correlation analysis and sound spectrography. Analysis plots and parameter tables can be displayed on screen or printed. The colour spectrogram presents a three-dimensional image relating time, frequency and intensity, with a movable cursor for selecting any instant for detailed spectral and correlation analysis. Clinical application has shown that the system can provide information for the early diagnosis of laryngeal disease.

5.
In open surgery, surgeons rely chiefly on tactile feedback to manipulate instruments and judge tissue properties, but such feedback is difficult to achieve in minimally invasive surgery. This article reviews research at home and abroad on the tactile forces of surgical scissors, scalpels and minimally invasive instruments, and analyses the key technologies of tactile-force measurement and feedback in terms of experimental apparatus, sensor placement and measurement principles. With the spread of integrated/embedded sensors, modular design and micro-electromechanical systems, research on tactile-force measurement will also offer new approaches to testing the mechanical performance of non-invasive surgical instruments.

6.
To further investigate the nonlinear behaviour of the stapedius acoustic reflex, the ipsilateral reflex was modelled mathematically using methods from control theory. From the acoustic-admittance curves of the reflex, a differential equation of the reflex dynamics was established and the corresponding transfer function derived. The model describes the reflex's latency, rising phase and adaptation. Computer simulations agreed well with experimental data at multiple frequencies.
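The abstract does not give the model's actual equations, but the behaviour it describes (latency followed by a gradual rise in admittance) can be sketched with a generic linear system. The first-order transfer function, time constant and latency below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy import signal

# Hypothetical parameters: the paper's fitted values are not given in the abstract.
tau = 0.08        # time constant (s) governing the rise of the admittance change
latency = 0.025   # reflex latency (s), modelled as a pure transport delay
t = np.linspace(0, 1.0, 1000)

# First-order lag H(s) = 1 / (tau*s + 1); the latency is applied by
# delaying the step input that represents tone onset.
system = signal.TransferFunction([1.0], [tau, 1.0])
u = (t >= latency).astype(float)          # delayed step = tone onset
_, y, _ = signal.lsim(system, U=u, T=t)   # simulated admittance change
```

Adaptation during a sustained tone could be added as a parallel high-pass path; the sketch above reproduces only the latency and the rise.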

7.
This work studies the relationship between excitation-source characteristics and magnetoacoustic signals, exploring the correspondence between the frequency of the excitation applied to a sample and that of the acoustic signal it emits. A magnetoacoustic detection system was built; single-cycle sinusoidal pulses of varying amplitude and frequency served as the excitation, with copper wire as the sample, and the acoustic signals generated by electromagnetic excitation were recorded. The amplitude and frequency of the excitation current and of the acoustic signal were analysed and compared in the time-frequency domain using the short-time Fourier transform (STFT) with a sliding rectangular smoothing window. At a fixed excitation frequency, the output acoustic amplitude varied linearly with the excitation-current amplitude and the system function was consistent; across different excitation frequencies, the output spectra varied in different ways and the system functions differed markedly. The detection system is highly sensitive to frequency; extracting more information from magnetoacoustic signals will require a higher signal-to-noise ratio in the detection circuit and a wider acoustic-probe bandwidth.
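The STFT step described above can be sketched on a synthetic single-cycle sine pulse. The sampling rate, pulse frequency and window length below are assumptions for illustration, not the study's settings:

```python
import numpy as np
from scipy.signal import stft

fs = 1_000_000   # assumed sampling rate (Hz)
f0 = 50_000      # assumed frequency of the single-cycle sine excitation (Hz)
t = np.arange(0, 1e-3, 1 / fs)
# one full sine cycle, then silence
pulse = np.where(t < 1 / f0, np.sin(2 * np.pi * f0 * t), 0.0)

f, frames, Z = stft(pulse, fs=fs, nperseg=256)
# pick the time frame with the strongest response, then its peak frequency
frame = np.argmax(np.abs(Z).max(axis=0))
fpeak = f[np.argmax(np.abs(Z[:, frame]))]
```

Because the STFT is linear, doubling the excitation amplitude doubles the output magnitude, mirroring the linear amplitude relationship reported in the abstract.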

8.
This article reviews recent basic and theoretical research on spontaneous otoacoustic emissions at home and abroad, including detection methods, signal processing and signal characteristics. The discovery of spontaneous otoacoustic emissions is of major significance for basic medicine and for neurophysiological research.

9.
Haptics has found application in many fields, such as teleoperated robots, surgical robots, prosthetics, interactive entertainment interfaces and virtual reality, to strengthen human control over machines and virtual objects. Electrotactile stimulation is currently the principal means of tactile rendering: by varying the frequency, pulse width, intensity and polarity of constant-current or constant-voltage pulses, different tactile sensations can be evoked. In this study, an excitation function was first derived from a model of cutaneous nerves and examined in simulation. By varying the polarity and intensity of the pulses applied to a stimulation-electrode array, experimental paradigms were designed to target each of the three subcutaneous mechanoreceptor types (Meissner corpuscles, Merkel discs, Pacinian corpuscles), and psychophysical experiments were conducted. Ten subjects received electrotactile stimulation of the index finger. Positive pulses at different frequencies (10, 30, 70, 90 Hz) evoked graded sensations of vibration; negative pulses at different pulse widths (150, 200, 250, 300 μs) evoked graded sensations of pressure. Subjects judged the perceived intensity of the vibration and pressure. Statistical results show that the paradigm achieved an average recognition rate above 80% for tactile intensity levels; analysis of electrode layout and current amplitude can further identify the optimal stimulation pattern for the best tactile rendering.
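The stimulus dimensions described above (polarity, frequency, pulse width) can be sketched as simple rectangular current-pulse trains. This is a simplified monophasic waveform for illustration; the study's actual stimulator waveforms are not specified in the abstract:

```python
import numpy as np

def pulse_train(freq_hz, width_us, amp, polarity, duration_s=0.5, fs=100_000):
    """Monophasic rectangular current-pulse train (simplified sketch)."""
    n = int(duration_s * fs)
    x = np.zeros(n)
    period = int(fs / freq_hz)
    width = int(width_us * 1e-6 * fs)
    for start in range(0, n, period):
        x[start:start + width] = polarity * amp   # one pulse per period
    return x

# positive pulses, frequency varied -> graded vibration percepts
vib = pulse_train(freq_hz=30, width_us=200, amp=1.0, polarity=+1)
# negative pulses, width varied -> graded pressure percepts
press = pulse_train(freq_hz=30, width_us=300, amp=1.0, polarity=-1)
```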

10.
Studying how auditory input modulates the pupillary control system of the visual system helps clarify the interaction between these two systems. Using an infrared television pupillometer as the measuring instrument, we carried out a series of sound-pupil response experiments on healthy young subjects, established that acoustic stimulation evokes a transient pupil-dilation response, made a preliminary analysis of how the response characteristics depend on the stimulus parameters, and made an initial inquiry into the possible neural sites linking sound to the pupillary response.

11.
The stethoscope is a medical acoustic device which is used to auscultate internal body sounds, mainly the heart and lungs. A digital stethoscope overcomes the limitations of a conventional stethoscope, as the sound is transformed into electrical signals which can be amplified, stored, replayed and, more importantly, sent for an expert opinion, making it very useful in telemedicine. With the above in view, a low-cost digital stethoscope has been developed which is interfaceable with mobile communication devices. In this instrument, sounds from various locations can be captured with the help of an electret condenser microphone. The captured sound is filtered, amplified and processed digitally using an adaptive line enhancement technique to obtain audible and distinct heart sounds.
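The adaptive line enhancement step can be sketched with the classic LMS form: a filter predicts the correlated (quasi-periodic) part of the signal from a delayed copy, so the prediction keeps heart sounds and rejects broadband noise. This is a generic textbook sketch; the instrument's actual filter design and parameters are not given in the abstract:

```python
import numpy as np

def adaptive_line_enhancer(x, delay=5, taps=32, mu=0.01):
    """LMS adaptive line enhancer: y is the predicted (enhanced) component."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(delay + taps, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]   # delayed input tap vector
        y[n] = w @ ref                              # enhanced output
        e = x[n] - y[n]                             # broadband residual
        w += 2 * mu * e * ref                       # LMS weight update
    return y

# demo: a 30 Hz quasi-periodic component buried in noise, fs = 1 kHz (assumed)
rng = np.random.default_rng(0)
t = np.arange(4000) / 1000
clean = np.sin(2 * np.pi * 30 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
enhanced = adaptive_line_enhancer(noisy)
```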

12.
Two transient sounds, considered as a conditioner followed by a probe, were delivered successively from the same or different direction in virtual acoustic space (VAS) while recording from single neurons in primary auditory cortex (AI) of cats under general anesthesia. Typically, the response to the probe sound was progressively suppressed as the interval between the two sounds (ISI) was systematically reduced from 400 to 50 ms, and the sound-source directions were within the cell's virtual space receptive field (VSRF). Suppression of the cell's discharge could be accompanied by an increase in response latency. In some neurons, the joint response to two sounds delivered successively was summative or facilitative at ISIs below about 20 ms. These relationships held throughout the VSRF, including those directions on or near the cell's acoustic axis where sounds often elicit the strongest response. The strength of suppression varied systematically with the direction of the probe sound when the ISI was fixed and the conditioning sound arrived from the cell's acoustic axis. Consequently a VSRF defined by the response to the lagging probe sound was progressively reduced in size when ISIs were shortened from 400 to 50 ms. Although the presence of a previous sound reduced the size of the VSRF, for many of these VSRFs a systematic gradient of response latency was maintained. The maintenance of such a gradient may provide a mechanism by which directional acuity remains intact in an acoustic environment containing competing acoustic transients.

13.
Attentional modulation of human auditory cortex
Attention powerfully influences auditory perception, but little is understood about the mechanisms whereby attention sharpens responses to unattended sounds. We used high-resolution surface mapping techniques (using functional magnetic resonance imaging, fMRI) to examine activity in human auditory cortex during an intermodal selective attention task. Stimulus-dependent activations (SDAs), evoked by unattended sounds during demanding visual tasks, were maximal over mesial auditory cortex. They were tuned to sound frequency and location, and showed rapid adaptation to repeated sounds. Attention-related modulations (ARMs) were isolated as response enhancements that occurred when subjects performed pitch-discrimination tasks. In contrast to SDAs, ARMs were localized to lateral auditory cortex, showed broad frequency and location tuning, and increased in amplitude with sound repetition. The results suggest a functional dichotomy of auditory cortical fields: stimulus-determined mesial fields that faithfully transmit acoustic information, and attentionally labile lateral fields that analyze acoustic features of behaviorally relevant sounds.

14.
Auscultation of pulmonary sounds provides valuable clinical information but has been regarded as a tool of low diagnostic value due to the inherent subjectivity in the evaluation of these sounds. In this work, a Digital Signal Processor is used to design an instrument capable of acquiring, parameterizing and subsequently classifying lung sounds into two classes with an aim to evaluate them objectively in real time. The instrument operates on sound signal from a chest microphone and flow signal from a pneumotachograph. The classification is carried out separately on the 12 reference libraries (pathological and healthy) of six sub-phases of a full respiration cycle and the results are combined to arrive at a final decision. The k-nearest neighbour and minimum distance classifiers with different distance metrics have been implemented in the instrument. The instrument was tested in the clinical environment, attaining 96% accuracy in real-time classification.
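The k-nearest-neighbour step can be sketched as a majority vote among the nearest reference-library entries. Euclidean distance is used here for illustration; the instrument also implements minimum-distance classifiers and other metrics, and the toy feature vectors below are invented, not the study's parameters:

```python
import numpy as np

def knn_classify(sample, library, labels, k=3):
    """Majority vote among the k nearest library entries (Euclidean distance)."""
    dists = np.linalg.norm(library - sample, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# toy two-dimensional feature vectors (e.g. band energies);
# 0 = healthy reference library, 1 = pathological reference library
library = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
                    [0.80, 0.90], [0.90, 0.80], [0.85, 0.95]])
labels = np.array([0, 0, 0, 1, 1, 1])
```

In the instrument, a decision of this kind is made per sub-phase of the respiration cycle and the six results are then combined into the final classification.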

15.
The present article outlines the contribution of the mismatch negativity (MMN), and its magnetic equivalent MMNm, to our understanding of the perception of speech sounds in the human brain. MMN data indicate that each sound, both speech and nonspeech, develops its neural representation corresponding to the percept of this sound in the neurophysiological substrate of auditory sensory memory. The accuracy of this representation, determining the accuracy of the discrimination between different sounds, can be probed with MMN separately for any auditory feature (e.g., frequency or duration) or stimulus type such as phonemes. Furthermore, MMN data show that the perception of phonemes, and probably also of larger linguistic units (syllables and words), is based on language-specific phonetic traces developed in the posterior part of the left-hemisphere auditory cortex. These traces serve as recognition models for the corresponding speech sounds in listening to speech. MMN studies further suggest that these language-specific traces for the mother tongue develop during the first few months of life. Moreover, MMN can also index the development of such traces for a foreign language learned later in life. MMN data have also revealed the existence of such neuronal populations in the human brain that can encode acoustic invariances specific to each speech sound, which could explain correct speech perception irrespective of the acoustic variation between the different speakers and word context.

16.
The primary objective of the study was to investigate the effects of pneumothorax (PTX) on breath sounds and to evaluate their use for PTX diagnosis. The underlying hypothesis is that there are diagnostic breath sound changes with PTX. An animal model was created in which breath sounds of eight mongrel dogs were acquired and analysed for both normal and PTX states. The results suggested that pneumothorax was associated with a reduction in sound amplitude, a preferential decrease in high-frequency acoustic components and a reduction in sound amplitude variation during the respiration cycle (p<0.01 for each, using the Wilcoxon signed-rank test). Although the use of diminished sound amplitude for PTX diagnosis assumes availability of baseline measurements, this appears unnecessary for high-frequency reduction or sound amplitude changes over the respiratory cycle. Further studies are warranted to test the clinical feasibility of the method in humans.
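The paired comparison described above can be sketched with SciPy's Wilcoxon signed-rank test. The amplitude values below are invented for demonstration and are not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# illustrative paired sound amplitudes (arbitrary units) for 8 dogs,
# baseline state vs. pneumothorax state -- made-up numbers
baseline = np.array([1.00, 0.92, 1.10, 0.85, 1.05, 0.97, 1.12, 0.90])
ptx = baseline - np.array([0.30, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37])

stat, p = wilcoxon(baseline, ptx)   # paired, two-sided; exact for n = 8
```

With only eight pairs, the exact distribution is used; a uniform decrease across all animals, as sketched here, reaches significance at the p<0.01 level reported in the abstract.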

17.
Current breathing-flow estimation methods use tracheal breath sounds, but one step of the process, breath-phase (inspiration/expiration) detection, is done either by assuming alternating breath phases or by using a second acoustic channel of lung sounds. The alternating assumption is unreliable for long recordings, since non-breathing events such as apnea, swallowing or coughing change the alternating nature of the phases, while using lung-sound intensity requires the addition of a secondary channel and the associated labour. Hence, an automatic and accurate method for breath-phase detection using only tracheal sounds would be of great benefit. We present a method that uses several breath-sound parameters to differentiate between the two respiratory phases. The proposed method is novel and independent of flow level; it requires only one prior and one post breath-sound segment to identify the phase. It was tested on data from 93 healthy individuals without any history of pulmonary disease, breathing at four different flow levels. The most prominent features were the duration, volume and shape of the sound envelope. The method achieved 95.6% accuracy, with 95.5% sensitivity and 95.6% specificity, for breath-phase identification without assuming breath-phase alternation or using any other information.
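The three prominent features named above (duration, volume and envelope shape) can be sketched for a single breath-sound segment. The smoothing window and the choice of relative peak position as the shape cue are assumptions; the paper's full feature set is not given in the abstract:

```python
import numpy as np

def phase_features(segment, fs):
    """Duration, 'volume' (area under the amplitude envelope) and a shape
    cue (relative position of the envelope peak) for one segment."""
    env = np.abs(segment)
    win = max(1, int(0.05 * fs))                  # 50 ms moving-average smoothing
    env = np.convolve(env, np.ones(win) / win, mode="same")
    duration = len(segment) / fs                  # seconds
    volume = env.sum() / fs                       # discrete area under envelope
    peak_pos = np.argmax(env) / len(env)          # 0 = early peak, 1 = late peak
    return duration, volume, peak_pos

# synthetic segment whose envelope peaks late in the phase
seg = np.concatenate([np.linspace(0.0, 1.0, 800), np.linspace(1.0, 0.0, 200)])
duration, volume, peak_pos = phase_features(seg, fs=1000)
```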

18.
1. The sound frequency selectivities of single stapedius motoneurons were investigated in ketamine anesthetized and in decerebrate cats by recording from axons in the small nerve fascicles entering the stapedius muscle. 2. Stapedius motoneuron tuning curves (TCs) were very broad, similar to the tuning of the overall acoustic reflexes as determined by electromyographic recordings. The lowest thresholds were usually for sound frequencies between 1 and 2 kHz, although many TCs also had a second sensitive region in the 6- to 12-kHz range. The broad tuning of stapedius motoneurons implies that inputs derived from different cochlear frequency regions (which are narrowly tuned) must converge at a point central to the stapedius motoneuron outputs, possibly at the motoneuron somata. 3. There were only small differences in tuning among the four previously described groups of stapedius motoneurons categorized by sensitivity to ipsilateral and contralateral sound. The gradation in high-frequency versus low-frequency sensitivity across motoneurons suggests there are not distinct subgroups of stapedius motoneurons, based on their TCs. 4. The thresholds and shapes of stapedius motoneuron TCs support the hypothesis that the stapedius acoustic reflex is triggered by summed activity of low-spontaneous-rate auditory nerve fibers with both low and high characteristic frequencies (CFs). Excitation of high-CF auditory nerve fibers by sound in their TC "tails" is probably an important factor in eliciting the reflex. 5. In general, the most sensitive frequency for stapedius motoneurons is higher than the frequency at which stapedius contractions produce the greatest attenuation of middle ear transmission. We argue that this is true because the main function of the stapedius acoustic reflex is to reduce the masking of responses to high-frequency sounds produced by low-frequency sounds.

19.
The problems encountered in the automatic detection of cardiac sounds and murmurs are numerous. The phonocardiogram (PCG) is a complex signal produced by deterministic events such as the opening and closing of the heart valves, and by random phenomena such as blood-flow turbulence. In addition, background noise and the dependence of the PCG on the recording sites render automatic detection a difficult task. In the paper we present an iterative automatic detection algorithm based on a priori knowledge of the spectral and temporal characteristics of the first and second heart sounds, the valve opening clicks, and the systolic and diastolic murmurs. The algorithm uses estimates of the PCG envelope and noise level to identify iteratively the position and duration of the significant acoustic events contained in the PCG. The results indicate that it is particularly effective in detecting the second heart sound and the aortic component of the second heart sound in patients with Ionescu-Shiley aortic valve bioprostheses. It also has some potential for the detection of the first heart sound, the systolic murmur and the diastolic murmur.
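The envelope-and-noise-level idea can be sketched as a single-pass simplification of the iterative procedure: estimate the envelope and a robust noise floor, then mark segments where the envelope exceeds a multiple of that floor. The threshold factor, smoothing window and synthetic signal below are assumptions for illustration:

```python
import numpy as np

def detect_events(pcg, fs, k=3.0):
    """Return (start, end) index pairs where the smoothed envelope
    exceeds k times a median-based noise-floor estimate."""
    env = np.abs(pcg)
    win = max(1, int(0.02 * fs))                        # 20 ms smoothing
    env = np.convolve(env, np.ones(win) / win, mode="same")
    above = env > k * np.median(env)                    # median ~ noise floor
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, len(above) - 1]
    return edges.reshape(-1, 2)

# synthetic PCG: background noise plus two tone bursts standing in for S1, S2
fs = 2000
rng = np.random.default_rng(1)
t = np.arange(0, 1.0, 1 / fs)
pcg = 0.01 * rng.standard_normal(t.size)
for start, dur in [(0.10, 0.05), (0.40, 0.04)]:         # burst onsets/durations (s)
    idx = (t >= start) & (t < start + dur)
    pcg[idx] += 0.5 * np.sin(2 * np.pi * 100 * t[idx])

events = detect_events(pcg, fs)
```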

20.
The third heart sound is normally heard during auscultation of younger individuals but disappears with increasing age. However, this sound can appear in patients with heart failure and is thus of potential diagnostic use in these patients. Auscultation of the heart involves a high degree of subjectivity. Furthermore, the third heart sound has low amplitude and a low-frequency content compared with the first and second heart sounds, which makes it difficult for the human ear to detect this sound. It is our belief that it would be of great help to the physician to receive computer-based support through an intelligent stethoscope, to determine whether a third heart sound is present or not. A precise, accurate and low-cost instrument of this kind would potentially provide objective means for the detection of early heart failure, and could even be used in primary health care. In the first step, phonocardiograms from ten children, all known to have a third heart sound, were analysed to provide knowledge about the sound features without interference from pathological sounds. Using this knowledge, a tailored wavelet analysis procedure was developed to identify the third heart sound automatically, a technique that was shown to be superior to Fourier transform techniques. In the second step, the method was applied to phonocardiograms from heart patients known to have heart failure. The features of the third heart sound in children and in patients were shown to be similar. This resulted in a method for the automatic detection of third heart sounds. The method was able to detect third heart sounds effectively (90%), with a low false detection rate (3.7%), which supports its clinical use. The detection rate was almost equal in the children and patient groups. The method is therefore capable of detecting not only the distinct and clearly visible/audible third heart sounds found in children, but also third heart sounds in phonocardiograms from patients suffering from heart failure.
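The general shape of such a wavelet analysis can be sketched with a hand-rolled complex-Morlet scalogram: a low-frequency burst (standing in for a third heart sound) produces a localized magnitude peak at its time and frequency. This is a generic stand-in, not the paper's tailored procedure; the wavelet parameter `w`, the frequency grid and the synthetic burst are all assumptions:

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, w=6.0):
    """Magnitude of a complex-Morlet continuous wavelet transform,
    computed by direct convolution (no wavelet library required)."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                       # Gaussian width for centre freq f
        tt = np.arange(-4 * s, 4 * s, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tt) * np.exp(-tt**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                         # rough scale normalization
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

# synthetic low-frequency burst: 30 Hz, centred at 0.5 s, fs = 1 kHz (assumed)
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))
freqs = np.array([20, 30, 40, 60, 100])
scal = morlet_scalogram(x, fs, freqs)
```

The scalogram's joint time-frequency localization is what gives wavelet analysis its advantage over plain Fourier techniques for short, low-amplitude events such as the third heart sound.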


Copyright©北京勤云科技发展有限公司  京ICP备09084417号