Similar Documents
1.
Informed Consent     
There have been significant changes in the doctor-patient relationship with the impact of technology on day-to-day practice. More and more patients are aware of their rights and are keen to make free choices and decisions about their treatment. This helps them to choose the treatment of their choice from the options available and to select a physician of their choice. Doctors' decisions are being questioned as to their correctness, and there is a need to educate the patient on what one offers by way of treatment. In some procedures and types of treatment, the patient needs to be educated and informed of the merits and demerits of the treatment available. This will help the patient to make an appropriate choice and also to accept some adverse outcomes of treatment. Towards this end, all countries are looking afresh at the necessity of informed consent. Methods adopted by some countries are highlighted to help our physicians practice them in an appropriate way. A lot of remedial work needs to be done to minimize future litigation, as many doctors misunderstand their legal obligations and have not caught up with the change in judges' thinking.

Key Words: Doctor-patient relationship, Consumer ethics, Medical negligence

Clinical ethics teaches physicians a wide range of specific ethical issues, e.g. informed consent, truth telling, end-of-life decisions, advance directives (substitute decision making for incompetent patients) and increasing third-party constraints on the autonomy of both patients and physicians [1]. There were many changes between 1965 and 1970 on the subject of medical ethics and the physician-patient relationship. The traditional medical ethical principle required that the physician do what he thought would benefit the patient. The principle of mutual trust protected these decisions. The medical profession even refused to recognize the wishes of the patient and felt that it knew what was best for the patient – a paternalistic attitude. Physicians failed to accept that the patient is entitled to make his own free choice and decision – the principle of autonomy [2]. Patients had earlier placed their faith in the physician's higher education and the authority of his medical role. But of late, doubts have been raised about the quality of doctors' decisions, as those decisions vary depending upon factors such as:
  • a. Whether he is a long-trusted physician or an emergency room doctor seeing the patient for the first time,
  • b. Whether the patient is acutely ill or suffering from a chronic illness,
  • c. Whether the procedure/treatment is a one-time intervention or involves prolonged or repeated treatment,
  • d. Whether there are multiple/alternative choices or only one choice,
  • e. The patient's economic and social status, and the source of funds for treatment, and
  • f. The doctor-patient relationship [3].
The rights (autonomy) of the patient have deeply eroded the old model of the doctor-patient relationship. The emerging model prefers to treat a doctor as a (medical) service provider for hire, governed by negotiation and a commercial relationship. Consumer ethics has displaced physicians from their previous prominent status and allows patients to say, “Doctors are not Gods”. Patients now have the ability to select and dismiss their doctors. They have the resources and can express their preferences about making decisions on general or even specific treatments. They can ask questions, reject proposals, and often find allies in dealing not only with doctors but also hospitals, drawing on legal literature, support groups, counsellors, and social workers [3].

The clinical-ethical process of shared decision-making is mirrored by the legal doctrine of Informed Consent (IC). Informed consent is defined as voluntary acceptance by a competent patient of a plan for medical care after the physician adequately discloses the proposed plan, its risks and benefits, and alternative approaches. This requires a process of effective communication and education between the physician and patient [1]. Informed consent is a process with the elements of autonomous authorization, freedom from coercion or manipulation, decision-making capacity, disclosure to the patient, and comprehension [4].

In academic medicine, as the teaching of medical ethics became formalized in the 1970s, the moral principles of respect for autonomy, non-maleficence (the obligation to avoid causing harm), beneficence (the obligation to provide benefits and to balance benefits against risks), and justice assumed a central role. Thirty years later, IC is still written with the intent to protect the medical profession from lawsuits. Indeed, court views of IC also include a therapeutic privilege for physicians not to inform a patient who may be harmed by the disclosed information [5]. Over the past two decades a considerable volume of litigation in many countries has focused on the consent issue, and the doctrine of informed consent has assumed a significant role in the medical negligence debate.

2.
Male erectile dysfunction is common and frustrating after the age of forty years. Erectile dysfunction is a cause of misery, relationship difficulties, and significantly reduced quality of life. Sildenafil citrate (Viagra) has shown promising results in recently published clinical trials. Sildenafil is a potent and competitive inhibitor of cGMP-specific phosphodiesterase-5, the predominant isoenzyme in the human corpus cavernosum. It is effective in erectile dysfunction of diverse origin; however, it requires a patent vascular system to be effective. It is not effective in patients with endocrinal impotence, loss of libido, premature ejaculation or infertility. Its main adverse effects are headache, flushing, dyspepsia, diarrhoea, nasal congestion, indigestion, visual disturbances, dizziness and rash. Ventricular tachycardia and acute myocardial infarction have been reported in patients with ischaemic heart disease after consumption of sildenafil. Six deaths have been reported in patients taking nitrates. In India it is likely to be prescribed by a primary care physician, without complete evaluation of the patient, on a complaint of impotence. Hence the ethical question of who should prescribe this drug should be addressed by the medical fraternity and proper guidelines formulated to avoid misuse of sildenafil. Phosphodiesterase is distributed in nerves, the central nervous system, and the systemic vasculature; hence the long-term effects of the drug on these tissues have to be ascertained. It should be made mandatory to report all adverse drug reactions to ADR monitoring centres. Sildenafil is a wonder for those who require it, but it has potentially dangerous adverse effects and drug interactions and hence is not a wonder pill for all kinds of impotence.

KEY WORDS: Erectile dysfunction, Impotence, Phosphodiesterase inhibitor, Sildenafil citrate

Male erectile dysfunction has been defined as the persistent inability to attain and/or maintain penile erection sufficient for satisfactory sexual performance [1]. The prevalence of erectile dysfunction ranges from 52% in men aged 40-70 years to greater than 95% in men over 70 [2]. Improved understanding of the peripheral and central mechanisms of erection [3] has resulted in trials of various drugs [4, 5, 6, 7]. Dr Simon Campbell, while working on newer molecules for angina, discovered sildenafil citrate, which has shown promising results in recently published clinical trials [8, 9, 10]. The drug was approved in the USA by the FDA on 27 March 1998 [11] and a licence was granted for sale in Europe by the European Medicines Evaluation Agency in the third week of September 1998 [12]. In India various companies are exploring the possibilities of its sale [13].

3.
Pemphigus vulgaris is perhaps the most formidable disease encountered by dermatologists. In the days before steroid therapy the mortality rate was 95 per cent, death occurring usually within 14 months. The causes of death were septicaemia, starvation and a toxic state. Corticosteroids, immunosuppressants and adjuvant therapy have reduced the mortality to 10-40 per cent, with the causes of death being uncontrolled pemphigus, complications of corticosteroid and immunosuppressant therapy, septicaemia and thromboembolism. Elderly patients and patients with extensive lesions have a higher mortality rate. Prognosis has further improved with intensive care, adequate fluid replacement, nutritional support, and a coherent antibacterial policy along with aggressive corticosteroid therapy and immunosuppressants. Plasmapheresis has been used in patients who fail to respond to conventional management. Extracorporeal photophoresis has been reported to be effective in patients with 'treatment-resistant' pemphigus vulgaris.

KEY WORDS: Adjuvant therapy, Corticosteroids, Immunosuppressants, Pemphigus vulgaris, Photophoresis, Plasmapheresis

Pemphigus vulgaris (PV) is an autoimmune blistering disease of the skin and mucous membranes characterized histologically by in vivo bound and circulating IgG autoantibody directed against the cell surface of keratinocytes. The PV antigen [1] is a 130-kD glycoprotein named desmoglein 3. In the days before corticosteroid therapy, the mortality rate of PV was 95 per cent, death occurring usually within 14 months [2]. Corticosteroid therapy and other symptomatic measures have now reduced the mortality rate to 10-40 per cent [3].

Symptomatic management of PV must focus on bacteriological investigations, antibacterial therapy, nutritional support and electrolyte equilibrium. Major fluid losses are rarely a problem in this disease. Profound hypoproteinaemia and hypoalbuminaemia due to loss of proteins from blisters promote thromboembolism and impaired defence against infection. Poor nutrition, old age, oral lesions, escape of protein from the blisters and hypercatabolism resulting from corticosteroids are frequently underestimated [4].

All water and electrolyte needs of patients with PV should be met through a nasogastric silicone tube; venous access is used only a few hours a day for a discontinuous supply of macromolecules. Intravenous fluids are supplemented with potassium phosphate in order to correct hypophosphataemia. In addition, all patients are given 1500 ml of nasogastric feeding during the first 24 hours. On the following days oral supplies are progressively increased and intravenous fluids decreased [5].

Aggressive nutritional support [6] is needed to minimize protein losses and to promote tissue synthesis during the healing of cutaneous lesions. The diet should be high in protein, with 2-3 g/kg body weight of protein daily in adults. A nasogastric feeding tube is required in most patients, as mucosal erosions impair oral feeding. In patients who are able to eat, discontinuous tube feeding supplements are administered during the night. In all cases enteral alimentation is preferable to parenteral therapy considering the risk of venous line contamination.

Patients with extensive skin lesions usually have fever and shivering, even in the absence of infection. Interleukin 1, which is produced by epidermal cells, plays a key role in inducing fever [7].
If patients have a high fever, any attempt to decrease their central temperature by a cooler environment will result in added energy expenditure and be an additional threat to survival. On the other hand, lowering the central body temperature with antipyretics will contribute to the reduction of cutaneous heat losses and improve the cardiac index [6]. The environmental temperature should be raised to 30-32°C. The temperature of antiseptic baths should be carefully monitored and set to 35-38°C. Infrared lamps and an air-fluidized bed are of great help in warming the patient [6].

Damaged skin and exudates support the growth of a wide spectrum of micro-organisms. At the beginning of the hospital stay Staph. aureus is the main suspect, while later Pseudomonas sp. and Enterobacteriaceae are most likely responsible for severe infection. Routine prophylactic systemic antibiotics do not prevent infection but rather lead to the emergence of resistant bacterial strains and of fungi [3]. There is no ideal antibiotic, and the choice should be based on antibiotic sensitivity testing of the bacterial strains isolated from the skin lesions.

Patients are bathed once or twice a day in 0.05 per cent aqueous chlorhexidine or 1:10000 potassium permanganate solution. In between, the skin lesions are painted every 4 hours with 0.05 per cent aqueous silver nitrate or 0.05 per cent aqueous chlorhexidine. When oozing from the skin lesions subsides, the lesions are treated with 1 per cent silver sulphadiazine cream or sofratulle dressing. The efficiency of antiseptic therapy is monitored by bacterial sampling of several sites of altered skin every two days [6].

Intralesional steroids have been used for the treatment of localized or recalcitrant localized blistering lesions. The lesions are injected with 0.05 to 1 ml of triamcinolone acetonide, 5 to 10 mg/ml for skin lesions and 10 to 20 mg/ml for oral lesions, per site every 1-2 weeks until healing occurs. This modality may be useful to treat new lesions in patients whose systemic therapy is being tapered off. It hastens the resolution of individual lesions without increasing the dosage of steroids.

Patients with mild disease may respond to as little as 20 mg per day of prednisolone, while those with severe and extensive involvement tend to require higher dosages [8], such as 60 to 80 mg per day (1 to 1.5 mg/kg/day). In patients whose disease fails to respond to the initial dose of prednisolone, the dosage is increased by 50 per cent every four to five days until initial control is reached. In patients whose skin continues to blister while they are receiving relatively high dosages of prednisolone, 120 mg/day in split doses (e.g. 80 mg in the morning and 40 mg in the evening) may be tried for better control. Once new blistering has ceased, the dosage of prednisolone is maintained until the majority of the erosions have healed. Once the majority of lesions have healed, slow judicious tapering of the prednisolone dosage can begin [6].

Long-term side effects associated with corticosteroid use include osteoporosis, peptic ulcer disease, aseptic vascular necrosis, cataract formation and unmasking of diabetes mellitus [6]. Current agents used to mitigate osteoporosis include vitamin D2 400 IU per day and calcium carbonate 1 g per day. Patients with a history of renal insufficiency or renal stones are not candidates for calcium and vitamin D2 supplementation. Drugs currently under investigation to prevent osteoporosis include the bisphosphonates and calcitonin.
Intranasal calcitonin is highly promising as an effective, well tolerated agent to prevent bone loss in susceptible persons. Strategies to minimise these side effects include alternate-day corticosteroid therapy and the use of adjuvants. Concomitant use of immunosuppressives or other adjuvants is advocated only if there are relative contraindications to the use of steroids, development of serious side effects due to steroids, or if a reduction in their dose is not possible because of repeated exacerbations in disease activity.

Nicotinamide 1.5 g per day and tetracycline 2 g per day have been reported to control PV in an uncontrolled trial [10]. A few patients with mild disease have improved with gold, but concomitant steroid therapy is required [11]. Auranofin, an oral formulation of gold, is less toxic than parenteral formulations [12]. Dapsone, 100-300 mg per day, alone is effective in pemphigus erythematosus and pemphigus foliaceus [13] and may also be used in PV as a steroid-sparing agent [9].

Immunosuppressants [14] – azathioprine [15], cyclophosphamide [16] and cyclosporine [17] – are now used in the management of PV. Patients do not respond to immunosuppressants until six to eight weeks after initiating therapy, so adequate dosages of steroids should be maintained [15, 16, 17]. Azathioprine 1 to 3 mg/kg/day is effective as an adjuvant in managing patients with PV [15]. Among the major toxic effects associated with its use are bone marrow suppression, including leukopenia, anemia and thrombocytopenia, and those secondary to immunosuppression, including atypical infections and the enhanced development of malignancies, particularly lymphoma. Cyclophosphamide 2 to 3 mg/kg/day has been reported to be effective both as a first-line adjuvant and in the treatment of those whose disease has previously failed to respond to azathioprine [16]. Patients can be well hydrated before therapy and the risk of hemorrhagic cystitis significantly lessened [16]. Cyclosporine [17, 18] has been used in small series of PV patients, with a usual dosage range between 5 and 10 mg/kg/day. Toxic effects associated with cyclosporine include hypertension and nephrotoxicity [17].

Pulse steroid therapy has been used in PV patients refractory to oral corticosteroids and immunosuppressants [19]. It consists of giving a mega-dose of intravenous corticosteroids, 1 g methylprednisolone 3-12 hourly, repeated on 3 or 5 more consecutive days. Complications associated with pulse steroids include sepsis and electrolyte imbalances that have been associated with fatal arrhythmias [19].

Dexamethasone-cyclophosphamide pulse therapy [20] has been used in PV cases. It involves the intravenous administration of 100 mg dexamethasone with 500 mg of cyclophosphamide in 500 ml of 5 per cent glucose solution, over 1-2 hours on day 1, followed by dexamethasone alone on the next 2 days [21]. Such pulses are repeated every month. On the remaining days cyclophosphamide 50 mg per day is administered orally. The major advantages are the quick healing of lesions and the absence of long-term side effects of dexamethasone [23].

In patients whose disease fails to respond to high-dosage corticosteroids and immunosuppressants, plasmapheresis has been used [22]. Plasmapheresis rapidly depletes the level of circulating autoantibodies in patients with high-titre circulating autoantibodies. However, patients may experience a rebound phenomenon with increased autoantibody production and clinical exacerbation [2, 3].
Hence, the immunosuppressive drug cyclophosphamide is concomitantly administered to suppress antibody formation. Side effects of this therapy include sepsis, hypotension, and depletion of clotting factors. Extracorporeal photophoresis [24] has been reported to be effective in a limited series of patients with treatment-resistant PV. Patients were treated with two treatments per week every two to four weeks, with ingestion of 8-methoxypsoralen followed by passage of the patient's blood through an apparatus that exposes it to UVA irradiation. Side effects of extracorporeal photophoresis include post-infusion fever, hypotension, sepsis, thrombocytopenia, and elevated liver function test results. The limited availability of this expensive procedure may make this therapeutic option impractical for many patients [25].
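The prednisolone regimen described above (a starting dose of roughly 1-1.5 mg/kg/day, escalated by 50 per cent every four to five days until blistering is controlled) lends itself to a simple worked illustration. The following Python sketch is purely illustrative: the function names, the 240 mg/day ceiling and the patient weight in the example are assumptions, not part of the source article, and the sketch is no substitute for clinical judgement.

```python
def starting_dose_mg(weight_kg: float, severe: bool) -> float:
    """Approximate initial prednisolone dose: ~1 mg/kg/day for moderate
    disease, ~1.5 mg/kg/day for severe or extensive involvement."""
    per_kg = 1.5 if severe else 1.0
    return weight_kg * per_kg


def escalation_schedule(initial_mg: float, steps: int, ceiling_mg: float = 240.0):
    """Escalate the daily dose by 50% per step (every 4-5 days in the text)
    up to an assumed ceiling; returns the list of daily doses."""
    doses = [initial_mg]
    for _ in range(steps):
        doses.append(min(doses[-1] * 1.5, ceiling_mg))
    return doses


if __name__ == "__main__":
    # Hypothetical 60 kg adult with severe disease.
    d0 = starting_dose_mg(60, severe=True)  # 90 mg/day
    print([round(d) for d in escalation_schedule(d0, steps=3)])
    # [90, 135, 202, 240]
```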

4.
Immunophenotyping of leukaemias is now well established and is invaluable for proper case management [1]. The most useful information for the management of cases of acute lymphocytic leukaemia (ALL) is provided by the detection of the CD10 marker on the leukaemic cell. CD10-positive ALL has long been associated with a favourable prognosis as compared to CD10-negative ALL [1, 2]. As immunophenotyping is resource intensive, monoclonal antibodies (MoAbs) are not in widespread use in our country; the expenses involved make this procedure prohibitive in most institutions. For this reason, a limited panel of MoAbs has been used in this study, with a special emphasis on the CD10 marker. A total of 25 cases of ALL were studied, of which 17 (68%) were found to be positive for the CD10 marker. These cases were associated with a favourable prognosis as compared to the CD10-negative group. Although the diagnosis of ALL, and of leukaemias in general, is essentially based on the study of Romanowsky-stained smears [3, 4], the additional information provided by cell surface marker studies results in better case management [1].

KEY WORDS: Common ALL Antigen, Immunophenotyping, Surface markers

5.
Surgery of the skull base has evolved over the past 100 years. This anatomical area has been approached by neurosurgeons, otologists, maxillofacial surgeons and plastic surgeons from different angles. Presently, the combined skills of these surgeons are utilized in treating lesions of this area once considered a 'bony no man's land'. Modern microsurgical techniques are based on the principle that removal of adequate bone from the cranial base can provide sufficient access without the necessity to retract dura. Accurate preoperative assessment by imaging, the use of microsurgical techniques, preservation of vital structures such as nerves by intraoperative monitoring, and modern anaesthetic and postoperative management have all contributed to the reduction in mortality and morbidity to acceptable levels. In the future, with refinements in imaging, stereotactic radiosurgery and chemotherapy, the above management protocol will be tailored to suit each individual patient and decided by a team of experts.

KEY WORDS: Cranial fossa posterior, Craniotomy, Neuroma acoustic, Neurosurgery, Skull base, Surgery

Surgery of the skull base was always considered to be difficult due to the complex anatomy of this region. The involvement of many vital structures in diseases affecting this area, as well as the possibility of damage to these structures, prevented surgeons from operating, and the area was labelled a surgical 'no man's land'. Surgery of the skull base is an area where the combined skills of the neurosurgeon, otorhinolaryngologist and the reconstructive surgeon may be necessary to treat a patient adequately and safely [1]. Skull base surgery has undergone extraordinary development thanks to the pioneering effort of eminent specialists who had the foresight, single-mindedness and dedication to tackle the almost insurmountable problems confronting them at the time. It is worth remembering pioneers such as Sir Charles Ballance who, in 1894, was the first to perform excision of an acoustic neuroma, and Cushing and Dandy, who introduced silver haemostatic clips and electrocautery, which helped in reducing the operative mortality to around 20 per cent [2, 3]. Despite this, leading neurosurgeons like Pennybacker and Cairns felt that it might be unwise to operate on patients with skull base lesions because the patient might end up with more disability after surgery than before [4].

The contemporary era of skull base surgery began in the early 1960s when William House, considered the father of neuro-otology, decided to challenge the status quo. Thanks to refinements in audiological tests and radiological investigations he was able to diagnose these lesions early. He felt that a transtemporal approach to the posterior cranial fossa would be considerably safer and introduced two approaches – the middle fossa and translabyrinthine approaches – to the cerebellopontine angle. He also introduced the microscope into neurosurgery, which resulted in safe operations in areas previously considered to be hazardous. Thus, House succeeded in reducing the operative mortality for acoustic tumour surgery to less than 1 per cent [5].

The various approaches to the skull base can be considered as approaches to the anterior, middle and posterior cranial skull base. Obwegeser and Tessier were responsible for expanding the extent of anterior extracranial access. Modern cranio-maxillofacial surgical techniques were initially applied to the correction of major congenital skeletal deformities in 1957 by Dr Paul Tessier [6].
Later, he applied these techniques to correct late traumatic deformities. He demonstrated that performing selective osteotomies at specific points in the skull and repositioning the cranial bones was possible with excellent cosmetic and functional results. It was a decade later that these principles were applied to the resection of tumours of the orbit and paranasal sinuses extending into the anterior skull base. The transsphenoidal approach to the pituitary fossa for removal of pituitary tumours has evolved rapidly and is now the procedure of choice for microadenomas, its greatest advantage being the reduced morbidity when compared to the transfrontal approach [6, 7, 8, 9]. Large tumours of the pituitary are resected using a combined transfrontal and transsphenoidal approach. At present anterior skull base approaches are used to treat fractures of the anterior cranial base, CSF rhinorrhoea, exophthalmos, inflammatory diseases of the frontal, ethmoidal and sphenoidal sinuses extending intracranially, and tumours of the paranasal sinuses, as in craniofacial resection.

The middle cranial fossa approach described by William House is used for removal of small intracanalicular acoustic neuromas less than 1 cm in size [5]. It is also used in performing vestibular neurectomy for Meniere's disease. The infratemporal fossa was explored by Ugo Fisch, another famous neuro-otologist. Using microsurgical techniques, he was guided by the principle that removal of adequate amounts of bone from the skull base could provide sufficient access without the necessity of retracting the dura. The facial nerve, which traverses the operative field, could be repositioned, and adequate exposure of the infratemporal course of the carotid artery could be obtained. The infratemporal fossa approach of Fisch is of three types – Types A, B and C. These give access to the entire lateral skull base, from the nasopharyngeal roof and clivus to the jugular bulb, the internal carotid artery and the last 5 cranial nerves. Type A provides access along the infratemporal course of the internal carotid artery and to the jugular foramen. Type B and C approaches provide anterior exposure of the lateral skull base, especially the clivus and nasopharyngeal roof. Type A and B approaches are useful in subtotal petrosectomy for extensive cholesteatoma of the petrous temporal bone, removal of glomus jugulare tumours, aneurysms of the carotid artery, high parapharyngeal tumours and malignant tumours of the middle ear cleft involving the petrous temporal bone. The Type C approach provides access to the roof of the nasopharynx and the clivus for removal of tumours such as chordomas and recurrent nasopharyngeal carcinomas. These approaches, though time consuming, are safe as all the vital structures are identified and preserved, while morbidity is reduced due to minimal retraction of the brain [10, 11, 12, 13].

The posterior cranial fossa was traditionally explored by the suboccipital route. This approach, though fast and relatively easy, was associated with increased morbidity due to cerebellar retraction and damage to the facial and other lower cranial nerves. To overcome this problem House introduced the translabyrinthine approach, where the morbidity was much less [5]. Modifications of this basic approach continue to evolve. Fisch described the transotic approach, which gives a wider exposure when combined with the translabyrinthine approach [14]. Retrosigmoid and retromastoid approaches give wider exposure.
As a result, bigger tumours, up to 4 cm in size, can be resected in toto while preserving vital structures such as the facial nerve. Efforts to preserve cranial nerves were initiated by Krause in 1898, who was the first to identify the facial nerve by faradic stimulation. Routine intraoperative nerve monitoring is now standard practice during skull base procedures [15, 16, 17]. The contemporary monitoring systems are generally based on evoked electromyographic methods. This has resulted in better localization and preservation of function of the cranial nerves [18, 19, 20].

Advances in allied specialities have also contributed to the evolution of skull base surgery and resulted in improved treatment outcomes besides reducing mortality and morbidity. Improvements in diagnostic imaging, especially the invention of the computerized tomographic scan, the high-resolution computerized tomographic scan and magnetic resonance imaging, have enabled clinicians to diagnose skull base lesions early and devise simpler and less destructive operations to excise them [21, 22]. The development of interventional radiology, especially digital subtraction angiography with superselective embolization of the tumour vasculature, is invaluable while operating on highly vascular tumours such as juvenile nasopharyngeal angiofibromas. Anaesthetic techniques have also improved, notably the technique of hypotensive anaesthesia and the monitoring of vital parameters during these long procedures. This has resulted in minimal intraoperative mortality and a reduction in postoperative morbidity.

The development of stereotactic radiosurgery for the treatment of tumours of the skull base was viewed as a threat to skull base surgery. However, the former is still in its infancy and the initial results, though encouraging, have not stood the test of time; the delayed complications are also many. Instead of replacing skull base surgery it may prove to be an alternative in poor-risk patients unable to withstand surgery and in patients with bilateral skull base lesions [23, 24]. In future, the management of skull base lesions will be tailored to suit the individual patient and the decision will be taken by a multidisciplinary team of skull base surgeons and their associates, such as radiotherapists and chemotherapists. With this approach it will be possible to deliver the best treatment to the patient with minimal morbidity. Truly an exciting field for those who are dedicated!

6.
Kala azar continues to be a medical problem in India, and with the increasing incidence of HIV infection it is likely that kala azar will be encountered more frequently and in its atypical forms. To aid diagnosis, several immunological tests are now available, and they are more sensitive and specific than the aldehyde test. Like many other diseases today, the treatment of kala azar is hampered by drug resistance. Newer drugs are available, and so are new delivery systems. Kala azar frequently develops in the HIV-infected person before the development of AIDS. The presentation is atypical, and leishmanial species other than L. donovani may also be the infecting agents. A combination of sandfly control, detection and treatment of patients, and prevention of drug resistance continues to be the ideal approach for the control of the disease.

KEY WORDS: Kala azar, Leishmaniasis, HIV

WHO estimates that more than 200 million people in the world are exposed to leishmanial parasites and more than 500,000 people develop clinical visceral leishmaniasis each year [1]. Major epidemics have occurred in the eastern part of our country and some other parts of the world [2]. Large-scale drug failure was the outstanding feature of the Indian epidemic of 1991, leading to increased morbidity and mortality. Certain clinical manifestations, like lymphadenopathy, which were not seen in Indian kala azar earlier, have been reported from Bihar and West Bengal [3]. With the increasing incidence of HIV infection, more atypical presentations are being noted [1]. Due to the development of widespread resistance to conventional drugs, several new drugs and other modalities of treatment have been developed, and the conventional drugs are being tried in modified dosages with variable success.

7.
Medical error reduction is an international issue, as is the implementation of patient care information systems (PCISs) as a potential means to achieving it. As researchers conducting separate studies in the United States, The Netherlands, and Australia, using similar qualitative methods to investigate implementing PCISs, the authors have encountered many instances in which PCIS applications seem to foster errors rather than reduce their likelihood. The authors describe the kinds of silent errors they have witnessed and, from their different social science perspectives (information science, sociology, and cognitive science), they interpret the nature of these errors. The errors fall into two main categories: those in the process of entering and retrieving information, and those in the communication and coordination process that the PCIS is supposed to support. The authors believe that with a heightened awareness of these issues, informaticians can educate, design systems, implement, and conduct research in such a way that they might be able to avoid the unintended consequences of these subtle silent errors.

Medical error reduction is an international issue. The Institute of Medicine's report on medical errors1 dramatically called attention to dangers inherent in the U.S. medical care system that might cause up to 98,000 deaths in hospitals and cost approximately $38 billion per year. In the United Kingdom, the chief medical officer of the newly established National Patient Safety Agency estimates that “850,000 incidents and errors occur in the NHS each year.”2 In The Netherlands, the exact implications of the U.S. figures for the Dutch health care scene are much debated. There as well, however, patient safety is on its way to becoming a political priority. Medication errors alone have been estimated to cause 80,000 hospital admissions per year in Australia, costing $350 million.3

In much of the literature on patient safety, patient care information systems (PCISs) are lauded as one of the core building blocks for a safer health care system.4 PCISs are broadly defined here as applications that support the health care process by allowing health care professionals or patients direct access to order entry systems, medical record systems, radiology information systems, patient information systems, and so on. With fully accessible and integrated electronic patient records, and with instant access to up-to-date medical knowledge, faulty decision making resulting from a lack of information can be significantly reduced.5 Likewise, computerized provider order entry (CPOE) systems and automated reminder systems can reduce errors by eliminating illegible orders, improving communication, improving the tracking of orders, checking for inappropriate orders, and reminding professionals of actions to be undertaken. In this way, these systems can contribute to preventing under-, over-, or misuse of diagnostic or therapeutic interventions.6,7,8 Among the broad array of health informatics applications, CPOE systems, and especially medication systems, have received the most attention.9,10,11,12

PCISs are complicated technologies, often encompassing millions of lines of code written by many different individuals. The interaction space13 within which clinicians carry out their work can also be immensely complex, because individuals can execute their tasks by communicating across rich social networks.
When such technologies become an integral part of health care work practices, we are confronted with a large sociotechnical system in which many behaviors emerge out of the sociotechnical coupling, and the behavior of the overall system in any new situation can never be fully predicted from the individual social or technical components.13,14,15,16,17

It is not surprising, therefore, that authors have started to describe some of the unintended consequences that the implementation of PCISs can trigger.18 For instance, professionals could trust the decision support suggested by the seemingly objective computer more than is actually called for.15,19 Also, PCISs could impose additional work tasks on already heavily burdened professionals,20,21 and the tasks are often clerical and therefore economically inefficient.17 They can upset smooth working relations and communication routines.13,22 Also, given their complexity, PCISs could themselves contain design flaws “that generate specific hazards and require vigilance to detect.”23,24 As a consequence, PCISs might not be as successful in preventing errors as is generally hoped. Worse still, PCISs could actually generate new errors.25(p.511),26,27

It is obvious that PCISs will ultimately be a necessary component of any high-quality health care delivery system. Yet, in our research in three different countries, we have each encountered many instances in which PCIS applications seemed to foster errors rather than reduce their likelihood. In health care practices in the United States, Europe, and Australia alike, we have seen situations in which the system of people, technologies, organizational routines, and regulations that constitutes any health care practice seemed to be weakened rather than strengthened by the introduction of the PCIS application. In other words, we frequently observed instances in which the intended strengthening of one link in the chain of care actually leads unwittingly to a deletion or weakening of others.

We argue that many of these errors are the result of highly specific failures in PCIS design and/or implementation. We do not focus on errors that are the result of faulty programming or other technical dysfunctions. Hardware problems and software bugs are more common than they should be, especially in a high-risk field such as medicine. However, these problems are well known and can theoretically be dealt with through testing before implementation. Similarly, we do not discuss errors that are the result of obvious individual or organizational dysfunctioning such as a physician refusing to seek information in the computer system “because that is not his task,” or a health care delivery organization cutting training programs for a new PCIS for budgetary reasons.

We do focus on those often latent or silent errors that are the result of a mismatch between the functioning of the PCIS and the real-life demands of health care work. Such errors are not easily found by a technical analysis of the PCIS design, or even suspected after the first encounter with the system in use. They can only emerge when the technical system is embedded into a working organization and can vary from one organization to the next. Yet, in failing to take seriously some by now well-recognized features of health care work, some PCISs are designed or implemented in such a way that error can arguably be expected to result. Only when thoughtful consideration is given to these issues, we argue, will PCISs be able to fulfill their promise.

8.
The tuberculosis situation in the country is a matter of great concern, since the disease has not been contained. The problem has been further compounded by the emerging problem of HIV infection in the country, together with the development of multidrug-resistant tubercle bacilli. There is, therefore, a need to change our National Tuberculosis Control Strategy without disturbing the basic infrastructure of the National Tuberculosis Programme. Changes are suggested such as reinforcement of the District Tuberculosis Centre, HIV and drug-sensitivity testing, giving up long-term chemotherapy, revision of the BCG vaccination and chemoprophylaxis policies, and involvement of non-governmental organisations and general practitioners.

KEY WORDS: Human immunodeficiency virus, Tuberculosis

Despite the existence of the National Tuberculosis Programme for over three decades, the tuberculosis situation in the country is grim. Tuberculosis is responsible for 500,000 deaths annually in India [1]. According to one estimate, there were 10 million radiologically active cases of pulmonary tuberculosis in India in 1981, of which 2.5 million were infectious, and as many as 50% of people in India are infected by tubercle bacilli although they may appear healthy [2]. As per the estimates, there are more than 4 million people, mostly in developing countries, who have been infected with both HIV and tuberculosis [3]. HIV infection, by progressively impairing cell-mediated immunity, appears to be the highest risk factor for reactivation of tuberculosis into active disease [4]. It is a well known fact that HIV infection is spreading unabated throughout the length and breadth of the country and, as per the estimates, by 2000 AD 400,000 HIV infections will have occurred in the country. Thus tuberculosis can be considered the most important candidate opportunistic infection in HIV-infected individuals in the country. There is every possibility of the problem being compounded by infections or reinfections with multidrug-resistant strains of tubercle bacilli [5]. As it is, the problem of drug-resistant strains throughout the country remains unmapped, and any emergence of such new strains may lead to their spread both among HIV-infected persons and the general population. In such a complicated scenario it has become imperative to take a second look at our National Tuberculosis Programme and make changes consistent with the emerging problems. Suggested changes are as follows:

9.
Objective: To determine the availability of inpatient computerized physician order entry in U.S. hospitals and the degree to which physicians are using it.Design: Combined mail and telephone survey of 964 randomly selected hospitals, contrasting 2002 data and results of a survey conducted in 1997.Measurements: Availability: computerized order entry has been installed and is available for use by physicians; inducement: the degree to which use of computers to enter orders is required of physicians; participation: the proportion of physicians at an institution who enter orders by computer; and saturation: the proportion of total orders at an institution entered by a physician using a computer.Results: The response rate was 65%. Computerized order entry was not available to physicians at 524 (83.7%) of 626 hospitals responding, whereas 60 (9.6%) reported complete availability and 41 (6.5%) reported partial availability. Of 91 hospitals providing data about inducement/requirement to use the system, it was optional at 31 (34.1%), encouraged at 18 (19.8%), and required at 42 (46.2%). At 36 hospitals (45.6%), more than 90% of physicians on staff use the system, whereas six (7.6%) reported 51–90% participation and 37 (46.8%) reported participation by fewer than half of physicians. Saturation was bimodal, with 25 (35%) hospitals reporting that more than 90% of all orders are entered by physicians using a computer and 20 (28.2%) reporting that less than 10% of all orders are entered this way.Conclusion: Despite increasing consensus about the desirability of computerized physician order entry (CPOE) use, these data indicate that only 9.6% of U.S. hospitals presently have CPOE completely available. In those hospitals that have CPOE, its use is frequently required. In approximately half of those hospitals, more than 90% of physicians use CPOE; in one-third of them, more than 90% of orders are entered via CPOE.In an editorial in American Medical News, legibility, remote access, and the potential “to make users better doctors” were described as the upsides of computerized physician order entry (CPOE) use, but the downsides of typing, system rigidity, and time were cited as making implementation of CPOE systems a highly controversial topic.1 We define CPOE as a process that allows a physician to use a computer to directly enter medical orders. Physicians are not the only members of the health care team who might enter orders into a computerized system, but they are the focus of this particular study. Hospitals are being encouraged by outside forces to implement CPOE in an effort to reduce medical errors. We conducted a survey in 1997, with results published in 1998,2 to discover what percentage of U.S. hospitals had CPOE at that time and to determine how heavily used it was in hospitals that had it. We found that one-third of hospitals claimed to have CPOE available but that it was little used at these sites. 
An earlier survey with a small response rate had found that 20% of surveyed institutions had CPOE,3 and a study published in 2000 that was limited to inpatient medication ordering by physicians reported that less than 10% of hospitals or health systems had such systems.4 A survey of hospital information systems in Japan discovered that order-entry systems for laboratory, imaging, and pharmacy were available at fewer than 20% of reporting hospitals, but this was not necessarily physician order entry.5 A 2003 report by the Leapfrog Group (a coalition of public and private organizations founded by the Business Roundtable, which is an association of chief executive officers of Fortune 500 companies) stated that 4.1% of the reporting hospitals in a recent survey had CPOE fully implemented,6 but the sample was primarily limited to certain demographics. During the five years since the results of our last survey were published, there have been numerous publications about the benefits of CPOE7,8,9,10 and about some of the difficulties encountered by hospitals implementing it.11,12,13 Several governmental agencies and other bodies such as the Leapfrog Group have made efforts to encourage CPOE use.14,15,16 To aid organizations during planning and implementation, a number of guides and manuals have been published as well.17,18,19,20,21 Although much attention is being focused on CPOE, no recent nationwide figures on hospital installations have been published. Therefore, we decided to send the same survey to the same sample population in 2002 that we did in 1997. The questions to be addressed here are: how widespread is the implementation of CPOE in hospitals across the United States, where is it available, and how much is it used?
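The survey's participation measure is reported in bands (>90%, 51–90%, fewer than half of physicians). A minimal Python sketch of how such banded tallies could be produced from raw hospital responses is shown below; the band cut-offs follow the abstract, but the sample figures and variable names are hypothetical, not data from the study.

```python
from collections import Counter

def participation_band(pct: float) -> str:
    """Band a hospital's physician-participation percentage the way the
    survey reports it: >90%, 51-90%, or <=50%."""
    if pct > 90:
        return ">90%"
    if pct > 50:
        return "51-90%"
    return "<=50%"

# Hypothetical participation figures for a handful of responding hospitals.
participation = [95, 97, 60, 30, 12, 88]
bands = Counter(participation_band(p) for p in participation)
total = len(participation)
print({band: f"{n} ({100 * n / total:.1f}%)" for band, n in bands.items()})
# {'>90%': '2 (33.3%)', '51-90%': '2 (33.3%)', '<=50%': '2 (33.3%)'}
```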

10.
CD4 receptor molecules on T lymphocytes and macrophages have already been identified as the route of entry for HIV. However, CCR5 and CXCR4 have only recently been identified as second receptors for HIV on macrophages and T lymphocytes respectively. The presence of homozygous CCR5 Δ32, a defective CCR5 gene, leads to resistance to HIV infection in the risk groups, while heterozygous CCR5 Δ32 leads to a delay in the progression of HIV infection to AIDS.

KEY WORDS: CCR5, CXCR4, SDF-1, HIV

Since the beginning of the HIV pandemic, many questions still remain unanswered. The most intriguing of them is susceptibility to HIV. Even in the same risk group, exposed to the same risk factors, only some individuals get the infection while others do not. Some of the siblings (about 20%) born to the same HIV-infected mother develop AIDS whereas others escape. Another pertinent question is why some HIV-infected people progress faster than others towards AIDS. What protects the uninfected? What slows the progression of HIV to AIDS? After years of research, scientists have tried to find some of the answers to these questions.

Though possible HLA differences were postulated to be the cause of these differences [1], recent evidence suggests that the good fortune of some individuals who are partly or fully resistant to HIV infection is due to possession of a particular variant of a gene involved in immunologic function. At present this gene and its variations are being intensely studied for strategies to prevent and control HIV infection, particularly HIV-1.

Immediately after the discovery of the virus in 1984, a search for these factors was initiated at the National Cancer Institute. In a cohort study, groups of several hundred individuals at high risk of HIV infection – viz. homosexual men, IV drug abusers and hemophiliacs who had received contaminated blood products – were monitored for years by physicians. The patients (with their consent) supplied blood, tissue samples and case reports to researchers for the study of cell biology and DNA genetic testing. The cohorts were divided into (a) those infected with HIV vs those who remained free of it after extensive exposure, (b) infected patients who progressed to AIDS rapidly vs those who progressed to AIDS slowly, if at all, and (c) HIV-infected patients who developed a specific type of infection, for example Pneumocystis carinii pneumonia, vs those who did not. Their genotypes were studied [2].

An individual inherits two copies of all genes outside the sex chromosomes (one copy from the mother and one copy from the father). The pair of alleles at a particular chromosomal locus or gene address constitutes the genotype. One who inherits two identical alleles of a given gene is said to be a homozygote; one who inherits two distinct alleles is said to be a heterozygote. After more than a decade of relentless research on these differences in multiple centres of excellence, a ray of hope finally appeared in 1995 [3]. By 1990 it was well documented that HIV causes immunodeficiency mainly by depletion of the white blood cells known as T lymphocytes, which display the protein CD4 on their surface [4]. Many aspects of the immune response against the virus are directed by T cells. Another immune cell, the macrophage, also carries the CD4 receptor and is also infected by HIV. As macrophages are not destroyed by the virus, the infection persists for years.
Thus HIV has a cytolytic effect on the T cells but has no such effect on the macrophages [5]. The glycoprotein gp120 of HIV attaches to the CD4 receptor molecules of the cell to gain entry. Though the CD4 receptor is essential, it is not sufficient by itself to allow the entry of HIV. The second receptor has recently been identified as a chemokine receptor. Chemokines, or chemoattractant cytokines, are short, roughly 10-kD amino acid chains which are responsible for luring immune cells to injured or diseased sites. The chemokines RANTES (regulated upon activation, normal T-cell expressed and secreted), MIP-1α (macrophage inflammatory protein) and MIP-1β possibly interfere with HIV entry into immune cells by binding to and blocking some of the cell surface proteins that HIV requires for access to the interior [6]. These cell surface proteins are known as receptors, and they physiologically mediate the chemotaxis of T cells and phagocytic cells to areas of inflammation. Choe et al in 1996 discovered the second receptor on CD4 T cells, called the chemokine receptor CXCR4. Simultaneously the second receptor on the macrophages was discovered; it is called CCR5 [7]. CXCR4 is an α-chemokine receptor, whose ligands have an intervening amino acid between the first two cysteine residues, whereas CCR5 is a β-chemokine receptor, whose ligands do not have an intervening amino acid [8].

To keep the record of pathogenicity straight, HIV initially infects macrophages, its gp120 molecule attaching to two receptors, i.e. CD4 and CCR5 (Fig. 1). These are called macrophage-tropic (M-tropic) or R5 strains. They are also known as transmitted strains [9]. Once inside the macrophage, the virus is synthesized in large quantities. After years of infection, the constantly mutating virus alters the gene for gp120, which changes its allegiance to CXCR4 instead of CCR5. Thus it becomes T-lymphotropic, as it gains entry via the CD4 and CXCR4 receptors on the cell surface (Fig. 2). These HIV strains are known as T-tropic or X4 strains. But here the disease takes a dramatic turn: the virus behaves as cytolytic and kills the T lymphocytes after multiplying within them. The T lymphocyte count steadily dips from 1000/cmm to <200/cmm, when opportunistic infections set in and it becomes an AIDS-defining condition [10]. However, in another cohort study at the National Cancer Institute, USA, Winkler et al found that stromal-derived factor (SDF-1), which is a principal ligand for CXCR4, has a protective effect when its structurally defective homozygous variant SDF1-3'A/3'A is present in some individuals. The protective effect against AIDS is twice as strong as that conferred by variants of CCR5 [11].

Fig. 1: Entry of HIV into macrophages and entry inhibition (M-tropic strains). Fig. 2: Entry of HIV into T lymphocytes and entry inhibition (T-tropic strains).

Liu and colleagues in July 1996 characterized the genetic sequence of CCR5 in both groups of individuals in the cohort studies. It was found that the groups which were protected from HIV infection had 32 nucleotides missing from the CCR5 gene, a variant known as the CCR5 Δ32 gene. The 32-base-pair deletion in the CCR5 gene corresponds to the second intracellular loop of CCR5 and encodes a severely truncated molecule. They experimentally found that this CCR5 Δ32 produces a truncated CCR5 receptor protein which either fails to reach the cell surface or is so deformed that it cannot attach to HIV [12].
They found that the protection against HIV conferred by the deletion mutant was highly statistically significant. Individuals who are homozygous for the CCR5 Δ32 allele are resistant to HIV infection, while those who are heterozygous may be infected with HIV but progress slowly to AIDS, if at all, and have a lower level of viremia. In these cases the virus has a 4-10 fold reduced ability to replicate [13]. Mishrahi, in one study, found that children with CCR5 Δ32 heterozygosity are not protected against mother-to-infant transmission but, if infected, have a longer time before an adverse clinical outcome develops than children with the wild-type CCR5 genotype [14]. With increasing HIV exposure the prevalence of the Δ32/Δ32 CCR5 genotype increases. The wild-type CCR5 and CCR5 Δ32 alleles can be detected easily by the polymerase chain reaction (PCR) using EcoRI restriction enzyme digestion and agarose gel electrophoresis of the digested DNA [15].

This recent knowledge has an important bearing on the pathogenicity of HIV and the potential for a possible therapeutic agent directed against the CCR5-HIV-1 interaction. On full analysis of the available data, it was found that Russians carry the mutant allele in 13% of the population, Caucasian Americans in 11.1%, Caucasian Europeans in 10.0%, African Americans in 1.7%, and Native American, African and Asian populations in 0% [16]. A study of this protective factor, which limits HIV infection, is yet to be done in the Indian population.

The final word about protective factors against HIV has not yet been said, as HIV infection has been reported in a person with hemophilia and in several homosexual men who were CCR5 Δ32 homozygotes [8]. This only shows that research has to go on, for the war on HIV seems to be endless.
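As a rough illustration of the genotype calls described above (wild-type vs Δ32 alleles distinguished by a 32-bp difference in PCR product size), the following Python sketch classifies an individual from the lengths of the two allele PCR fragments. The fragment sizes used here are hypothetical placeholders, not the actual primer products from the cited study.

```python
# Hypothetical PCR fragment lengths (bp); real values depend on the primers used.
WILD_TYPE_BP = 225
DELTA32_BP = WILD_TYPE_BP - 32  # the Δ32 allele is 32 bp shorter

def call_allele(fragment_bp: int, tolerance: int = 3) -> str:
    """Call one allele from its PCR fragment length."""
    if abs(fragment_bp - WILD_TYPE_BP) <= tolerance:
        return "wt"
    if abs(fragment_bp - DELTA32_BP) <= tolerance:
        return "d32"
    return "unknown"

def genotype(frag1_bp: int, frag2_bp: int) -> str:
    """Combine two allele calls into a CCR5 genotype."""
    alleles = tuple(sorted([call_allele(frag1_bp), call_allele(frag2_bp)]))
    return {
        ("wt", "wt"): "wild-type homozygote",
        ("d32", "wt"): "Δ32 heterozygote (slower progression)",
        ("d32", "d32"): "Δ32 homozygote (largely resistant)",
    }.get(alleles, "indeterminate")

print(genotype(225, 193))  # Δ32 heterozygote (slower progression)
```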

11.

Objective

Home telemonitoring represents a patient management approach combining various information technologies for monitoring patients at a distance. This study presents a systematic review of the nature and magnitude of outcomes associated with telemonitoring of four types of chronic illnesses: pulmonary conditions, diabetes, hypertension, and cardiovascular diseases.

Methods

A comprehensive literature search was conducted on Medline and the Cochrane Library to identify relevant articles published between 1990 and 2006. A total of 65 empirical studies were obtained (18 pulmonary conditions, 17 diabetes, 16 cardiac diseases, 14 hypertension), mostly conducted in the United States and Europe.

Results

The magnitude and significance of the telemonitoring effects on patients' conditions (e.g., early detection of symptoms, decrease in blood pressure, adequate medication, reduced mortality) still remain inconclusive for all four chronic illnesses. However, the results of this study suggest that regardless of their nationality, socioeconomic status, or age, patients comply with telemonitoring programs and the use of technologies. Importantly, the telemonitoring effects on clinical effectiveness outcomes (e.g., decrease in emergency visits, hospital admissions, and average hospital length of stay) are more consistent in pulmonary and cardiac studies than in diabetes and hypertension studies. Lastly, the economic viability of telemonitoring was observed in very few studies and, in most cases, no in-depth cost-minimization analyses were performed.

Conclusion

Home telemonitoring of chronic diseases seems to be a promising patient management approach that produces accurate and reliable data, empowers patients, influences their attitudes and behaviors, and potentially improves their medical conditions. Future studies need to build evidence related to its clinical effects, cost effectiveness, impacts on services utilization, and acceptance by health care providers.Continued advances in science and technology and general improvements in environmental and social conditions have increased life expectancy around the world. 1 As a result, the world’s population is aging. Over the last 50 years, the number of people age 60 years or over has tripled, and is expected to triple again to almost two billion by 2050. 2 Population ageing is a global phenomenon affecting all regions. Globally, the proportion of older people was 8% in 1950 and 10% in 2000, and is projected to reach 21% in 2050. 3 China is the region where the increase is likely to be most spectacular, from 6.9% in 2000 to 22.7% in 2050. 3 Population ageing is profound, having major consequences and implications for all facets of human life, including health and health care. Indeed, as we age, the incidence and prevalence of chronic diseases, such as cardiovascular disease, chronic obstructive pulmonary disease (COPD), and diabetes, continue to increase. 1,4 Chronic diseases have become major causes of death in almost all countries. By the end of 2005, it is estimated that 60% of all deaths will be due to chronic diseases. 5 Such prevalence of chronic diseases is one reason why expenditures on health care are skewed: in most health care delivery systems, 5% of patients are responsible for 50% of costs. 6 The economic burden of chronic diseases is profound, accounting for 46% of the global burden of disease. 7 The losses in national income for 2005 due to deaths from heart disease, stroke, and diabetes were estimated (in international dollars) to be $18 billion in China, $1.6 billion in the United Kingdom, and $1.2 billion in Canada. 5 In the United States, chronically ill patients account for 78% of all medical costs nationally. 8 The increasing burden of chronic disease on health care resources and costs provides a powerful incentive to find more compelling ways to care for these patients.The challenge is even more complex because of the supply-and-demand curve in health care. 4 Indeed, at the same time as we face dramatic increases in the numbers of chronically ill patients, there are global provider shortages. An acute nursing shortage exists in many developed countries, including the United States, United Kingdom, Australia, and Canada, and there is no realistic prospect that this situation will change in the near future. 9–11 Furthermore, some countries have to cope with reductions in the number of persons entering the nursing profession. 12–14 Several studies have also suggested a substantial physician shortage, which is expected to develop in the coming years in various countries. 15–17 Dramatic increases in the numbers of chronically ill patients in the face of shrinking provider numbers and significant cost pressures mean that a fundamental change is required in the process of care. We need to identify patient management approaches that would ensure appropriate monitoring and treatment of patients while reducing the cost involved in the process. Provision of care directly to the patient home represents an alternative. 
It may be perceived as a substitute for acute hospitalization, an alternative to long-term institutionalization, a complementary means of maintaining individuals in their own community, and an alternative to conventional hospital outpatient or physician visits. 1 Information technology can play a crucial role in providing care to the home, and telehealth technologies have been growing dramatically. More precisely, home telemonitoring is one way of responding to the new needs of home care in an ageing population. In this regard, Meystre 18 recently concluded that long-term disease monitoring of patients at home currently represents the most promising application of telemonitoring technology for delivering cost-effective, quality care. Yet, to be able to comprehensively assess and determine the benefits of home telemonitoring, it is essential to perform a systematic review that can critically synthesize the results of various studies in this area and provide a solid ground for clinical and policy decision making. 19 This article provides a systematic review of experimental and quasi-experimental studies involving home telemonitoring of chronic patients with pulmonary conditions, diabetes, hypertension, and cardiovascular diseases. Specifically, it describes the nature and magnitude of the outcomes and impacts associated with telemonitoring programs across the world.  相似文献   

12.
Objectives: To determine clinicians' (doctors', nurses', and allied health professionals') “actual” and “reported” use of a point-of-care online information retrieval system; and to make an assessment of the extent to which use is related to direct patient care by testing two hypotheses: hypothesis 1: clinicians use online evidence primarily to support clinical decisions relating to direct patient care; and hypothesis 2: clinicians use online evidence predominantly for research and continuing education. Design: Web-log analysis of the Clinical Information Access Program (CIAP), an online, 24-hour, point-of-care information retrieval system available to 55,000 clinicians in public hospitals in New South Wales, Australia. A statewide mail survey of 5,511 clinicians. Measurements: Rates of online evidence searching per 100 clinicians for the state and for the 81 individual hospitals studied; reported use of CIAP by clinicians through a self-administered questionnaire; and correlations between evidence searches and patient admissions. Results: Monthly rates of 48.5 “search sessions” per 100 clinicians and 231.6 text hits to single-source databases per 100 clinicians (n = 619,545); 63% of clinicians reported that they were aware of CIAP and 75% of those had used it. Eighty-eight percent of users reported CIAP had the potential to improve patient care and 41% reported direct experience of this. Clinicians' use of CIAP on each day of the week was highly positively correlated with patient admissions (r = 0.99, p < 0.001). This was also true for all ten randomly selected hospitals. Conclusion: Clinicians' online evidence use increases with patient admissions, supporting the hypothesis that clinicians' use of evidence is related to direct patient care. Patterns of evidence use and clinicians' self-reports also support this hypothesis. Current literature1,2,3 indicates that clinicians do not routinely use the available evidence to support clinical decisions. Several studies have shown that simply disseminating evidence, for example, in the form of practice guidelines, does not lead to increased use of that information to inform clinical decisions.4 Clinicians apparently pursue answers to only a minority of their questions5,6 and, when they do so, they rely most heavily on colleagues for answers.5 Lack of easy access to up-to-date evidence is cited as a barrier to evidence-based practice by clinicians.7,8 Online clinical information resources have the potential to support clinicians who adopt an evidence-based approach by providing them with the information they need when they need it.9 We have some evidence that searches of bibliographic databases such as Medline are efficacious.5,10 Given sufficient time, clinicians are able to retrieve research evidence relevant to their clinical questions.11 Training in online searching techniques enhances the quality of evidence retrieved,12 whereas education in critical appraisal increases clinicians' abilities to apply the information obtained.13 However, measuring the actual uptake of online information retrieval systems is problematic and few studies have been attempted. Studies of intranet provision of online resources report monthly utilization rates of 30 to 720 searches per 100 person-months.14 However, most studies only report use rates that exclude clinicians who have access to the system but do not use it.
Consequently, these studies do not provide a measure of actual uptake by the clinical population. It is also difficult to measure the impact that online access to evidence has on clinical practice. Assessments of the impact of Medline information on decision making and patient care have relied primarily on self-reports of clinicians. Haynes and McKibbon15 provided training and access to online Medline to a group of 158 U.S. physicians. For 92% of searches related to a patient's care, clinicians reported that the information retrieved resulted in “at least some improvement” in care. Using the critical incident technique, Lindberg et al.16 interviewed U.S. clinician Medline users about their searches. Of the 1,158 searches described, 41% were classified as affecting decisions regarding patient care. A survey of U.K. general practitioners who used the ATTRACT system, which provides rapid access to evidence-based summaries to clinical queries, found 60% (24 of 40 doctors) reported that the information gained had changed their practice.17 Based on the assumption that providing clinicians with easy access to “evidence” will support decision making and result in improvements in patient care, in 1997 the State Health Department in New South Wales (NSW), Australia, implemented the Clinical Information Access Program (CIAP; <http://www.ciap.health.nsw.gov.au/>). CIAP is a Web site providing point-of-care, 24-hour, online access to a wide range of bibliographic and other resource databases for the approximately 55,000 clinicians (doctors, nurses, and allied health staff) employed by the state and primarily working in public hospitals. Qualitative data from case studies indicated that clinicians perceived a range of factors influenced their use of the system, including support from facility managers and direct supervisors, access to computer terminals, training and skills (appraisal of evidence, database searching, and computer skills), and the place of evidence-based practice in their professional work.18 We sought to test hypotheses generated as a result of these case studies using Web log and survey data. We aimed to determine the rates of “actual” and “reported” use of online evidence by the population of clinicians working in the public health system in NSW and to assess the extent to which system use was related to direct patient care. We posed and tested two competing hypotheses. These hypotheses were formulated following a qualitative study examining clinicians' use of online evidence18: hypothesis 1—clinicians use online evidence primarily to support clinical decisions relating to direct patient care; and hypothesis 2—clinicians use online evidence predominantly for research and continuing education. These hypotheses were tested by the following methods.

Examination of Patterns of Online Evidence Use by Clinicians by Time, Day, and Location of Searches

Hypothesis 1 would be supported by a pattern of use that coincided with patient care and peaked between the core working hours of 9 am and 5 pm, with most use originating from within hospitals (Figure 1). Hypothesis 2 would be supported by a relatively wide distribution of use across the times of the day, with sustained rates of use into the evening, when clinicians have more free time for research (Figure 1). A high proportion of access would be expected to occur from outside hospitals, e.g., at home.

Figure 1. Hypotheses regarding patterns of online evidence use by clinicians. H = hypothesis.
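The kind of time-of-day and origin analysis implied here can be reproduced with a straightforward aggregation of web-log records. The sketch below is a minimal illustration rather than the study's actual code; the record layout (an ISO timestamp plus a 'hospital'/'external' origin label) is an assumption made purely for the example.

```python
# Minimal sketch (not the CIAP study's code): bin web-log search sessions
# by hour of day and by origin to compare against the two hypothesised
# usage patterns. Field layout is assumed for illustration.
from collections import Counter
from datetime import datetime

def usage_profile(log_rows):
    """log_rows: iterable of (iso_timestamp, origin) tuples,
    where origin is 'hospital' or 'external'."""
    by_hour = Counter()
    by_origin = Counter()
    for iso_timestamp, origin in log_rows:
        ts = datetime.fromisoformat(iso_timestamp)
        by_hour[ts.hour] += 1
        by_origin[origin] += 1
    core = sum(by_hour[h] for h in range(9, 17))  # sessions between 9 am and 5 pm
    total = sum(by_hour.values())
    return {
        "share_core_hours": core / total if total else 0.0,
        "share_in_hospital": by_origin["hospital"] / total if total else 0.0,
        "by_hour": dict(sorted(by_hour.items())),
    }

# Example: a daytime, in-hospital peak would favour hypothesis 1.
rows = [("2004-03-01T10:15:00", "hospital"),
        ("2004-03-01T14:40:00", "hospital"),
        ("2004-03-01T21:05:00", "external")]
print(usage_profile(rows))
```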

Measuring the Association between Hospital Admissions and Use of Online Evidence

A significant positive correlation between patient admissions to hospitals and online evidence searches would support hypothesis 1 by suggesting that CIAP is used primarily to inform patient care rather than to meet research or continuing education information needs. Absence of a correlation would support hypothesis 2.  相似文献   
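As a toy illustration of this test, the sketch below computes a Pearson correlation between daily patient admissions and daily search sessions. The day-of-week figures are invented for demonstration and are not the CIAP data; the study itself reported r = 0.99.

```python
# Illustrative only: Pearson correlation between daily admissions and daily
# search sessions. The numbers below are made up for the example.
from statistics import correlation  # requires Python 3.10+

admissions_by_day = [5200, 5100, 4950, 4800, 4700, 2100, 1900]  # Mon..Sun (hypothetical)
searches_by_day   = [1400, 1350, 1300, 1280, 1220,  480,  430]  # Mon..Sun (hypothetical)

r = correlation(admissions_by_day, searches_by_day)
print(f"Pearson r = {r:.3f}")  # a strong positive r would support hypothesis 1
```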

13.
14.
The ‘Report on the Health of the Armed Forces’ and the ‘Annual Report on the Health of the Army’ are documents that contain very useful information. There is, however, a surprising lack of awareness about the data contained within and hence a total absence of critical comment. The average length of stay for certain illnesses, even those like the common cold, showed wide disparity between various groups, and these facts warrant clinical concern and managerial introspection. The reasons for this may lie in the situational content of the publications and the mindset of the readers. KEY WORDS: Health planning, Health surveys, Length of stay, Morbidity. Reports are meant to be read. The situation regarding the prestigious ‘Report on the Health of the Armed Forces’ is, however, depressing. Many are unaware of its very existence. It took considerable effort to obtain a copy of the report pertaining to the year 1990, published in 1993 and received during 1994. The contents were startling in several respects. Even more surprising, however, was the disturbing fact that this remarkable document had escaped widespread attention and critical comment. An exercise was undertaken to analyze and critically evaluate it. Data were acquired from the ‘Annual Report on the Health of the Army’ for 1986 and 1987, and the ‘Report on the Health of the Armed Forces’ for 1989 and 1990 [1, 2, 3, 4]. Certain parameters were selected to highlight significant longitudinal and cross-sectional trends. Data were tabulated and analyzed accordingly.  相似文献   

15.
There are constraints embedded in medical record structure that limit use by patients in self-directed disease management. Through systematic review of the literature from a critical perspective, four characteristics that either enhance or mitigate the influence of medical record structure on patient utilization of an electronic patient record (EPR) system have been identified: environmental pressures, physician centeredness, collaborative organizational culture, and patient centeredness. An evaluation framework is proposed for use when considering adaptation of existing EPR systems for online patient access. Exemplars of patient-accessible EPR systems from the literature are evaluated utilizing the framework. From this study, it appears that traditional information system research and development methods may not wholly capture many pertinent social issues that arise when expanding access of EPR systems to patients. Critically rooted methods such as action research can directly inform development strategies so that these systems may positively influence health outcomes. Electronic patient record (EPR) systems fundamentally change the way health information is structured. An EPR is a dynamic entity, affording greater efficiency and quality control to the work processes of clinicians by providing data entry at the point of care, logical information access capabilities, efficient information retrieval, user friendliness, reliability, information security, and a capacity for expansion as needs arise.1,2 An EPR system promotes patient participation in care to a greater extent than paper records because of its capacity for interaction. Patients can transmit real-time vital signs and other forms of data from their bedside, home, or office and receive up-to-date supportive information customized and contextualized to their individual needs.3,4 In this journal, Ross and Lin recently presented a comprehensive review of the world literature on the effects of patient access to medical records, noting a potential for modest benefits and minimal risk, while also citing that the impact of access may vary depending on the patient population in question.5 This is consistent with findings in the information system literature that systems fail when inadequate attention is paid to stakeholder needs and work processes during design6 or when assumptions are made about how well a system fits with the user's role within the organization during implementation.7 Medical records are structured primarily for the use of clinicians and administrators. Patients typically are not counted among the primary users of an EPR system. They tend to be given access sometime after the system is implemented in the organization. Structural concessions and decisions made when the system is first implemented, such as fragmented data entries and foreign lexicons, can make the information difficult for patients to follow and the records all but impossible for them to effectively use.8  相似文献   

16.

Objective

Although demand for information about the effectiveness and efficiency of health care information technology grows, large-scale resource-intensive randomized controlled trials of health care information technology remain impractical. New methods are needed to translate more commonly available clinical process measures into potential impact on clinical outcomes.

Design

The authors propose a method for building mathematical models based on published evidence that provides an evidence bridge between process changes and resulting clinical outcomes. This method combines tools from systematic review, influence diagramming, and health care simulations.

Measurements

The authors apply this method to create an evidence bridge between retinopathy screening rates and incidence of blindness in diabetic patients.

Results

The resulting model uses changes in eye examination rates and other evidence-based population parameters to generate clinical outcomes and costs in a Markov model.

Conclusion

This method may serve as an alternative to more expensive study designs and provide useful estimates of the impact of health care information technology on clinical outcomes through changes in clinical process measures. The announcement 1 and reaffirmation 2 of the federal commitment to advancing health care information technology (HIT) have been further bolstered by events in the Gulf South after the recent hurricane seasons. 3 This commitment creates both opportunities and challenges for health services and clinical informatics researchers. Clinicians, policy makers, lobbyists, economists, and the media demand evidence-based recommendations for HIT. To make decisions that will affect millions of lives and billions of dollars, decision makers require more than efficacy studies—they require results that indicate both the effectiveness and the efficiency of HIT solutions. The ability of the informatics research community to respond to this need with useful and credible evidence will determine our relevance to the debate. Many evaluations of health services focus primarily on process measures. 4 For example, there are numerous studies in the disease management literature that report the impact of technology on the rate of annual eye or foot examinations for diabetic patients. 5–14 However, there are few published studies that evaluate HIT’s impact on the rate of blindness or amputations. Despite the increasing demand for credible clinical outcomes evidence, many studies in HIT lack the power to detect changes in clinical outcomes, a product of limited time and resources. 15 In addition, the rapid evolution of new technologies makes the study subject itself, HIT, a moving target. By the time a large-scale trial is completed, the state of the art will have moved on. 16 Evaluations in HIT therefore tend to be relatively brief studies comparing convenient measures, more often made in a laboratory environment or in potentially idiosyncratic academic environments than in real-world clinical settings, thus limiting generalizability. These studies would be classified by Fuchs and Garber 17 as stage 1 and 2 technology assessments—evaluating the performance of the technology itself and perhaps the impact of the technology on processes of care. However, the demand for outcomes evidence mandates that future HIT research be at the level of stage 3 technology assessments, in which comprehensive clinical, economic, and social outcomes are evaluated to determine both the effectiveness and efficiency of the intervention. The importance of linking process measures to clinical outcomes has been previously described, but progress has been limited. 18 We propose an approach to maximize the ability of HIT evaluation research to report clinical and financial outcomes.  相似文献   
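To make the "evidence bridge" idea concrete, the sketch below runs a deliberately simplified Markov cohort model in which a change in the annual retinopathy screening rate alters the transition probability to blindness. Every transition probability, the screening effect, and the cohort size are invented for illustration; they are not the parameters of the authors' model.

```python
# Hypothetical Markov cohort sketch of an "evidence bridge": higher screening
# rates shift patients with retinopathy onto a lower annual risk of blindness.
# All probabilities below are invented for illustration only.

def run_cohort(screening_rate, years=20, cohort=10_000):
    # States: no retinopathy, retinopathy, blind
    state = {"no_ret": float(cohort), "ret": 0.0, "blind": 0.0}
    p_develop = 0.06            # annual risk of developing retinopathy (assumed)
    p_blind_untreated = 0.05    # annual risk of blindness if undetected (assumed)
    p_blind_treated = 0.01      # annual risk of blindness if detected/treated (assumed)
    for _ in range(years):
        p_blind = (screening_rate * p_blind_treated
                   + (1 - screening_rate) * p_blind_untreated)
        new_blind = state["ret"] * p_blind
        new_ret = state["no_ret"] * p_develop
        state["no_ret"] -= new_ret
        state["ret"] += new_ret - new_blind
        state["blind"] += new_blind
    return state["blind"]

baseline = run_cohort(screening_rate=0.45)
with_hit = run_cohort(screening_rate=0.65)  # e.g., a screening gain attributed to HIT
print(f"Cases of blindness averted per 10,000: {baseline - with_hit:.0f}")
```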

17.
Tuberculosis (TB) is still a major cause of serious illness in many parts of the world. Intracranially, TB manifests itself variably as meningitis, tuberculoma and tubercular abscess [1]. Although its appearance on MR is not absolutely specific, it is important in the proper clinical setting to recognize the range of possible patterns that can be observed on images [2]. Magnetic Resonance Imaging (MRI) has emerged as a quality imaging tool aiding in the diagnostic evaluation of intracranial TB, variably displaying meningeal, parenchymal, osseous and craniovertebral lesions. The MRI characteristics of 18 cases of intracranial TB were reviewed. Results: Multiple lesions were slightly more common, occurring in 61% of cases. In all, 11 patients (61%) presented with meningitis. Meningeal lesions without parenchymal or vascular involvement were seen in 16% of cases. Two patients had extension of enhancing exudates into the spinal subarachnoid spaces. Six patients (33%) had focal intra-axial tuberculomas, and three presented with infarcts. One patient had a haemorrhagic infarct in the right middle cerebral artery territory, while two others showed multiple small infarcts. Hydrocephalus was identified in four patients, and epidural lesions were noted in two cases. MRI should be considered the imaging modality of choice for patients with suspected intracranial TB. KEY WORDS: Intracranial tuberculosis, Magnetic resonance imaging, Tubercular meningitis  相似文献   

18.
Syndromic surveillance refers to methods relying on detection of individual and population health indicators that are discernible before confirmed diagnoses are made. In particular, prior to the laboratory confirmation of an infectious disease, ill persons may exhibit behavioral patterns, symptoms, signs, or laboratory findings that can be tracked through a variety of data sources. Syndromic surveillance systems are being developed locally, regionally, and nationally. The efforts have been largely directed at facilitating the early detection of a covert bioterrorist attack, but the technology may also be useful for general public health, clinical medicine, quality improvement, patient safety, and research. This report, authored by developers and methodologists involved in the design and deployment of the first wave of syndromic surveillance systems, is intended to serve as a guide for informaticians, public health managers, and practitioners who are currently planning deployment of such systems in their regions. Bioterrorism preparedness has been the subject of concentrated national effort1 that has intensified since the events of fall 2001.2 In response to these events, the biomedical, public health, defense, and intelligence communities are developing new approaches to real-time disease surveillance in an effort to augment existing public health surveillance systems. New information infrastructure and methods to support timely detection and monitoring,3,4,5,6,7 including the discipline of syndromic surveillance, are evolving rapidly. The term syndromic surveillance refers to methods relying on detection of clinical case features that are discernible before confirmed diagnoses are made. In particular, prior to the laboratory confirmation of an infectious disease, ill persons may exhibit behavioral patterns, symptoms, signs, or laboratory findings that can be tracked through a variety of data sources. If the attack involved anthrax, for example, a syndromic surveillance system might detect a surge in influenza-like illness, thus providing an early warning and a tool for monitoring an ongoing crisis. Unlike traditional systems that generally utilize voluntary reports from providers to acquire data, contemporary syndromic surveillance relies on an approach in which data are continuously acquired through protocols or automated routines. The real-time nature of these syndromic systems makes them valuable for bioterrorism-related outbreak detection, monitoring, and investigation. These systems augment the capabilities of the alert frontline clinician who, although an invaluable resource for outbreak detection, is generally better at recognizing individual cases than patterns of cases over time and across a region. Syndromic surveillance technology may be useful not only for bioterrorism event detection, but also for general public health, clinical medicine, quality improvement, patient safety, and research. This report, authored by developers and methodologists involved in the design and deployment of the first wave of syndromic surveillance systems, is intended to serve as a guide for informaticians, public health managers, and practitioners who may be planning deployment of such systems in their regions.  相似文献   
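As a rough illustration of the automated, continuously acquired monitoring described above, the sketch below applies a simple baseline-plus-threshold aberration check to daily counts of influenza-like illness. It is not any deployed system's detection algorithm; the window length, threshold, and counts are all hypothetical.

```python
# Hypothetical aberration detection on daily syndromic counts: flag days whose
# count exceeds the recent baseline mean by more than `z` standard deviations.
from statistics import mean, stdev

def flag_aberrations(daily_counts, baseline_days=7, z=3.0):
    alerts = []
    for i in range(baseline_days, len(daily_counts)):
        window = daily_counts[i - baseline_days:i]
        mu, sigma = mean(window), stdev(window)
        threshold = mu + z * max(sigma, 1.0)  # floor avoids zero-variance alarms
        if daily_counts[i] > threshold:
            alerts.append((i, daily_counts[i], round(threshold, 1)))
    return alerts

# Invented influenza-like-illness counts with a spike on the last two days.
ili_counts = [12, 15, 11, 14, 13, 12, 16, 14, 13, 41, 38]
print(flag_aberrations(ili_counts))
```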

19.
20.
This article describes the algorithms implemented in the Essie search engine that is currently serving several Web sites at the National Library of Medicine. Essie is a phrase-based search engine with term and concept query expansion and probabilistic relevancy ranking. Essie’s design is motivated by an observation that query terms are often conceptually related to terms in a document, without actually occurring in the document text. Essie’s performance was evaluated using data and standard evaluation methods from the 2003 and 2006 Text REtrieval Conference (TREC) Genomics track. Essie was the best-performing search engine in the 2003 TREC Genomics track and achieved results comparable to those of the highest-ranking systems on the 2006 TREC Genomics track task. Essie shows that a judicious combination of exploiting document structure, phrase searching, and concept-based query expansion is a useful approach for information retrieval in the biomedical domain. A rapidly increasing amount of biomedical information in electronic form is readily available to researchers, health care providers, and consumers. However, readily available does not mean conveniently accessible. The large volume of literature makes finding specific information ever more difficult. Development of effective search strategies is time-consuming, 1 requires experienced and educated searchers, 2 well versed in biomedical terminology, 3 and is beyond the capability of most consumers. 4 Essie, a search engine developed and used at the National Library of Medicine, incorporates a number of strategies aimed at alleviating the need for sophisticated user queries. These strategies include a fine-grained tokenization algorithm that preserves punctuation, concept searching utilizing synonymy, and phrase searching based on the user’s query. This article describes related background work, the Essie search system, and the evaluation of that system. The Essie search system is described in detail, including its indexing strategy, query interpretation and expansion, and ranking of search results.  相似文献   
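The following toy sketch illustrates the general combination the abstract describes, synonym-based query expansion plus a preference for contiguous phrase matches, without reproducing Essie's actual algorithms. The synonym table, scoring weights, and documents are invented for the example.

```python
# Toy sketch (not Essie's implementation): expand a query phrase with known
# synonyms, then rank documents higher when an expanded phrase occurs
# contiguously rather than as scattered terms.
SYNONYMS = {"heart attack": ["myocardial infarction", "mi"]}  # invented table

def expand(query):
    return [query.lower()] + SYNONYMS.get(query.lower(), [])

def score(doc, query):
    text = doc.lower()
    s = 0.0
    for phrase in expand(query):
        if phrase in text:                                   # contiguous phrase match
            s += 2.0
        elif all(tok in text for tok in phrase.split()):     # terms present but scattered
            s += 1.0
    return s

docs = ["Risk factors for myocardial infarction in older adults",
        "The infarction of policy: a heart-felt attack on reform"]
ranked = sorted(docs, key=lambda d: score(d, "heart attack"), reverse=True)
print(ranked)  # the synonym-matched clinical document ranks first
```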
