Testing and implementing signal impact analysis in a regulatory setting: results of a pilot study. |
| |
Authors: | Emma Heeley, Patrick Waller, Jane Moseley
| |
Affiliation: | Post-Licensing Division, Medicines and Healthcare products Regulatory Agency, London, UK. emma.heeley@mhra.gsi.gov.uk |
| |
Abstract: | BACKGROUND AND AIM: Statistical signal detection methods such as proportional reporting ratios (PRRs) detect many drug safety signals when applied to databases of spontaneous suspected adverse drug reactions (ADRs). Impact analysis is a tool that was developed as an aid to prioritisation of such signals. This paper describes a pilot project whereby impact analysis was simultaneously introduced into practice in a regulatory setting and tested in comparison with the existing approach. METHODS: Impact analysis was run on signals detected during a 26-week period from the UK Adverse Drug Reactions On-line Information Tracking (ADROIT) database of spontaneous ADRs that met minimum criteria (PRR ≥3.0, chi-squared ≥4.0 and ≥3 reported cases) and related to established drugs (i.e. those that have been available for at least 2 years and no longer carry the 'black triangle' symbol). The current method of signal prioritisation (i.e. the collective judgement at a weekly meeting) was initially performed without knowledge of the findings of impact analysis. Subsequently, the meeting was presented with the findings and, where appropriate, given the opportunity to reconsider the judgement made. The categories arising from the two methods were compared and the ultimate action recorded. Inter-observer variation between scientists performing impact analysis was also assessed. RESULTS: Eighty-six separate signals were analysed by impact analysis, of which 5% were categorised as high priority (A), 14% as requiring further information (B), 31% as low priority (C) and 50% as no action required (D). In general, the new method tended to give a higher level of priority to signals than the existing approach. Overall, there was 59% agreement between the impact analysis and the collective judgement at the meetings (kappa statistic=0.30).
There was slightly greater agreement between impact analysis and the final action taken (kappa statistic=0.39), indicating that the findings of an impact analysis had an influence on the outcome. Assessment of inter-observer variation demonstrated that the method is repeatable (kappa statistic for overall category=0.77). Almost 70% of those who participated in the pilot study believed that impact analysis represented an improvement in how signals were prioritised. CONCLUSIONS: Impact analysis is a repeatable method of signal prioritisation that tended to give a higher level of priority to signals than the standard approach and which had an influence on the ultimate outcome. |
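The minimum signal criteria used in the methods above (PRR ≥3.0, chi-squared ≥4.0, and ≥3 reported cases) can be illustrated with a short sketch. This is not code from the study: the function name, the 2x2 contingency-table layout, and the closed-form Pearson chi-squared are standard pharmacovigilance conventions assumed here for illustration.

```python
def prr_signal(a, b, c, d, prr_min=3.0, chi2_min=4.0, min_cases=3):
    """Evaluate the minimum signal criteria on a 2x2 report table.

    a: reports of the reaction of interest for the drug of interest
    b: reports of all other reactions for the drug of interest
    c: reports of the reaction of interest for all other drugs
    d: reports of all other reactions for all other drugs
    """
    # Proportional reporting ratio: the reaction's share of the drug's
    # reports, relative to its share of all other drugs' reports.
    prr = (a / (a + b)) / (c / (c + d))

    # Pearson chi-squared for the 2x2 table, closed form,
    # without continuity correction.
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    meets_criteria = prr >= prr_min and chi2 >= chi2_min and a >= min_cases
    return prr, chi2, meets_criteria
```

For example, 10 reports of a reaction among 100 reports for a drug, against 100 such reports among 9 900 reports for all other drugs, gives PRR = 9.9 with chi-squared well above 4.0, so the signal would pass the minimum criteria and proceed to prioritisation.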
| |
Keywords: | |