Expert opinions admitted by courts are not always valid and reliable. However, we know little about how indicators of opinion quality affect the persuasiveness of an expert. In this study, 25 Australian magistrates and 22 jury-eligible lay people rated the persuasiveness (via credibility, value and weight) of either a high- or a low-quality expert opinion. Opinion quality was determined using attributes specified in the Expert Persuasion Expectancy (ExPEx) framework: Field, Specialty, Ability and Trustworthiness. Both magistrates and jurors were significantly more persuaded by the high- than the low-quality expert opinion. Magistrates were also significantly more sceptical of the expert opinion than lay people, and, when given the opportunity, sought information that was logically relevant to their decision. These results suggest that magistrates can differentiate between high- and low-quality expert opinions, but it is unclear whether the information they need for the task is actually available for use during trials.

Key words:
expert evidence, expert testimony, forensic science, judges, jury decision-making, persuasion

Expert evidence is regularly used to assist courts with decision-making (Gross,
1991; Jurs,
2015). However, the quality of expert opinions has been, and continues to be, of significant concern (Edmond & San Roque,
2012; Findley,
2008; Martire & Edmond,
2016; National Research Council,
2009; President’s Council of Advisors on Science and Technology, PCAST,
2016; Risinger, Denbeaux, & Saks,
1989). The quality of expert opinions in a range of disciplines has been questioned, including the forensic sciences, mental health diagnoses, medical causation, gender discrimination and eyewitness reliability among others (e.g. Bernstein,
1990; Cole,
2003; Cunliffe & Edmond,
2013; Deitch,
2009; Edens et al.,
2012; Imwinkelried,
2009; Martire & Kemp,
2011; Monahan, Walker, & Mitchell,
2008; Taupin,
2004). Even so, the opinions of experts from these and other areas remain persuasive to factfinders.

Courts worldwide have been influenced by the mistaken and inaccurate opinions of expert witnesses. False and flawed expert testimony has contributed to approximately 60% of known wrongful convictions identified by the United States Innocence Project (Garrett,
2017; Garrett & Neufeld,
2009), one third (31%) of 71 Australian exonerations (Dioso-Villa,
2015) and one quarter (24%) of exonerations in the United States National Registry of Exonerations (Gross & Shaffer,
2012). Indeed, discredited and unvalidated forms of expert evidence continue to be admitted and relied on by courts despite authoritative criticism (National Research Council,
2009; PCAST,
2016; Skene,
2018). This leaves factfinders with the challenging task of differentiating between witnesses who are genuine experts and those who are not.

In Australia, expert evidence is most likely to be evaluated by judicial officers rather than lay juries. Of 592,455 finalised defendants in 2017–2018, 92% (or 545,251) appeared in the Magistrates’ Courts (Australian Bureau of Statistics, ABS,
2019). Magistrates’ Courts are summary courts, meaning that magistrates rather than juries determine the verdict. They also generally consider matters of a less serious nature than the Higher Courts. However, this does not remove the need for expert witnesses. In 2017–2018, Australian Magistrates’ Courts resolved 40,576 theft offences, 56,638 illicit drug offences and 71,405 regulatory driving offences (ABS,
2019). These matters often require the testimony of, for example, fingerprint analysts, forensic chemists, pharmacologists or toxicologists to help establish the fact of an offence, or the identity of the perpetrator. Thus it is likely that magistrates regularly hear expert testimony. This makes it important for us to understand how magistrates evaluate expert opinion evidence and whether their evaluations can be improved.

To date, research examining how judges and magistrates evaluate expert opinion quality suggests that their performance is likely to be imperfect. While there is some evidence that judges evaluating scientific opinions have a good understanding of some key indicators of scientific reliability (i.e. peer review and general acceptance; Gatowski et al.,
2001), as well as some types of evidence (i.e. mitochondrial DNA; Hans,
2007), weaknesses have also been found. Judges frequently make logical errors (i.e. the prosecutor’s fallacy; De Keijser & Elffers,
2012), mistakenly believe that scientific knowledge can and should be categorical or certain (Faigman,
2006), misunderstand falsifiability and error rate (Gatowski et al.,
2001) and are unable to differentiate valid and invalid research (Kovera & McAuliff,
2000). These failures are likely to impair the assessment of expert quality. However, there are other relevant considerations to be taken into account.

At least eight attributes have been identified as logically relevant to determining the quality of an expert opinion (Martire, Edmond & Navarro,
2020; Walton,
1997). These are: Foundation, Field, Specialty, Ability, Opinion, Support, Consistency and Trustworthiness. The indicators of scientific validity and reliability described above relate to the Foundation and Ability attributes. Specifically, Foundation covers field, discipline and technique validity and reliability (e.g. error rate, falsifiability, study design, etc.), while Ability relates to the personal proficiency or competence of the witness. Field and Specialty (respectively) relate to the general and specific training, study and experience of the witness relevant to their opinion. Opinion concerns the content, conservatism and comprehensibility of the opinion expressed. Support includes the evidentiary basis for, and logic of, the opinion. Consistency concerns whether other experts agree with the opinion. Trustworthiness incorporates the bias, honesty and conscientiousness of the witness. These attributes have been formalised in the Expert Persuasion Expectancy (ExPEx) framework (Martire et al.,
2020) as follows:
- Foundation – Does training, study or experience in the field F support assertions like A?
- Field – Does witness W have training, study or experience in the field F?
- Specialty – Does W have training, study or experience specific to assertions like A?
- Ability – Does W provide assertions like A accurately and reliably?
- Opinion – Does W convey A clearly, and with necessary qualifications?
- Support – Does W rely on evidence in making A?
- Consistency – Is A consistent with what other experts assert?
- Trustworthiness – Is W personally reliable as a source?
At present, we have only limited information about the extent to which magistrates and judges attend to and value these logically relevant attributes when assessing the quality of an expert opinion. Champagne, Shuman, and Whitaker (
1990) surveyed 10 United States judges who noted a diverse array of attributes relevant to expert quality. These included: credentials (Field or Specialty), bias (Trustworthiness), methodology (Foundation or Support), communication skills (Opinion) and experience (Field or Specialty). Indeed, only one reported attribute clearly fell outside the ExPEx framework: witness demeanour. However, it is not clear whether the attributes that judges believe to be relevant to quality actually influence their evaluation of an expert opinion.

More recently, Tadei, Finnila, Reite, Antfolk, and Santtila (2016) surveyed 87 judges in Finland to explore how they determined the quality of an expert opinion. Judges rated the importance of seven listed indicators of reliability: falsifiability (Foundation), error rate (Foundation or Ability), peer-reviewed research (Field or Specialty), scientific acceptance (Foundation), practical acceptance (Consistency), work experience (Field or Specialty) and research activity (Field or Specialty). They were also asked to read five vignettes and note any questions they would ask to evaluate the reliability of the expert opinion in the scenario.

The judges’ ratings showed that work experience was seen as the most important of the listed attributes for determining an expert’s reliability. This was followed by error rate, practical acceptance, scientific acceptance, peer-reviewed research, falsifiability and research activity. While 54% of judges also posed questions about error rate in the case scenarios, most of them did not ask about the other listed indicators. Instead, they wanted to know more about the opinion (83%), its basis (77%) and the supporting research (56%). These questions relate to the Opinion, Support and Foundation attributes in ExPEx, respectively.

Overall then, we have some preliminary evidence about how judges
believe they assess the quality of an expert opinion (Champagne et al.,
1990). We also know a little about the information they might
seek to complete their assessments (Tadei et al.,
2016), and that much of this information is logically relevant to determining the quality of an expert opinion. However, we do not know whether this logically relevant information actually affects the decision-making of judges when it is available.

In this article we examine whether the decision-making of magistrates is affected by attributes logically relevant to the quality of the expert opinion. Specifically, we examine whether magistrates consider strong expert opinions more persuasive than weak ones when quality is manipulated via ExPEx attributes. We also compare the performance of magistrates to that of lay people for reference. We predict that magistrates and lay people will be significantly more persuaded by high- than by low-quality expert opinion evidence when operationalised in terms of ExPEx attributes.

In addition, following from Tadei et al. (
2016), we examine whether magistrates seek information that is logically relevant to their assessment of expert opinion quality. If magistrates request information that is within the ExPEx framework, rather than outside of it, this suggests that magistrates know which attributes of an expert opinion should be taken into account. It would also indicate that the ExPEx framework usefully represents the informational needs of judges.