Forewarned is forearmed or, if Latin is your thing, “praemonitus, praemunitus”. So the saying goes, and clearly there is great value in knowing what lies ahead for us. If we know what is coming our way we can, in theory, prepare properly for any challenge.

Challenges are nothing new to forensic scientists. Critics routinely point out issues they have with our work. Some of those criticisms are fair and reasonable, others not so much. Much of the critical commentary affects a discipline as a whole, demanding a collective response from members of that discipline. In my experience, disciplines tend to be behind the curve in their responses to critics. Nonetheless, over time some issues have been addressed, at least partially if not completely, through empirical research. Others have not. To be fair, the activities needed to properly address the critics are not trivial and require both time and resources, scarce commodities in modern forensic labs. Overall, things are improving, albeit very slowly.

Criticism takes on a whole new meaning in the context of a court of law. Indeed, I think that criticism is the essence of cross-examination — a fundamental and important aspect of any adversarial justice system. Although essential, it is rarely an enjoyable part of the proceedings for any expert.

I find cross-examination to be the most interesting part of any witness’s time on the stand because they must answer whatever question is posed to them by ‘opposing’ (i.e., unfriendly) counsel. The criticism becomes very personal and practical since forensic examiners must answer serious questions about their own work. Cross-examination is truly where the rubber meets the road.

Sometimes the witness can anticipate a line of questioning but, in general, it is rare that one knows what will come next. I don’t think that bothers any truly ‘expert’ witness. They are, after all, supposed to be knowledgeable about everything within the scope of their domain. In a way, it’s just part and parcel of being an expert. Still, it is nice to have some idea of what might be coming…

In that regard, an article published in the Australian Bar Review could be seen by forensic examiners as an early Christmas present. The paper, entitled “How to cross-examine forensic scientists: A guide for lawyers”, was written by an esteemed collection of academics including Professor Gary Edmond, Kristy Martire, and Richard Kemp, among several others.1

The article provides the reader with an excellent summary of various criticisms that can be raised in cross-examination of an expert witness, with its focus being “the identification (or comparison) sciences”, those pertaining to the identity, origin, or source of some sample or trace. While it provides little in the way of truly novel criticism, it does a great job of outlining both general and specific issues, presenting them in a very useful and practical way. The abstract reads:

This article is a resource for lawyers approaching the cross-examination of forensic scientists (and other expert witnesses). Through a series of examples, it provides information that will assist lawyers to explore the probative value of forensic science evidence, in particular forensic comparison evidence, on the voir dire and at trial. Questions covering a broad range of potential topics and issues, including relevance, the expression of results, codes of conduct, limitations and errors, are supplemented with detailed commentary and references to authoritative reports and research on the validity and reliability of forensic science techniques.

What better way to be forewarned than to read such recommendations and prepare a response to each and every one of them? In my opinion, every forensic examiner should be able to answer the questions posed in this article. Some are ‘easy’, others not so much.

The article is a bit unusual. I think it is important to note that it “was developed by a multi-disciplinary group composed of research scientists, forensic scientists and lawyers during their annual meeting in Wollongong”. The composition of the group is excellent. All too often such activities are undertaken independently and separately by lawyers or by forensic scientists, with little input from academia. Not so in this case. This is precisely the kind of mixed forum needed to explore the issues fully and properly. It is also worth noting that the event was “supported by the Australian Research Council” with funding under projects FT0992041 and LP120100063. The funding under those projects goes to many activities (see Jason Tangen’s Forensic Reasoning Project, for example). What I find interesting is that such funding is approved only for projects considered to be of significant value.

Strangely, for an article from a law review journal, the discussion is clear and written in everyday English. It is, like all such articles written by and for lawyers (or law professors), replete with footnotes and clarifying commentary. But unlike other articles of this type, the clarifying commentary actually serves to clarify matters.

In some ways the article serves as a nice ‘companion piece’ to an earlier article in which Edmond addressed the professional responsibilities of prosecutors dealing with expert evidence.2 Indeed, many of the authors have written extensively on the general topic elsewhere, either together or individually. It is safe to say the credentials of the authors are excellent and they know whereof they write.

The introduction delves into the need for proper cross-examination and, in general terms, how to go about that task. I particularly appreciate the attention paid to “experimental validation, measures of reliability and proficiency”. I agree that these absolutely critical elements are often overlooked by lawyers and courts when assessing the admissibility of evidence. One of the best parts of the article is, in fact, Appendix A, which discusses validity, reliability and proficiency in some detail. These are concepts many forensic examiners misunderstand, so having a proper explanation of the terms is beneficial.

The point of cross-examination is to challenge the evidence provided by an expert to try to ensure that the true probative value of that evidence comes to light. A truly unbiased, fair and competent expert witness will do their best to present their evidence properly in the first place. But they will also participate fully and openly in the cross-examination process, knowing that it is intended to help the court gain a correct understanding of their evidence.

It is important to realize that forensic evidence is always ‘conditional’ in nature, meaning that the opinion applies only when certain conditions are met. Ideally, the relevant conditions are specified by the examiner in their report and testimony. This suggests that a key purpose of both direct and cross-examination is to discuss and clarify those conditions and the effect of changing them. What might happen to an opinion if the conditions were different? The answer to that sort of question is critical to a proper understanding of the weight an expert’s testimony should be given.

The authors discuss the cross-examination procedure in terms of nine key issues to be considered when dealing with expert evidence:

  1. relevance,
  2. validation,
  3. limitations and error,
  4. personal proficiency,
  5. expressions of opinion,
  6. verification,
  7. cognitive biases and contextual effects,
  8. cross-contamination of evidence, and
  9. codes of conduct and rules about the content of reports.

Each of these is covered in its own section of the article with reasonably detailed background commentary, followed by a set of sample questions. For the most part, the questions are focused and reasonable, though there is some, perhaps unavoidable, cross-over between sections. No specific answers are provided for the questions, but the authors offer some guidance on what would, in their opinion, constitute a good or acceptable response. That approach is quite understandable. First, a given expert witness is supposed to respond according to their own personal knowledge and ability. Second, while the issues are common to all forensic science disciplines, the response will clearly vary by discipline and by examiner. Third, an acceptable or adequate answer is hard to define. All one can do is point out reasonable criteria and hope that they are applied, in this case by the lawyers and the courts.

To give a taste of the questions posed by the authors, the following are some that I found particularly interesting.

I accept that you are highly qualified and have extensive experience, but how do we know that your level of performance regarding . . . [the task at hand — eg, voice comparison] is actually better than that of a lay person (or the jury)?

What independent evidence… [such as published studies of your technique and its accuracy] can you direct us to that would allow us to answer this question?

Can you direct us to specific studies that have validated the technique that you used?

Have you ever had your ability formally tested in conditions where the correct answer was known? (ie, not a previous investigation or trial)

Might different analysts using your technique produce different answers? Has there been any variation in the result on any of the validation or proficiency tests you know of or participated in?

Can you tell us about the error rate or potential sources of error associated with this technique?

Can you point to specific studies that provide an error rate or an estimation of an error rate for your technique?

Might someone using the same technique come to a different conclusion?

Would some analysts be unwilling to analyse this sample (or produce such a confident opinion)?

Have you ever had your own ability… [doing the specific task/using the technique] tested in conditions where the correct answer was known?

If not, how can we be confident that you are proficient?

If so, can you provide independent empirical evidence of your performance?

Can you explain how you selected the terminology used to express your opinion?

Would others analyzing the same material produce similar conclusions, and a similar strength of opinion? How do you know?

Is the use of this terminology derived from validation studies?

You would accept that forensic science results should generally be expressed in non-absolute terms?

Can you explain your peer review (or verification) process?

Is the person undertaking the review of the result blinded to the original decision?

How often does a reviewer… [in your institution] disagree with the original conclusion? What happens when there are disagreements or inconsistencies? Are these reported? Are these errors or limitations?

Are you familiar with cognitive bias and contextual effects?

Can you explain the processes employed to avoid exposure to information that is not relevant to your analysis? Can you tell us about them?

Did your letter of instruction indicate . . . [implicitly or explicitly] what it was expected you would find (eg, confirm the suspect is the perpetrator)? Can you indicate where you documented that in your report? Or, were the instructions open ended (eg, assess whether or not any of these people feature in the crime scene images)?

Were you told about other evidence or the opinions of other investigators or forensic analysts?

Could you indicate where you made reference to alternative approaches and assumptions, or criticisms of your techniques and expressions?

I am not going to try to answer these questions in this post. I may do so in future posts, time permitting. The quotes listed above are just intended as examples of the careful and thorough approach taken by the authors. It is impressive.

Edit/Update: The 2015 ASQDE meeting included a panel discussion led by Linton Mohammed that discussed this paper. Panel participants included G. Anavi, S. Birchall, C. Bird, S. Brown, L. Goz, J. Lewis, L. Mitchell, N. Neev, G. Niburg, K. Nissan, M. Novotny, V. Nuger, J. Osmond, J. Parker, A. Szymanski, T. Tanaka, P. Tytell, T. Welch, and P. Westwood.

The article also discusses the concept of ‘Authoritative reports’, with the US National Academy of Sciences report being the main example used. I concur with the authors that every examiner should be conversant with all such reports or, at least, the relevant parts of them. This is not as simple as it might seem. Some reports are well known while others, particularly those from different legal jurisdictions or countries, are relatively unfamiliar. Still, I think it is important that examiners understand what each of these reports has to say about the profession, so a reasonable effort must be made to find and read them. Otherwise, how can an examiner answer a question like:

Could you tell the court what the report says about . . . [eg, latent fingerprint evidence]?

or, alternatively,

What have you got to say in response to the recommendations in these Reports?

Another, possibly less controversial, topic covered in the article is that of ‘Ad hoc experts’. I say less controversial because most forensic examiners would agree completely with the comments in this section. At the same time, in the FDE (forensic document examination) domain we have a serious problem with “unqualified” persons offering their services. Such persons were not the target of the commentary, but some of it would be very applicable to them.

The conclusion of the article is fair and open about issues that persist in Australian courts, namely the failure to cross-examine experts at all, or the tendency to cross-examine them incompletely. The authors wrote, “most of the problems with the forensic sciences are yet to be ventilated in Australian courts.” I would suggest the same could be said about Canada, the USA, and elsewhere.

It may surprise some to learn that many forensic experts are just as concerned about these issues as the critics are. The vast majority of forensic examiners work very hard to provide unbiased, sound and accurate results aimed at assisting the court in reaching the “correct” decision. Most experts would never intentionally mislead the court. Furthermore, those who practice good science are not afraid of questions and criticism. Critical questions are expected, and examiners are not afraid to admit to uncertainty in their results when it exists. They want to do good work and they make every effort to do so.

Bringing this post back to its title, I hope that all forensic examiners take the time to read this article and, more importantly, take the criticism in it to heart. At a minimum, they should try to answer the questions as they would on the stand, then spend some time in honest reflection on the quality of those answers. The results may be surprising, but the exercise will definitely be time well spent.

After all, forewarned is forearmed.

Footnotes

  1. Edmond, Gary, Martire, Kristy, Kemp, Richard, Hamer, David, Hibbert, Brynn, Ligertwood, Andrew, Porter, Glenn, San Roque, Mehera, Searston, Rachel, Tangen, Jason, Thompson, Matthew, & White, David. (2014). How to cross-examine forensic scientists: A guide for lawyers. Australian Bar Review, 39(2), 174–197.
  2. Edmond, Gary. (2013). (ad)Ministering justice: Expert evidence and the professional responsibilities of prosecutors. University of New South Wales Law Journal, 36, 921.
