Many years ago I came across an interesting, if somewhat limited, discussion in a blog post entitled “Expert testimony in pattern evidence cases – is absolute uniqueness necessary?”1,2 That post is from Sept 4, 2009, just after the publication of the National Academy of Sciences’ report “Strengthening Forensic Science in the United States: A Path Forward”.3 The question posed is still quite relevant today. I would say that most forensic practitioners today would answer the basic question regarding ‘absolute uniqueness’ with a negative response, but the reasons behind that answer will still vary.
For many people, ‘absolute uniqueness’ is a ‘forbidden’ concept, mostly because of some policy they must follow, or because of a more personal recognition of some (often vague) issue relating to the ‘limits of science’. For other people, the issue is a well-defined matter of science and logic, dictated by the nature of information, what information can tell us about the world, and how information can and should be used to update belief about something. For the latter group (which is steadily increasing in size as awareness and understanding grow), the concept of ‘absolute uniqueness’ is neither required nor beneficial in forensic work.
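For readers unfamiliar with the logical approach, the belief-updating idea mentioned above is usually written as the odds form of Bayes’ theorem (this is standard notation from the likelihood-ratio literature, not from the original post), where $H_p$ and $H_d$ are the competing propositions and $E$ is the evidence:

```latex
\underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
```

Under this framing the examiner’s contribution is confined to the likelihood ratio, while the prior and posterior odds belong to the trier of fact — which is exactly why a claim of ‘absolute uniqueness’ is not needed for the evidence to be useful.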
There has been a LOT of discussion about this in recent years, but I found the blog post interesting at the time (and I don’t think too much has changed since then). Since the blog itself is no longer active, I have reposted the complete series of posts here (pulling them from archive.org).
The topic started with a post from the moderator:
Expert testimony in pattern evidence cases – is absolute uniqueness necessary? What information is needed to form a conclusion about an identification? Do conclusions require statistical data, as in DNA cases, to offer an opinion? Is it possible to state that two items of evidence come from a sole source? What may an expert opine when no statistical data is readily available and only experience suggests a conclusion? The National Academy report raises some profound questions and some intriguing research possibilities. But in the interim, while we wait for academics to study the multitude of pattern evidence forensic scientists encounter in their day to day work, who may report cases and testify in court? Readers are invited to speak to these issues.
David H. Kaye (DHK) is one of my favourite writers. He is truly prolific and always manages to provide great insights for the reader. His grasp of statistics, logic, and the law is second-to-none, and his ability to communicate those very challenging topics to his audience is equally impressive.
As a mini introduction, David “…is Distinguished Professor, and Weiss Family Scholar in the School of Law, a graduate faculty member of Penn State’s Forensic Science Program, and a Regents’ Professor Emeritus, ASU.” If you would like to see a list of his publications check out http://personal.psu.edu/dhk3/cv/cv_pubs.html
In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.1
The later republication included the following abstract:
In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
Hilton’s article is written in eight main sections with references, and is followed by a discussion among the seminar participants. Today’s review will discuss each section of the article in turn.
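The “likelihood of chance duplication” idea from the abstract can be sketched numerically. The following is my own minimal illustration, not Hilton’s worked example: it assumes the observed handwriting characteristics are statistically independent (a strong and debatable assumption) and uses invented population frequencies purely for demonstration.

```python
# Hypothetical illustration of a chance-duplication calculation.
# Assumes independence of characteristics; frequencies are invented.

def chance_duplication(frequencies):
    """Multiply the population frequencies of independent handwriting
    characteristics to estimate how often a random writer would exhibit
    the whole combination by chance."""
    p = 1.0
    for f in frequencies:
        p *= f
    return p

# Suppose five observed characteristics occur in 10%, 20%, 12.5%, 25%,
# and 5% of writers respectively (made-up numbers):
freqs = [0.10, 0.20, 0.125, 0.25, 0.05]
print(chance_duplication(freqs))  # prints 3.125e-05
```

Even with these modest per-feature frequencies, the joint probability is tiny — which is the intuition behind Hilton’s remark that, in practice, the likelihood of duplication “can be disregarded”. The independence assumption is, of course, the weak point such calculations must defend.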
Okay, determining the ‘best’ of anything is always a challenge. It is, in almost every instance, a highly subjective decision based on some set of appealing features or characteristics… appealing to the person making the determination, of course. And, because this is my blog, that person happens to be me.
In fairness, there are a number of authors who have written extensively on the topic: Osborn, Ellen, Hilton, Harrison, among others (and I apologize to those I have left off this list). I have read all of those textbooks (including most editions) and each has its strengths and weaknesses.
Nonetheless, in my opinion the best general textbook written to date on the topic of handwriting identification was written by co-authors Roy A. Huber and A.M. (Tom) Headrick, both long-time document examiners in the R.C.M. Police laboratory system.
That textbook is Handwriting Identification: Facts and Fundamentals.1
This year’s International Conference on Forensic Inference and Statistics (ICFIS) is being held at Leiden University in the Netherlands. ICFIS conferences are always very good and this is the 9th such event. I am hoping to attend to present my thoughts on the topic of education relating to the logical (a.k.a. likelihood-ratio or LR) approach to evidence evaluation. Over the last few years I have given several one- and two-day seminars and workshops on this topic, mainly for Forensic Document Examiners (FDEs), though the subject matter relates to all disciplines equally. Those workshops have been great and have provided a relatively unusual opportunity to learn how fully trained examiners come to grips with a complicated and difficult topic, one that is fundamental to FDE work.