The concepts of ‘prior odds’ (closely related to prior probabilities, or simply priors) and ‘posterior odds’ come up in most discussions about the evaluation of evidence. The significance and meaning of both terms become clear when viewed in the context of a “Bayesian approach”, or the logical approach, to evidence evaluation. That approach has been discussed at length elsewhere and relates to the updating of one’s belief about events based upon new information. A key aspect is that some existing belief, encapsulated as the ‘prior odds’ of two competing possibilities or events, will be updated on the basis of new information, encapsulated in the ‘likelihood-ratio’1 (another term you will undoubtedly have seen), to produce some new belief, encapsulated as the ‘posterior odds’ of those same competing possibilities.
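The odds-form update described above can be sketched in a few lines of Python. The numbers below are purely illustrative (the 1:4 prior and the likelihood ratio of 100 are my own hypothetical values, not drawn from any real case), and the function names are mine:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    """Convert odds in favour of a proposition into a probability."""
    return odds / (1 + odds)

# Suppose the prior odds for H1 (same writer) versus H2 (different writers)
# are 1:4, i.e. 0.25, and the evidence yields a likelihood ratio of 100.
posterior = update_odds(0.25, 100.0)
print(posterior)                                  # 25.0, i.e. odds of 25:1 for H1
print(round(odds_to_probability(posterior), 3))   # 0.962
```

Note that the likelihood ratio itself says nothing about the prior: the same evidence (LR of 100) moves a sceptical prior of 1:4 to 25:1, but would move a prior of 1:1000 only to about 1:10.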
But what precisely do these terms, ‘prior odds’ and ‘posterior odds’, mean and how do they relate to the work of a forensic examiner?
In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.1
The later republication included the following abstract:
In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
Hilton’s article comprises 8 main sections plus references, followed by a discussion between seminar participants. Today’s review will discuss each section of the article in turn.
Like many document examiners I consider Huber and Headrick’s 1999 textbook, Handwriting Identification: Facts and Fundamentals, to be a seminal work.1
In my opinion, it is the best textbook written to date on the topic of handwriting identification. The authors provide a comprehensive overview as well as some less conventional perspectives on certain concepts and topics. In general I tend to agree with their position on many things. A bit of disclosure is needed here: I was trained in the RCMP laboratory system, the same system in which Huber and Headrick were senior examiners and very influential. Hence, I tend to be somewhat biased towards their point of view.
But that does not mean I think their textbook is perfect. While it is well written and manages to present a plethora of topics in reasonable depth, some parts are incomplete or misleading, particularly when we take into account developments that have occurred since it was written.
One area of particular interest to me relates to the evaluation of evidence, specifically evaluation done using a coherent logical (or likelihood-ratio) approach. I have posted elsewhere on the topic, so I’m not going to rehash the background or details any more than necessary.
This post will look at the topic of ‘Bayesian concepts’ as discussed by Huber and Headrick in their textbook. These concepts fall under the general topic of statistical inference found in Chapter 4 “The Premises for the Identification of Handwriting”. The sub-section of interest is #21 where the authors attempt to answer the question, “What Part Does Statistical Inference Play in the Identification Process?” Much of their answer in that sub-section relates to Bayesian philosophy, in general, and the application of the logical approach to evidence evaluation. However, while they introduce some things reasonably well, the discussion is ultimately very flawed and very much in need of correction. Or, at least, clarification.
The 2014 ASQDE–ASFDE conference included an interesting panel discussion with the title “Conclusions… Signature and Handwriting Conclusion Terminology and Scales”. I was fortunate to be able to take part, albeit only remotely via Skype.
The abstract for the session was as follows:
A current and global issue in our field is the topic of conclusion terminology and conclusion scales, particularly in respect of signature and handwriting conclusions. It is an important yet difficult topic to address because, while there is some commonality in the conclusion scales used in different geographical regions around the world, within a number of geographical regions there are multiple scales in use. It is for this very reason that it is also a topic in great need of discussion and there is a strong argument that we should attempt to reach a consensus (even if the result is that we agree to disagree).
This panel discussion is a collaboration of insights from numerous colleagues in our field in person, via Skype and in writing from private and government laboratories in geographical regions across the Americas, Australia, Asia, Africa, the Middle East and Europe.