The expression “better late than never” applies to this post. Over two days in June 2013, the Measurement Science and Standards in Forensic Handwriting Analysis (MSSFHA) conference was held. It explored the (then) current state of forensic handwriting analysis, a.k.a. forensic handwriting examination (FHE). Presentations varied in content, but most discussed recent advancements in measurement science and quantitative analyses as they relate to FHE.
The conference was organized by NIST’s Law Enforcement Standards Office (OLES) in collaboration with the AAFS — Questioned Document Section, the ABFDE, the ASQDE, the FBI Laboratory, the NIJ and SWGDOC.
Every ASQDE meeting is worth attending. They are great fun with lots of useful and interesting content. Unfortunately, I could not make it to the 2016 ASQDE conference held in Pensacola, Florida. Nonetheless I managed to participate, albeit via Skype.
One of the activities at the conference was a panel discussion on “Approaches to Evaluation and Reporting of Expert Evidence”, and I was invited to participate alongside three other panellists. It was a very interesting session…
The concept of ‘prior odds’, a.k.a., prior probabilities or simply priors, comes up in most discussions about the evaluation of evidence. A related term, posterior odds, also arises. The significance and meaning of both these terms becomes reasonably clear when viewed in the context of a “Bayesian approach”, or logical approach, to evidence evaluation. That approach has been discussed at length elsewhere and relates to the updating of one’s belief about events based upon new information.
A key aspect is that some existing belief, encapsulated as ‘prior odds’ about conflicting possibilities, is updated on the basis of new information, encapsulated in the ‘likelihood ratio’ (another term you will undoubtedly have seen), to produce some new belief, encapsulated as ‘posterior odds’ about those same conflicting possibilities.
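The arithmetic of that update is straightforward: posterior odds equal the prior odds multiplied by the likelihood ratio. The following sketch illustrates it with purely invented numbers; they are not drawn from any real case or from the forensic literature.

```python
# Bayesian updating of odds: posterior odds = prior odds × likelihood ratio.
# All figures below are invented, for illustration only.

prior_odds = 1 / 4        # prior belief: 1-to-4 odds that the same writer produced both samples
likelihood_ratio = 10.0   # the evidence is taken to be 10 times more probable under "same writer"

# The update itself: multiply the prior odds by the likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio  # 2.5, i.e. 2.5-to-1 in favour

# Odds can be converted to a probability for readability.
posterior_probability = posterior_odds / (1 + posterior_odds)

print(posterior_odds)                      # 2.5
print(round(posterior_probability, 3))     # 0.714
```

Note how evidence with a likelihood ratio of 10 turns modest prior odds against the proposition (1-to-4) into odds of 2.5-to-1 in its favour: the evidence shifts belief, but the final (posterior) belief still depends on where one started.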
But what precisely do these terms, ‘prior odds’ and ‘posterior odds’, mean and how do they relate to the work of a forensic examiner?
In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.
The later republication included the following abstract:
In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
Hilton’s article is organized into eight main sections plus references, followed by a discussion among the seminar participants. Today’s review will discuss each section of the article in turn.