In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.
The later republication included the following abstract:
In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
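To make the idea of a “mathematical measure” concrete: one classical way to quantify the likelihood of chance duplication is to estimate how often each identification factor occurs in the writing population and, assuming the factors occur independently of one another, multiply those frequencies together. The Python sketch below illustrates only the arithmetic; the characteristics and frequency values are invented for illustration, and the independence assumption is itself a simplification.

```python
# Chance-duplication estimate under an independence assumption.
# All characteristics and frequencies below are hypothetical.

# Estimated proportion of writers in the population exhibiting
# each identification factor.
characteristic_frequencies = {
    "open-looped 'e'": 0.30,
    "unusual 't' crossbar": 0.15,
    "garland-style connections": 0.20,
    "distinctive 'th' ligature": 0.05,
}

# If the factors are independent, the probability that a *different*
# writer shares the entire combination is the product of the
# individual frequencies.
p_chance = 1.0
for freq in characteristic_frequencies.values():
    p_chance *= freq

print(f"P(chance duplication) = {p_chance:.5f}")  # 0.00045, about 1 in 2,200
```

With frequencies like these, the combination would be expected in roughly 1 of every 2,200 writers; whether that is “so remote that for all practical purposes it can be disregarded” is exactly the judgment the paper tries to put on a quantitative footing.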
Hilton’s article is organized into eight main sections plus references, and it is followed by a discussion among the seminar participants. Today’s review will work through each section of the article in turn.
Okay, determining the ‘best’ of anything is always a challenge. It is, in almost every instance, a highly subjective decision based on some set of appealing features or characteristics… appealing to the person making the determination, of course. And, because this is my blog, that person happens to be me.
In fairness, a number of authors have written extensively on the topic: Osborn, Ellen, Hilton, Harrison, among others (and I apologize to those I have left off this list). I have read all of those textbooks (including most editions) and each has its strengths and weaknesses.
Nonetheless, in my opinion the best general textbook written to date on the topic of handwriting identification was done by co-authors Roy A. Huber and A.M. (Tom) Headrick, both long-time document examiners in the R.C.M. Police laboratory system.
That textbook is Handwriting Identification: Facts and Fundamentals.
This year’s International Conference on Forensic Inference and Statistics (ICFIS) is being held at Leiden University in the Netherlands. ICFIS conferences are always very good and this is the 9th such event. I am hoping to attend to present my thoughts on the topic of education relating to the logical (a.k.a. likelihood-ratio or LR) approach to evidence evaluation. Over the last few years I have given several one- and two-day seminars and workshops on this topic, mainly for Forensic Document Examiners (FDEs), though the subject matter relates to all disciplines equally. Those workshops have been great and have provided a relatively unusual opportunity to learn how fully trained examiners come to grips with a complicated and difficult topic, one that is fundamental to FDE work.
It is absolutely true that most forensic scientists want to be completely logical, open and transparent in their approach to the evaluation of evidence. Further, I am sure that most document examiners believe this is exactly what they are achieving when they apply the procedures outlined in various traditional textbooks or the SWGDOC/ASTM standards; for example, the SWGDOC Standard for Examination of Handwritten Items.
Given the very understandable desire to be logical, I find it strange that so many people have a negative attitude towards anything Bayesian in nature. After all, an approach to evidence evaluation conforming to the Bayesian philosophy is quite literally the embodiment of logic (more specifically, probabilistic logic).
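For readers unfamiliar with the mechanics, the core of the LR approach is a single update rule: the posterior odds on a proposition equal the prior odds multiplied by the likelihood ratio, where the LR compares the probability of the evidence under two competing propositions (for example, “same writer” versus “different writer”). The Python sketch below uses entirely hypothetical numbers and serves only to show the arithmetic.

```python
# Likelihood-ratio (Bayesian) update: posterior odds = LR x prior odds.
# All probability values here are hypothetical, chosen to show the math.

def likelihood_ratio(p_e_given_h1: float, p_e_given_h2: float) -> float:
    """Compare how probable the evidence is under each proposition."""
    return p_e_given_h1 / p_e_given_h2

def posterior_odds(prior_odds: float, lr: float) -> float:
    """Bayes' rule in odds form."""
    return prior_odds * lr

def odds_to_probability(odds: float) -> float:
    """Convert odds to a probability for readability."""
    return odds / (1.0 + odds)

# Hypothetical assessments by the examiner:
p_e_same = 0.80   # P(observed features | same writer)
p_e_diff = 0.02   # P(observed features | different writer)
lr = likelihood_ratio(p_e_same, p_e_diff)  # LR = 40

prior = 1.0  # even (1:1) prior odds -- in practice, set by the trier of fact
post = posterior_odds(prior, lr)

print(f"LR = {lr:.0f}; posterior odds = {post:.0f}:1; "
      f"P(same writer) = {odds_to_probability(post):.3f}")  # 0.976
```

Note the division of labour built into the rule: the examiner speaks only to the likelihood ratio, while the prior (and hence the posterior) belongs to the trier of fact. That separation is a large part of why the approach can fairly be called logical and transparent.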