Measurement Science and Standards in Forensic Handwriting Analysis Conference

The expression “better late than never” applies to this post. The Measurement Science and Standards in Forensic Handwriting Analysis (MSSFHA) conference was held over two days in June 2013. It explored the (then) current state of forensic handwriting analysis, also known as forensic handwriting examination (FHE). Presentations varied in content, but most discussed recent advancements in measurement science and quantitative analysis as they relate to FHE.

The conference was organized by NIST’s Office of Law Enforcement Standards (OLES) in collaboration with the AAFS Questioned Document Section, the ABFDE, the ASQDE, the FBI Laboratory, the NIJ and SWGDOC.

Prior odds — their meaning and significance

The concept of ‘prior odds’, a.k.a. prior probabilities or simply priors, comes up in most discussions about the evaluation of evidence.  A related term, posterior odds, also arises. The significance and meaning of both these terms become reasonably clear when viewed in the context of a “Bayesian approach”, or logical approach, to evidence evaluation.  That approach has been discussed at length elsewhere and relates to the updating of one’s belief about events based upon new information.

A key aspect is that some existing belief, encapsulated as ‘prior odds’ about conflicting possibilities, is updated on the basis of new information, encapsulated in the ‘likelihood-ratio’1 (another term you will undoubtedly have seen), to produce some new belief, encapsulated as ‘posterior odds’ about those same conflicting possibilities.
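
To make the relationship concrete, here is a minimal sketch (mine, not taken from the post itself; the figures are purely illustrative) of the odds form of Bayes’ theorem: posterior odds equal the likelihood ratio multiplied by the prior odds.

```python
# Odds form of Bayes' theorem: posterior odds = likelihood ratio * prior odds.
# All numbers below are purely illustrative, not drawn from any casework.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update prior odds with a likelihood ratio to obtain posterior odds."""
    return likelihood_ratio * prior_odds

def odds_to_probability(odds: float) -> float:
    """Convert odds in favour of a proposition into a probability."""
    return odds / (1.0 + odds)

prior_odds = 1.0 / 100.0   # prior belief: 100 to 1 against the proposition
likelihood_ratio = 1000.0  # findings are 1000x more probable under H1 than under H2

posterior_odds = update_odds(prior_odds, likelihood_ratio)
print(posterior_odds)                       # 10.0, i.e. 10 to 1 in favour
print(odds_to_probability(posterior_odds))  # about 0.909
```

Working with odds rather than probabilities keeps the updating step to a single multiplication, which is why discussions of the likelihood ratio are usually framed in the odds form.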

But what precisely do these terms, ‘prior odds’ and ‘posterior odds’, mean and how do they relate to the work of a forensic examiner?

Hilton and Mathematical Probability

In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.1

The later republication included the following abstract:

In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
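
As a rough illustration of the kind of mathematical measure Hilton has in mind (this sketch is mine, not Hilton’s, and it assumes the hypothetical characteristics occur independently in the writing population), the probability of chance duplication can be estimated by multiplying the relative frequencies of the individual characteristics:

```python
# Illustrative only: estimate the probability that an unrelated writer would show
# the same combination of characteristics, assuming (simplistically) that the
# characteristics occur independently. The frequencies are hypothetical values.

feature_frequencies = {
    "unusual capital letter form": 0.05,
    "distinctive t-crossing":      0.10,
    "atypical baseline habit":     0.02,
}

chance_duplication = 1.0
for frequency in feature_frequencies.values():
    chance_duplication *= frequency

print(chance_duplication)  # 0.0001, i.e. roughly 1 unrelated writer in 10,000
```

Whether such a simple independence assumption holds for real handwriting features is a separate question; the sketch only shows the arithmetic involved.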

Hilton’s article was written in eight main sections with references, followed by a discussion among the seminar participants. Today’s review will discuss each section of the article in turn.

Huber, Headrick & Bayes…

Like many document examiners, I consider Huber and Headrick’s 1999 textbook, Handwriting Identification: Facts and Fundamentals, to be a seminal work.1

In my opinion, it is the best textbook written to date on the topic of handwriting identification. The authors provide a comprehensive overview as well as some less conventional perspectives on certain concepts and topics.  In general I tend to agree with their position on many things. A bit of disclosure is needed here: I was trained in the RCMP laboratory system, the same system in which Huber and Headrick were senior examiners and very influential.  Hence, I tend to be somewhat biased towards their point of view.

But that does not mean I think their textbook is perfect. While it is well written and manages to present a plethora of topics in reasonable depth, some parts are incomplete or misleading, particularly when we take into account developments that have happened since it was written.

One area of particular interest to me relates to the evaluation of evidence, specifically evaluation done using a coherent logical (or likelihood-ratio) approach.  I have posted elsewhere on the topic so I’m not going to rehash the background or details any more than necessary.

This post will look at the topic of ‘Bayesian concepts’ as discussed by Huber and Headrick in their textbook.  These concepts fall under the general topic of statistical inference found in Chapter 4, “The Premises for the Identification of Handwriting”.  The sub-section of interest is #21, where the authors attempt to answer the question, “What Part Does Statistical Inference Play in the Identification Process?”  Much of their answer in that sub-section relates to Bayesian philosophy in general and to the application of the logical approach to evidence evaluation.  However, while they introduce some things reasonably well, the discussion is ultimately very flawed and very much in need of correction, or at least clarification.