The 11th International Conference on Forensic Inference and Statistics (ICFIS 2023) is set for June 12–15, 2023, at the Faculty of Law (Juridicum) of Lund University in Lund, Sweden. While I am saddened that I cannot attend this particular meeting, several years ago I had the pleasure of attending the 2014 International Conference on Forensic Inference and Statistics, the 9th iteration of the conference, and I wrote a blog post about that meeting some time ago.
I can say, based on past experience alone, that this meeting is well worth attending. That’s particularly true if you are interested in the logical approach to evidence evaluation, but it would benefit any forensic scientist. You will not find a better collection of brilliant people all focused on forensic inference, in the broadest sense.
Forensic scientists, lawyers, academics—they will all be there.
When an examiner expresses an opinion along the lines of ‘the findings support one proposition over another proposition’, a question often follows: does that opinion mean ‘it is more likely than not that the favored proposition is true’? The short answer is “no, it does not mean that.” At least, not necessarily.
To reach such a conclusion one must consider information that goes beyond the FDE evidence. As a rule, any opinion I provide is constrained to the probability of the findings/observations under each of at least two possible explanations. Equating the two statements is inappropriate because they are not equivalent: support for a proposition speaks to the strength of the evidence, not to the overall probability that the proposition is true.
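A minimal sketch may help show why the two statements differ. The helper function and all numbers below are my own invention for illustration (not any standard forensic tool): in the odds form of Bayes’ theorem, a likelihood ratio above 1 “supports” a proposition, yet the posterior probability can remain well below 0.5 when the prior odds are low.

```python
# A minimal sketch (all numbers hypothetical) of Bayes' theorem in
# odds form: posterior odds = prior odds * likelihood ratio (LR).
def posterior_prob(prior_odds, likelihood_ratio):
    """Convert prior odds plus an LR into a posterior probability."""
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The findings are 10x more probable under the favored proposition
# (LR = 10), so the evidence "supports" it -- yet with low prior odds
# the posterior probability stays well below 0.5 ("more likely than not").
print(round(posterior_prob(prior_odds=0.01, likelihood_ratio=10), 3))  # 0.091
```

Deciding whether the proposition is “more likely than not” requires the prior odds, which depend on case information outside the examiner’s findings.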
Years ago, in 2013 to be precise, I was invited to speak at the ICA conference held in Montréal, Québec. The conference had a special session on “distinguishing between science and pseudoscience in forensic acoustics”. Now, I am definitely not an expert in forensic acoustics. In fact, I know almost nothing about the field other than what I’ve read from time to time. So I wasn’t there to tell the audience anything about forensic acoustics, per se.
Many years ago I came across an interesting, if limited, discussion in a blog post entitled “Expert testimony in pattern evidence cases – is absolute uniqueness necessary?” That post is dated September 4, 2009, shortly after the publication of the National Academy of Sciences’ report “Strengthening Forensic Science in the United States: A Path Forward”. But the basic question it poses is still relevant today. I would say that most forensic practitioners today would answer the question about the necessity of ‘absolute uniqueness’ in the negative, though their individual reasons for that answer will vary.
For many people, ‘absolute uniqueness’ is mainly a ‘forbidden’ concept because of some policy they must follow, or because of a more personal recognition of an (often vague) issue relating to the ‘limits of science’. For others, the matter is a well-defined issue in science and logic dictated by the nature and limits of information (knowledge), what information can really tell us about the world, and how information can and should be used to update beliefs about the world. For the latter group (which is steadily growing as awareness and understanding improve), the concept of ‘absolute uniqueness’ is neither required nor even beneficial in forensic work.
There has been a LOT of discussion about this in recent years, but I found the blog post interesting at the time even though it focused mainly on latent print examination. I feel that not much has changed since then so, even now, it deserves recognition and consideration. Since the blog itself is no longer active, I have reposted the complete series of messages here (pulling them from archive.org).
The topic started with a post from the moderator (Barry Fisher) who wrote:
Expert testimony in pattern evidence cases – is absolute uniqueness necessary?
What information is needed to form a conclusion about an identification? Do conclusions require statistical data, as in DNA cases, to offer an opinion? Is it possible to state that two items of evidence come from a sole source? What may an expert opine when no statistical data is readily available and only experience suggests a conclusion? The National Academy report raises some profound questions and some intriguing research possibilities. But in the interim, while we wait for academics to study the multitude of pattern evidence forensic scientists encounter in their day to day work, who may report cases and testify in court? Readers are invited to speak to these issues.
The 79th Annual General Meeting of the American Society of Questioned Document Examiners (ASQDE, Inc) was held August 10th to 12th, 2021. It was again conducted online due to the COVID-19 pandemic. The theme for this year’s meeting was “ASQDE_AGM 2.0 ver. 2021 – The Future is Now”.
The 73rd Annual General Meeting of the American Academy of Forensic Sciences was held February 15th to 19th, 2021. It was an online meeting and had the theme of “One Academy Pursuing Justice through Truth and Evidence”.
The 78th Annual General Meeting of the American Society of Questioned Document Examiners (ASQDE, Inc) was held August 10th through 14th, 2020. It was a new type of meeting necessitated by the COVID-19 pandemic. The meeting was originally planned to be held in Frankenmuth, Michigan but a (very wise) decision was made to hold an entirely virtual meeting instead. The theme for this year was “Future-Proofing Questioned Documents”.
The expression “better late than never” applies to this post. Over the span of two days in June 2013 the Measurement Science and Standards in Forensic Handwriting Analysis (MSSFHA) conference was held. It explored the (then) current state of forensic handwriting analysis, aka forensic handwriting examination (FHE). Presentations varied in content, but most discussed recent advancements in measurement science and quantitative analysis as they relate to FHE.
The conference was organized by NIST’s Law Enforcement Standards Office (OLES) in collaboration with the AAFS — Questioned Document Section, the ABFDE, the ASQDE, the FBI Laboratory, the NIJ and SWGDOC.
The concepts of ‘prior odds’, a.k.a. prior probabilities or simply priors, and ‘posterior odds’ come up in most discussions about the evaluation of evidence. The significance and meaning of both terms becomes clear in the context of a “Bayesian approach”, or the logical approach, to evidence evaluation. That approach has been discussed at length elsewhere and relates to the updating of one’s belief about events based upon new information. A key aspect is that some existing belief, encapsulated as the ‘prior odds’ of two competing possibilities or events, is updated on the basis of new information, encapsulated in the ‘likelihood ratio’ (another term you will undoubtedly have seen), to produce a new belief, encapsulated as the ‘posterior odds’ of those same competing possibilities.
But what precisely do these terms, ‘prior odds’ and ‘posterior odds’, mean and how do they relate to the work of a forensic examiner?
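Before answering, the updating rule itself can be sketched in a few lines. This is an illustrative demonstration only, with invented numbers and a helper function of my own naming, not an actual casework calculation:

```python
# Illustrative sketch of the odds form of Bayes' theorem described
# above: posterior odds = prior odds * likelihood ratio.
def update_odds(prior_odds, likelihood_ratio):
    """Update belief about two competing propositions H1 vs H2."""
    return prior_odds * likelihood_ratio

prior_odds = 0.5   # before the findings: H1 judged half as probable as H2
lr = 100.0         # the findings are 100x more probable under H1 than H2
posterior_odds = update_odds(prior_odds, lr)
print(posterior_odds)  # 50.0 -> belief now strongly favors H1
```

The examiner’s findings contribute only the likelihood ratio; the prior and posterior odds belong to the trier of fact, which is the crux of the question above.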
In 1958 Ordway Hilton participated in Session #5 of the RCMP Seminar Series. His article was originally published in that series by the RCMP, and subsequently republished in 1995 in the International Journal of Forensic Document Examiners.
The later republication included the following abstract:
In every handwriting identification we are dealing with the theory of probability. If an opinion is reached that two writings are by the same person, we are saying in effect that with the identification factors considered the likelihood of two different writers having this combination of writing characteristics in common is so remote that for all practical purposes it can be disregarded. Such an opinion is derived from our experience and is made without formal reference to any mathematical measure. However, the mathematician provides us with a means by which the likelihood of chance duplication can be measured. It is the purpose of this paper to explore the possibility of applying such mathematical measure to the handwriting identification problem to see how we might quantitatively measure the likelihood of chance duplication.
Hilton’s article is written in eight main sections with references, and is followed by a discussion among seminar participants. Today’s review will discuss each section of the article in turn.
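The kind of “chance duplication” calculation Hilton explores can be sketched briefly. The frequencies below are invented for illustration, and the calculation assumes the observed writing characteristics occur independently in the population, an assumption that rarely holds for real handwriting features and is a well-known limitation of this simple model:

```python
# Sketch of the naive chance-duplication model: if each observed
# characteristic occurs independently with some population frequency,
# the chance that a different writer shows the same combination is
# the product of the individual frequencies (hypothetical values).
from math import prod

feature_freqs = [0.2, 0.1, 0.05, 0.3]  # invented per-feature frequencies
p_chance_duplication = prod(feature_freqs)
print(round(p_chance_duplication, 6))  # 0.0003
```

Even with only four moderately common features, the combined frequency becomes very small, which is the intuition behind Hilton’s phrase “so remote that for all practical purposes it can be disregarded.”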