Forewarned…

Forewarned is forearmed or, if Latin is your thing, “praemonitus, praemunitus”. So the saying goes and clearly there is great value in knowing what lies ahead for us. If we know what is coming our way we can, in theory, prepare properly for any challenge.

Challenges are nothing new to forensic scientists. Critics routinely point out issues they have with our work. Some of those criticisms are fair and reasonable, others not so much. Much of the critical commentary affects a discipline as a whole, demanding a collective response from the discipline’s members. In my experience, disciplines tend to be behind the curve in their responses to critics. Nonetheless, over time some issues have been addressed, at least partially if not completely, through empirical research. Others have not. To be fair, the work needed to properly address the critics is not trivial and requires both time and resources, scarce commodities in modern forensic labs. Overall, things are improving, albeit very slowly.

Criticism takes on a whole new meaning in the context of a court of law. Indeed, I think that criticism is the essence of cross-examination — a fundamental and important aspect of any adversarial justice system. Although essential, it is rarely an enjoyable part of the proceedings for any expert.

Read more

When is a ‘Bayesian’ not a ‘Bayesian’?

Several of the posts on this blog relate to the logical approach to evidence evaluation, also known as the coherent logical approach or the likelihood-ratio (LR) approach. In my opinion, it is the best way to evaluate evidence for forensic purposes no matter what type of evidence is being discussed. I say “best” because it is simple, logically sound, and relatively straightforward to apply in forensic work. It helps to promote transparency through the application of a thorough and complete evaluation process (all points I have explained in other posts).

The reality is, however, that this approach is still not well understood by forensic practitioners, nor by members of the legal profession.

I hope that in time, and with education, that will change. Several workshops I have presented have been aimed at helping examiners understand what the approach really means, how it works, the philosophical basis behind it, and the need for, and benefit of, doing things that particular way. It really does work to the benefit of both the examiner and their ultimate client, the court.

One recurring issue at these workshops relates to the very basic and fundamental concept of what the term “Bayesian” means. For various reasons, but mainly simple misunderstanding, many people in the forensic document examination community hold the term “Bayesian” in negative regard. When the word ‘Bayes’, or any of its many derivations, comes up in conversation, eyes glaze over while heads sag ever so slightly. And those are the positive people in the crowd.

I find such reactions understandable, but unfortunate. The fact is that an understanding of the term is beneficial for anyone interested in how it might be applied in a forensic evidence context, whether or not one ultimately chooses to apply it. Indeed, for me the answer to the question posed above — when is a Bayesian not a Bayesian? — lies in knowing how the overall Bayesian philosophy and theorem (or rule) differ from the more constrained and limited logical approach to evidence evaluation. The two are not the same, or even close to equivalent.
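
For readers who want that distinction in symbols, here is a minimal sketch (the notation is mine: E is the observed evidence and H1 and H2 are the competing propositions). Bayes’ theorem in its odds form reads

\[
\underbrace{\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{\Pr(H_1)}{\Pr(H_2)}}_{\text{prior odds}}
\]

The logical (LR) approach asks the examiner to address only the middle term, the likelihood ratio, and to leave the prior and posterior odds to the trier of fact. That is one way to see why the two are not equivalent: the logical approach uses only one component of the full Bayesian machinery.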

Read more

Can of worms…


When someone “opens a can of worms” it usually spells trouble. For many people, that phrase evokes a powerful image of a writhing mess of worms escaping from a previously sealed, but now opened, can or container, with the result being serious problems for the owner of said can, often of an unanticipated or uncertain nature. In the context of our work as Forensic Document Examiners, I sometimes hear this coming up in discussions of how to handle questions on the stand. The advice goes along the lines of ‘keep your answers simple and say as little as possible in order to limit any opportunity for questions from the other side.’

It is suggested that lengthy or complex answers will only lead to more questions and more discussion. The latter are the proverbial “can of worms” that one must strive to avoid opening.

That makes little sense to me.

Read more

ICFIS 2014 — Teaching the Logical Approach for Evidence Evaluation to FDEs

This year’s International Conference on Forensic Inference and Statistics (ICFIS) is being held at Leiden University in the Netherlands. ICFIS conferences are always very good and this is the 9th such event. I am hoping to attend to present my thoughts on the topic of education relating to the logical (a.k.a. likelihood-ratio or LR) approach to evidence evaluation. Over the last few years I have given several one- and two-day seminars and workshops on this topic, mainly for Forensic Document Examiners (FDEs), though the subject matter relates to all disciplines equally. Those workshops have been great and provided a relatively unusual opportunity to learn about how fully trained examiners come to grips with a complicated and difficult topic, one that is fundamental to FDE work.
Read more

ASQDE “Conclusions and Logical Inference” Workshop 2013

This year the Annual General Meeting of the American Society of Questioned Document Examiners (ASQDE) is being held in Indianapolis, Indiana, from August 24 through 29, 2013. In keeping with the theme, “Demonstrative Science: Illustrating Findings in Reports and Court Testimony”, I will be presenting a one-day workshop entitled “Conclusion Scales and Logical Inference” on Sunday, August 25.
Read more

Introduction to the Logical Approach to Evidence Evaluation

Forensic scientists, individually and as a group, want to be completely logical, open and transparent in their approach to the evaluation of evidence. Such an assertion is unquestionable. Further, I am sure that most document examiners believe this is exactly what they are achieving when they apply the procedures outlined in various traditional textbooks or the SWGDOC/ASTM standards; for example, the SWGDOC Standard for Examination of Handwritten Items. Given the very understandable desire to be logical, I find it strange that so many people have a negative attitude towards anything and everything “Bayesian” in nature. After all, a logical approach to evidence evaluation that conforms to the overall Bayesian philosophy or approach is, quite literally, the embodiment of logic (more specifically, probabilistic logic).

Read more

Intra- vs Inter-source Variation

Some time ago, in 2009 to be precise, a series of posts was made to the CLPEX.com chat board (a discussion group mainly for latent print examiners) that discussed intra-source versus inter-source variation.1 I’ve replicated key parts of the discussion below, with quotes from the original posters, interspersing some of my own thoughts.

The discussion focused on latent print examination (LPE) but many of the concepts cross over to other disciplines, like handwriting examination.

Terminology is key to understanding, so this discussion is worth a review.

The original post was by L.J. Steele who asked,

Anyone have a good set of pictures to illustrate significant intra-source variation — two good-quality rolled prints or two latents known to be from the same person that might trip up a trainee (or even a veteran)? I’m looking for something for an article and/or powerpoint to help attorneys understand what I mean when I talk about intra-source explainable differences.

There were several replies, which I’ll leave out; they addressed the original question but didn’t get into the topic explored in this post.

Then Pat A. Wertheim commented:

I don’t think I have ever heard the term “intra-source.” It is quite common to talk about “same source.” I am not even sure the meaning would be the same or whether there might be some fine distinction between the two.

Has anyone else ever used or heard the term “intra-source?” Is there any difference between that term and “same source?”

That’s where I’ll pick up the response provided by Glenn Langenburg:

(g.) Yeah in the community of folks looking at fingerprint statistics, these are commonly used terms.

I hold a much broader view on this as it applies far beyond the fingerprint realm. In reality, these terms are common to many applications and fields of study. The underlying concepts relating to the source(s) of variation are found throughout statistical theory and methods.

In fact, the differentiation of intra-source and inter-source variation is fundamental to most traditional parametric tests for statistical hypothesis testing, at least when a comparison of means is involved (e.g., the t-test or ANOVA). For reference, the terms ‘intra-source’ and ‘inter-source’ are seen less often in the literature than the similar terms ‘within-source’ and ‘between-source’.
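
As a rough sketch of what that decomposition looks like in practice, the snippet below partitions the total spread of some invented, purely illustrative measurements (one feature, three hypothetical writers) into within-source and between-source components, the same quantities a one-way ANOVA compares:

```python
# Toy illustration (invented data, not casework): within- vs between-source
# variation for one measured feature across three hypothetical writers.
import numpy as np

writers = {
    "A": np.array([4.1, 3.9, 4.3, 4.0]),
    "B": np.array([5.2, 5.0, 5.4, 5.1]),
    "C": np.array([6.1, 5.8, 6.0, 6.2]),
}

all_values = np.concatenate(list(writers.values()))
grand_mean = all_values.mean()

# Within-source (intra-source) sum of squares: spread around each writer's own mean.
ss_within = sum(((v - v.mean()) ** 2).sum() for v in writers.values())

# Between-source (inter-source) sum of squares: spread of writer means around the grand mean.
ss_between = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in writers.values())

df_within = len(all_values) - len(writers)
df_between = len(writers) - 1

# The ANOVA F statistic is the ratio of the between- to the within-source mean squares.
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"SS within = {ss_within:.3f}, SS between = {ss_between:.3f}, F = {f_stat:.1f}")
```

A large F simply says that the writers’ means differ by much more than the scatter within any one writer’s samples would explain.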

Glenn explains the terms as they pertain to the LPE realm, as follows: 

(g.) Intra-source variation is essentially represented by the concept of distortion (i.e. “how different can two impressions appear when in fact, they are from the same source skin”) versus Inter-source variation (i.e. “how similar can two impressions appear when in fact, they are from different sources)–what we might think of as close non-matches.

Given the nature of fingerprints, these fundamental concepts reduce to the points made by Glenn. However, in other domains such as handwriting comparison, the situation is a bit more complicated.  Nonetheless, exact parallels are present.2

The latter term sounds rather like a type of random match probability (RMP), doesn’t it? What is the probability that a given set of common features would be observed, by chance alone, when the samples are in fact drawn from different sources in some (hopefully specified) population? Without some estimate of the second factor (inter-source variation), how is it possible to determine the value of the first factor (intra-source agreement)? The short answer is, you can’t.

Any given feature observed in a comparison will be ‘possible’ under either proposition; only the likelihood of observation changes.

(g.) In the statistical approaches proposed by Neumann, Champod, Mieuwly, Egli, and others, likelihood ratios represent these two competing parts: intra-source versus inter-source variations.  This is intuitive, since analysts are already doing this everytime we offer an opinion.  Everytime we report an identification, at some point we weighed the differences observed and asked ourselves, are these differences likely due to a distortion (within tolerance for Intra-source variation) or are they true discrepancies (within tolerance for Inter-source variation)?

As Glenn notes, this is all encapsulated perfectly in the concept of the likelihood ratio used in the logical approach to evidence evaluation.
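
In symbols (a sketch in my own notation, with E standing for the observed pattern of agreement and difference between the two impressions):

\[
LR \;=\; \frac{\Pr(E \mid \text{same source})}{\Pr(E \mid \text{different sources})}
\]

The numerator is governed by intra-source (within-source) variation and the denominator by inter-source (between-source) variation, which are exactly the two competing parts described above.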

Ultimately, and in terms of the classical ‘identification’ opinion, this also means the examiner came to a conclusion that the evidence can only be explained in one way. All other possible explanations are deemed to be unreasonable to the point that they can be rejected outright. The main issue for most critics who disapprove of such opinions is the implicit application of some unknown threshold beyond which the expression of such a conclusion, an identification, can be justified. What is that threshold and how do we know it has been exceeded? Another obvious, and very important, issue is who should be making such decisions — the examiner or someone else? 

Any and all statistical methods, not just those of a ‘Bayesian’ nature, must take variation into account.3 Generally, this is done by comparing within-source and between-source variation. A simple truism that derives from these concepts is as follows:

Differentiation between two potential sources can be achieved if and only if between-source variation exceeds within-source variation  

Basically, the spread between samples from different sources must exceed the spread among samples from any single source. If there is too much overlap, the samples cannot be effectively distinguished from one another.
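
A minimal simulation (all numbers invented purely for illustration) makes the point concrete. When the separation between two source means is small relative to the within-source spread, attributing a fresh sample to the correct source is barely better than a coin toss; as the separation grows, differentiation becomes reliable:

```python
# Toy simulation: differentiation only works when between-source separation
# is large relative to within-source spread.
import numpy as np

rng = np.random.default_rng(1)
within_sd = 1.0  # spread of repeated samples from the same source


def differentiation_rate(separation, n=10_000):
    """Fraction of trials in which a fresh sample from source A (mean 0)
    lands closer to A's mean than to B's mean (at `separation`)."""
    samples = rng.normal(loc=0.0, scale=within_sd, size=n)
    return np.mean(np.abs(samples) < np.abs(samples - separation))


for sep in (0.5, 1.0, 2.0, 4.0):
    print(f"between-source separation {sep}: "
          f"correctly attributed {differentiation_rate(sep):.0%} of the time")
```

With a separation of 0.5 (half the within-source spread) the rate hovers around 60%; with a separation of 4 it approaches 98%.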

Glenn ended his comments with:

So you had experienced these concepts before, but maybe not heard these exact terms. Also, they differ from Intra-observer variation versus Inter-observer variation.  Whereas, the concept in the previous paragraph deals with how the features can present themselves in an impression (what arrangements are possible)…Inter/Intra observer variations deal with how analysts perceive features.  What features did I perceive today in an impression versus yesterday or last week (in the same impression) (INTRA-OBSERVER) v. How different are the observation from analyst to analyst all examining the same impression (INTER-OBSERVER).  I have some good data on this concept to share with the community soon (in the thesis).

I have to agree with Glenn on all his points.

The concepts of intra- versus inter-source variation are both common to, and critical for, all forms of comparison (and, obviously, decision-making). This is a very interesting topic that comes into play for everything forensic examiners do on a regular basis — even though, as Glenn points out, the terms may not be particularly familiar to some people.

Accuracy and precision

The terms accuracy and precision are often confused or misunderstood, but every scientist, forensic or otherwise, should understand what they mean. In simple terms, ‘accuracy’ relates to how close a result comes to the real or true value (being ‘on target’). ‘Precision’, on the other hand, relates to the consistency of the result under repeated testing. Any given test, statistic or process may produce results that are one or the other, both, or neither.
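
A quick sketch of the difference (with numbers invented purely for illustration): treat the bias of repeated measurements as a rough stand-in for accuracy and their spread as a stand-in for precision.

```python
# Toy illustration of accuracy versus precision: repeated measurements of a
# quantity whose true value is 10.0 (all numbers are invented).
import numpy as np

true_value = 10.0
measurements = {
    "accurate and precise":         np.array([10.1,  9.9, 10.0, 10.1,  9.9]),
    "accurate but imprecise":       np.array([11.5,  8.6, 10.2,  9.1, 10.6]),
    "precise but inaccurate":       np.array([12.1, 12.0, 12.2, 12.1, 11.9]),
    "neither accurate nor precise": np.array([13.2,  8.1, 12.6, 14.0,  9.4]),
}

for label, m in measurements.items():
    bias = m.mean() - true_value   # accuracy: closeness to the true value, on average
    spread = m.std(ddof=1)         # precision: consistency across repeat measurements
    print(f"{label:<30} bias = {bias:+.2f}, spread = {spread:.2f}")
```

The four rows correspond to the four possibilities mentioned above: on target and consistent, on target only on average, consistently off target, and neither.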

Read more