Years ago, in 2013 to be precise, I was invited to speak at the ICA conference held in Montréal, Québec.  The conference had a special session on “distinguishing between science and pseudoscience in forensic acoustics”. Now, I am definitely not an expert in forensic acoustics.  In fact, I know almost nothing about the field other than what I’ve read from time to time. So I wasn’t there to tell the audience anything about forensic acoustics, per se.

My contribution had two goals, one specific and one more general.  The general goal was simply to increase Canadian content at the event; I fit the bill in that regard since (as those ads used to proclaim) 'I am Canadian'.  The specific, and perhaps more important, goal was that I have some knowledge of the basic topic, having lectured about it in the past.

My talk was entitled “A Canadian Perspective on Forensic Science versus Pseudoscience”.1

The talk presented my personal perspective on the matter.  As a professional forensic scientist, specifically a forensic document examiner with over 35 years of experience, I am reasonably familiar with the issues faced by both experts and the courts in Canada, and I could speak to the matter with some authority.  Ultimately, the 'discipline' does not matter for this discussion.  All of the issues apply across the board for anyone and everyone professing to be a forensic expert, regardless of the domain being discussed.

The issues presented in my presentation and article go well beyond what will be discussed in this blog post, but I mention them because they relate to the main topic of the post. Anyway, if you are interested in my personal point of view on the topic, I recommend the paper, which is freely available for download from the POMA website.


As for this blog post, I was prompted to compose it after coming upon another blog post at the Daily Kos that touched on the same topic, albeit somewhat peripherally. That post, from September 2013, discussed Popular Science's decision to shut down comments on its website, lamenting how people spew nonsense and skew important discussions away from science and scientific knowledge.

Although it was not aimed at forensic science at all, the author (Land of Enchantment) put together an interesting list to show how one might differentiate science from pseudoscience. The key points were summarized in a table that looks like the following:

Science                                  | Pseudo-science
Willingness to change with new evidence  | Fixed ideas
Ruthless peer review                     | No peer review
Takes account of all new discoveries     | Selects only favourable discoveries
Invites criticism                        | Sees criticism as conspiracy
Verifiable results                       | Non-repeatable results
Limits claims of usefulness              | Claims of widespread usefulness
Accurate measurement                     | "Ball-park" measurement

On quick review, this all seems pretty good.


It got me wondering: how does the discipline of Forensic Document Examination fare when viewed in this light? Where do we fit? In the right-hand column? The left? Or both?

A complication arises because FDE can be considered in a couple of different ways. First, we might think about the underlying premises or bases for the discipline itself. In other words, are the underpinnings of the discipline ‘scientific’?

Or we might consider the practical application. Are the routine and daily activities of a forensic document examiner 'scientific' in nature? That is, can we say that an examiner is also a scientist?

The latter is arguably the most important because it relates to real-world casework, the stuff that is the essence of the information we ultimately provide to a court as evidence.

At any rate, I think both aspects are important, so I'll try to consider both (as appropriate) in the analysis that follows. Now, let's consider each row of the above table in turn…

  1. Open to change with new evidence vs fixed ideas?  This is tough. I think that, for the most part, the community as a whole tends towards the latter. Not everyone, of course, but our discipline, like most forensic disciplines, is very cautious about 'new' ideas.2 One example is the discipline's grudging acceptance that the concept of error applies in our work. Our work, like that of every other forensic discipline, should not be considered error-free. That does not mean it is bad or worthless, far from it. The key is ensuring that we, and the courts, understand the limits of the work. What we do is not perfect. So what? Nothing can be perfect, so it would be best to just get over it. Examiners have not been particularly willing to accept the notion of error despite plenty of information showing that it is a valid concern and something we should be discussing as well as addressing. At the same time, examiners are very interested whenever something new comes along that can help them in their work. In that regard, every examiner I know is very open-minded. So where does FDE fit in this category?  I think it's a split.
  2. Ruthless peer review vs no peer review? In my experience, the concept of peer review is poorly understood. First of all, one can legitimately question the value of peer review in any domain.3 Second, the definition of what constitutes (meaningful) peer review is not the same for everyone. And, of course, it is likely that the continuum is more nuanced than the question suggests. In traditional scientific domains, peer review refers to the review of one's work by peers in the general (or a specific) scientific community, most often through publication in 'respected' journals. Those journals conduct their own initial review of articles before publication, but the 'real' review happens when others read the article and comment on it (or don't, silence being a form of acceptance or acquiescence). One can certainly point to forensic science publications that fulfill this role for QD. Good examples might be the Journal of Forensic Sciences or the Journal of the ASQDE (among others). But the field of forensic document examination isn't driven by research nearly as much as other domains are. There is some research, of course, but it is a well-established field. People also speak of "peer review" in casework. Strictly speaking, this is not peer review as seen in other sciences. It is a valuable quality assurance measure, to be sure, but it is not peer review, per se. So where does the discipline come out on this one? I would say somewhere in the middle, mainly because 1) most examiners do not publish much, if anything, in journals, 2) our journals are not known for being strongly critical of submissions, and 3) casework review is not 'peer review' in the same sense. So, which column? Again, it's a split.
  3. Takes into account all new discoveries or information, or selects only favourable information? Sadly, I have to say that there is a tendency in the discipline to select favourable information, that which supports the status quo. As noted above, examiners are always interested in new information when it helps them with their casework, and that's positive. However, when criticisms are raised about the way we do our work, most examiners become completely dismissive of the author(s) rather than seeking to understand what the results may actually mean.  Sure, some critics are "out to lunch", but not all of them. At times, it seems there is a pre-determined belief that every qualified examiner is doing everything perfectly. Anyone trying to do research into how well we do our work, or how it might be improved, is often met with criticism or a dismissive attitude. I feel that things are improving in this regard, but I have to rate the discipline negatively in terms of its general openness. Which column? This one goes to the right, I'm afraid.
  4. Invites criticism or sees criticism as a conspiracy? This fits in with my previous point. For various reasons, most of which are not particularly valid, examiners do not and will not "invite criticism". Rather, they see most critical review as part of a conspiracy or an organized attempt to discredit the discipline. That is not true of every 'critic'. It's important to acknowledge that some critics are, indeed, guilty as charged. However, not all critical review is part of a conspiracy by people working against the discipline. The number of such critics is small, yet that is generally how criticism is viewed. Which column? It's a split, in my opinion.
  5. Verifiable results vs non-repeatable results? This one is interesting. In those instances where formal testing has been done, examiners have demonstrated that their practical skills are both valid and reliable (not perfect, but much better than any alternative). However, this point is aimed more at replication of results in published scientific work. In that regard, our discipline is a bit weak for most things because 1) we have not studied as many different facets as we should, and 2) we rarely perform replication studies. Once something has been published, it is generally taken to be 'gospel'. Mind you, that happens in many other domains, so I'm not sure it's a key criticism. Which column? This one falls on the right side.
  6. Limited claims of usefulness vs claims of widespread usefulness? This one is tough. The former would be true if, in fact, examiners adhered to what is written in the literature, which generally recommends that examiners be cautious and circumspect with their opinions. But, in reality, they often aren't. Far too often, examiners make claims about what can be done or achieved that don't really conform to the body of accepted knowledge. In a certain sense, this line in the table is about knowing the limitations of the work and not claiming anything beyond those limits. Do FDEs make exaggerated claims? Sometimes. So, which column? This one falls more to the left than the right, but not all the time.
  7. Accurate measurement vs 'ball-park' measurement? Another tough one to call, but not for the reason some might think. There are two issues. The first, in my opinion, is the concept of measurement itself. Without measurement, accuracy is meaningless, and most of the work we do is not amenable to measurement, at least not easily. Second, assuming measurement is possible, there is the question of the degree of accuracy required for this work. In other words, what constitutes 'accurate measurement'? There are attempts being made to measure some things within the discipline, and where that is being done, the accuracy attained is usually reasonable. Unfortunately, I am someone who feels we do not do this often enough. But that doesn't mean we use 'ball-park' measurements either.  Evaluations in FDE, like those in most forensic domains, are expert assessments of a subjective nature. They are not quantified (as a rule), so the concept of measurement doesn't really apply (though it can still be done). However, when measurements are done, they are generally done well. Which column? I'm going to say that this one leans towards the left side (though I still wish we would do more to 'measure' things in our work).

In summary, it can be argued that we, as a discipline, have slight issues with items 1, 2, and 4 (being in the middle isn't very good). Items 3 and 5 are not good, while 6 and 7 are better. Overall, none of this surprises me very much.  Nor, in my opinion, does it suggest that our discipline is a pseudoscience (on the contrary, I personally consider it to be a legitimate scientific pursuit).  Instead, I take this as another indication that there are things we could and should be doing better.


Footnotes

  1. A Canadian Perspective on Forensic Science versus Pseudoscience, Proceedings of Meetings on Acoustics (POMA), Volume 19, 060002 (June 2013).
  2. In fairness, things are changing. Examiners are much more willing to discuss controversial topics today than they were 10 years ago. That’s a very good and positive change.
  3. And I wonder if ‘ruthless’ peer review happens for any domain.
