Forensic evidence needs to be considered impartially and without prejudice. Recently, cognitive bias has become something of a buzzword in forensic science circles…
Recently I read a slightly tongue-in-cheek tweet from an attendee at a forensic science conference, along the lines of "you're no-one if you don't mention cognitive bias in your talk". So what is it? Have I got it, and how can I go about removing it from my practice?
Much forensic science evidence is seen as black or white, right or wrong, but what about when the forensic examiner has to draw on their experience and training to provide an expert opinion? At that point the work goes beyond pure experimental measurement and includes subjective comparison, and the analysis can become biased in many ways. In scientific terms, bias in an analytical system can be defined as the difference between the true value and the value measured in the lab. In a laboratory, bias may be controlled by calibrating instruments, properly training staff and using quality controls (often including blind samples). In an expert opinion it is much harder to overcome, because the expert is usually not aware they are being biased.

I have been aware of this term for a number of years, and even had some training from two academics from the University of Auckland whilst I was at ESR in New Zealand, probably four or five years ago. It is, however, a very current issue this year: the Forensic Science Regulator has published draft guidance on how to recognise and guard against it. The Regulator (currently not an actual person, pending the arrival in November of Dr Gillian Tully) has asked for feedback on the document before the end of October (in case anyone is interested, see link here). In the document, cognitive bias is defined as:
A pattern of deviation in judgement whereby inferences about other people and situations may be drawn in an illogical fashion.
The famous example of fingerprint bias is that of Brandon Mayfield, who was suspected of being one of the Madrid train bombers. Prints recovered from a bag of detonators were sent from Spain to the FBI for help, and a database search flagged Mayfield's fingerprint as a partial match. When the examiners compared the prints they found ten points of similarity. At that point, rather than acknowledge that there were also differences between the prints, the examiners found further similarities to back up their hypothesis, similarities that weren't actually there. According to the official report, the examiners were "unconsciously seeking out information to confirm their hypothesis of identification" and subsequently applied a lower level of scrutiny than usual.
Two weeks after his arrest, the Spanish police announced they had matched the print to an Algerian national. The FBI had to release Mr Mayfield, and subsequently apologised and settled for at least $2M in damages. The FBI blamed poor-quality prints for the mix-up.
This means the examiner has jumped to a conclusion without the evidence to really back it up, or has used their knowledge of the case to make an item of evidence fit the picture. This could stem from knowledge of the case, the victim or the suspect, or from more general assumptions about the type of crime, the type of criminal, the people of the city involved, or any of the many other factors that make up personal prejudice. What kind of forensic situation might this apply to? One example would be a detective giving the forensic lab only partial information, making it appear they had a cut-and-dried case built against someone: a "we just need the prints to match and then we're done" kind of approach. Unknowingly, this information could lead a fingerprint examiner to call a match at a level of agreement where, in other circumstances, they would say it was insufficient or would exclude that individual. There have been many studies in which the same prints were re-examined with a different scenario presented, and different results were recorded. This effect is apparently more marked when the prints are of low quality (for example through age or poor development). The Mayfield case described above is a prime example.
The document from the Regulator goes on to define various other types of bias that may be latent and have implications for forensic science, such as confirmation bias. This may be defined as searching for answers to the question you want answered, rather than objectively analysing the evidence. It is rather similar in definition to the above, but draws specifically on case knowledge rather than on other personal knowledge.
How to avoid bias
There are several ways to try to remove bias from your work. Using 'blind' analysis is one: you remove the context from the examination until the results are in, then determine whether they fit the hypothesis. This avoids searching for extra match points or glossing over any differences. Other countermeasures include taking a very structured approach to analysis; the examples given are CAI (the case assessment and interpretation model, which uses Bayesian methods) and fingerprinting's ACE-V (analysis, comparison, evaluation and verification). Contemporaneous notes are also suggested as key, as they guard against recollection bias later on.
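The Bayesian reasoning behind CAI can be illustrated with a small sketch. The idea is that the scientist reports only a likelihood ratio (how much more probable the evidence is under the prosecution's hypothesis than under the defence's), while the prior odds belong to the court; this separation is itself a guard against the scientist absorbing case context. All numbers below are hypothetical, chosen purely to show the arithmetic:

```python
# Illustrative sketch of the likelihood-ratio reasoning used in
# Bayesian case assessment. All values are hypothetical examples.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# The expert reports only the likelihood ratio:
#   LR = P(evidence | prosecution hypothesis) / P(evidence | defence hypothesis)
p_e_given_hp = 0.99   # hypothetical: probability of this match if suspect left the print
p_e_given_hd = 0.001  # hypothetical: probability of this match if someone else did
lr = p_e_given_hp / p_e_given_hd

# The prior odds are for the court to assign, not the scientist;
# here an example value is used just to complete the calculation.
prior = 1 / 1000
posterior = posterior_odds(prior, lr)

print(f"LR = {lr:.0f}")                     # LR = 990
print(f"posterior odds = {posterior:.2f}")  # posterior odds = 0.99
```

The point of the sketch is the division of labour: the evidence changes the odds by a fixed factor (the LR), regardless of what the examiner believes about the rest of the case.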
One final, very important point is to avoid playing a role in the case: remember that scientists acting as expert witnesses are not part of the prosecution or the defence, they are simply there to help the court.
Thanks for reading! Any comments?
Is a collaboration of both ever used, with one examiner given all the information about a case and another given the same sample without that information? If so, which tends to be more reliable? While someone who knows the context of a sample may make more leaps, surely those testing 'blind' may miss details that an examiner with more contextual knowledge would pay closer attention to?
Good question. I am not aware of any such QA scheme. It is an interesting thought, and I am sure you are right on the second point, but the real danger is going a step further and imagining that the result points to one answer, rather than relying on hard evidence. Double-checking of work is therefore very important before results are reported!