Fingerprints Aid Injustice
One of law enforcement’s most trusted techniques could land you in jail.
Taught in universities and law enforcement academies around the world, and long presumed to guarantee that the criminal gets caught, fingerprinting is widely considered one of the most accurate forensic techniques in existence. However, the technique’s origins, wrongful convictions, human bias, and outdated processes all suggest otherwise.
Fingerprinting entered the forensics world in the late 1800s, when Azizul Haque, the head of identification for a local police department in Bengal, India, created a fingerprinting system based on common pattern types. The method was deemed successful and soon spread across the world, becoming established as a core investigative tool in 1901 after it was introduced to the British government.
Many governments believed that fingerprint identification could establish them as science-based. Fingerprints were considered factual, unlike subjective testimony that could be dismissed. People believed fingerprinting was so accurate that even criminals feared it, attempting to avoid leaving traces of their prints and sometimes even trying to remove their fingerprints completely.
Fingerprinting is still widely used as evidence in criminal cases today. Crime scene investigators use special powders to make prints visible, then place tape over them, lift them, and preserve them for later analysis by forensic scientists, technicians, or trained police officers. Crime scene fingerprints are then matched to people in a database by finding points of similarity.
The issue? Criteria for what constitutes a “match” vary. There is no legal requirement for the number of similar points: though most criminal courts require eight to twelve, some experts may require a minimum of sixteen, eighteen, or even twenty.
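To see how loose that standard is, here is a minimal sketch, in Python, of the point-counting logic. It is an illustration, not any agency’s actual algorithm, and it assumes minutiae have already been reduced to (x, y) coordinates; notice that the required number of points is just a parameter, because no law fixes it.

```python
from math import dist

# Hypothetical input: minutiae reduced to (x, y) coordinates.
# Real examiners also compare ridge direction and minutia type
# (forks, ends, islands, enclosures), not just position.

def count_similar_points(crime_scene, candidate, tolerance=3.0):
    """Count crime-scene minutiae that land within `tolerance`
    units of some still-unmatched minutia in the candidate print."""
    matched = 0
    unused = list(candidate)
    for point in crime_scene:
        for other in unused:
            if dist(point, other) <= tolerance:
                matched += 1
                unused.remove(other)  # each candidate point matches once
                break
    return matched

# Eight? Twelve? Sixteen? The threshold is a choice, not a law.
def is_match(crime_scene, candidate, required_points=12):
    return count_similar_points(crime_scene, candidate) >= required_points
```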
The lack of requirements has contributed to false accusations and convictions of innocent people. Most famously, in 2004, an Oregon lawyer, Brandon Mayfield (who also just happened to be Muslim), was jailed for two weeks because FBI experts “matched” his fingerprints to those found on a bag during the investigation of the Madrid train bombings. As the investigation continued, Spanish law enforcement revealed that the prints actually belonged to an Algerian man. Mayfield was then released, but if every person truly has distinct fingerprint patterns, that mistake should never have been made.
When experts claim that “no two people have the same fingerprints,” they are not referring to the number or configuration of pattern types. Rather, it is the minutiae, or “Galton” details (small deviations in a ridge’s path, including forks, ends, islands, and enclosures), that are said to be unique to a person.
Smithsonian Magazine, on the other hand, points out that the uniqueness claim has never been proven, or even carefully studied, though no case in history has revealed two people sharing the same fingerprints.
However, according to Simon A. Cole, a criminology professor at UC Irvine, the uniqueness of fingerprints and the accuracy of fingerprinting are two different arguments. The uniqueness of a fingerprint reveals nothing about the reliability of the matching process.
According to an article published by Rutgers University, Cole claimed, “Courts failed to grasp the gap in logic between [fingerprint uniqueness and process reliability] and uniqueness became enshrined as the foundation of the accuracy of forensic fingerprint identification.”
Even computer technology, built to improve the efficiency and reliability of fingerprinting, can only present possible candidates, and there is no guarantee the criminal is in the database at all. Human interpretation is still the ultimate judge in the analysis process.
A study published by the University of British Columbia revealed only a 0.1% false positive rate but a 7.5% false negative rate, concluding that fingerprint quality and human error can sway an examiner’s judgment.
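To put those rates in concrete terms, here is a quick back-of-the-envelope calculation; the comparison volume is a hypothetical chosen for illustration, not a figure from the study.

```python
# Error rates as reported by the study; the 10,000-comparison
# volume below is a hypothetical, used only to make the rates concrete.
false_positive_rate = 0.001  # 0.1%: a non-matching print wrongly "identified"
false_negative_rate = 0.075  # 7.5%: a true match missed entirely

comparisons = 10_000
print(f"Wrongful identifications: ~{comparisons * false_positive_rate:.0f}")  # ~10
print(f"Missed identifications:  ~{comparisons * false_negative_rate:.0f}")  # ~750
```

Even a rate that sounds tiny on paper produces real errors at the scale on which prints are compared.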
Human biases have existed throughout our justice system, primarily targeting minorities and low-income citizens. In the case of Brandon Mayfield, partial fingerprints on a paper bag at the terrorist attack scene were enough for the FBI to put him behind bars.
A released FBI report detailing the causes of error in the case revealed that bias stemming from Mayfield’s known prints helped lead to the wrongful arrest. “Circular reasoning,” the report claims, is an “important pitfall to be avoided.” The examiners “began to ‘find’ additional features in [the crime scene fingerprints] that were not really there,” and those features were then ruled as “identified…points of similarity.”
The Mayfield case took place in 2004, three years after the 9/11 terrorist attacks. The country was still affected by the events: it was fighting the Iraq War, enacting policies such as the PATRIOT Act, and attempting to ‘crack down’ on any possible terrorist activity, and Muslims, Arabs, and anyone who “looked close enough” were often targets of government agencies and hate crimes. In the report, the FBI also revealed religious biases within the investigation.
Though the examiners had no knowledge of Mayfield’s religion or other personal details during the initial analysis, his religion, along with his position as an attorney representing other Muslims in court, “likely contributed to the examiner’s failure to sufficiently reconsider the identification after legitimate questions about it were raised,” according to the report.
Evidence like this can influence court judges, as it did in 2002, when U.S. District Court Judge Louis H. Pollak ruled that fingerprint evidence could not be used in a Philadelphia murder case because it did not meet the Supreme Court’s standards of scientific scrutiny as decided in Daubert v. Merrell Dow Pharmaceuticals (1993).
The court’s Daubert criteria hold that a technique or method qualifies as science if it can be tested, has been subjected to peer review, possesses known rates of error, and is generally accepted as science.
Jennifer Mnookin, an evidence expert, told ABC that “the use of fingerprinting has never withstood rigorous scientific testing standards,” and is widely accepted without going through the process of scientific scrutiny.
In order to improve the fingerprint analysis process, researchers at Penn State created a method in 2013 that uses computer programs to “grade a fingerprint” for identification. The article, published in Penn State News, claims, “Computerized grading ensures standardized evaluation to a degree finer than any human can accomplish.”
The process involves three separate computer programs: the FBI’s Universal Latent Workstation, the image editor GIMP, and a custom program written in Mathematica.
The Universal Latent Workstation creates a simplified map of the fingerprint by assigning one of four colors to each area: the background is black, definite ridges are white, and debatable ridges are yellow or blue.
GIMP then converts the map file into an image whose red-green-blue (RGB) color values are stored as numbers, which a computer program can translate into binary sequences.
Mathematica then calculates the percentage of white pixels in the RGB image, producing a zero-to-one-hundred grading scale. The higher the percentage of white pixels, the higher the quality of the fingerprint.
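The Penn State team wrote that final step in Mathematica; the Python sketch below only illustrates the white-pixel count as the article describes it, assuming the color-coded map above (the file name is a placeholder).

```python
# A minimal sketch of the grading step described above, assuming the
# color-coded map: black background, white definite ridges,
# yellow/blue debatable ridges. Illustrative only; the researchers'
# actual program was written in Mathematica.
from PIL import Image  # Pillow

WHITE = (255, 255, 255)  # definite-ridge pixels in the coded map

def grade_fingerprint(path: str) -> float:
    """Return a 0-100 grade: the percentage of white
    (definite-ridge) pixels in the fingerprint map."""
    image = Image.open(path).convert("RGB")
    pixels = list(image.getdata())
    white = sum(1 for rgb in pixels if rgb == WHITE)
    return 100 * white / len(pixels)

# Hypothetical usage: grade_fingerprint("latent_map.png") -> e.g. 42.7
```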
Penn State researchers hope that the CTF technique can “give fingerprint grading unprecedented consistency and objectivity” and help eliminate false conclusions by examiners in cases where crime scene fingerprints have been altered by natural causes such as weather.
Though improvements to forensic fingerprinting have been proposed, none have been implemented. What remains is a flawed analysis process, ruled by human error, bias, and outdated techniques, and encouraged by a lack of legal requirements, that continues to perpetuate wrongful accusations and convictions of innocent citizens while letting the real criminals roam free.