Presenter: Dylen Matovu
Faculty Sponsor: Devan Walton
School: Northern Essex Community College
Research Area: Cybersecurity
ABSTRACT
Facial recognition technology is now used routinely in criminal investigations, yet the algorithms driving these searches remain scientifically biased and legally unregulated. This project investigates the admissibility and constitutional implications of biometric digital identity evidence derived from such technologies, asking whether current evidentiary rules adequately protect defendants when the “witness” identifying them is a biased algorithm. Police frequently use social media photos as probe images, and NIST studies show that false positive rates for minority populations can be up to one hundred times higher than for white individuals. Compounding this scientific unreliability, law enforcement often conceals its use of facial recognition altogether, presenting algorithmic matches as definitive identifications rather than low-confidence leads, a practice illustrated in cases like State v. Tolbert. Using doctrinal analysis of Brady, Daubert, and Fourth Amendment warrant requirements, alongside an empirical review of NIST data on false positive rates and training set bias, this research argues that facial recognition technology fails to meet the reliability standards required for forensic evidence. Because these systems operate as proprietary black boxes, defendants cannot meaningfully cross-examine the process by which they were identified. The project concludes that without mandated disclosure of the five stages of a facial recognition search, the use of digital identity evidence violates due process. These findings carry broader significance as legislatures in New York and at the federal level consider restrictions on warrantless biometric surveillance; absent strict evidentiary rules or legislative bans, the ease with which police can deploy these tools creates an unacceptable risk of wrongful imprisonment.