Facial recognition in law enforcement: privacy, ethics and innovation
Facial recognition technology (FRT) is increasingly used by law enforcement agencies worldwide. This technology enables departments to identify and track suspects, missing persons, and criminals by analyzing surveillance footage and public camera images.
In the UK, the London Metropolitan Police's deployment of FRT between 2020 and 2022 resulted in 34 arrests. Additionally, the 2023 directive for UK police to intensify retrospective FRT searches emphasizes the growing reliance on this technology.
In the US, federal and local law enforcement agencies have made heavy use of FRT: most use it to secure sensitive data and facilities, aid investigations, and develop systems like touchless prisoner identification.
While FRT has proven to be a useful tool, its growing use and the lack of comprehensive, up-to-date legislation raise several ethical and privacy concerns.
The ethical dilemmas and privacy concerns of facial recognition
The collection, storage, and handling of sensitive biometric data by FRT systems pose substantial risks of misuse and privacy breaches. This is exacerbated by the lack of transparency around the data sources used to train these systems. For example, law enforcement agencies have come under scrutiny for working with Clearview AI, known for its extensive database of over 40 billion public images scraped from the internet.
Additionally, the deployment of FRT in public spaces often lacks explicit consent from those being monitored, raising concerns about personal autonomy and privacy invasion.
The technology's inherent risk of algorithmic bias also introduces another layer of complexity, particularly the potential for FRT to produce false positives or engage in unjust profiling. In the context of the criminal justice system, these vulnerabilities run the risk of exacerbating existing social inequities and seriously impacting those already facing discrimination.
The evolving regulatory landscape and police response
The regulatory environment surrounding FRT in law enforcement is marked by gaps and inconsistencies.
In the US, the absence of federal regulation on FRT use by law enforcement has led to a patchwork of local policies, with some states and cities implementing bans or restrictions. This fragmented regulatory landscape complicates efforts to manage FRT's ethical and privacy implications consistently.
A 2024 report from the National Academies of Sciences, Engineering, and Medicine warns that a lack of clear and consistent regulation raises significant concerns regarding privacy, equity, civil liberties, and the potential for misuse of technology.
To address these issues, some states like Massachusetts have implemented legislation requiring a warrant for FRT use in criminal investigations; Kentucky and Louisiana are also considering legislation mandating similar measures. The growing call for standardized policies is echoed by entities like the RAND Corporation, which advocates for higher authorization requirements and greater transparency around FRT usage.
Proactive measures by law enforcement to build trust around FRT
Internationally, law enforcement agencies are taking proactive steps to build trust and ensure responsible FRT use. Given the technology's complexity, a comprehensive approach is needed to balance its benefits against potential risks to human rights.
In the UK, police inform the public before deployments as a means of maintaining transparency. The biometric data of individuals not matched to watchlists is immediately and automatically deleted, and watchlists are destroyed after each operation. The College of Policing has also issued guidance on using FRT, emphasizing that deployments must be necessary, proportionate, and fair. In addition, the UK's National Physical Laboratory independently tests FRT algorithms as part of an effort to ensure fairness and reduce bias.
In 2023, members of the US Congress introduced the Facial Recognition Act of 2023. The bill aimed to provide transparency, prevent discriminatory algorithms, give defendants due process protections, and limit the technology's use to only necessary cases. It would also restrict FRT use without a warrant, prevent the technology from being used to interfere with the freedom to protest, and establish other safeguards.
European lawmakers have started to lay significant foundations with the EU's AI Act, setting high standards to which FRT can be held. These are strong initial efforts that bodies and regulators internationally can build on.
Globally, law enforcement agencies and independent bodies are also developing best practices for using FRT, focusing on ethical considerations and the protection of individual rights. For example, the World Economic Forum has led a global multistakeholder effort to mitigate the risks of FRT and create a governance framework for responsible use.
By championing principles like necessity and proportionality, independent oversight, rigorous algorithm testing, community engagement, and ethical guidelines and regulations, law enforcement agencies can promote responsible use of FRT.