What is the future of AI-enhanced video evidence in the US legal system?

Artificial intelligence (AI) is transforming sector after sector, and the US legal system is no exception. As the technology advances rapidly, the legal community faces growing questions about how to incorporate AI-enhanced evidence in ways that uphold justice.

One landmark case has brought these issues to the forefront.

In March 2024, a Washington state court ruled AI-enhanced video evidence inadmissible in a high-profile trial, one of the first major cases in which AI-modified footage was offered in a US courtroom.

The decision has sparked debates about the role of artificial intelligence in legal processes and highlighted critical concerns regarding the reliability of AI-enhanced evidence.


The context and legal implications of AI-enhanced video evidence in the Puloka case

In State of Washington v. Puloka, the defendant, Joshua Puloka, was charged with three counts of murder following a shooting captured on a bystander’s smartphone. The defense attempted to introduce an AI-enhanced version of the video, with upscaled resolution and a sharpened image, arguing that the original footage was blurry and lacked sufficient detail to aid the jury’s understanding.

However, the prosecution objected to the use of the AI-modified video, citing concerns that the enhancement added new details that were not present in the original footage, including alterations to facial features and to the visibility of potential weapons.

Ultimately, the court ruled against using the AI-enhanced video, determining that it failed to meet the Frye standard, which requires novel scientific techniques to be generally accepted within the relevant scientific community (here, the forensic video analysis community).

The court found that the AI process lacked transparency, had not been peer-reviewed, and introduced alterations that undermined the integrity of the original footage. The judge also expressed concern about the potential for AI-enhanced video to mislead juries, as it presents a depiction of events that may not accurately reflect reality.


How should AI-enhanced video evidence be handled in legal settings?

AI offers powerful capabilities for improving video quality—such as reducing blur, enhancing resolution, and increasing clarity—which can be valuable in legal contexts. These tools can transform otherwise unusable footage into clearer, more understandable evidence. 

For example, AI can make license plates legible, clarify facial features in low-light conditions, or highlight subtle but crucial actions that could otherwise be missed. 

However, the use of AI-enhanced video evidence also presents risks. 

One of the core issues with AI-enhanced video lies in its tendency to generate data rather than simply process it. Unlike traditional video editing, which adjusts contrast, brightness, or sharpness, AI-based enhancement often involves the addition of new pixels, essentially “guessing” what should be there. 

This process can blur the line between authentic representation and speculative reconstruction. It can produce what experts have termed "the illusion of clarity": a video that appears more detailed and accurate but in fact introduces elements that can mislead jurors.
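
To make that distinction concrete, here is a minimal Python sketch contrasting deterministic interpolation with learned super-resolution. It uses OpenCV with hypothetical file names; the learned model is the publicly available ESPCN, run via OpenCV's dnn_superres module, chosen only for illustration.

```python
import cv2

# Hypothetical low-resolution frame extracted from the footage.
frame = cv2.imread("frame.png")

# Traditional enhancement: deterministic interpolation. Every new pixel is a
# weighted average of pixels that were actually recorded; nothing is invented.
bicubic = cv2.resize(frame, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# Learned super-resolution (needs opencv-contrib-python and a downloaded
# model file). The added "detail" is inferred from patterns in the model's
# training data: a statistical guess about the scene, not a measurement of it.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")  # hypothetical path to pre-trained weights
sr.setModel("espcn", 4)
learned = sr.upsample(frame)

cv2.imwrite("frame_bicubic.png", bicubic)
cv2.imwrite("frame_learned.png", learned)
```

Both outputs look "sharper", but only the first is a pure function of the recorded pixels; the second mixes in information from outside the evidence.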

With the rise of AI tools like deepfakes — AI-generated videos that can make people appear to say or do things they never did — and enhanced video technology, courts will have to confront difficult questions about the reliability of AI-generated evidence and its potential to undermine the credibility of the judicial process. Legal professionals must carefully assess the admissibility of AI-enhanced materials, especially in light of the potential for AI to introduce bias or inaccuracy.

To address concerns around AI-enhanced video evidence, several authentication measures are being implemented. 

First, AI enhancement methods are increasingly subject to peer review, helping establish that the technology is reliable and accepted by the forensic community. Courts also demand transparency from AI tools: the parties offering them must be able to explain how enhancements are applied, so the integrity of the original footage can be assessed.

Another key practice is preserving metadata, which allows any changes made to video footage to be tracked (a minimal sketch of this appears below), supporting forensic analysis and maintaining the chain of evidence. Finally, legal teams can cross-examine the experts behind AI-enhanced evidence, allowing the AI process to be scrutinized just like any other form of forensic testimony.
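
As a minimal illustration of metadata preservation (the file names and fields here are hypothetical, not a forensic standard), a chain-of-custody record might pair cryptographic hashes of the original and enhanced files with a description of the processing applied, so that any undocumented change becomes detectable:

```python
import datetime
import hashlib
import json

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large video files never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A sidecar audit record that travels with the footage: what was done, to
# what, by whom, and when. Re-hashing the files later verifies integrity.
record = {
    "original_file": "bystander_clip.mp4",               # hypothetical names
    "original_sha256": sha256_of("bystander_clip.mp4"),
    "derived_file": "bystander_clip_enhanced.mp4",
    "derived_sha256": sha256_of("bystander_clip_enhanced.mp4"),
    "processing": "4x super-resolution (tool and version recorded here)",
    "operator": "analyst_042",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

with open("bystander_clip_audit.json", "w") as f:
    json.dump(record, f, indent=2)
```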


Our approach to AI-powered video solutions at Pimloc

At Pimloc, we offer a unique approach to AI in video evidence with our flagship solution, Secure Redact. While courtrooms typically require clear, unaltered video to present a full view of the facts, there are scenarios where privacy concerns are critical, even in legal proceedings. Secure Redact is designed to anonymize sensitive data by blurring faces, license plates, and other personally identifiable information (PII) without altering the underlying content of the video itself.
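
For illustration only, the toy sketch below shows the general idea of redaction-by-blurring using OpenCV's bundled Haar cascade face detector. It is not Secure Redact's implementation (production systems use far more robust detection), and the file names are hypothetical; the point is that only detected regions are obscured, while every other pixel passes through unchanged:

```python
import cv2

# Generic face detector shipped with OpenCV; for illustration only.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("courtroom_exhibit.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
size = (
    int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
    int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
)
out = cv2.VideoWriter(
    "courtroom_exhibit_redacted.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Blur only the detected face region; no new content is generated.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    out.write(frame)

cap.release()
out.release()
```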

In certain court cases, particularly those involving sensitive subjects like minors, confidential informants, or private locations, the need to protect personal information becomes paramount. 

By ensuring that privacy is maintained, Secure Redact allows these videos to be shown in court without violating privacy laws or endangering the individuals involved. It preserves the integrity of the footage for evidentiary purposes while complying with regulations such as GDPR or HIPAA in cases involving private information.

This approach is crucial when balancing the need for transparency in evidence with the protection of individual rights, especially when footage is made public or used in multiple jurisdictions with varying privacy laws.


The ruling in Puloka sheds light on the complexities surrounding AI-generated evidence in legal trials. It sets an important precedent, underscoring the need for clear guidelines and rigorous validation of such tools before their use in courtrooms. As AI technology continues to evolve, legal professionals must critically assess its role in the courtroom and ensure that such tools support, rather than compromise, justice.


Secure Redact offers a privacy-first solution that maintains the integrity of the footage.
