
Leveraging facial recognition technology: compliance, ethics, and business opportunities

As AI evolves, it is shifting from a perceived threat to a strategic opportunity - provided clear guidelines exist for lawful technology use. With those guidelines in place, organizations can harness AI's benefits effectively while ensuring compliance and data protection.

A recent panel discussion, featuring experts Pauline Norstrom (CEO of Anekanta AI), Mike Gillespie (Founder of Advent IM Limited), and Tony Porter (Chief Privacy Officer at Corsight AI and former UK Surveillance Camera Commissioner), delved into the complexities of integrating facial recognition technology (FRT) into businesses.


The regulatory landscape and its impact

Tony Porter highlighted the increasing scrutiny and legal challenges surrounding FRT globally. High-profile lawsuits against companies like Rite Aid and Target illustrate the heightened regulatory environment. Amazon faced legal action under Illinois' Biometric Information Privacy Act (BIPA) for using FRT to track employee time and attendance.

Porter emphasized the need for compliance, noting that staying current with regulatory changes and implementing AI in a compliant manner presents a significant business opportunity. Companies that navigate this environment successfully can build trust through transparent and ethical practices, gaining a competitive edge. He also pointed out that while significant progress has been made on FRT's accuracy and bias issues, the main challenge is ensuring lawful deployment: many of these systems are now highly accurate, but problems arise when companies deploy them without proper oversight, transparency, or data protection measures, or for purposes that go beyond what could be considered reasonable or lawful.


Ethical challenges and AI accountability

Pauline Norstrom discussed the ethical challenges businesses face when implementing AI, arguing that they need to engage in critical thinking and risk modeling to anticipate the potential unintended consequences of new AI innovations. Norstrom also highlighted the principles of ethical AI, which align with those in the EU AI Act and the OECD guidelines.

These principles include respect for the rule of law, human rights, transparency, and security throughout the AI lifecycle.

Norstrom stressed that AI accountability should come from the top down, with a clear chain of responsibility within the organization. This ensures consistency and balance between ethical considerations and business goals. Training and seeking independent advice are crucial for integrating AI responsibly.


Cybersecurity and managing AI risks

Mike Gillespie brought his cybersecurity expertise to the discussion, noting that managing AI regulations and risks is part of a broader cybersecurity strategy: above all, protecting sensitive biometric data from malicious actors is essential.

Gillespie also emphasized involving information, data protection, and privacy managers in AI projects to ensure comprehensive oversight and protection. AI projects should be run as agile, iterative parts of the business, with continuous oversight and governance to prevent “scope creep” (where a system's purpose and scope gradually expand beyond what was originally intended).


Navigating complex challenges and building trust

Porter emphasized the need to combine AI principles, ethical considerations, data protection management, and cybersecurity management into a comprehensive privacy package. For sensitive live FRT applications, he recommended conducting Data Protection Impact Assessments (DPIAs) and making them easily accessible - for example, via QR codes that consumers can scan and read in their own time.

Norstrom added that businesses need to be clear about their goals and their purpose in using FRT. She advised running pilots, seeking feedback from affected groups, and ensuring operational policies are effective. She also pointed to the new British Standard (BS) 9347, which provides a blueprint for implementing AI responsibly by citing relevant UK laws and operationalizing the OECD's trustworthy-AI principles.

Gillespie reiterated that businesses should continuously revisit questions of necessity, proportionality, and safeguards. Continuous governance and oversight are essential, and any scope additions should undergo the same due diligence as the initial pilot.


The insights shared offer a valuable roadmap for businesses looking to integrate FRT responsibly and effectively. Compliance is not just a legal requirement; it's a pathway to earning customer trust and loyalty. Through transparency and ethical practices, businesses can turn regulatory challenges into opportunities for growth and trust-building.

A multifaceted approach is essential for successful FRT integration. Leadership commitment to ethical AI practices sets the tone for the entire organization. Regular training, independent audits, and thorough vetting of technology partners help businesses stay ahead of potential ethical pitfalls. Continuous oversight and agile project management are key to maintaining the integrity and security of AI systems and ensuring they meet stringent data protection standards.

A massive thank you to the speakers for the insightful discussion. 


To start protecting sensitive data today, try Secure Redact.