Privacy under pressure: regulators continue to rein in data breaches 

No. 81: Bringing you the news that matters in video privacy and security

The spotlight on how corporations manage sensitive and biometric data continues to intensify. Recent high-profile privacy breaches have shown the widespread consequences of inadequate data protection across a range of sectors.

In the UK, the ICO has reprimanded Clyde Valley Housing Association for failing to protect customer data on its online portal. Organizations that neglect proper testing and escalation procedures for data breaches will face repercussions. 

In the US, retail giant Target has come under fire for allegedly using facial recognition technology on customers without their consent, in violation of Illinois' Biometric Information Privacy Act (BIPA). There is growing concern about retailers exploiting personal data without transparent consent mechanisms, raising questions about privacy in commercial spaces.

Privacy concerns increasingly intersect with technological advancement, consumer rights, and regulatory oversight. Whether through inadvertent lapses in data security or deliberate intrusions into personal privacy, the consequences of failing to safeguard consumer data are becoming ever more severe - and regulators are cracking down. Organizations worldwide must prioritize transparent, fair, and lawful processing of personal data to maintain public trust and remain compliant with data protection laws. 

As always, please send any feedback or topics you would like to see covered.

Seena, Editor


News

EU Data Protection Board challenges Meta's "consent or pay" model

The European Data Protection Board (EDPB) has adopted an opinion against Meta's "consent or pay" model, whereby users either consent to data processing for targeted ads or pay for ad-free access. The decision puts pressure on Meta to give users a clear choice regarding personalised advertising and to shift its business model away from user tracking. 

Tech Crunch: EU privacy body adopts view on Meta’s controversial ‘consent or pay’ tactic

Fortune: ‘Meta is out of options’: EU regulators reject its privacy fee for Facebook and Instagram

 

Target faces a class-action lawsuit over alleged biometric privacy violations 

Target is being sued for reportedly using facial recognition technology to collect biometric data without consent, violating Illinois' Biometric Information Privacy Act (BIPA). The lawsuit claims Target did not inform customers or obtain their consent before capturing their biometric data, which it allegedly uses primarily for anti-theft purposes. 

NBC Chicago: Target hit with class-action lawsuit claiming it violated Illinois' biometric privacy law

USA Today: Woman files lawsuit accusing Target of illegally collecting customers' biometric data

 

The ICO reprimands Clyde Valley Housing Association for data breach 

The ICO has reprimanded Clyde Valley Housing Association after a new online portal leaked residents' personal details. Inadequate testing and poor staff training on data breach protocols left the information accessible to other users for five days.

The ICO: Housing association reprimanded for exposing personal information on online portal

Inside Housing: Data watchdog rebukes Scottish landlord for data breach

 

New Zealand supermarket's facial recognition leads to racial profiling concerns  

A facial recognition trial in a Rotorua supermarket mistakenly identified a Māori woman as a thief, highlighting concerns about racial bias in AI technologies used in New Zealand. Experts argue that the AI systems, trained predominantly on data from European-looking individuals, often fail to accurately identify people of color, which can result in wrongful accusations. 

The New Zealand Herald: Foodstuffs facial recognition trial: AI mistaking Māori woman as thief not surprising, experts say

1 News: Rotorua mother wrongly identified by supermarket as a thief

 

Pimloc Company Announcements

Pimloc partners with Cisco Meraki to expand redaction capabilities with Secure Redact

Pimloc has partnered with Cisco Meraki to offer Secure Redact on the Meraki Marketplace. Pimloc's AI-powered solution automatically redacts sensitive data, like faces and license plates, from various types of video footage to bolster data privacy compliance. 

Secure Redact: Pimloc and Cisco Meraki Partnership elevates redaction capabilities and efficiencies with Secure Redact


AI Snippet of the Week

UK government cracks down on non-consensual explicit deepfake images

The UK government announced that creating sexually explicit deepfake images without consent will soon be a criminal offense in England and Wales. This new law aims to tackle the increasing misuse of AI in generating such content and will apply even if creators do not intend to distribute the images.

BBC: Creating sexually explicit deepfakes to become a criminal offence

Gov.uk: Government cracks down on ‘deepfakes’ creation


Policy Updates

California Privacy Protection Agency critiques federal American Privacy Rights Act

The California Privacy Protection Agency has expressed concerns about the American Privacy Rights Act, arguing that the proposed federal law might undermine state-level protections. The agency highlights that the act could dilute existing consumer rights by overriding state laws like the California Consumer Privacy Act and limiting the enforcement capabilities of state authorities.

IAPP: CPPA outlines concerns with proposed APRA

California Privacy Protection Agency: Re: American Privacy Rights Act Discussion Draft


To subscribe to our fortnightly newsletter, please click here

Thanks for reading. If you have any suggestions for topics or content you would like to see covered in future, please drop a note to: info@secureredact.co.uk
