Privacy concerns and mistrust towards video surveillance and facial recognition technology

In today’s world, cameras are ubiquitous and, particularly in smart cities, it is hard to go anywhere without passing some sort of video-capturing device. This means an enormous volume of video data, often containing sensitive biometric information, is being collected, analysed, and shared every day, presenting major privacy challenges for the public.

Surveillance cameras, of course, have their place. They were instrumental, for example, in identifying and helping to locate the Tsarnaev brothers, the perpetrators of the Boston Marathon bombing in 2013, and interestingly, in the aftermath of the attack, there were increased calls for more video surveillance in the name of public safety (1)(2).

In 2017, the then UK Surveillance Camera Commissioner, Tony Porter, said publicly that he was concerned that overt surveillance systems, including CCTV, body-worn cameras, and drones, could become even more invasive than intended, admitting that regulators and the government were struggling to keep up with the rapid pace of technological change (3). While the GDPR has since introduced requirements for properly handling the data these technologies collect, there is arguably still not enough legislation to keep abreast of ongoing innovation.

In the UK, there are roughly 5.2 million CCTV cameras, one for every 13 people, alongside other forms of overt surveillance such as body-worn cameras, drones, ANPR, and doorbell cameras (4).

Whilst the UK public is largely accustomed to CCTV and video surveillance, concern rises when this captured video is integrated with smarter video analytics and facial recognition technology (FRT).

There are issues of bias and discrimination in these systems, as numerous studies have shown their inaccuracy in identifying women and people with darker skin tones (5). Because FRT is particularly susceptible to inaccuracy, it also raises the issue of false positives, which are especially problematic when the technology is used in a law enforcement context; 2019 research found that FRT misidentified members of the UK public as potential criminals in 96% of scans in London (6). Moreover, there is a lack of substantive evidence that increased surveillance significantly reduces crime (7).
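To see why false positives loom so large here, it helps to work through the base-rate arithmetic: when genuine matches are rare, even a system with an impressive-sounding accuracy figure produces mostly false alarms. The Python sketch below uses entirely hypothetical numbers (a made-up watchlist size, population, and error rates) purely to illustrate the effect.

```python
# Base-rate illustration with entirely hypothetical numbers: a watchlist
# of 5,000 people in a city of 5,000,000, scanned by a system that is
# "99% accurate" in both directions.
population = 5_000_000
watchlist = 5_000
true_positive_rate = 0.99   # share of watchlist members correctly flagged
false_positive_rate = 0.01  # share of everyone else wrongly flagged

true_hits = watchlist * true_positive_rate                     # 4,950
false_alarms = (population - watchlist) * false_positive_rate  # 49,950

precision = true_hits / (true_hits + false_alarms)
print(f"Share of alerts that are correct: {precision:.1%}")    # ~9.0%
```

In this toy scenario roughly nine out of ten alerts point at the wrong person despite the headline accuracy figure, which is consistent in spirit with the high misidentification rates reported in the London trials.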

In 2019, the Ada Lovelace Institute commissioned YouGov to survey roughly 4,000 people across the UK to understand public attitudes towards public and private sector deployment of FRT (8).

The survey found that the majority of the UK public wants greater limitations on the use of FRT; 46% think they should be able to opt out of, or at least consent to, its use. Another conclusion was that a deeper understanding of public perspectives is required to know how best to approach these technologies.


Seize the benefits of automated video redaction today.


People are concerned about the normalisation of surveillance as a result of increased use of facial recognition, but are inclined to accept the trade-off when facial recognition technologies serve a demonstrable public benefit.
— Ada Lovelace Institute, Beyond face value: public attitudes to facial recognition technology (9)

Moreover, despite in-depth knowledge about facial recognition being relatively low, the public is beginning to form nuanced opinions about FRT and the privacy trade-offs it requires.

70% think FRT should be allowed in police investigations, 54% are in favour of its use in smartphones, and 50% support its use in airports to replace passports - all on the condition that sufficient safeguards are in place.

There was majority consensus that police use of FRT in public spaces should be allowed provided it helps to reduce crime. Notably, 80% of those comfortable with police use cite public security as the trade-off that justifies it.

The majority are, however, largely opposed to private sector use of FRT: 77% are uncomfortable with its use by shops to track customers, and 76% with its use by human resources departments to screen job candidates.

Law enforcement’s use of FRT is an interesting question that draws differing responses depending on the jurisdiction. For example, several US cities, including Boston, Minneapolis, San Francisco, Oakland, and Portland, have banned its use by police, citing fears about secret police use as part of their reasoning (10). By comparison, in countries where surveillance is far more normalised, such as China, FRT is viewed less negatively (11).

Additionally, there is still no dedicated legislation governing the use of FRT in the UK, and guidelines about its use remain unclear. The Ada Lovelace Institute paper also found that 55% of the UK public are in favour of government regulation limiting the use of facial recognition technology to specific circumstances (12).

In general, views vary depending on how FRT is used and whether it delivers some tangible benefit. As a result, further dialogue between the public and private sectors, policy-makers, and the wider public is required.

Whilst few know everything about their rights, many consumers are getting savvier about data privacy and expect companies to prioritise these issues.

One of the main concerns among consumers is not knowing how their data is being used. It is therefore essential that businesses understand these concerns and take the initiative in being forthright about how they collect, share, and protect data.

Reputation is perhaps even more important to maintain than consumer experience. Regardless of the company, the AI integrations used, or the video surveillance methods implemented, it is vital to give people more control over their data. Being transparent, offering easily understandable explanations of how data is used, and letting consumers feel they have agency can help garner what all organisations need above all else - consumer trust.
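The automated video redaction mentioned above is one concrete way to act on this: automatically detecting and obscuring faces in footage before it is shared. As a minimal, hypothetical sketch of the idea, the Python snippet below uses the open-source OpenCV library to blur faces found by a stock Haar cascade detector; the file names are placeholders, and production redaction systems rely on far more robust detectors and review workflows.

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal face detection.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact_frame(frame):
    """Detect faces in a single frame and blur each detected region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y+h, x:x+w]
        # A heavy Gaussian blur renders the face unrecognisable.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 30)
    return frame

# Process a video file frame by frame and write the redacted result
# ("input.mp4" and "redacted.mp4" are placeholder paths).
cap = cv2.VideoCapture("input.mp4")
out = cv2.VideoWriter(
    "redacted.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    cap.get(cv2.CAP_PROP_FPS),
    (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
     int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))),
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(redact_frame(frame))

cap.release()
out.release()
```

Because the blur is burned into the pixels rather than stored as metadata, the redaction survives copying and re-encoding of the footage.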


This is part of a five-part series, “Consumers are moving to services that protect their data and privacy”, which explores consumer attitudes towards data privacy, social media, and video surveillance in an age where technology relies more and more on personal and biometric data.

