SecureRedact

AI in surveillance: national safety vs public privacy

Imagine a world where crime rates plummet by 30%, emergency services respond 20% faster, and communities can monitor natural disasters in near real-time, revolutionising disaster management. This is the promise of AI-powered surveillance: predictive policing, facial recognition, smart cameras, and biometrics can all be major assets to security agencies worldwide.

Today, at least 75 out of 176 countries are actively using AI technologies for surveillance purposes. With enhanced national safety and disaster response capabilities, there’s no doubt that AI has emerged as a powerful tool for the security sector.

Despite the promise of this innovation, there is a tension between the need for security, the pressure of dated regulations, and the growing public demand for privacy, all of which must be held in balance.


The privacy predicament

AI surveillance requires careful thought, especially where public privacy is concerned. We are already starting to see how AI technologies in surveillance can be misinterpreted, misused and mistrusted.

China is often accused of excessive surveillance, which could erode civil liberties or be misused to target minorities and political dissidents. Meanwhile, France's plans for AI surveillance at the 2024 Paris Olympics ignited public outrage, highlighting the potential for misuse and overreach.

On top of that, AI-driven cyberattacks, autonomous weapons, and sophisticated phishing and malware campaigns pose a significant risk. 

US national security officials have already raised the alarm about terrorists and criminals leveraging AI to develop cyber, chemical, and biological weapons. Similarly, the UK government has labelled AI a "chronic risk" to its national security. Generative AI, capable of producing convincing misinformation, only adds fuel to the fire.

National governments and security agencies should be able to leverage technology to carry out their duties and protect national safety, but this does not mean an individual's personal privacy must be compromised or their data made vulnerable.


Where does the balance lie?

A balance between national security and privacy should come in the form of a commitment to responsible and ethical deployment, with relevant safeguards and precautions. Transparency, privacy safeguards, and public debate are essential components of this.

In fact, in March 2023, over a thousand AI experts called for a pause on the creation of "giant" AIs in order to study and mitigate their dangers. The United Nations has also called upon the international community to confront the realities of AI. Meanwhile, the UK has committed to hosting the first global summit on AI safety, bringing together key countries, tech giants, and researchers. This is a step in the right direction, and insights from experts, policymakers, and privacy advocates will be crucial.

As we wait for legislation to make significant and concrete changes, encryption, anonymisation, auditing, and regulatory oversight are all tools to protect privacy in the age of AI surveillance. Industry and business leaders should also continue to participate in the public debate, advocate for ethical and responsible AI surveillance, and stay informed on the pressing issues in the space.
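To make one of these tools concrete, here is a minimal sketch of what an anonymisation step might look like in practice: pseudonymising personal identifiers in a surveillance log with a keyed hash, so records can still be linked for auditing without exposing identities. The secret key, field names, and record layout are purely illustrative, not any particular product's implementation.

```python
import hashlib
import hmac

# Illustrative key only; a real deployment would load this from a secure vault.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a personal identifier with a keyed HMAC-SHA256 token.

    The same input always maps to the same token, so auditors can link
    records belonging to one subject, but the original identity cannot
    be recovered without the secret key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# A hypothetical surveillance log entry.
record = {"name": "Jane Doe", "plate": "AB12 CDE", "event": "vehicle entry"}

anonymised = {
    "subject": pseudonymise(record["name"]),
    "plate": pseudonymise(record["plate"]),
    "event": record["event"],  # non-identifying fields are kept as-is
}
print(anonymised)
```

Because the tokens are deterministic, the same person appearing in two log entries yields the same token, which preserves auditability; deleting the key is then equivalent to irreversibly anonymising the whole log.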


As we set our sights on the horizon of AI surveillance, what can we expect? 

Companies will increasingly grapple with the ethical and reputational implications of their involvement in AI surveillance. As public awareness grows, consumers will demand accountability, which will push businesses to prioritise responsible practices over short-term gains.

National safety and privacy are often posed as opposing forces, but the two can coexist. The path ahead is not entirely straightforward, yet meaningful steps can be taken to protect the right to privacy and enable responsible surveillance, whilst keeping the public safe.


Need help putting privacy into your AI video surveillance?