4 Apr 2024

Editor Introduction

Technology can be a powerful tool, but it can also be misused. Ethical principles help ensure that technology is used in a way that minimises risks and avoids causing harm to people or society. Issues include data privacy and the algorithmic bias of certain technologies. As the security industry embraces advanced and evolving technologies, we asked this week’s Expert Panel Roundtable: What are the biggest ethical considerations of using emerging technologies in physical security? 


Steve Bell Gallagher Security

Physical security is focused on people and how to keep them safe while allowing authorised and authenticated people access. Emerging technology, especially AI-based, involves training machine learning systems on large datasets, and those datasets must not encode discrimination against any group in the community based on race, gender, or varying abilities. Ensuring fairness and transparency in AI algorithms is essential. The manufacturer, integrator, and system owner also have the added responsibility to ensure that embedding a physical security system into an enterprise does not have unintended consequences, such as the leaking of private information. This means implementing robust encryption, access controls, and secure storage mechanisms to keep data collection and storage secure and to mitigate the threat of breaches. 
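The access-control point above can be illustrated with a minimal sketch. The role names and permissions here are hypothetical, not from any particular product; the point is the default-deny pattern, where every permission must be explicitly granted and anything unknown is refused.

```python
# Hypothetical least-privilege access model for a security platform:
# each role holds an explicit set of permissions; anything not granted
# is denied, including unknown roles and unknown permissions.
ROLE_PERMISSIONS = {
    "operator": {"view_live"},
    "investigator": {"view_live", "view_recorded", "export_clip"},
    "admin": {"view_live", "view_recorded", "export_clip", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default-deny check: unrecognised roles or permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

A check like `is_allowed("operator", "export_clip")` returning `False` is the desired behaviour: exporting footage is a more privacy-sensitive action and is reserved for roles that explicitly need it.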

Fredrik Nilsson Axis Communications

Before development even begins, manufacturers should always be aware of a product’s potential for misuse. Once a piece of technology is made available, enforcing how it is used can be extremely difficult, but manufacturers must clearly define and communicate its intended applications. Now more than ever, businesses need to be forward-thinking, ensuring their products and services are not placed in the hands of those likely to use them in unethical ways. The stakes are particularly high in the physical security industry. When manufacturers are exploring ways to integrate AI into new and existing products, they need to avoid working with customers and partners who have a history of irresponsible behaviour with emerging technology. As regulators around the world begin enforcing new guidelines around AI, it is increasingly vital for ethical behaviour to be made a high priority. 

Florian Matusek Genetec, Inc.

Using emerging technologies such as generative AI in physical security raises important ethical considerations, especially regarding data protection, privacy, accuracy, and bias mitigation. Solutions that make use of these emerging technologies should comply with evolving AI, privacy, and data protection regulations. Wherever possible, developers should use synthetic or anonymised data that does not contain any identifiable information to protect data privacy. Generating a more diverse and representative synthetic dataset can also help minimise biases. AI recommendations should always be scrutinised and should not be relied upon to make critical decisions without human supervision. Guardrails and content moderation can also help ensure the ethical behaviour of AI-generated data. Guardrails prevent AI systems from engaging in harmful behaviours like bias and discrimination, while content moderation uses AI to monitor and manage user-generated content, detecting and removing harmful material. 
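The anonymisation step described above can be sketched as follows. This is an illustrative pattern, not a specific vendor's implementation: direct identifiers are replaced with salted one-way hashes so records can still be linked for analytics without retaining the identifier itself. Strictly speaking this is pseudonymisation rather than full anonymisation, and the field names are assumptions for the example.

```python
import hashlib

# Fields assumed (for illustration) to carry direct identifiers.
PII_FIELDS = {"name", "face_image_ref", "licence_plate"}

def pseudonymise(record: dict, salt: bytes) -> dict:
    """Replace identifying fields with salted SHA-256 hashes.

    The same salt yields a stable pseudonym, so records remain linkable
    for analytics while the raw identifier is never retained.
    """
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            out[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()
        else:
            out[key] = value
    return out
```

The salt should be kept under the same access controls as the original data, since anyone holding it could re-hash candidate identifiers and test for matches.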

Many of the emerging technologies in physical security revolve around collecting data, analysing data, and making decisions based on that data. This brings up considerations of data privacy, and whether it is important to identify people in images or whether knowing that a person or a car was at a certain place and time is enough. It also raises the question of whether decisions are made automatically based on this data, and how fair the rules engine is in making them: for example, how is the decision made to grant a person access to a building based on an image of that person trying to enter? 
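The fairness concern about automated decisions is often addressed with a human-in-the-loop rule: the engine only auto-grants when it is highly confident, and everything else is escalated rather than silently denied. A minimal sketch, with an assumed (not sourced) threshold value:

```python
# Hypothetical human-in-the-loop access rule: an automated grant requires
# both a valid credential and a high-confidence match; uncertain matches
# are escalated to a human operator instead of being decided by the model.
CONFIDENCE_THRESHOLD = 0.95  # assumed policy value for illustration

def access_decision(match_score: float, credential_valid: bool) -> str:
    if not credential_valid:
        return "deny"
    if match_score >= CONFIDENCE_THRESHOLD:
        return "grant"
    return "escalate_to_operator"
```

Routing borderline cases to a person, rather than lowering the threshold, keeps the model's uncertainty visible and auditable instead of hiding it inside automatic denials.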

When exploring ethics in physical security, a predominant concern is around privacy. Video surveillance devices can collect detailed information about people which can be used to analyse behaviours and potentially identify individuals. As manufacturers, we must enable our customers to comply with the ever-evolving ethical landscape around AI-based data collection and privacy. While the metadata typically captured from AI-based cameras is anonymous, it’s important to adopt a privacy-by-design development standard that includes complete transparency about what data is collected, how it is used, and who has access to it. For example, some organisations and geographies may require opt-in face recognition while it remains illegal in others. Machine learning algorithms can inject bias into analytics if they have been trained on skewed data sets. Detection based on race or ethnicity should never be allowed. As tools continue to evolve, so must our ethical use of them. 

Algorithmic bias is one of the primary risks associated with emerging physical surveillance technologies. While the risks of facial recognition software are well known and documented, efforts are being made to adapt computer vision to new and novel use cases. For example, one of the more deeply flawed failures was an attempt to detect aggressive behaviour or body language, which was unfeasible because not enough training data was available. Other physical security systems will face a similar challenge of not discriminating against individuals based on protected factors, due to a lack of training data or, more likely, a lack of training data that is unbiased with respect to gender or race. Companies considering purchasing advanced or emerging physical security systems should enquire about the training data used in developing those systems, so they are not subject to civil penalties resulting from discrimination caused by using them. 
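A buyer's enquiry about training data can be backed by a simple audit on evaluation results: compute accuracy per demographic group and flag any group that falls materially behind the best-performing one. This is a generic sketch of a disparity check, not a regulatory standard; the tolerance value is an assumption.

```python
from collections import defaultdict

def audit_by_group(results, tolerance=0.05):
    """Flag groups whose accuracy trails the best group by > tolerance.

    results: iterable of (group_label, correct: bool) pairs from an
    evaluation run. Returns (per-group accuracy, list of flagged groups).
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        if ok:
            correct[group] += 1
    accuracy = {g: correct[g] / totals[g] for g in totals}
    best = max(accuracy.values())
    flagged = [g for g, acc in accuracy.items() if best - acc > tolerance]
    return accuracy, flagged
```

A flagged group does not by itself prove discrimination, but it is exactly the kind of evidence a purchaser should ask a vendor to produce and explain before deployment.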

Alan Stoddard Intellicene

When we talk about physical security, we also must talk about cyber security. How companies keep themselves protected from cyber threats can be just as critical as how they protect an environment from physical security threats. This is where we can see some emerging technologies skimping. Emerging technologies often rely on collecting and processing vast amounts of data, and ensuring the security of this data against cyber threats is crucial in preventing breaches that could compromise physical security systems, the enterprise network itself, and the privacy of countless individuals. A data breach could be detrimental to environments including hospitals, retailers, cities, government entities, and airports. We must continue holding all technology companies in the physical security industry accountable for protecting not only our people and assets but our data, by focusing on the security of the software, devices, users, and network connections across the operation. 

Andy Cease Entrust Inc.

According to the World Bank, 850 million people globally do not have a physical identity – many of whom are members of marginalised groups, and the majority are children whose births have not been registered. Today, identity, whether that be a driver's license, passport, or birth certificate, is key to accessing the fundamentals of life such as school registration, getting a job, opening a bank account, buying a house, or receiving social assistance payments. It's foundational to social and economic mobility, which presents significant regulatory, ethical, and practical implementation concerns. Physical security continues to blend with digital for hybrid security, meaning there is an even greater need for secure and convenient identity verification, online or in-person. With this increased reliance, ethical considerations around access and inclusion are paramount. This could mean improving access to mobile smartphones, ensuring apps use plain language instead of technical jargon, offering setup assistance at major travel points, etc.

Igal Dvir BriefCam

Physical security has entered a transformative era, catalysed by advanced AI models. While these technologies enhance safety, their deployment raises discussion-worthy questions – especially regarding individual rights. The potential collection and analysis of personal data, for instance, causes unease about the extent to which surveillance for the public good infringes on individual freedoms. Moreover, considerations around algorithmic biases, suboptimal equipment, poor environmental conditions, and even faulty camera placement raise another wave of concern regarding the reliability of technology for supporting sensitive decisions. Balancing the trade-off between individual rights and enhanced safety is a delicate endeavour and must be handled judiciously to ensure that security measures do not overshadow fundamental human rights. Instances of successful, responsible implementation of AI-based technologies support the fact that these challenges are not insurmountable. For instance, identity verification at passport control extracts and compares biometric features in real time without storing any data, preserving privacy while enabling streamlined security. Similarly, differentiating real-time and post-incident data processing is another way end users minimise potential invasiveness: instead of indiscriminately collecting data, a more targeted approach that compares a specific identity with recorded footage can mitigate unnecessary privacy breaches. 
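The passport-control example above follows a compare-then-discard pattern, which can be sketched roughly as below. The cosine-similarity comparison and threshold are illustrative assumptions, not a description of any real border system; the point is that only the match decision leaves the function, and no embedding is ever persisted.

```python
import math

def verify_and_discard(live_embedding, document_embedding, threshold=0.8):
    """Compare two biometric embeddings and return only a yes/no decision.

    Both embeddings go out of scope when the function returns; nothing
    is written to disk, so no biometric data is retained after the check.
    """
    dot = sum(a * b for a, b in zip(live_embedding, document_embedding))
    norm = (math.sqrt(sum(a * a for a in live_embedding))
            * math.sqrt(sum(b * b for b in document_embedding)))
    similarity = dot / norm if norm else 0.0
    return similarity >= threshold  # only the boolean leaves this scope
```

Keeping the retained output to a single boolean is what makes the privacy claim auditable: there is simply no stored artefact from which the traveller's biometrics could later be recovered.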


Editor Summary

Ethical use of technology is about harnessing its power for good while minimising the potential for negative consequences. It's an ongoing conversation as physical security technology develops, but it's essential for building a future where everyone can benefit from technological advancements. 
