The physical security industry has been in love with the cloud for quite some time. And understandably so. The promise of instant scalability, centralised access, and simplified maintenance is hard to ignore, especially in an era of remote work and distributed facilities. But reality is catching up to the hype.
For many, especially those dealing with video surveillance at scale, the cloud is no longer the catch-all solution it once seemed. Rising costs, bandwidth limitations, and latency issues are exposing its shortcomings. And as resolutions climb from HD to 4K and beyond, that burden only grows heavier.
This is where edge computing, specifically AI-enabled edge processing available in modern security cameras, starts to look less like an option and more like a necessity.
But it’s not just about adding intelligence to cameras. It’s about how that intelligence is deployed, scaled, and maintained. This leads us to containerisation and tools such as Docker, a revolutionary piece of the puzzle.
When cloud isn’t enough
Let’s start with a basic issue. Cloud analytics for video sounds great in theory: stream everything to the cloud, let powerful servers do the thinking, then serve up results to end-users in real time. However, in practice, this model can break down quickly for many end-users. Raw video is heavy. A single 4K camera streaming 24/7 can generate terabytes of data per month. Multiply that by hundreds or thousands of cameras, and the bandwidth and storage costs become unsustainable.
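To put rough numbers on that claim, here is a back-of-the-envelope calculation in Python. The bitrate is an assumption (a plausible average for a continuously recorded 4K stream); real figures vary widely with codec, frame rate, and scene activity.

```python
# Back-of-the-envelope estimate of monthly data volume for one 4K camera.
# BITRATE_MBPS is an assumed average bitrate, not a measured value.
BITRATE_MBPS = 8                      # megabits per second
SECONDS_PER_MONTH = 30 * 24 * 3600    # 30-day month, recording 24/7

bytes_per_month = BITRATE_MBPS * 1_000_000 / 8 * SECONDS_PER_MONTH
terabytes = bytes_per_month / 1e12

print(f"One camera at {BITRATE_MBPS} Mbit/s: ~{terabytes:.1f} TB per month")
print(f"A 500-camera estate: ~{terabytes * 500:.0f} TB per month")
```

Even at this conservative bitrate, a single camera approaches three terabytes a month, and a large estate runs well past a petabyte.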
Then there’s latency. If AI needs to detect a person entering a restricted area or identify a licence plate in motion, seconds count. Routing video to a cloud server for analysis and waiting for a response can introduce delays. Add in concerns about uptime, such as what happens when the internet connection goes down, and it becomes clear why relying exclusively on the cloud creates friction for mission-critical deployments.
The edge advantage
Edge processing turns that model on its head. Instead of sending everything out for analysis, edge-enabled cameras do the heavy lifting on-site. AI algorithms run directly on the device, interpreting what they see in real time. They generate lightweight metadata describing events, objects, and behaviours rather than raw video. This metadata can be used to trigger alerts, inform decisions, or guide further review.
The benefits are obvious: latency drops, bandwidth use plummets, and storage becomes more efficient. Edge processing solves many cloud deployment issues by keeping the compute where the data is generated, on the device. This frees the cloud up to do what it’s best at: providing scalable and centralised access to important footage. But where does the edge go from here? How do we evolve these powerful IoT devices to deliver even more situational awareness?
Enter Docker: An app store for Edge AI
This is where the concept of containerisation and open development platforms like Docker comes in. Containers are easiest to understand through an analogy. Imagine you're getting ready for a trip. Rather than hoping your hotel has everything you need, you pack a suitcase with all your essentials: clothes, toiletries, chargers, maybe even snacks.
When you arrive at your destination, you open the suitcase and you’re ready to go. You don’t need to borrow anything or adjust to whatever the hotel has, since you’ve brought your own reliable setup.
Containers in software work the same way. They package an app along with everything it needs to run: the code, settings, libraries, and tools. This means the application behaves exactly the same, whether it’s running on a developer’s laptop, on the edge in an IoT device, or in the cloud.
There’s no last-minute scrambling to make it compatible with the environment it lands in, because it’s self-contained, portable, and consistent. Just like a well-packed suitcase simplifies travel, containers simplify software deployment. They make applications faster to start, easier to manage, and more predictable, no matter where they’re used.
For a security camera with a powerful edge processor, it’s like giving the camera its own specialised toolkit that can be swapped out or upgraded without touching the rest of the system. It also means you can run multiple AI applications on a single camera, each in its own isolated environment.
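As a concrete sketch of that idea, the snippet below uses the Docker SDK for Python to start two independent analytics containers on a device that exposes a Docker-compatible API, as some edge cameras do. The device address, port, and image names are hypothetical placeholders rather than real products.

```python
import docker

# Connect to the Docker engine on the camera. The address and port are
# placeholders and assume the device exposes a Docker-compatible API.
camera = docker.DockerClient(base_url="tcp://192.0.2.10:2375")

# Two hypothetical analytics images, each in its own isolated container,
# so either can be updated or removed without touching the other.
for image, name in [
    ("example.com/analytics/fall-detection:1.2", "fall-detection"),
    ("example.com/analytics/pallet-counter:0.9", "pallet-counter"),
]:
    camera.images.pull(image)
    camera.containers.run(
        image,
        name=name,
        detach=True,                                # run in the background
        restart_policy={"Name": "unless-stopped"},  # come back after reboots
    )
```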
Want to add fall detection to a healthcare facility’s camera network? Just deploy the analytics in a container. Need to monitor loading docks for pallet counts at a warehouse? Spin up a different container. These applications don’t interfere with each other and can be updated independently.
As a developer, if you build on an open container platform like Docker, any system that supports Docker can run your software. This removes the need for expensive custom work for each partner and ecosystem, and it is one reason Docker containers are tried and true in the broader IT space and are now gaining traction in the security sector.
Docker also makes this scalable. Developers can build AI tools once and push them out to hundreds or thousands of devices. Integrators and end-users can customise deployments without being locked into proprietary ecosystems. And because containers isolate applications from core system functions, security risks are minimised.
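A minimal sketch of what that roll-out could look like, under the same assumptions as above: one image is deployed to a list of devices, replacing any older version of the analytic already running there. The device addresses and image name are placeholders, and a production roll-out would add authentication, error handling, and staged deployment.

```python
import docker
from docker.errors import NotFound

IMAGE = "example.com/analytics/licence-plate:2.0"             # hypothetical analytic
DEVICES = ["tcp://192.0.2.11:2375", "tcp://192.0.2.12:2375"]  # fleet, abridged

for address in DEVICES:
    client = docker.DockerClient(base_url=address)
    client.images.pull(IMAGE)

    # Replace the previous version of this analytic, if one is running.
    try:
        old = client.containers.get("licence-plate")
        old.stop()
        old.remove()
    except NotFound:
        pass

    client.containers.run(IMAGE, name="licence-plate", detach=True,
                          restart_policy={"Name": "unless-stopped"})
```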
Metadata, not megabytes
One of the most underappreciated aspects of this method is the way it redefines data flow. Traditional video analytics systems often require full video streams to be processed in centralised servers, either on-premises or in the cloud. This model is brittle and costly, and it’s also unnecessary. Most of the time, users aren’t interested in every frame. They’re looking for specific events.
Edge AI enables cameras to generate metadata about what they see: “Vehicle detected at 4:02 PM,” “Person loitering at entrance,” “Package removed from shelf.” This metadata can be transmitted instantly with minimal bandwidth. Video can still be recorded locally or in the cloud, but only accessed when needed.
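To make the contrast concrete, here is one way such an event might look on the wire. The field names and values are illustrative only, not a standard schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical metadata event describing a detection; the schema is
# illustrative, not taken from any particular camera or standard.
event = {
    "camera_id": "dock-cam-07",
    "timestamp": datetime(2025, 5, 14, 16, 2, tzinfo=timezone.utc).isoformat(),
    "event": "vehicle_detected",
    "object": {"type": "vehicle", "confidence": 0.94},
    "zone": "loading_dock_entrance",
}

payload = json.dumps(event).encode("utf-8")
print(f"{len(payload)} bytes")  # a couple of hundred bytes, versus megabytes of video
```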
This dramatically reduces network load and allows the cloud to be used more strategically: for remote access, long-term archiving, or large-scale data aggregation, without being overwhelmed by volume.
Building smarter systems, together
An equally important aspect of containerisation is how it opens up the ecosystem. Traditional security systems are often built as closed solutions. Everything—from the cameras to the software to the analytics—comes from a single vendor. While this simplifies procurement, it limits innovation and flexibility.
Docker flips that model. Because it’s an open, well-established standard, developers from any background can create applications for edge devices. Integrators can mix and match tools to meet unique customer needs. A single camera can run analytics from multiple third parties, all within a secure, containerised framework.
This is a profound shift. Security cameras stop being fixed-function appliances and become software-defined platforms. And like any good platform, their value increases with the range of tools available.
Hybrid: The realistic future
So, where does this leave the cloud? It is still essential, but in a more specialised role. The most robust, future-proof architectures will be hybrid: edge-first and cloud-supported. Real-time detection and decision-making happen locally, where speed and uptime matter most. The cloud handles oversight, coordination, and data warehousing.
This hybrid model is especially useful for organisations with complex deployments. A manufacturing plant might retain video locally for 30 days but push older footage to the cloud to meet retention requirements.
A retail chain might analyse customer flow on-site but aggregate trend data in the cloud for HQ-level insight. Hybrid gives organisations the flexibility to optimise cost, compliance, and performance.
Regulatory realities
It’s also worth noting that not every organisation can, or should, store data in the cloud. Privacy regulations such as GDPR in Europe, and similar laws elsewhere, require strict control over where data is stored.
In many cases, sensitive footage must remain in-country. Edge and hybrid models can make compliance easier by minimising unnecessary data movement.
Conclusion: Smart security starts at the edge
The next wave of innovation in physical security won’t come from bigger cloud servers or faster internet connections. It will come from smarter edge devices, with cameras and sensors that don’t just record, but understand and classify events. And the foundation for that intelligence isn’t just AI, but how that AI is deployed. Containerisation via platforms like Docker is unlocking new levels of flexibility, security, and scalability for the physical security industry.
By embracing open standards, supporting modular applications, and rethinking how data flows through the system, physical security professionals can build solutions that are not only more effective but also more sustainable, secure, and adaptable.
The cloud still has its place. But for real-time intelligence, mission-critical uptime, and cost-effective deployment, the edge is essential to the future.
