Unlocking human-like perception in sensor-based technology deployments
Like most industries, the fields of security, access and safety have been transformed by technology, with AI-driven automation presenting a clear opportunity for players seeking growth and leadership in innovation.
In this respect, these markets know exactly what they want. They require solutions that accurately (without false positives or false negatives) classify and track people and/or vehicles, as well as the precise location of, and interactions between, those objects. They want access to accurate data generated by best-in-class solutions irrespective of the sensor modality. And they need to be able to deploy such solutions easily, at the lowest capex and opex, knowing that they can be integrated with preferred VMSs and PSIMs, will be highly reliable, will have low installation and maintenance overheads, and will be well supported.
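The accuracy requirement above is commonly quantified with precision (how many alerts were real) and recall (how many real events were caught), which penalise false positives and false negatives respectively. As a minimal, hypothetical Python sketch:

```python
def precision_recall(tp, fp, fn):
    """Compute detection precision and recall from raw counts.

    tp: true positives (correct detections)
    fp: false positives (e.g. foliage flagged as a person)
    fn: false negatives (missed people or vehicles)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 90 correct detections, 10 false alarms, 30 missed events.
p, r = precision_recall(90, 10, 30)  # p = 0.9, r = 0.75
```

A system the market would call "accurate" needs both numbers close to 1.0; optimising one at the expense of the other simply trades false alarms for missed events.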
With these needs in mind, camera and computer vision technology providers, solutions providers and systems integrators are forging ahead and have created exemplary ecosystems, with established partnerships helping to accelerate adoption. At the heart of this are AI and applications of convolutional neural networks (CNNs), an architecture often used in computer vision deep learning algorithms, which are accomplishing tasks that were extremely difficult to achieve with traditional software.
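To illustrate what a CNN layer actually computes, here is a minimal NumPy sketch of its core operation: a 2D convolution (strictly, cross-correlation) followed by a ReLU non-linearity. Real detection networks stack many such layers with learned kernels; the edge-detecting kernel and toy frame below are hand-picked purely for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the
    image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Zero out negative responses, keeping only positive activations."""
    return np.maximum(x, 0.0)

# A tiny synthetic frame: dark left half, bright right half.
frame = np.zeros((6, 6))
frame[:, 3:] = 1.0

# A hand-crafted kernel that responds to dark-to-bright vertical edges.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

features = relu(conv2d(frame, edge_kernel))
# The feature map lights up only along the vertical edge at column 2.
```

In a trained network, thousands of such kernels are learned from data rather than hand-crafted, which is why the training data's coverage of real deployment contexts matters so much (a point the article returns to below).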
But what about 3D sensing technologies and perception?
The security, safety and access markets have an additional crucial need: they must mitigate risk and make investments that deliver over the long term. This means that if a systems integrator invests in a 3D sensing data perception platform today, it must support their choice of sensors, perception strategies, applications and use cases over time, without requiring constant reinvestment in alternative computer hardware and perception software each time they adopt new technology or systems.
This raises the question: if the security industry knows what it needs, why has it yet to fully embrace 3D sensing modalities?
Well, one problem facing security, safety and access solutions providers, systems integrators and end-users when deploying first-generation 3D sensing-based solutions is the current approach.
Today, intelligent perception strategies have yet to evolve beyond the status quo, in which designers lock everything down at the design phase, including the choice of sensor(s), off-the-shelf computer hardware, and any vendor-specific or third-party perception software algorithms, deep learning or artificial intelligence.
This approach not only builds in constraints for future use cases and developments, but also limits the level of perception the machine can develop. Indeed, the data used to develop or train the perception algorithms for security, access and safety use cases at design time is typically captured for a narrow and specific set of scenarios or contexts, and the algorithms are subsequently developed or trained in the lab.
As those in this industry know only too well, siloed solutions and technology gaps typically block the creation of productive ecosystems and partnerships, while a lack of complete commercial products can delay market adoption of new innovation. Perception system architectures today do not support the real-time adaptation of software and computing engines in the field: they remain as selected during the design phase and stay fixed throughout development and deployment.
Crucially, this means the system cannot deal with the unknowns of real-time situations where contexts are changing (e.g., reacting to security situations it has not been trained for) and where the autonomous system's perception strategies need to adjust dynamically. Ultimately, traditional strategies rely on non-scalable, non-adaptable computing architectures that were not designed to process the next generation of algorithms, deep learning and artificial intelligence required for mixed 3D sensor workloads.
For industries seeking to develop or deploy perception systems, such as security, access and safety, this means the available computing architectures are generic, designed for either graphics rendering or data processing. Solutions providers therefore have little choice but to promote these architectures heavily into the market. Consequently, the resulting computing techniques are defined by the computing providers, not by the software developers working on behalf of the customer deploying the security solution.
Context… we don’t know what we don’t know
To be useful and useable in the security context and others, a perception platform must have the ability to adjust to changes in context, self-optimise and, crucially, self-learn, thereby improving performance post-deployment. The combinations of potential contextual changes in a real-life environment, such as an airport or military base, are innumerable, non-deterministic, real-time, often analogue and unpredictable.
The moment sensors, edge computing hardware and perception software are deployed in the field, myriad variables, such as weather, terrain, and sensor mounting location and orientation, represent a context shift after which the perception solution is no longer optimal. For example, a particular sensor system might be deployed in an outdoor scenario with heavy foliage.
Because algorithm development or training was completed in the lab, moving foliage, bushes or low trees and branches are classified as humans or produce some other false-positive result. Typically, heavy software customisation then ensues, with solutions vendors providing on-site support and hand-tuning each and every sensor configuration to deliver something acceptable to the end customer.
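One illustrative example of the kind of post-deployment hand-tuning described above is a temporal persistence filter: wind-blown foliage tends to flicker in and out of detection between frames, whereas a genuine person track is observed consistently. The sketch below is purely hypothetical (the class name and thresholds are invented), and such patches are exactly the per-site workaround the article argues against, not a substitute for perception that adapts itself.

```python
from collections import deque

class PersistenceFilter:
    """Suppress flickering detections (e.g. wind-blown foliage) by
    requiring a track to be observed in at least `min_hits` of the
    last `window` frames before raising an alert."""

    def __init__(self, window=10, min_hits=7):
        self.window = window
        self.min_hits = min_hits
        self.history = {}  # track_id -> recent 0/1 observations

    def update(self, track_id, detected):
        """Record one frame's observation; return True once the track
        is persistent enough to be treated as a real object."""
        h = self.history.setdefault(track_id, deque(maxlen=self.window))
        h.append(1 if detected else 0)
        return sum(h) >= self.min_hits
```

A steadily detected person passes the threshold within a few frames, while a branch detected only every other frame never does; but every site's foliage, wind and mounting geometry needs its own thresholds, which is precisely the scaling problem.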
A new approach for effective perception strategies
Cron AI is building senseEDGE, which represents a significant evolution in the development of sensing to information strategy. It is a 3D sensing perception and computer vision platform built from the ground up to address and remove the traditional deployment and performance bottlenecks we’ve just described.
The entire edge platform is built around a real-time, scalable and adaptable computing architecture that is flexible enough for algorithms and software to scale and adapt to different workloads and contexts. What’s more, it has real-time contextual awareness, which means the entire edge platform is, at any time, aware of the external context, the sensor and sensor architecture, and the requirements of the user application.
Furthermore, when it produces object output data, it is also aware of the user application’s reaction plan, which could be triggering an alarm or turning on a CCTV camera when a specific action is detected. This approach turns traditional perception strategies on their head: it is a software-defined, programmable perception and computing architecture, not a hardware-defined one.
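As a hypothetical sketch of what such a reaction plan might look like to an application developer, the mapping below pairs a classified object and zone with configured responses. The event names, zones and actions are invented for illustration; the article does not describe senseEDGE’s actual API.

```python
# Hypothetical reaction plan: map a (classified object, zone) pair
# to the ordered list of responses the application should take.
REACTION_PLAN = {
    ("person", "restricted_zone"): ["trigger_alarm", "point_ptz_camera"],
    ("vehicle", "gate"): ["log_event"],
}

def react(object_class, zone):
    """Look up the configured responses for a detection,
    falling back to plain event logging for unplanned cases."""
    return REACTION_PLAN.get((object_class, zone), ["log_event"])
```

The point of making the plan data rather than code is that the perception layer can consult it at detection time, so triggering an alarm or steering a camera becomes configuration, not a per-deployment software change.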
It is free from the constraints imposed by traditional CPU or GPU compute dictated by hardware architecture providers, and is not limited to the perception capabilities defined at design time. And, being fully configurable, it can be moved from one solution to another, providing computation for different sensor modalities designed for different use cases or environments, and lowering the risk of adoption and migration for those developing the security solution.
Future perception requirements
senseEDGE is also able to scale to future perception requirements, such as algorithms and workloads produced by future sensors, as well as computational techniques and neural networks that have yet to be invented. Meanwhile, the latency-versus-throughput trade-off is entirely software-defined, not dictated by providers of computing architecture.
Finally, being contextually aware, it is fully connected to the real world, where its reflexes adapt to even the subtlest changes in context, which makes all the difference in time and accuracy in critical security situations.
This is how Cron AI sees the future of perception. It means that security and safety innovators can now invest, with low risk, in a useable and scalable perception solution that can truly take advantage of current and future 3D sensor modalities.