CCTV monitors - Expert commentary

How AI and security guards work together using video analytics

How AI and humans can work together is a longstanding debate. As society progresses technologically, there’s always the worry of robots taking over jobs. Self-checkout tills, automated factory machines, and video analytics all improve efficiency and productivity, but they can still work in tandem with humans, and in most cases they need to. Video analytics in particular is an impressively intelligent technology that security guards can utilise. How can video analytics help in specific security scenarios?

Video analytics tools

Before video analytics, or even CCTV in general, if a child went missing in a shopping centre, we could only rely on humans. Take a crowded Saturday in a complex shopping centre with a multitude of shops and eateries: you’d have to alert the security personnel, rely on a tannoy announcement and a search party, and hope for a lockdown to find a lost or kidnapped child. With video analytics, how would this scenario play out?

In the same scenario, you now have the help of many different cameras, but then there’s the task of searching through all the CCTV resources and footage. That’s where complex search functions come in. As soon as security is alerted, they can work with the video analytics tools to instruct them precisely on what footage to narrow down, and there are many filters and functions to use.

Expected movement direction

For instance, they can tick a ‘human’ field so the AI tracks people and filters out vehicles, objects, etc., and then input height, clothing colours, the time the child went missing, and the last known location. There’s also a complex event to check, under ‘child kidnap’. For a more accurate search, security guards can then add a search criterion by drawing the child’s expected movement direction using a visual query function.
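The attribute-based filtering described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API: the `Detection` fields and `matches` helper are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    obj_class: str        # 'human', 'vehicle', ...
    height_cm: int
    clothing_colour: str
    timestamp: float      # seconds since the child went missing
    camera: str

def matches(d: Detection, *, obj_class, min_h, max_h, colour, within_s) -> bool:
    """Keep only detections that satisfy every guard-entered criterion."""
    return (d.obj_class == obj_class
            and min_h <= d.height_cm <= max_h
            and d.clothing_colour == colour
            and d.timestamp <= within_s)

detections = [
    Detection("human", 120, "red", 300, "cam_food_court"),
    Detection("vehicle", 150, "red", 200, "cam_car_park"),
    Detection("human", 175, "blue", 100, "cam_entrance"),
]
hits = [d for d in detections
        if matches(d, obj_class="human", min_h=110, max_h=130,
                   colour="red", within_s=600)]
print([d.camera for d in hits])  # ['cam_food_court']
```

Real systems run such filters against indexed metadata extracted from every frame, which is why the search completes in seconds rather than the hours a manual footage review would take.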
A visual query function like this enables visual criteria-based searches rather than text-based ones. The tech then narrows the results down to the images and videos matching the inputted criteria, showing the object or child that matches the data and filter input.

Detecting facial data

A white-list face recognition function is then used to track the child’s route. This means the AI can detect facial data that has not previously been saved in the database, allowing it to track the route of a target entity in real time. Security guards can then confirm the child’s route and current location. All up-to-date information can be transferred to an onsite guard’s mobile phone so they can confirm the missing child’s movement route, face, and current location, helping to find them as quickly as possible.

Police also have to deal with illegal demonstrations and troublesome interferences. Video analytics and surveillance can not only capture these, but can be used to predict when they may happen, providing a more efficient process for dealing with these types of situations and gathering resources.

Event processing functions

Picture a public square with a number of entries into the main area, and at each entry point or path there is CCTV. Those in the control room can set two events for each camera: a grouping event and a path-passing event. These are fairly self-explanatory: a grouping event covers images of people gathering in close proximity, and a path-passing event shows when people are passing through or entering. By setting these two events, the video analytics tool can look out for large gatherings and increased footfall, alerting security or whoever is monitoring to be cautious of protests, demonstrations, or any commotion.
Using complex event processing functions, over-detection of alarms can also be prevented, especially on a busy day with many people passing through.

Reducing false alarms

Combining the two events filters down the triggers for alarms, giving better accuracy in predicting situations such as a demonstration. The AI can also be set to trigger an alarm only when the two events are happening simultaneously on all the cameras at each entry, further reducing false alarms. There are many situations and events that video analytics can be programmed to monitor. You can tick fields to monitor any objects that have appeared, disappeared, or been abandoned. You can also check events like path-passing to monitor traffic, as well as loitering, fighting, grouping, a sudden scene change, smoke, flames, falling, unsafe crossing, traffic jams, car accidents, etc.

Preventing unsafe situations

Complex events can include violations of one-way systems, blacklist-detected vehicles, person and vehicle tracking, child kidnaps, waste collection, over-speed vehicles, and demonstration detections. The use of video analytics expands our capabilities tremendously, working in real time to detect and help predict security-related situations. Together with security agents, guards, and operatives, AI in CCTV means resources can be better prepared and the likelihood of preventing unsafe situations greatly improved. It’s a winning team: AI won’t always get it right, but it’s there to be the advanced eyes we need to help keep businesses, premises, and areas safer.
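The combined-event rule described earlier, raising an alarm only when a grouping event and a path-passing event fire simultaneously on every entry camera, can be sketched as follows. The class and function names are illustrative, not part of any real analytics product.

```python
from dataclasses import dataclass

@dataclass
class CameraState:
    grouping: bool = False       # people gathering in close proximity
    path_passing: bool = False   # people entering or passing through

def demonstration_alarm(cameras: dict) -> bool:
    """True only if BOTH events are active on ALL entry cameras at once.

    ANDing the events across cameras is what suppresses false alarms:
    a busy day trips path-passing everywhere, but not grouping.
    """
    return all(c.grouping and c.path_passing for c in cameras.values())

cams = {
    "north_entry": CameraState(grouping=True, path_passing=True),
    "south_entry": CameraState(grouping=True, path_passing=False),
}
print(demonstration_alarm(cams))   # False: one camera lacks path-passing
cams["south_entry"].path_passing = True
print(demonstration_alarm(cams))   # True: both events on every entry
```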

Video surveillance as a service (VSaaS) from an integrator and user perspective

Technology based on the cloud has become a popular trend. Most IT systems now operate within the cloud or offer cloud capabilities, and video surveillance is no exception: virtually every major hardware and software vendor offers cloud-based services. Users benefit from the cloud’s numerous advantages, such as ease of implementation, scalability, and low maintenance costs. Video surveillance as a service (VSaaS) offers many choices, so there is an optimal solution for each user. However, what about integrators? For them, VSaaS is also a game-changer. Integrators are now incentivised to think about how they can maintain their markets and take advantage of the new business opportunities that the cloud model provides.

Hosted video surveillance

The cloud service model has drastically changed the role of an integrator. Traditionally, integrators provided a variety of services including system installation, support, and maintenance, and served as a bridge between vendors and end-users. In contrast, hosted video surveillance as a service requires a security system installer simply to install cameras and connect them to the network, while the provider is in direct contact with each end-user.

On-premises systems are not going away. However, the percentage of systems where the integrator’s role is eliminated or considerably reduced will continue to increase. How can integrators sustain their markets and stay profitable? A prospective business model is to become a provider of VSaaS (a ‘cloud integrator’) in partnership with software platform vendors.

Cloud-based surveillance

Some VMS vendors offer software VSaaS platforms that form the basis for cloud-based surveillance systems. Using these solutions, a data centre operator, integrator, or telecom service provider can design a public VSaaS, or a VSaaS in a private cloud to service a large customer.
The infrastructure can be built on any generic cloud platform or data centre, as well as on resources owned by the provider or client. So, VSaaS providers have the choice between renting infrastructure from a public cloud service like Amazon Web Services, Microsoft Azure, or Google Cloud, or using their own or clients’ computing infrastructure (virtual machines or physical servers).

Gaining competitive advantage

As an example, a telecom carrier could deploy VSaaS on its own infrastructure to expand its service offering for clients, gaining a competitive advantage and enhancing profit per user. Using a public cloud, a smaller integrator can host the computing infrastructure immediately, without incurring up-front costs and with no need to maintain the system. These cloud services provide scalability, security, and reliability with zero initial investment. When integrators purchase committed-use contracts for several years, they can achieve significant savings.

Next, let’s examine the VSaaS options available on the market from an end-user’s point of view. With hosted (or cloud-first, or true-cloud) VSaaS solutions, all the video feeds are transmitted directly from cameras to the cloud. Optionally, video can be buffered to SD cards installed in cameras to prevent data loss in case of Internet connection failures.

Dedicated hardware bridges

There are many providers of such services that offer their own-brand cameras. Connecting these devices to the cloud should take only a few clicks. Firmware updates are usually centralised, so users don’t have to worry about security breaches. Service providers may also offer dedicated hardware bridges for buffering video footage and making secure connections to the cloud for their branded and third-party cameras.
Typical bridges are inexpensive, basic NVRs that receive video feeds from cameras, record them to HDD, and send video streams to the cloud. The most feature-rich bridges include video analytics, data encryption, etc. Introducing a bridge or NVR makes the system hybrid, with video stored both locally and in the cloud. At the other end of the spectrum from hosted VSaaS are cloud-managed systems.

Video management software

In this case, video is stored on-site on DVRs, NVRs, video management software servers, or even locally on cameras, with the option of storing short portions of footage (such as alarm videos) in the cloud for quick access. A cloud service can be used for remote viewing of live video feeds and recorded footage, as well as for system configuration and health monitoring. Cloud management services often come bundled with security cameras, NVRs, and video management software, whereas other VSaaS offerings generally require subscriptions. Keep in mind that the system, in this case, remains on-premises, and the advantages of the cloud are limited to remote monitoring and configuration. It’s a good choice for businesses spread across several locations or branches, especially if they already have systems in place at each site.

On-site infrastructure

All locations and devices can be remotely monitored using the cloud while keeping most of the existing on-site infrastructure; all that needs to change is replacing the NVRs or VMS with a cloud-compatible model or version. Other methods are more costly and/or require more resources to implement. Hosted VSaaS leverages the cloud for the greatest number of benefits in terms of cost and technological advantages. In this case, the on-site infrastructure consists of only IP cameras and network equipment.
This reduces maintenance costs substantially and also lays the foundation for another advantage of VSaaS: extreme and rapid scalability. At the same time, the outgoing connection at each site is critical for hosted VSaaS: video quality and the number of cameras directly depend on bandwidth.

Broadband-connected locations

Because the system does not work offline, a stable connection is required to stream video. In addition, cloud storage can be expensive when many cameras are involved, or when video archives are retained for an extended period. Hosted VSaaS is a great choice for small broadband-connected locations and is also the most efficient way to centralise video surveillance for multiple sites of the same type, provided they do not have a legacy system. Since it is easy to implement and maintain, this cloud technology is especially popular in countries with high labour costs. Using different software and hardware platforms, integrators can implement various types of VSaaS solutions.

Quick remote access

For those who adhere to the classic on-premises approach, adding a cloud-based monitoring service can grow their value proposition for clients, with out-of-the-box capabilities for quick remote access to multiple widely dispersed sites and devices. For small true-cloud setups, it is possible to rent a virtual machine and storage capacity in a public cloud (such as Amazon, Google, or Microsoft) and deploy a cloud-based VMS server that can handle dozens of cameras. In terms of features, such a system may include anything from plain video monitoring via a web interface to GPU-accelerated AI video analytics and smart search in recorded footage, depending on the particular software platform.
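To make the bandwidth and storage dependence concrete, a back-of-the-envelope sizing might look like the sketch below. The per-camera bitrate is an illustrative assumption (roughly 1080p H.264 at moderate motion), not a vendor specification.

```python
def uplink_mbps(cameras: int, bitrate_mbps: float) -> float:
    """Sustained uplink needed to stream every camera to the cloud."""
    return cameras * bitrate_mbps

def storage_gb(cameras: int, bitrate_mbps: float, retention_days: int) -> float:
    """Cloud storage for continuous recording over the retention window."""
    seconds = retention_days * 24 * 3600
    bits = cameras * bitrate_mbps * 1e6 * seconds
    return bits / 8 / 1e9   # bits -> gigabytes

# e.g. 16 cameras at an assumed 2 Mbps each, with 30-day retention:
print(uplink_mbps(16, 2.0))             # 32.0 Mbps sustained upstream
print(round(storage_gb(16, 2.0, 30)))   # 10368 GB (~10.4 TB)
```

Even this modest site needs a 32 Mbps upstream link around the clock and roughly 10 TB of cloud storage per month, which illustrates why hosted VSaaS suits small broadband-connected sites and why hybrid designs buffer or store locally.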
Optimising internet connection

High-scale installations, such as VSaaS for public use or large private systems for major clients, involve multiple parts: a virtual VMS server cluster, web portal, report subsystem, etc. Such systems can utilise either owned or rented infrastructure. Some vendors offer software for complex installations of this kind, though there are not as many options as for cloud-managed systems. Finally, hybrid VSaaS is the most flexible approach, enabling the system to be tailored to users’ unique needs while optimising internet connection bandwidth, cloud storage costs, and infrastructure complexity. It’s high time for integrators to gain experience, choose the right hardware and software, and explore different ways of building systems that will suit evolving customer demands in the future.

Changing the landscape of event security with Martyn’s Law

Martyn’s Law (also known as ‘Protect Duty’) could forever change the landscape of event security if the proposed changes to legislation are passed. Some would argue it already has. In 2017, just as concertgoers were leaving the Manchester Arena, a terrorist detonated an improvised explosive device in a suicide attack, killing 22 people and injuring more than 250. The mother of one of the victims, Martyn Hett, has tirelessly campaigned for tighter security and a duty of care to be placed upon venues to protect their patrons. As a result, Martyn’s Law (‘Protect Duty’) has been proposed in UK legislation to protect the public from terrorism. At the same time, other global trends have indicated the need for action on this front.

Labour-intensive task

The Global Terrorism Index 2020, for instance, reported a steep increase in far-right attacks in North America, Western Europe, and Oceania, citing a 250% rise since 2014, with a 709% increase in deaths over the same period. But how do we implement the measures proposed by Martyn’s Law without intruding on our lives through mass surveillance?

Traditionally, cameras and CCTV have been the go-to solution for monitoring. However, maintaining a comprehensive view of locations with complex layouts, or venues that host large crowds and gatherings, can be a challenging and labour-intensive task for operatives. Camera outputs have been designed to be interpreted by people, which in turn requires significant human resource that is liable to inconsistent levels of accuracy in complex environments where getting things wrong can have a catastrophic impact.

Highly accurate insights

Fortunately, technology is evolving.
AI-based perception strategies are being developed alongside advancements in 3D data capture technologies, including lidar, radar, and ToF cameras, that are capable of transforming surveillance with enhanced layers of autonomy and intelligence. As a result, smart, automated systems will be able to work alongside the security workforce to provide an always-on, omniscient view of the environment, delivering highly accurate insights and actionable data. And, with the right approach, this can be achieved without undue impact on our rights as private citizens. While much of this innovation isn’t new, it has been held back from at-scale adoption by the gap that remains between the data that’s captured and the machine’s ability to process it into actionable insight.

High traffic environments

In security, for example, this gap is most apparent when it comes to addressing occlusion (in other words, recognising objects that move in and out of view of the sensors scanning a space). For security systems to provide the high levels of accuracy required in high-traffic environments, such as concert venues, it’s crucial that they are able to detect all individuals and track their behaviour as they interact with a space and those within it. This is, of course, possible using multiple sensor modes. However, without the right perception platform to interpret the data being captured, there is a significant risk of missing crucial events, for instance as a result of the machine misinterpreting a partially concealed individual as an inanimate object.
Identifiable personal data

This gap is narrowing. Thanks to the first wave of sensor innovators, the shift in dependence from video read by people to 3D data point clouds read by machines has meant that we are now able to capture much richer information and data sets that can precisely detect and classify objects and behaviours, without capturing biometric and identifiable personal data. But what we need to fully close the gap are perception strategies and approaches that can adapt to the ever-changing nature of real-world environments.

Until now, this has been a lengthy and costly process, requiring those implementing or developing solutions to start from scratch in developing software, algorithms, and training data every time the context or sensor mode changed. But, by combining proven 3D sensor technologies like lidar with a deep-learning-first approach, this no longer needs to be the case.

Edge processing platform

That’s why we are developing an adaptive edge processing platform for lidar that is capable of understanding the past and present behaviour of people and objects within a given area. Through deep learning, it can predict the near-future behaviour of each object with some degree of certainty, thereby accurately and consistently generating real-time data and tracking the movement of people in the secured environment at scale. This approach has value beyond security. Facilities teams, for example, can extract a wealth of information beyond the primary function of security to support other priorities such as cleaning (tracking facility usage so that schedules can be adjusted), while retailers can optimise advertising and display efforts by identifying areas of high footfall. Likewise, health and safety teams can gather much deeper insights into the way spaces are used to enhance processes and measures that protect their users.
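The near-future prediction mentioned above can be illustrated with a toy constant-velocity model. This is a deliberate simplification: learned perception models condition on far richer context, but the core idea of extrapolating a tracked object’s motion a short horizon ahead is the same.

```python
def predict(track, dt: float, horizon: float):
    """Extrapolate the last observed velocity `horizon` seconds ahead.

    `track` is a list of (x, y) positions sampled every `dt` seconds.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt      # last observed velocity
    return (x1 + vx * horizon, y1 + vy * horizon)

# A person walking 1 m per sample along x, sampled at 1 Hz:
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(predict(path, dt=1.0, horizon=2.0))   # (4.0, 0.0)
```

In a real deployment, predictions like this are compared against incoming detections so that a person briefly occluded behind an obstacle keeps the same track identity when they reappear.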
Programming limitless scenarios

As we’ve explained, perception is reaching new levels of sophistication through deep learning. By continually programming limitless scenarios, our approach can provide consistently accurate and rich data that users can trust. This will ultimately change the way we manage environments at a time when liability carries ever-increasing consequences. Martyn’s Law will leave venue providers with no option but to rethink their approach to security and safety. But with new, smarter, more accurate tools at their disposal that enable them to predict and protect, rather than just react, risks, both human and commercial, can be addressed. Meanwhile, the public can take comfort in knowing that measures to keep them safe needn’t mean sacrificing their privacy.

Latest FLIR Systems news

Visualising sound with the Teledyne FLIR Si124, an ultrasonic leak detection camera

It is possible to visualise sound, and the user doesn’t have to be an acoustic engineer to make sense of it either: the FLIR Si124 industrial acoustic imaging camera produces a precise acoustic image that is overlaid in real time on top of a digital camera picture. The blended visual and sound image is presented live on screen, visually displaying ultrasonic information and allowing the user to accurately pinpoint the source of a sound. Acoustic imaging is used for two primary purposes: air leak detection and locating partial discharge from high-voltage systems. Using sound imaging from 124 built-in microphones, the FLIR Si124 can help professionals identify leaks and partial discharge up to 10 times faster than with traditional methods.

Compressed air leak detection

Compressed air is the single most expensive energy source across all factory types, yet up to one-third of that compressed air is lost to leaks and inefficiencies. The human ear can sometimes hear an air leak in a quiet environment, but in a typical industrial environment it’s impossible to hear even larger leaks over the loud background noise. Fortunately, the Si124 filters out the industrial noise, allowing professionals to “visualise” sound even in noisy environments.

Partial discharge in high-voltage systems

In electrical systems, partial discharge can lead to equipment failures and unplanned downtime. With the Si124, professionals can safely detect problems from up to 100 metres away and analyse discharge patterns. The camera classifies three partial discharge types: surface discharge, floating discharge, and discharge into the air. Knowing the type and severity of the discharge enables the facility to schedule maintenance to minimise failures and downtime.
Acoustic camera viewer cloud service

What sets the Si124 further apart from other acoustic imaging cameras is the FLIR Acoustic Camera Viewer cloud service. Image captures are quickly uploaded over Wi-Fi to the cloud service and immediately analysed, providing the user with in-depth information such as the size and energy cost of a compressed air leak, or the partial discharge classification and pattern of an electrical fault. In addition, users get 8 GB of storage and wireless data transfer capabilities, making sharing photos and data simple and efficient. The Si124 requires minimal training and can be used one-handed. Through a regular maintenance routine, professionals can identify issues fast, helping utilities keep the power flowing and manufacturing operations going.
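To give a sense of why the energy cost of a leak matters, here is a rough, generic estimate of the compressor power wasted feeding a leak. The leak rate, specific power, and electricity price below are assumptions chosen for illustration, not figures from FLIR’s analysis model.

```python
def annual_leak_cost(leak_cfm: float, kw_per_cfm: float = 0.18,
                     price_per_kwh: float = 0.12,
                     hours_per_year: int = 8760) -> float:
    """Yearly cost of compressor power wasted feeding a compressed air leak.

    kw_per_cfm: assumed specific compressor power (~18 kW per 100 CFM).
    price_per_kwh: assumed electricity price in currency units per kWh.
    """
    return leak_cfm * kw_per_cfm * price_per_kwh * hours_per_year

# A modest 5 CFM leak running year-round:
print(round(annual_leak_cost(5.0), 2))   # 946.08
```

Under these assumptions, even one small leak wastes nearly a thousand currency units a year, which is why quantifying leaks, as the cloud service does automatically, helps prioritise repairs.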

Teledyne FLIR launches A50 and A70 thermal cameras to offer turnkey solutions

When decision makers seek to integrate new hardware into their automation process, they often look at a few key areas: ease of use, price point, features, and the ability to utilise the hardware at multiple points throughout their system. The new A50 and A70 thermal cameras come in three options (Smart, Streaming, and Research & Development) to fit the needs of professionals across a variety of industries, from manufacturing to utilities to science. The new cameras offer improved temperature measurement accuracy of ±2°C or ±2%, compared to the previous ±5°C or ±5%.

Early fire detection

The cameras all carry an IP66 rating and come in a small, compact size with higher resolution options compared to previous versions. Featuring a thermal resolution of 464 x 348 (A50) or 640 x 480 (A70), professionals can deploy the cameras in a variety of capacities. These include condition monitoring programs that maximise uptime and minimise cost through planned maintenance, and early fire detection applications that safeguard the lives of workers and secure the profitability of the business by protecting materials and assets. With improved temperature measurement accuracy of ±2°C, professionals can rely on consistent readings over a period of time or through varying environmental factors, eliminating guesswork from data analysis.

Condition monitoring programs

The IP66 rating for both the A50 and A70 provides protection from dust, oil, and water, making the cameras ideal for tough industrial environments. This ruggedness is especially helpful when the camera is being moved from one application to the next. Whether the camera is fix-mounted inspecting a production line or required for bench testing, professionals benefit from its versatility.
Designed for condition monitoring programs to reduce inspection times, improve production efficiency, and increase product reliability, the A50 and A70 Smart cameras introduce ‘on camera / on edge’ smart functionality. This means temperature measurement and analysis can be done on the camera, easily and effectively, without the need for a PC.

Reducing operating costs

These cameras allow automation system solution providers to hit the ground running with a camera that is easy to add, configure, and operate in HMI/SCADA systems (with REST API, MQTT, and Modbus master functionality). Built for process and quality control, the A50/A70 image streaming cameras improve throughput time and the quality of what is being produced, all while reducing operating costs. With GigE Vision and GenICam compatibility, professionals can simply plug the camera into their PC and choose their preferred software. In most cases, A50 or A70 image streaming cameras complement machine vision systems that look at defects such as size, with the A50 or A70 providing temperature variance in these products.

Thermal imaging analysis

Primarily used as a research and development solution, the A50 and A70 Research & Development Kit provides an easy entry point into thermal imaging analysis for applications within academia, material studies, and electronic and semiconductor research. The kit includes the advanced image streaming versions of the A50 and A70 cameras and FLIR Research Studio software for camera control, live image display, recording, and post-processing for decision support.
Thermal temperature monitoring

“In addition to its small, compact packaging, which makes it easier to mount these cameras inside machinery with tight spaces, we are excited about the A50/A70’s IP66 rating, a feature that eliminates the need to add an enclosure around the camera when it is deployed in tough environments. Another added benefit is having the ability to inspect and analyse data through the camera’s Wi-Fi capability, eliminating the need for users to run wires through a manufacturing facility from a computer to the piece of equipment that is being inspected.” - Roy Ray, Vice President, Emitted Energy.

“The launch of the new A50/A70 camera is really exciting for FLIR customers. Its new software capabilities now allow for dual functionality with two different spectrums - thermal temperature monitoring and visual inspection - within the same camera. This adds an extra level of functionality to new and existing integrated FLIR systems. Users can combine thermal functionality, to check packaging seals for example, with visual inspection of the packaging itself. This can now be completed by one camera without the need for any additional hardware - just a simple software change.” - John Dunlop, Founder and Chief Technical Officer, Bytronic Vision Automation.

Accomplishing multiple tasks

“We are excited about the launch of the new FLIR A50/A70 thermal imaging camera and the value it will add for our customers. With its expanded communication capabilities, ViperVision can do more with the camera information we’ve always had. The more data and more ways to access that data the customers have, the better.” “We are now able to take the same critical data that has always been provided by FLIR cameras and make it more available and thus more useful to our customers.
Whether this means integration with VMS and security systems or more plant control networks (DCS, PLC, SCADA), we are now able to accomplish multiple tasks simultaneously and more efficiently.” - Andy Beck, Co-Founder and Co-Owner, Viper Imaging. The FLIR A50 and A70 are available globally today through FLIR authorised distributors.

Teledyne FLIR introduces Neutrino SX8 mid-wavelength infrared camera module and four Neutrino IS series models with integrated continuous zoom lenses

Teledyne FLIR, part of Teledyne Technologies Incorporated, has introduced the Neutrino SX8 mid-wavelength infrared (MWIR) camera module and four additional Neutrino IS Series models designed for integrated solutions requiring HD MWIR imagery under size, weight, power, and cost (SWaP+C) constraints, for commercial, industrial, and defense original equipment manufacturers (OEMs) and system integrators.

High performance and imagery

Based on Teledyne FLIR HOT FPA technology, the Neutrino SX8 offers high-performance 1280x1024 HD MWIR imagery for ruggedised products requiring long life, low power consumption, and quiet, low-vibration operation. The SX8 and the Neutrino IS Series models are ideal for integration with small gimbals, airframes, handheld devices, security cameras, targeting devices, and asset monitoring applications.

Reduced time-to-market and development risk

The latest additions to the Neutrino MWIR camera portfolio continue to provide shortened time-to-market and reduced project risk with off-the-shelf design and delivery. Teledyne FLIR also provides highly qualified technical services teams for integration support and expertise throughout the development and design cycle. All the cameras and solutions in the Neutrino series are classified under US Department of Commerce jurisdiction as EAR 6A003.b.4.a and are not subject to International Traffic in Arms Regulations (ITAR). Neutrino IS products include a Teledyne FLIR CZ lens integrated with a Neutrino SWaP Series camera module (VGA or SXGA). All four models using the Neutrino LC and two models using the Neutrino SX8 provide crisp, long-range MWIR imaging. The purpose-designed, factory-integrated CZ lenses and MWIR camera modules provide performance, cost, schedule, and risk benefits unmatched by other camera or lens suppliers.
