Articles by Jerome Gigot
Where are video surveillance cameras headed? At the core of next-generation Internet Protocol (IP) cameras are advanced chips with artificial intelligence (AI) at the edge, enabling cameras to gather valuable information about an incident: scanning shoppers at a department store, monitoring city streets, or checking on an elderly loved one at home. Thanks to advanced chip technology, complex analytics operations are becoming more affordable across the full spectrum of surveillance cameras, professional to consumer, fueling the democratisation of AI in the IP camera market.

Expanding the global IP camera market

The video surveillance equipment market grew to $18.5 billion in 2018 and is expected to increase this year, according to IHS Markit. The latest research points to video everywhere, edge computing, and AI as the top technologies that will have a major impact on both commercial and consumer markets in 2019.

Computing at the edge means that the processors inside the camera are powerful enough to run AI processing locally, while still encoding and streaming video, and to do it all at the low power required to fit within the limited thermal budget of an IP camera. New SoC chips will be able to perform all of the processing on camera and provide accurate AI information, with no need to send data to a server or the cloud for processing. Instead, data can be analysed right in the camera itself, offering high performance, real-time video analytics, and lower latency, all critical aspects of video surveillance. This new AI paradigm is made possible by a new generation of SoCs, a key driver behind the market growth of IP cameras.
Micro-processor-enabled video analytics

Microprocessor-enabled analytics allow users to more easily extract valuable data from video streams. How about an insider’s view into retail customer behavior? Consider video cameras at a department store, monitoring shoppers’ behavior, traffic patterns, and areas of interest. Next-generation cameras will recognise how long a shopper stays in front of a specific display, whether the shopper leaves and returns, and whether the shopper ultimately makes a purchase. Next-generation video cameras will be able to create heat maps of stores showing where people spend the most time, so retailers will be able to adjust product placement accordingly. Analytics will also help identify busy and quiet times of the day, so retailers can staff accordingly. By understanding customers’ behavior, retailers can determine the best way to interact with them, target specific campaigns, and tailor ads for them. Cue the coupons while the shopper is still onsite!
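To make the heat-map idea concrete, here is a minimal sketch in Python. The data layout is entirely hypothetical (real cameras expose detections through vendor-specific analytics APIs): it assumes one person-position sample per second, already normalised to store-floor coordinates, and accumulates dwell time per grid cell.

```python
from collections import Counter

GRID = 10          # divide the floor plan into a 10x10 grid of cells
FRAME_SECONDS = 1  # one detection sample per second (assumed rate)

def heat_map(detections, grid=GRID):
    """Accumulate seconds of presence per floor-grid cell.

    detections: iterable of (x, y) positions normalised to [0, 1).
    """
    heat = Counter()
    for x, y in detections:
        cell = (int(x * grid), int(y * grid))
        heat[cell] += FRAME_SECONDS
    return heat

# Example: a shopper lingering 45 s near one display, 5 s near another
samples = [(0.12, 0.34)] * 45 + [(0.80, 0.90)] * 5
hm = heat_map(samples)
busiest = max(hm, key=hm.get)
print(busiest, hm[busiest])  # the cell with the most dwell time
```

A retailer-facing system would render `hm` as a colour overlay on the floor plan; the counting itself is this simple once the camera emits per-frame positions.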
Fast processing for rapid response at city level

City surveillance and smart cities depend on advanced video surveillance and intelligence to keep an eye on people and vehicles, identify criminals, flag suspicious behavior, and identify potentially dangerous situations such as loitering, large crowds forming, or cars driving the wrong way. Quick local decisions on the video cameras also help analyse traffic situations, adjust traffic lights, identify license plates, automatically charge cars for parking, find a missing car across a city, or create live and accurate traffic maps.

Real-time HD video monitoring and recording

When it comes to home monitoring, what will next-generation video surveillance cameras offer? Real-time monitoring and notification can detect if a person is in the back yard or approaching the door, if there’s a suspicious vehicle in the driveway, or if a package is being delivered (or stolen). Advanced video cameras can determine when notifications are and aren’t required, since users don’t want false alerts triggered by rain, moving tree branches, bugs, and the like. Next-generation video camera capabilities can also help monitor a loved one or pet, helping put families at ease while they are at work or on vacation. For example, analytics may detect if someone has fallen, hasn’t moved for a while, or does not appear for breakfast according to their typical schedule.

Next-gen IP cameras

When evaluating next-generation IP cameras (cameras on the edge), look at the brains.
These cameras will likely be powered by next-generation SoCs. Here is what this means to you:

- Save on network bandwidth, cloud computing, and storage costs. There is no need to constantly upload videos to a server for analysis. Analysis can be performed locally on the camera, with only relevant videos being uploaded.
- Faster reaction time. Decisions are made locally, with no network latency. This is critical if you need to sound an alarm on a specific event.
- Privacy. In the most extreme cases, no video needs to leave the camera; only metadata needs to be sent to the cloud or server. For example, faces can be recognised in the camera and acted upon, while the video never reaches the cloud. The camera can simply stream a description of the scene to the server: “suspicious person with a red sweater walking in front of the train station, has been loitering for the last 10 minutes, suggest sending an agent to check it out.” This could become a requirement in some EU countries under GDPR rules.
- Easier search. Instead of having to look through hours of video content, the server can store and analyse just the metadata, and easily perform searches such as “find all people with a red sweater who stayed more than five minutes in front of the train station today.”
- Flexibility/personalisation. Each camera at the edge can be personalised to work better for the specific scene it is looking at, compared to a generic server. For example: “run a heat map algorithm on camera A (retail), as I want to know which sections of my store get the most traffic; and run a license plate recogniser on camera B (parking lot), as I want to be able to track the cars going in and out of my parking lot.”
- No cloud computing required. For cameras in remote locations or with limited network bandwidth, users can perform all analytics locally, without relying on uploading video to a server or cloud.
- Higher resolution/quality.
When AI processing is performed locally, the full resolution of the sensor can be used (up to 4K or more), while typically the video streamed to a server will be lower resolution, 1080p or less. This means more pixels are available locally for the AI engine so that you will be able to detect a face from a higher distance than when the video is streamed off camera. AI at the edge Professional-level IP cameras capable of performing AI at the edge are coming soon with early offerings making their debut at this year’s ISC West. As we enter 2020, we will begin to see the availability of consumer-level cameras enabling real-time video analytics at the edge for home use. With rapid technology advancement and increased customer demand, AI is on the verge of exploding. When it comes to image quality and video analytics, IP cameras now in development will create a next-generation impact at department stores, above city streets, and keeping an eye on our loved ones.
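The “easier search” workflow described above, querying stored metadata instead of raw video, can be sketched in a few lines of Python. The record fields below are invented for illustration (they are not a real camera API); the point is that once cameras stream structured descriptions, a server-side search is an ordinary filter over small records rather than a scan of hours of footage.

```python
from dataclasses import dataclass

# Hypothetical per-event metadata record, as an edge camera might
# stream it upward instead of video.
@dataclass
class Detection:
    camera: str
    label: str          # e.g. "person"
    attribute: str      # e.g. "red sweater"
    dwell_minutes: float
    location: str

records = [
    Detection("cam-12", "person", "red sweater", 7.5, "train station"),
    Detection("cam-12", "person", "blue jacket", 2.0, "train station"),
    Detection("cam-07", "person", "red sweater", 1.0, "parking lot"),
]

# "Find all people with a red sweater who stayed more than five
# minutes in front of the train station today."
matches = [r for r in records
           if r.label == "person"
           and r.attribute == "red sweater"
           and r.dwell_minutes > 5
           and r.location == "train station"]
print(len(matches))  # 1
```

In practice the records would live in a database and the filter would be a query, but the privacy property is the same: the search never needs the video itself.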
Ambarella, Inc., an AI vision silicon company, announces that it will demonstrate its new robotics platform during CES 2020. Based on Ambarella’s CVflow® architecture, it targets automated guided vehicles (AGVs), consumer robots, industrial robots, and emerging Industry 4.0 applications. The robotics platform provides a unified software infrastructure for robotics perception across Ambarella’s CVflow SoC family, including the CV2, CV22, CV25, and S6LM. It provides easy access and acceleration for the most common robotics functions, including stereo processing, key point extraction, neural network processing, and Open Source Computer Vision Library (OpenCV) functions.

Advanced image processing

Ambarella will demonstrate the highest-end version of the platform during CES 2020: a single CV2 chip will perform stereo processing (up to 4Kp30 or multiple 1080p30 pairs), object detection, key point tracking, occupancy grid, and visual odometry. This high level of computer vision performance, combined with Ambarella’s advanced image processing and native multi-camera support (up to six direct camera inputs on CV2, and three on CV25), enables robotics designs that are both simpler and more powerful than traditional robotics architectures.

“With all eyes on the future of home and industrial robotics, we are thrilled to introduce and demonstrate this high-performance robotics platform during CES to our manufacturing partners and customers,” said Jerome Gigot, senior director of marketing at Ambarella.
“Combining the best of Ambarella’s advanced imaging capabilities with our high-performance CVflow architecture for computer vision, the new platform will help enable a new breed of smarter and more efficient consumer and industrial robots.”

Neural network porting

The platform supports both the Linux operating system and the ThreadX® RTOS for systems requiring functional safety, and it comes with a complete toolkit for image tuning, neural network porting, and computer vision algorithm development. It also supports the Robot Operating System (ROS) for easier development and visualisation. A rich set of APIs makes it possible for application developers to efficiently run higher-level algorithms, including optical flow, visual odometry, and obstacle detection. The new robotics platform and its related development kits are available today and can be paired with various mono and stereo configurations, as well as rolling shutter, global shutter, and IR sensor options. Ambarella will demonstrate the new platform to select customers and partners during CES 2020.
Ambarella is a big player in the video surveillance market, but not a familiar name to many buyers of security cameras. They don’t make cameras; they make the computer chips inside. Founded in 2004, Ambarella began in the broadcast infrastructure encoder market and entered the market for professional security cameras in 2008. More recently, the company has also entered the market for automotive OEM solutions.

Between 2005 and 2015, the company produced a progression of advanced camera systems on chips (SoCs) designed, developed, and mass-produced for the consumer electronics, broadcast, and IP camera markets. An SoC includes an image processor as well as the capability to run software and provide computer vision (analytics).

Development has been happening fast at Ambarella. In January, they introduced the CV22 camera SoC, combining image processing, 4K 60fps video encoding, and computer vision (video analytics) processing in a single low-power chip. The CVflow architecture provides the DNN (deep neural network) processing required for the next generation of intelligent cameras. The even newer CV2 camera SoC, introduced in late March, delivers up to 20 times the deep neural network performance of Ambarella’s first-generation CV1 chip, also with low power consumption.

I caught up with Chris Day, Ambarella’s vice president of marketing and business development, at the ISC West show to find out more about the company.

Q: Your company is not as well known in the industry as it should be, given its widespread impact on the market. Would you prefer otherwise?

Day: I think we would prefer more visibility. If you talk to any camera maker, they know who we are. We do business with all the top-10 camera companies – Hikvision, Dahua, Avigilon, Pelco and the rest. Because we are a chip supplier, the end customer deciding to buy a camera may not know what chip is inside. For that reason, we may not have the visibility. But if you are a camera maker, you know who we are.
Q: What are you hearing from your camera customers in terms of what they need, and how are they directing where you go with R&D?

Day: We have become a major supplier to those companies based on years of developing image processing – wide dynamic range, low light, and similar features – as well as AVC (advanced) and HEVC (high-efficiency) video encoding. That’s the heritage of our company and why we do business with all these companies. The next treadmill is computer vision – adding the intelligence into the camera. The goal is still being best-in-class in imaging and encoding, but now also being best-in-class in adding the intelligence, and being able to do all those things with very low power, within the “thermal budget” of the camera. That’s the next big wave.

Q: How far away is that in terms of the end customer? How soon will he or she be able to reap the benefits?

Day: By the end of 2018, or maybe next year. We’re just beginning to sample the CV22, for instance, which is the first SoC directed to security cameras. Typically, it takes nine months to develop a camera, maybe longer with an intelligent camera because you are importing so much software. So, we’re talking about the end of this year or next year.

Q: Tell me about your current products and the next generation.

Day: The CV22 is sampling this quarter. CV2 we announced [in late March], which is a high-performance chip. The idea is that we provide our customers with different price/performance points, so they can produce a family of cameras with different capabilities. They have the same basic software model, so someone can invest in software once and then have different performance points without completely rewriting the software. That’s key.
They might have 100 software engineers developing neural networks and all the features, so if you have to recreate that at different price points, it’s a lot of work.

Q: Historically, video analytics have over-promised and under-delivered. What would you say to a sceptical user in terms of how much confidence they should have in the next wave of products?

Day: Ambarella has been in the security business for 10 years, and some of us have been in the business for 15 years. Every year I’ve been disappointed by the analytics I have seen at the ISC West show. Every year there are incremental improvements – 2 percent, 5 percent, whatever – but in general, I became a sceptic as well. What is fundamentally different now is the neural network approach to computer vision. Even for us developing these chips: in CV1 we had a certain level of deep neural network performance. We produced CV22 in the same year with four times the performance, and then CV2 with 20 times the performance, all in the space of one year. That’s just at the chip level. But the neural network approach to analytics and computer vision is game-changing if you look at the things you can do with it compared to traditional analytics approaches. If you look at what it’s doing in automotive and security, you will see significant development. I totally appreciate the scepticism, but I think it is completely game-changing at this point, based on the technology in the chips and based on what’s happening with neural networks.

Q: What do you think the next big thing is?

Day: I think the next big thing is the neural networks; it’s the intelligence in the camera. People have been pushing toward higher resolution; we’ve done 4K, and we have incredible imaging even in really dark scenes. So we have been solving all those problems.
And so now the goal is to add computer vision and be able to do it in parallel with the image processing and high-resolution encoding, all in a chip that is low-power. That’s the differentiator.

Q: What else is happening?

Jerome Gigot, Senior Director of Marketing: There is a lot happening on the consumer side, too, with the home security market. You will see cameras in your home with more and more intelligence. Some are used for video doorbells. On some of the new cameras, we have package notification – you get notified if a package arrives, or if someone steals your package. And new battery-powered cameras are very easy to install, with no wires.