A well-developed surveillance system can give a single security guard the power to see what might otherwise take a hundred pairs of eyes to see. But what happens when all the components are connected and powered up, and the resulting image on the screen is, well, indiscernible or, at the very least, terribly pixelated?

Many end-users shell out the cash to acquire the newest high-end devices, plug in, and expect to be wowed. Often enough, however, what they see on the screen is not what they were expecting – and they wonder what they just paid for. In a good high-definition system, what factors actually create the best image quality? With so many variables involved, from the camera’s lens to the imaging algorithms to the monitor resolution – just to name the obvious ones – how do system integrators achieve the best on-screen images?   

The lens

The lens is the first component to handle light from the scene, and it may be the one most taken for granted in cameras of any sort. (Just try scratching or cracking one and you’ll agree.) In the days of analogue cameras, it seemed that any old lens would do just fine. However, as the technology inside cameras evolved and more powerful sensors (more pixels) became available, engineers and programmers demanded more from lenses. Moreover, intelligent video content analysis would be impossible without high-accuracy lenses.

In what way do lenses impact image quality? The key factor here is light transmission. The quality of light passing through the lens will always be critical to the quality of the reproduced image. A lens made using ultra-precision aspherical moulding technology achieves more accurate colour, better light, and clearer images. A multilayer broadband anti-reflection coating further maximises a lens's light transmission while minimising the residual reflection of light on the surface of each optical element.

When it comes to fabricating a megapixel lens that hits the mark, the materials used and the processes by which lenses are produced are the two most critical criteria. The materials most often used to create lenses are glass and specialised plastics. An HD lens made of ultra-low-dispersion optical glass – whose dispersion characteristics differ from those of conventional optical glass – delivers better HD performance. Machine-automated lens production using specialised plastics yields high output for camera producers, and the lenses produced are more uniform in design and quality.

For an HD vari-focal lens, its image quality depends largely on the precision of the cam. The cam rotates to drive the zoom and focus lens groups forward and backward for a smooth continuity of focal length and adjustment of the focal point. A lack of precision with the cam inevitably causes an offset or tilt of the lens' optical axis during zooming and focusing, leading to a serious loss of image quality.

Lens production is a delicate balancing act. The slightest errors or imperfections will be very noticeable when tested. The features of a lens that affect image resolution, clarity, and contrast must be perfect. Achieving uniformity of image resolution at the centre and the edges of a lens requires high-precision machinery. And once a lens has been properly crafted, the assembly of the camera, the lens housing materials, and the alignment of the optical axis demand utmost accuracy. To put it mildly, quality control must be rigorous.

Image signal processing

As light passes through the lens, the sensor captures it and converts it to data. The raw RGB data transmitted by the camera sensor undergoes Image Signal Processing (ISP) steps such as noise reduction, white balance, WDR, curve correction and colour correction. The data is then transformed into true colours for each pixel, so that the resulting image looks “normal” to the human eye. It is this Image Signal Processing that defines the final image quality on the screen.
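
To make the idea of an ISP chain concrete, here is a minimal sketch in Python (using NumPy) of three of the stages named above: white balance, colour correction and curve (gamma) correction applied to linear RGB data. The gain values, the 3×3 colour matrix and the function name simple_isp are illustrative only; a real camera uses calibrated, scene-adaptive parameters and many more stages.

```python
import numpy as np

def simple_isp(raw_rgb, wb_gains=(1.8, 1.0, 1.6), gamma=2.2):
    """Toy ISP stage chain: white balance -> colour correction -> gamma curve.

    raw_rgb  : float array of shape (H, W, 3), linear sensor data in [0, 1]
    wb_gains : per-channel white-balance gains (illustrative values)
    gamma    : display gamma used for the tone curve
    """
    # White balance: scale each channel so grey objects render as grey
    img = raw_rgb * np.array(wb_gains)

    # Colour correction: a 3x3 matrix mapping sensor primaries to display RGB
    # (placeholder values; a real camera uses a calibrated matrix)
    ccm = np.array([[ 1.5, -0.3, -0.2],
                    [-0.2,  1.4, -0.2],
                    [-0.1, -0.4,  1.5]])
    img = img @ ccm.T

    # Curve (gamma) correction: compress linear light into display code values
    img = np.clip(img, 0.0, 1.0) ** (1.0 / gamma)
    return img

# Example: process one small synthetic "frame"
frame = np.random.rand(4, 4, 3)
print(simple_isp(frame).shape)  # (4, 4, 3)
```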

Collecting data under different conditions is vital. Outdoor data, for instance, should be analysed in natural light on sunny, overcast, rainy, and foggy days, at dawn, at dusk, and so on. Similarly, when using cameras equipped with infrared sensors, the IR light signals should be tested under various conditions as well.

Actual image performance depends upon variables such as low-light illumination, signal-to-noise ratio, dynamic range, and more. ISP algorithms aim to strengthen the signal and suppress noise. Cameras with Wide Dynamic Range (WDR) yield improved video imaging with both background and foreground objects in high-contrast or strongly backlit environments, maximising the amount of detail in brighter and darker areas within one field of view. In scenes with low contrast and low light, the sensors deliver digital image signals along with a certain amount of digital noise that directly hinders image clarity. Three-dimensional digital noise reduction (3D DNR) removes unwanted artifacts from an image, reducing graininess. Where fog or rain poses a challenge, auto-defogging technology identifies its density through grey-white colour ratio analysis and restores true colour reproduction.
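
As an illustration of how 3D DNR combines information across frames (temporal filtering) and across neighbouring pixels (spatial filtering), the following sketch blends the incoming frame with the previous denoised frame and then applies a small box filter. This is a toy example under simplifying assumptions; real 3D DNR is motion-compensated and far more sophisticated, and the function name and parameter values here are hypothetical.

```python
import numpy as np

def dnr_3d(prev_denoised, current, temporal_alpha=0.6, spatial_kernel=3):
    """Toy 3D DNR: blend the new frame with the previous denoised frame
    (temporal filtering), then apply a small box blur (spatial filtering).

    prev_denoised, current : float arrays of shape (H, W), values in [0, 1]
    temporal_alpha         : weight given to the incoming frame
    spatial_kernel         : size of the box filter (odd number)
    """
    # Temporal: recursive averaging suppresses noise that changes frame-to-frame
    blended = temporal_alpha * current + (1.0 - temporal_alpha) * prev_denoised

    # Spatial: a simple box blur removes remaining per-pixel grain
    pad = spatial_kernel // 2
    padded = np.pad(blended, pad, mode="edge")
    out = np.zeros_like(blended)
    h, w = blended.shape
    for dy in range(spatial_kernel):
        for dx in range(spatial_kernel):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (spatial_kernel ** 2)

# Example: denoise a noisy frame against the previous result
prev = np.full((120, 160), 0.5)
noisy = prev + 0.05 * np.random.randn(120, 160)
print(dnr_3d(prev, noisy).std())  # noticeably lower than noisy.std()
```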

Matching megapixels to image quality

When the factors mentioned above line up well, correlating cameras and monitors creates the best viewing experience. When a high-definition camera is in place, a monitor with a high resolution will display images much more clearly. But if the monitor’s resolution is low, it will not deliver the high-quality images expected – or possible – from that HD camera. For an 8 MP camera, for instance, users do best to pair it with a monitor of 4K × 2K resolution. Though common sense, this deserves to be mentioned because users might decide to upgrade their systems with 4K monitors while only 1.3 MP cameras are installed. In such a scenario, there is no guarantee the on-screen image quality will improve.
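
A quick back-of-the-envelope check makes the point. The sketch below compares camera pixel counts against monitor pixel counts for a few illustrative resolutions (an 8 MP camera is taken here as 3840 × 2160 and a 1.3 MP camera as 1280 × 960; actual products vary).

```python
# Rough pixel-count check: does the monitor have enough pixels to show
# everything the camera captures? (Illustrative resolutions only.)
cameras = {"1.3 MP camera": (1280, 960), "8 MP camera": (3840, 2160)}
monitors = {"1080p monitor": (1920, 1080), "4K monitor": (3840, 2160)}

for cam_name, (cw, ch) in cameras.items():
    for mon_name, (mw, mh) in monitors.items():
        cam_mp = cw * ch / 1e6
        mon_mp = mw * mh / 1e6
        verdict = "fully displayed" if mon_mp >= cam_mp else "downscaled"
        print(f"{cam_name} ({cam_mp:.1f} MP) on {mon_name} ({mon_mp:.1f} MP): {verdict}")
```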

Managing data and bandwidth

In terms of a complete, high definition surveillance system, when the right factors come together and the calibrations are set, image quality – even in a standard HD 1080p setup – can be extremely good. The final piece of the puzzle is managing the data. Ramping up the megapixels and frame rates yields great video, but also results in more bandwidth used and more storage occupied. Squeezing bandwidth threatens image quality and clarity, but keeping ample room for signal transmission and storage will eventually increase the overall cost for customers. Is it possible for integrators to optimise their customer’s system and, at the same time, stay within budget constraints? Luckily, it can be done.
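
To see why megapixels and frame rates translate directly into storage, a rough estimate helps. The sketch below converts a constant bitrate into gigabytes of recording per camera per day; the bitrates are illustrative, and overheads, audio and variable bitrate are ignored.

```python
def storage_per_day_gb(bitrate_mbps, cameras=1):
    """Estimate raw recording footprint: bitrate (Mbit/s) x seconds per day,
    converted to gigabytes. Protocol overhead and VBR are ignored."""
    return bitrate_mbps * 86400 / 8 / 1000 * cameras

# Example: a 1080p stream at ~4 Mbps, and the same scene at ~2 Mbps after
# a more efficient encoder roughly halves the bitrate
print(storage_per_day_gb(4))  # ~43.2 GB per camera per day
print(storage_per_day_gb(2))  # ~21.6 GB per camera per day
```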

To do this, a more efficient video encoding solution can improve compression efficiency by 40–50% over H.264. Improvements to algorithms that adapt to a particular scene give users control over bitrate. Another option is to start recording video only when an event triggers an alarm, since most security guards are primarily concerned with moving objects rather than a scene’s generally static background; this intelligently optimises bandwidth and storage consumption. Another method is to use a single panoramic or fisheye camera in place of several HD cameras for coverage – the reduced number of devices lowers bandwidth demands and the rate of storage consumption as well.
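
The event-triggered idea can be illustrated with simple frame differencing: only frames that differ noticeably from the previous one are kept. This is a toy sketch; real systems use motion detection or object classification on the camera or recorder, and the threshold value here is arbitrary.

```python
import numpy as np

def motion_triggered(frames, threshold=0.02):
    """Toy event-triggered recording: keep a frame only when it differs
    enough from the previous one (plain frame differencing).

    frames    : iterable of float arrays (H, W) with values in [0, 1]
    threshold : mean absolute difference that counts as "motion"
    """
    kept, prev = [], None
    for frame in frames:
        if prev is not None and np.mean(np.abs(frame - prev)) > threshold:
            kept.append(frame)  # motion detected: record this frame
        prev = frame
    return kept

# Example: a static scene with a brief change in the middle
static = np.zeros((60, 80))
moving = static.copy()
moving[20:40, 30:50] = 1.0
clip = [static] * 10 + [moving] * 3 + [static] * 10
print(len(motion_triggered(clip)))  # only the frames around the event are kept
```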

Getting the best image quality

Now let’s put this all together. Naturally, integrators and users will refer to their product specifications to understand features and functions, fine-tuning each component for the best results. Also, as suggested above, users should evaluate an HD camera comprehensively in terms of lens performance, pixels, image quality, and overall system compatibility and performance. Next, the matching backend device and management platform should be carefully considered as part of a complete security system. Installing equipment that has been engineered for a given scene is a must, along with strategising how to get the most coverage out of the lowest number of cameras. Finally, product quality, warranty, price, and ongoing customer service are all important factors that customers should take into account as well.

Author profile

Can You, Senior Embedded Software Development Manager, Hikvision
