Published on 14 November, 2013
iOmniscient’s MD will describe the company's involvement in several Smart City projects
The MD of iOmniscient will give a talk about IQ Smart Cities at the Secured Cities Conference in Baltimore. She will describe iOmniscient’s involvement in several Smart City projects, including the Singapore Smart City test bed, which is perhaps the most sophisticated Smart City implementation the company has undertaken so far. The focus is on making cities safe, smart and efficient. By pulling together its capabilities in behaviour-focused video, sound and smell analytics, License Plate Recognition and Face Recognition in a Crowd, the system provides users with comprehensive information to manage the city environment both in real time and more strategically. The new patented Automated Response capability helps reduce the time it takes first responders to react to an incident, ensuring timelier service for those involved.
There are multiple stakeholders in the project, from the police and transportation authorities to the agencies responsible for utilities and the environment. The intent is to provide each of them with better information, sourced from both structured and unstructured data, with which to perform their duties.
Information has become a strategic asset. Hence the concept of Big Data Management has grown in importance: unstructured data from many different systems can be accessed and manipulated to draw out useful information. The data for Big Data analysis can come from numerous sources, such as RFID tags, purchase transaction records, point-of-sale (PoS) systems, social media sites, digital pictures and videos, CCTV cameras, cell phone GPS signals, and so on.
However, Big Data engines are essentially designed to manipulate text. How can one access the information embedded in images and videos? This is where iOmniscient comes in.
iOmniscient’s image analysis software can extract metadata (essentially data about the image, such as a description of what is seen) from videos and images, and also from sound and smell sensors. This metadata is then provided in text form to the Big Data engines for analysis and for the extraction of patterns from the raw data. Such multimedia information is a breakthrough for organisations that have not previously been able to access meaningful information from these varied non-textual sources.
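As a rough sketch of what such text-form metadata might look like, the snippet below serialises a hypothetical detection event as a JSON line that a text-oriented Big Data engine could ingest. The field names and schema are illustrative assumptions, not iOmniscient's actual format.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical metadata record for one detected event; the field names
# are invented for illustration, not iOmniscient's real schema.
@dataclass
class MetadataRecord:
    camera_id: str
    timestamp: str        # ISO 8601
    sensor_type: str      # "video", "sound" or "smell"
    event: str            # e.g. "abandoned_object", "glass_break"
    description: str      # plain-text summary for the Big Data engine
    confidence: float

def to_text(record: MetadataRecord) -> str:
    """Serialise a metadata record as one JSON line of text."""
    return json.dumps(asdict(record))

record = MetadataRecord(
    camera_id="cam-042",
    timestamp="2013-11-14T09:30:00Z",
    sensor_type="video",
    event="abandoned_object",
    description="Unattended bag near platform 2 entrance",
    confidence=0.91,
)
print(to_text(record))
```

Because the output is plain text, the same record format can carry events from video, sound or smell sensors alike.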
Much of the metadata that can be extracted from an image is meaningless: there is little value in keeping track of moving clouds or leaves fluttering in the breeze. Most traditional motion-detection-based systems indiscriminately pull out such meaningless data, which merely adds “noise” to the analysis process.
So iOmniscient has pioneered the concept of extracting what it calls Meaningful Multisensor Metadata (MMM or M3): metadata produced by preprocessing video to understand what is actually happening in the scene. This is converted to text in real time and provided to the Big Data engines. The metadata is extracted not just from video but from sound and smell analytics systems as well – hence the reference to “multisensor”.
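A minimal sketch of the filtering idea, under the assumption that a classifier has already labelled each raw detection with an event class (the class names here are invented): only classes deemed meaningful are passed on for text conversion, while scene noise is dropped.

```python
# Hypothetical event classes; real systems would have their own taxonomy.
MEANINGFUL_EVENTS = {"intrusion", "abandoned_object", "crowd_surge", "gunshot"}

def filter_meaningful(detections):
    """Keep only detections whose event class is meaningful; drop scene
    noise such as moving clouds or fluttering foliage."""
    return [d for d in detections if d["event"] in MEANINGFUL_EVENTS]

raw_detections = [
    {"event": "foliage_motion", "camera": "cam-1"},
    {"event": "intrusion", "camera": "cam-1"},
    {"event": "cloud_shadow", "camera": "cam-2"},
]

print(filter_meaningful(raw_detections))
# Only the intrusion event survives the filter.
```

The point of the sketch is where the filtering happens: before the data reaches the Big Data engine, so the engine only ever sees meaningful records.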
The more intelligent the video analysis, the more meaningful the metadata – and of course iOmniscient prides itself on having the most advanced and intelligent video analytics in the industry.
Managing Big Data is not done by iOmniscient alone. The company works in partnership with the major suppliers of Big Data analysis engines to achieve these objectives.
The Meaningful Multisensor Metadata can be analysed to provide further intelligence, helping organisations forecast future requirements and plan and optimise their resources. There are varied applications and scenarios in which iOmniscient’s system can provide greater business intelligence by analysing the metadata. A few of them are listed below.
Event response data from traffic monitoring systems can be used to plan and develop future road networks, plan public transport and manage traffic flows. This can help reduce congestion, cut noxious emissions and improve residents’ quality of life. Cities could also levy congestion taxes and estimate expected revenues based on forecasts of traffic flows in various sections of the city at different times of day and on different days of the week.
Crowd management information can be used to understand patterns of crowd flow by day of the week, time of day and even season of the year. This can be used to improve security, provide better facilities such as public toilets, and plan public transport by re-locating bus stops, for example. Data extracted from social media is already being used to enhance security during crowded events.
Large retail stores and shopping centres can plan and improve operational efficiency. Customers’ shopping behaviour, such as the time spent in each part of the store and the average waiting time in queues, can be used to better plan resources and improve customer satisfaction.
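One way such dwell-time information could be summarised is sketched below; the zone names and timings are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical dwell-time records extracted from in-store video analytics:
# (zone, seconds one visitor spent in that zone).
dwell_records = [
    ("electronics", 340), ("electronics", 410),
    ("checkout_queue", 95), ("checkout_queue", 130), ("checkout_queue", 80),
    ("entrance", 15),
]

def average_dwell_by_zone(records):
    """Average time visitors spend in each zone -- the kind of summary a
    store could use to plan staffing and layout."""
    by_zone = defaultdict(list)
    for zone, seconds in records:
        by_zone[zone].append(seconds)
    return {zone: mean(values) for zone, values in by_zone.items()}

print(average_dwell_by_zone(dwell_records))
```

The same aggregation pattern works for queue waiting times or visit counts per hour; only the input records change.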
iOmniscient’s ability to provide Meaningful Multisensor Metadata (MMM or M3) in real time enables targeted multimedia Big Data analysis, rather than the analysis of large volumes of random data of limited usefulness. By filtering out useless information, it reduces the need for large-scale computing and storage. Computing resources continue to fall in price, and while it is now feasible to analyse large volumes of information, the process is not inexpensive. Making the inputs meaningful can reduce the volume of data to be processed by an order of magnitude, which is not an insignificant saving. In short, iOmniscient helps by making Big Data smaller and therefore easier and cheaper to process.
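A back-of-envelope illustration of that saving, with entirely hypothetical figures: if only one raw detection in ten is meaningful, the text stream to be stored and processed shrinks by the same factor.

```python
# Hypothetical figures for a city-wide deployment; none of these numbers
# come from iOmniscient -- they only illustrate the arithmetic.
events_per_day = 1_000_000        # raw motion detections across the city
meaningful_fraction = 0.1         # one detection in ten is meaningful
bytes_per_record = 300            # size of one JSON metadata line

raw_bytes = events_per_day * bytes_per_record
filtered_bytes = int(events_per_day * meaningful_fraction) * bytes_per_record

print(f"raw: {raw_bytes / 1e6:.0f} MB/day, "
      f"filtered: {filtered_bytes / 1e6:.0f} MB/day")
```

Under these assumptions the daily stream drops from 300 MB to 30 MB, and downstream compute scales down accordingly.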