SourceSecurity.com US Edition

Machine learning security systems address the limitations of traditional threat detection

The need for security convergence and shared threat intelligence is markedly increasing

“Converged security” has been a buzz phrase for more than a decade, but the industry is just now starting to reap the rewards. Converged security recognises that truly comprehensive organisational risk management involves the integration of two distinct security functions that have largely been siloed in the past: information security (network operations centre or NOC) and physical security (security operations centre or SOC). In fact, “siloed” may be a nicer way of saying that these people historically have had no desire or ability to work together.

NOC and SOC convergence

That situation may have been acceptable in the past, but the need, and in some cases the requirement, for security convergence and shared threat intelligence is markedly increasing. The recent slew of successful attacks, each preceded by predictive indicators that were overlooked because of highly segmented data collection and analysis, is a solemn reminder that the vulnerabilities are real. Organisations are tasked with keeping people and other assets safe, and to do that effectively they must encourage cooperation between the NOC and SOC functions, as the two are inextricably linked. In the most recent tragedies, there were unlinked predictors on the cyber side that were discovered only after the fact. In the past, physical assets merited the most attention in security protection, but today’s organisations are data driven and many of those traditionally physical assets are now information-based.

These two security worlds are markedly different. Security in a NOC often is focused on information like raw network traffic, security and audit logs, and other similarly abstract data that requires some interpretation as to what it could possibly mean. Points of emphasis in a SOC are video camera feeds and recordings, physical identity and access logs, fire safety, and many other important but largely tangible data. In an optimal security environment, the NOC and the SOC rely on each other, so today’s security professionals must be aware of the goals of the “other side.”

What’s driving (or enabling) convergence on the IT side for many organisations is the ongoing analogue-to-IP video conversion that started some time ago, together with heavy investments in IT infrastructure (often made for other areas of the business) that have led to easier sensor connectivity. This, combined with the continuously decreasing cost of network bandwidth and data storage, has removed the last big obstacles to widespread use. Further pressure comes from the intelligence perspective: modern threats are linked to each other, meaning that there’s rarely a physical threat that didn’t originate from a network touchpoint at some point during planning or execution. That reality has led in some cases to the obliteration of walls between NOCs and SOCs, creating a “fusion centre” or “SNOC.”

Convergence challenges

Although convergence is necessary, it poses some notable challenges. Meeting them requires integrating people, processes, and the preferred solutions of the cyber and physical security spaces, as well as the analysts’ knowledge bases: security officers, for example, receive different training than cyber analysts.

Different “personalities” often are observed within the teams tasked with security. The cyber team, for example, might be made up of millennials with highly technical skill sets who grew up in the Internet age (digital natives). The physical security team, on the other hand, might be made up of former city or state law enforcement officers, or veterans of military or government service protecting physical assets, who are often more senior and didn’t grow up with technology at their fingertips. As a result, these personalities sometimes don’t “mix” naturally, so extra effort is needed to break down the barriers that isolate the roles into separate business units, completely separate operation centres, or sometimes opposite sides of the country.

Cyber and physical security professionals often have different knowledge, personalities and training that hinder cooperation

Because these operators and analysts come from different backgrounds, have different areas of responsibility, and follow response workflows that rarely intersect, a question emerges: would a typical SOC operator even think to call the NOC after seeing something suspicious that could relate to the cyber side? Some NOC teams are unaware that the SOC even exists, and those that are aware often don’t know what the SOC is monitoring. The key to success is cross-training: for greater context and better threat identification and mitigation, operators need to be familiar with both physical and logical risks, the solutions deployed on each side, and the cross-escalation paths between them.

The challenge of going traditional

Even in a converged security environment, traditional security detection systems produce a range of challenges in keeping organisations secure. Among them are:

Weak, independent alert streams: Most threat detection systems today are limited to a single data type – physical or cyber – and often these best-of-breed solutions are niched into a specific use (or division) within the department. For example, a large metropolitan transportation authority might have a physical security team with a dedicated fare evasion department – and, thus, leverage multiple cutting-edge solutions, including some machine learning, in support of a very specific objective rather than looking holistically at how to apply the technology across the organisation.

Cost of alarm investigation: Operators are inundated with data and “false alarms” in their day-to-day work. For example, on a large, urban college campus, SOC operators respond to 911 calls (blue phones), licence plate recognition (LPR), unit dispatch, video analytics, and access control. The combination of data inundation and false alarms causes them to average just over three minutes to issue the required acknowledgement. In some cases this is actually considered a “good” response time.


In another example, a major metropolitan city’s police department attempts to proactively keep the public safe and direct response resources by monitoring almost 2,000 cameras online (easily 48,000 hours of recorded footage per day). In an 18-month period, only once did operators actually catch an anomalous event as it happened, with a camera operator looking at the monitor at exactly the right time. Every other incident had to be found after the fact. In short, false alarms without context or relevance, together with data inundation, consume enormous time and resources and in most cases make real-time or even rapid response impossible.

Interpretability of alerts: Even when an alert is issued, the hardest things to determine in many systems are (1) why the alert was issued and (2) whether there is a recommended action or workflow.

Alerting rules start bad and get worse: Given data inundation and the incidence of false alarms, traditional systems neither adjust themselves to suppress alerts that prove not to be useful nor teach themselves to provide more relevant alerts that merit further investigation. Over time, a system that started with a large volume of alerts and a manageable share of false alarms becomes a system of mostly false alarms.
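As a rough sketch of this degradation, the toy simulation below (invented numbers, not drawn from any real deployment) compares a fixed-threshold rule tuned once against a self-adjusting baseline as “normal” traffic slowly drifts:

```python
import random

random.seed(7)

STATIC_THRESHOLD = 120  # rule tuned once, for week-one traffic levels

def adaptive_alert(value, history, k=3.0):
    """Alert only if value sits more than k standard deviations from the
    recent mean -- the baseline moves along with the environment."""
    if len(history) < 10:
        return False
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    return abs(value - mean) > k * (var ** 0.5 or 1.0)

static_alerts, adaptive_alerts, history = 0, 0, []
for week in range(12):
    base = 100 + 5 * week          # normal traffic slowly drifts upward
    for _ in range(100):
        value = random.gauss(base, 5)
        if value > STATIC_THRESHOLD:
            static_alerts += 1     # increasingly fires on normal traffic
        if adaptive_alert(value, history[-200:]):
            adaptive_alerts += 1
        history.append(value)

print(static_alerts, adaptive_alerts)
```

By the later weeks the static rule is flagging ordinary traffic wholesale, while the rolling baseline keeps its false-alarm count low, which is the degradation pattern described above.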


How machine learning can help solve the challenges

Given the limitations of traditional systems, machine learning security systems are increasingly being used to address these challenges. Machine learning is a facet of artificial intelligence that gives computers the ability to learn without being explicitly programmed or configured; it focuses on the development of computer programs that can teach themselves to grow and change through exposure to new data. Giant Gray’s Graydient platform, for example, applies machine learning to video, SCADA or cyber data to reduce false alarms by “teaching itself” what normal behaviour looks like in a given setting. Machine learning addresses the limitations of traditional systems by:

Reducing the cost of alarm investigation with intelligent prioritisation: In a traditional rules-based system, the logic is largely black and white: either a rule is violated or it isn’t, and all alerts are treated equally. In an unsupervised machine learning system, the logic for determining the likelihood or “unusualness” of an event can be based on an ever-evolving body of highly detailed knowledge. As a result, the system can rate the unusualness of any given event and dynamically rank and classify alerts based on that score.


A typical machine learning event ranking in a given period might be: four alarms requiring acknowledgement, seven worth investigating, and 10 informational-only alerts that don’t create tickets. A perfectly configured traditional rules-based system would generate 21 equally ranked alerts in the same period, all requiring human interpretation. Optimally configured rules are rare, however, and degrade with time, so the realistic expectation is hundreds of equally ranked alerts in the same period, all needing human review.
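The ranking described above can be sketched as a simple score-and-triage step. The deviation-based score and the tier cut-offs below are illustrative assumptions, not any vendor’s actual logic:

```python
def unusualness(value, mean, std):
    """Score an event by how many standard deviations it sits from the
    learned norm for its sensor -- higher means more unusual."""
    return abs(value - mean) / (std or 1.0)

def triage(score):
    """Map a score to a response tier instead of treating all alerts
    equally. The cut-offs are invented for illustration."""
    if score >= 4.0:
        return "acknowledge"    # operator must respond
    if score >= 2.5:
        return "investigate"    # worth a look
    return "informational"      # logged only, no ticket created

# Hypothetical events: (observed value, learned mean, learned std)
events = [(250, 100, 20), (160, 100, 20), (104, 100, 20)]
tiers = [triage(unusualness(v, m, s)) for v, m, s in events]
print(tiers)  # ['acknowledge', 'investigate', 'informational']
```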

Context through combining traditionally disconnected alert sources: Machine learning systems leverage a composite sensor, a collection of individual sensors of various types that the system will learn and alarm as a whole based on the relationships between the member sensors. For example: When Object-A exhibits this behaviour, Object-B typically exhibits another behaviour within a certain time. The system will alert if the expected correlated action doesn’t occur.
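A minimal sketch of that composite-sensor idea, with hypothetical sensor names (`door_open`, `badge_scan`) and an assumed 30-second correlation window:

```python
def missing_correlations(events, trigger="door_open",
                         expected="badge_scan", window=30):
    """events: list of (timestamp_seconds, sensor_name) pairs.
    Return timestamps of trigger events that were NOT followed by the
    expected correlated event within the window -- i.e. the anomalies."""
    alerts = []
    for ts, name in events:
        if name != trigger:
            continue
        followed = any(
            other == expected and ts <= t2 <= ts + window
            for t2, other in events
        )
        if not followed:
            alerts.append(ts)
    return alerts

log = [
    (10, "door_open"), (12, "badge_scan"),   # normal: badge follows door
    (100, "door_open"),                      # anomalous: no badge scan
]
print(missing_correlations(log))  # [100]
```

A production system would learn these pairings and windows from data rather than hard-coding them; the point here is only the alert-on-missing-correlation behaviour.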

External threat intelligence: A machine learning system can connect to known threat libraries to help classify new anomalies and recommend mitigation steps. No one likes to see an “unknown” or “unk” classification, so many of the leading SIEMs have this functionality built in.
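A toy stand-in for such a lookup might map an anomaly’s indicator to a classification and a recommended action; every entry below is invented for illustration:

```python
# Invented local "threat library" standing in for an external feed.
THREAT_LIBRARY = {
    "repeated_auth_failure": ("credential stuffing",
                              "lock account, require MFA"),
    "beaconing_traffic": ("C2 callback",
                          "isolate host, capture traffic"),
}

def classify(indicator):
    """Return a classification and mitigation for an anomaly indicator,
    falling back to a manual-review action instead of a bare 'unknown'."""
    label, mitigation = THREAT_LIBRARY.get(
        indicator, ("unclassified anomaly", "escalate for manual review")
    )
    return {"indicator": indicator, "class": label, "action": mitigation}

print(classify("beaconing_traffic")["class"])   # C2 callback
print(classify("novel_pattern")["action"])      # escalate for manual review
```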

Automatic self-improvement: Feedback loops must be guarded and learned. There is always a risk that a human’s input can corrupt a learning system and produce undesirable output. This risk is mitigated with continuous learning, where we’re either reinforcing memories or driving memory decay (forgetting) based on what we see. This approach adapts to changing conditions and can prevent long-term, heavy-handed feedback.
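One common way to sketch this reinforcement-and-decay memory, under assumed parameters (a 0.95 decay factor and unit reinforcement per observation):

```python
class DecayingMemory:
    """Behaviours gain weight each time they are observed, while all
    weights decay a little at every step, so stale input (including bad
    human feedback) fades instead of persisting forever."""

    def __init__(self, decay=0.95):
        self.decay = decay
        self.weights = {}

    def observe(self, behaviour, reinforcement=1.0):
        # Decay every memory, then reinforce the behaviour just seen.
        for key in self.weights:
            self.weights[key] *= self.decay
        self.weights[behaviour] = (
            self.weights.get(behaviour, 0.0) + reinforcement
        )

memory = DecayingMemory()
for _ in range(50):
    memory.observe("normal_login")   # continuously reinforced
memory.observe("odd_transfer")       # seen once, then never again
for _ in range(50):
    memory.observe("normal_login")   # the one-off memory decays away

print(memory.weights["odd_transfer"], memory.weights["normal_login"])
```

After the run, the one-off observation has decayed to near zero while the continuously reinforced behaviour retains a large weight, which is the forgetting-versus-reinforcement balance described above.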

Why machine learning is required in security

  • There is no baseline training data we can use to create reliable system rules or to train supervised learning systems;
  • We cannot manually keep pace with change, so we have to have a system that continuously adapts, learning the new environment or condition and forgetting the old;
  • Modelling and rules are the most effective they will ever be on the day they’re programmed. The next attack will not look like the last so we need an intelligent system that will identify the unexpected;
  • The most dangerous threats we all face are the ones that have never been seen before. They can’t be predicted, and therefore, we cannot program a rule or build a model for something that we can’t quantify.
