Delfina Chain, Sr Associate Customer Engagement & Development at Flashpoint, discusses what resources defenders must access in order to keep a finger on the pulse of the cybercriminal underground.
Artificial intelligence (AI) is already being applied to diverse use cases, from consumer-oriented devices - such as voice-controlled personal assistants and self-directed vacuum cleaners - to ground-breaking business applications that optimise everything from drug discovery to financial portfolio management. So naturally, there is growing interest within the information security community around how we can leverage AI - which encompasses the concepts of machine learning (ML) and deep learning (DL) - to combat cyber threats.
AI-enhanced cyber security
The effectiveness and scalability of cybersecurity-related tasks, such as malware and spam detection, have already been enhanced by AI, and many expect ongoing AI innovations to have a transformative impact on cyber defence capabilities. However, security practitioners must also recognise that the rise of AI presents a potent opportunity for cybercriminals to optimise their malicious activities.
Much like the rise of cybercrime-as-a-service offerings in the underground economy, threat-actor adoption of AI technology is expected to lower barriers to entry for lower-skilled actors seeking to conduct advanced malicious operations. A report from the Future of Humanity Institute emphasises the potential for AI to be used toward beneficial and harmful ends within the cyber realm, which is amplified by its efficiency, scalability, diffusibility, and potential to exceed human capabilities.
Encrypted chat services
Potential uses of AI among cybercriminals could include the development of highly evasive malware, the ability for automated systems to exhibit human-like behaviour during denial-of-service attacks, and the optimisation of activities such as vulnerability discovery and target prioritisation. Fortunately, defenders have a leg up over adversaries in this arms race to harness the power of AI technology, largely due to the time- and resource-intensive nature of deploying AI at its current stage in development.
The purpose of intelligence is to inform a course of action. For defenders, this course of action should be guided by the level of risk (likelihood × potential impact) posed by a threat. The best way to evaluate how likely a threat is to manifest is by monitoring threat-actor activity on the deep-and-dark-web (DDW) forums, underground marketplaces, and encrypted chat services on which they exchange resources and discuss their tactics, techniques, and procedures (TTPs).
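As a minimal sketch of the risk calculus above, the likelihood × potential impact formula can be used to rank threats so the highest-risk items drive the course of action. The threat names and scores below are hypothetical illustrations, not any particular product's scoring model:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Combine likelihood (0-1) and potential impact (0-10) into a risk score."""
    return likelihood * impact

# Hypothetical threats with analyst-assigned estimates (illustrative only).
threats = [
    {"name": "credential-stuffing campaign", "likelihood": 0.8, "impact": 6.0},
    {"name": "AI-assisted spear phishing", "likelihood": 0.4, "impact": 8.0},
    {"name": "evasive malware variant", "likelihood": 0.2, "impact": 9.0},
]

# Sort descending by risk so the riskiest threat is addressed first.
ranked = sorted(
    threats,
    key=lambda t: risk_score(t["likelihood"], t["impact"]),
    reverse=True,
)

for t in ranked:
    print(f'{t["name"]}: {risk_score(t["likelihood"], t["impact"]):.1f}')
```

In practice the likelihood estimate is exactly what DDW monitoring feeds: observed chatter about a TTP raises the likelihood input, which in turn raises the threat's priority.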
Cobalt Strike threat-emulation software
Cybercriminal abuse of technology is nothing new, and by gaining visibility into adversaries’ ongoing efforts to develop more advanced TTPs, defenders can better anticipate and defend against evolving attack methods.
Flashpoint analysts often observe cybercriminals abusing legitimate technologies in a number of ways, ranging from pirated versions of the Cobalt Strike threat-emulation software used to elude server fingerprinting, to tools designed to aid visually impaired or dyslexic individuals being repurposed to bypass CAPTCHA and deliver automated spam.
Flashpoint analysts also observe adversaries adapting their TTPs in response to evolving security technologies, such as the rise of ATM shimmers in response to EMV-chip technology. In all of these instances, Flashpoint analysts provided customers with the technical and contextual details needed to take proactive action in defending their networks against these TTPs.
When adversaries’ abuse of AI technology begins to escalate, their activity within DDW and encrypted channels will be one of the earliest and most telling indicators. By establishing access to the resources needed to keep a finger on the pulse of the cybercriminal underground, defenders lay the groundwork to be among the first to know when threat actors develop new ways of abusing AI and other emerging technologies.