
The silent weapon: uncovering the threats of adversarial AI

13 September 2021

Chuck Everette at Deep Instinct explains how deep learning can be used to defend against the criminal use of AI

More than six decades ago, the concept of artificial intelligence (AI) was born. Since then, the technology has evolved and multiple subsets have emerged from its foundations, including machine learning (ML). Unfortunately, the enormous benefits offered by this advanced technology are often overshadowed by the ways cyber-criminals are twisting it and weaponising it for malicious purposes.

The value of ML has been widely recognised by organisations, and the speed with which these solutions are being adopted reflects this. By applying the benefits of ML to cyber-security, businesses have been able to fortify their defences.

Unfortunately, like any great tool, machine learning can be contorted and used for nefarious purposes. Recently, cyber-criminals have managed to crack the code on ML and are now using the technology against itself to diminish an organisation’s cyber defences. This is called adversarial AI.

Unlike the usual straightforward attacks, where cyber-criminals target an endpoint and force their way into the system, attackers are now deploying their own adversarial AI tools in far more strategic campaigns.

ML itself is structured like a flow chart: a chain of events running from data input through to the technology recognising malicious code. Adversarial AI attacks target this entire process, exploiting weaknesses to fool an organisation's systems into treating incoming attacks as harmless, granting attackers free access and movement while remaining virtually undetected.

The result is that malicious data sets are reclassified as benign and vice versa, allowing cyber-criminals to send malicious programs into a business environment without the ML-based security solutions properly flagging them as dangerous.
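
To make the mechanics concrete, here is a minimal, hypothetical sketch of an evasion-style attack: an attacker who can probe a feature-based classifier nudges a malicious sample, one small step at a time, until the model relabels it as benign. The model, features, and data below are synthetic assumptions for illustration only; real attacks target far more complex pipelines, but the principle of walking a sample across the decision boundary is the same.

```python
# Illustrative evasion attack on a toy ML malware classifier.
# Everything here (features, data, model) is synthetic and assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: four numeric "file features"; label 1 = malicious.
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),   # benign cluster
               rng.normal(2.0, 1.0, (200, 4))])  # malicious cluster
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

# The attacker starts from a genuinely malicious sample...
sample = np.array([2.5, 2.0, 2.2, 1.8])
print("before:", clf.predict([sample])[0])  # 1 -> flagged as malicious

# ...and repeatedly nudges its features in the direction that lowers the
# malicious score, stopping as soon as the classifier says "benign".
w = clf.coef_[0]
step = 0.1 * w / np.linalg.norm(w)
for _ in range(100):
    if clf.predict([sample])[0] == 0:
        break
    sample = sample - step

print("after:", clf.predict([sample])[0])   # 0 -> reclassified as benign
```

In practice an attacker would only perturb features they can change without breaking the malware's functionality, but the effect is the same: the classifier is quietly walked across its own decision boundary.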

The hidden dangers

The most threatening element of adversarial AI is that its effects can go unnoticed until it's too late. This dwell time gives criminals ample opportunity to spread throughout the organisation's environment, making them even harder to root out once detected.

Threat actors are taking advantage of every development in the technology space, arming themselves with the latest weapons, including AI used for malicious purposes. The rapid adoption of cloud services and the migration of data and applications to the cloud are helping to fuel an organisation's ever-expanding threat surface.

Coupled with the growth in remote working, this has expanded the threat surface at a record-breaking pace over the past 24 months. Cyber-criminal organisations are growing and becoming more sophisticated, so much so that they are now starting to run like legitimate businesses. They have business disciplines and follow a traditional business structure, including marketing, sales, and services for purchase. This makes them all the more dangerous as they become ever more efficient.

Organisations face a huge challenge: they need to analyse their systems for any indicators that they have fallen victim to an adversarial AI attack. Given the sophistication of these attacks, identifying this vulnerability within current security solutions demands significant effort from already overburdened security teams.

However, organisations rarely have the spare security resources or capabilities to conduct these kinds of assessments on a regular basis. As with many other threat vectors, new variants appear on a weekly and monthly basis: some families already contain around 300 different variants, and adversarial AI is no different.

To make matters worse, the full extent of the problem is difficult to accurately track, as only around 25 percent of ransomware attacks are actually reported. Organisations often choose to keep attacks and breaches private as they don’t want to display their weaknesses to others or take the hit to their reputation.

It’s safe to assume that there are a large number of new, successful attacks that go unreported, which further aids in concealing cyber criminal activities.

Building resilience with deep learning

When it comes to adversarial AI, the best defence is prevention. As we’ve established, cyber-criminals’ skills are growing each and every day, so it’s important to keep ahead of the curve. Preventing them from taking hold of your system in the first place should be the priority.

This is where deep learning (DL), an advanced subset of machine learning, comes into its own against adversarial AI. DL has proven to be a far superior preventative security solution. It greatly reduces false positive alerts and promises to stop the attacks that other technologies cannot.

Deep learning differs from machine learning in that it is developed by consuming huge amounts of raw data, and it 'learns' to recognise malicious and benign data sets on its own, in a way loosely analogous to a human brain. While machine learning requires humans to input pre-classified data sets, which are vulnerable to compromise, deep learning works on raw data, which is far harder to manipulate.
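
As a rough illustration of the raw-input idea, the hypothetical sketch below defines a tiny network that reads a file's raw bytes directly, with no hand-engineered feature extraction step for an attacker to study and game. The architecture, layer sizes, and the random stand-in 'file' are all assumptions for demonstration, loosely in the spirit of published raw-byte models such as MalConv, and not a description of any vendor's actual implementation.

```python
# Minimal sketch of a raw-byte classifier: the model consumes bytes
# directly, with no separate feature-engineering stage. All sizes and
# data here are illustrative assumptions.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self, max_len=4096):
        super().__init__()
        # 256 possible byte values, plus one padding index.
        self.embed = nn.Embedding(257, 8, padding_idx=256)
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=8)
        self.head = nn.Linear(64, 2)  # benign vs malicious logits

    def forward(self, byte_ids):        # byte_ids: (batch, max_len) ints
        x = self.embed(byte_ids)        # (batch, max_len, 8)
        x = x.transpose(1, 2)           # (batch, 8, max_len) for Conv1d
        x = torch.relu(self.conv(x))    # learned local byte patterns
        x = x.max(dim=2).values         # global max-pool over the file
        return self.head(x)

# Usage: padded raw bytes go straight in; there is no intermediate
# human-defined feature vector for an adversary to reverse-engineer.
model = RawByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))  # stand-in for real file bytes
print(model(fake_file).shape)  # torch.Size([1, 2])
```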

Given that machine learning solutions are far less reliable now that cyber-criminals can weaponise them against victims, a new level of technological intelligence is needed. Deep learning, as the next evolution, is resilient against AI weapons and far less likely to be bypassed and fooled into granting threat actors free rein within a victim's environment.

Unfortunately, awareness of deep learning is still fairly limited, and many security teams are implementing 'off-the-shelf' deep learning frameworks and feeding them machine learning models. While this practice continues, these solutions will remain at greater risk of compromise.

What can we expect in the future?

Organisations are involved in an ongoing and dangerous game of cat and mouse with cyber criminals and it’s a daily fight just to keep ahead. Whilst we’re a little way off adversarial AI being sold as a service today, it is certainly heading in that direction from the indicators we’re seeing. With the trend continuing on its current trajectory, we can expect cyber criminals to develop a hostile AI framework and boost its prevalence in the next six months to a year.

The use of this malicious tooling would also be an obvious choice for nation-state actors. The state-sponsored field is highly lucrative, and if these groups were to start utilising adversarial AI, they would become even more dangerous.

However, there is hope for those fighting against adversarial AI. Deep learning offers that next level of defence that has proven to be far more resilient than the traditional machine learning solutions. In the same way that these threat vectors are constantly evolving, so are the security solutions working against them.

As awareness of deep learning expands, and more deep learning solutions are adopted, organisations will find themselves in a stronger position to fight against the advancing threat of cyber criminals and adversarial AI attacks.


Chuck Everette is Director of Cyber Security Advocacy at Deep Instinct
