Why machine learning means new frontiers – and the end of human hackers.
Across the world, there’s almost no industry that isn’t being impacted by artificial intelligence (AI) and machine learning (ML). Cybersecurity is no exception.
Mega data breaches, such as those that left 145 million sensitive records vulnerable at Equifax and three billion Yahoo email accounts exposed, have thrust the safety of private data into the limelight. Businesses that lose customer information, whether personal or not, suffer both reputational and financial damage.
A wave of cybersecurity start-ups has emerged, building AI into their products and services.
These AI systems use the process of machine learning – essentially, advanced pattern recognition in large amounts of data – and, according to Dr Peter Burnap, a reader in data science and cyber analytics at Cardiff University, wide-scale adoption of ML is not far off in the cybersecurity industry. He says that there are multiple instances of businesses using AI to spot anomalies and automatically profile the devices that connect to their networks.
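The anomaly-spotting Burnap describes can be illustrated with a minimal sketch. Everything below is hypothetical: the device names, the connection counts and the choice of a median-absolute-deviation rule are invented for illustration, and commercial products use far richer models. The principle, though, is the same: flag devices that deviate sharply from their peers.

```python
from statistics import median

def find_anomalies(counts, threshold=3.5):
    """Flag devices whose connection count sits far from the fleet median,
    using the median absolute deviation (robust to the outliers themselves)."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return set()
    # 0.6745 scales the MAD so it is comparable with a standard deviation
    return {dev for dev, v in counts.items()
            if 0.6745 * abs(v - med) / mad > threshold}

# Hypothetical hourly connection counts per device on a small network
counts = {"printer-01": 12, "laptop-07": 15, "laptop-12": 14,
          "server-03": 13, "cam-02": 11, "laptop-19": 480}
print(find_anomalies(counts))  # {'laptop-19'} – the odd one out
```

A robust statistic such as the median is used here rather than the mean, because a single compromised device would otherwise drag the baseline towards itself.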
Hyrum Anderson, technical director of data science at security firm Endgame, agrees. “In these cases, there is often a mix of traditional ‘hand-crafted rules’ and supervised machine learning, where the goal of the latter is to learn from known, historical threats and then generalise to new, future threats,” he says.
According to Anderson, humans are often the weak point when it comes to protecting sensitive information and preventing security incidents. Because people are unpredictable, it is hard to anticipate the new threats hackers will create, and equally hard to prevent ill-advised mistakes by users. When the US Democratic National Committee’s email accounts were hacked in the run-up to the 2016 US election, the breach began with a staff member falling for a phishing email and handing a password to an attacker.
“These are hard [to counter] because people regularly act in an anomalous way, and finding the suspicious among the anomalous is required, so that users don’t become fatigued and ignore alerts,” says Anderson.
One of the best-funded companies applying AI to cybersecurity is Darktrace. In July the UK firm raised £58m in a funding round, valuing it at £625m. Darktrace makes software that polices a company’s network from the inside: when it spots abnormal behaviour, it springs into action, alerting IT staff and, where possible, stopping the malicious activity.
Elsewhere, the US start-up SentinelOne has raised over £84m, which it is using to develop autonomous security systems that prevent, detect, respond to and forensically analyse threats using AI. Some in the industry predict that within about 10 years, AI systems will be able to largely take control of cybersecurity defences.
But will this new wave of technologies mean the end for human IT security? “AI and other technologies alike will not render security professionals redundant,” says Tomer Weingarten, CEO and co-founder of SentinelOne. “To enable AI, and have it work with all layers of defense in the enterprise, security professionals are instrumental. They will orchestrate, observe and become empowered by AI.” Burnap agrees, saying that those working in cybersecurity will likely use AI to augment their skills and tasks, with the heavy lifting taken out of the analysis.
And yet, the potential exists for a new, sinister side of AI in cybersecurity. As AI and ML systems become cheaper and more accessible, often built on pre-existing algorithms, they could themselves be used in cyberattacks.
In a controlled research setting, AI has been used to attack one system while simultaneously defending its own. In another experiment, run in 2016 by Darpa, the US Defense Advanced Research Projects Agency, bots (automated programs) were pitted against one another in attempts to find and exploit flaws in code in systems created for the challenge. In some cases the bots detected bugs more quickly than humans could, and in one instance a bot found a flaw that hadn’t been purposefully planted.
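The bug-hunting bots in Darpa’s challenge were vastly more sophisticated, but the core idea of automated flaw-finding can be sketched with a toy fuzzer. Both the `fragile_parse` protocol and its planted length-field bug below are invented for illustration:

```python
import random

def fragile_parse(data: bytes) -> int:
    """Toy packet parser with a planted flaw: it trusts the length byte."""
    if len(data) < 2 or data[0] != 0x7E:
        return 0                       # not our protocol, ignore
    length = data[1]
    checksum = 0
    for i in range(length):
        checksum ^= data[2 + i]        # IndexError when the length byte lies
    return checksum

def fuzz(parser, trials=10_000, seed=1):
    """Throw short random byte strings at the parser, collecting any input
    that makes it crash – the essence of automated bug-finding."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            parser(blob)
        except Exception:
            crashes.append(blob)
    return crashes

found = fuzz(fragile_parse)
print(f"{len(found)} crashing inputs found" if found else "no crashes found")
```

Purely random input like this is the crudest possible strategy; the research systems in the challenge combined symbolic analysis of the code with learned heuristics to reach deep bugs far faster.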
Burnap foresees two problems with automating cybersecurity. The first is a lack of transparency in the systems being created: with some existing ML techniques it is impossible to know how a decision has been reached. The second is hackers using ML to attack the very ML systems deployed to protect networks.
“This is where attackers manipulate the data used in the attack to exploit limitations in the ML algorithm used to make decisions,” he says, “which subsequently evades detection by producing an incorrect outcome.”
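A toy example makes the manipulation Burnap describes concrete. The detector below is hypothetical, with made-up feature names, weights and threshold, and real evasion attacks target far more complex models, but it shows how an attacker who knows a model trusts one benign-looking signal can exploit it:

```python
# Hypothetical linear detector: score = w · features; flag if score >= THRESHOLD
WEIGHTS = {"entropy": 2.0, "imports_crypto": 1.5, "signed_binary": -2.5}
THRESHOLD = 2.0

def score(features):
    return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

malware = {"entropy": 1.0, "imports_crypto": 1.0}
print(score(malware) >= THRESHOLD)   # True – flagged as malicious

# Evasion: the payload is unchanged, but the attacker signs the binary with a
# stolen certificate, exploiting the model's trust in that single feature
evasive = dict(malware, signed_binary=1.0)
print(score(evasive) >= THRESHOLD)   # False – the same payload now evades
```

The file’s malicious behaviour never changes; only the features the model sees do, which is exactly the “incorrect outcome” Burnap warns about.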
Though we may still be some time away from sophisticated AI hacking, Anderson says it could come sooner than we think: “One has to believe that sophisticated and motivated adversaries, e.g. state-sponsored, have the resources and know-how to utilise AI and ML for nefarious purposes.”
Matt Burgess is a staff writer at Wired and the author of Freedom of Information For Journalists (Routledge, 2015).