AI is not medicine for cyberattacks
This article is part of our Opinions section.
When the conversation turns to cybersecurity threats aided by artificial intelligence, it is tempting to ask, ‘Can we fight AI with AI?’ The 2024 RSA Conference certainly lauded the transformative impact of AI on cybersecurity, and there are indeed many ways AI-driven threat detection can help spot viruses and malware.
However, AI is not the cure for cyberattacks. It may treat the symptoms, but it won’t treat the root cause.
Which is more vulnerable… software or humans?
Lately, there have been a lot – and I mean a lot – of data breaches. Every week, another headline hits the news, with one or more reputable, high-profile firms experiencing theft of data or disruption of operations. In some cases, ransomware is involved, as in the particularly brutal recent breach of UnitedHealth Group’s Change Healthcare unit, which is expected to cost an estimated $1.6 billion.
Often, software vulnerabilities that broadly affect the industry garner the lion’s share of media attention. In reality, however, those are responsible for only approximately 5% of breaches. The real problem is social engineering: according to Verizon, 68% of cyberattacks involve the human element. Although that is a slight improvement on the prior year, it is still a lot of breaches that could have – and probably should have – been avoided.
I stress that these breaches were avoidable because, in practice, they come down to an employee leaving a credential somewhere they shouldn’t have – a password, API key or browser cookie. Such credentials appear in 86% of security breaches involving web-based applications and platforms. Many organisations may even have credentials hard-coded into their code base.
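Hard-coded credentials are also the easiest kind to catch before they ship. As a minimal sketch of how a secret scanner works – the patterns and function names here are illustrative, and real tools such as gitleaks or TruffleHog use far larger rule sets plus entropy checks – a scan is essentially pattern matching over source lines:

```python
import re

# Hypothetical rule set: a couple of common credential shapes.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[=:]\s*['\"][A-Za-z0-9/+_-]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspected hard-coded credential."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = 'db_host = "db.internal"\napi_key = "sk_live_ABCDEF1234567890abcd"\n'
print(scan_for_secrets(sample))  # flags line 2
```

Running a check like this in CI, before code ever reaches the main branch, turns a class of avoidable breach into a failed build instead.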
What social engineering with AI looks like
All of the above is critical context for the conversation around social engineering attacks. Threat actors know full well that those attacks keep working, so they keep doing them. With AI, that problem simply becomes a lot worse.
Phishing scams, for example, have long been relatively easy to spot, with corporate training helping employees identify them by spelling errors, bad grammar or incorrect headers. AI, however, ups the ante: it lets threat actors personalise targeted emails at scale.
Imagine an employee receives an email from their HR department asking for a password to a pension provider platform. Coincidentally, their company has just announced a change of pension providers. The ‘HR person’ in this instance is actually an impersonator who used AI to flag the announcement and weave it into the lure, made more believable by its timing.
AI tools enable threat actors to run these campaigns at scale. Such tools have already started to appear – the hackbot-as-a-service WormGPT, for instance, lets cybercriminals engineer convincing phishing and deepfake campaigns. These attacks cost substantially less time and money to launch than they did before AI. A teenager could do it from home (not a suggestion).
Framing the conversation around using AI to combat this threat, however, is like suggesting an adversary target the missiles on a fighter jet rather than the jet itself. The right question is: how do we stop employees and enterprises from making their credentials a threat vector? As it stands, credentials are littered across the many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialised dashboards, databases and more.
AI agents should not become another security silo
I’ve spoken before about the problematic way in which modern infrastructure layers each manage security differently, creating security silos in the process. The problem is that AI, too, presents yet another security silo.
If a GenAI-related leak happens, you might find yourself scrambling to figure out what data the AI agent had access to, what it was trained on, and who had access to the agent. You want to reduce the friction of finding the source of truth for these things.
To do that, your AI agents can’t be treated as their own identity and security silo. The identities of AI agents must be unified and managed with the same identity and access controls you use for microservices, servers, laptops and all other infrastructure. That identity – for AI agents and every other resource – should be cryptographic in nature, and the same access rules and policies should apply to AI as to everything else.
There are more reasons than just security to consolidate the identity of AI agents with other technology resources. For example, having all resources discoverable in one place can significantly increase the productivity of your engineers. Removing identity fragmentation means less time spent provisioning resources to engineers.
Will AI prove beneficial for analysing threat activity and spotting anomalies? Sure! But we shouldn’t fixate on ‘fighting AI with AI’. AI-led social engineering attacks will succeed for the same reason as any other social engineering attack: someone leaves an unlocked laptop at the coffee shop, conveniently with their passwords written on a sticky note. We need to invest in making our infrastructure resilient to the human behaviour that currently exposes systems and data to nefarious actors.