The potential for AI to supercharge cybersecurity risks has gone beyond speculation.
GCHQ’s National Cyber Security Centre (NCSC) has warned that as AI-enabled cyber threats becomes a greater and greater risk, a “digital divide” will form between organisations able to keep pace with the technology and those unable to adapt their systems and strategies to the new era of cybersecurity.
Research from Absolute Security recently found that a majority (54%) of chief information security officers feel unprepared to respond to AI-enabled threats.
The group’s international SVP Andy Ward described the digital divide warning from the NCSC as a “wake-up call for all UK organisations”.
“AI is rapidly reshaping the cybersecurity landscape—accelerating both the speed and sophistication of attacks,” Ward said.
So, how serious is this threat and what can businesses do to be prepared?
Why is AI such a threat?
As anyone involved in the sector will tell you, artificial intelligence is an extremely general term that encompasses vast amounts of technology, uses and risks.
It is in part that malleability of function that has made AI such an exciting development. In one way or another it can be used to improve healthcare, financial systems, general productivity, recreation and much more.
However, it is that same quality of AI that makes its capacity to be used by bad actors so dangerous, for every meaningful use case for automation, there is a way for the ill-intentioned to exploit it.
One threat that is being increasingly seen is the technology’s ability to enhance social engineering attacks, which make up the majority of global data breaches.
AI can be used to impersonate IT help desks, generate believable phishing copy and even create realistic fake expense receipts.
In cases of AI systems being accessible via prompts, attackers can tactically manipulate the chatbot with direct and indirect prompt injection, essentially tricking the AI into exploiting its flaws to target weaknesses.
Generative AI is even capable of producing code for malware – though testing has revealed this requires a solid amount of pre-existing computing ability from the attacker due to the errors and testing required to make the code work properly.
Then there’s the increased risk brought on by the rapid deployment of new, untested technology. Businesses are under significant pressure to adopt AI tools as quickly as possible, so the likelihood of a rushed job with holes in the system is increased.
“We know AI is transforming the cyber threat landscape, expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities,” commented Paul Chichester, director of operations at the NCSC.
What can you do to be ready?
A key priority for ensuring systems are protected from AI threats is building safeguards into the technology itself.
Prominent large language models, for example, have already added blocks to avoid misuse. However, creative criminals have found ways to bypass these and open source AI models significantly increase the availability of the underlying technology.
For businesses onboarding AI systems, ensuring there is sufficient talent with particular expertise in the technology is critical. This can be hard to come by, despite ongoing government efforts to plug the AI skills gap, but rushing these systems without adequate resources.
As is the case with the burgeoning cybersecurity threat from quantum computing, the technological advancement can be part of both the problem and the solution.
“While these risks are real, AI also presents a powerful opportunity to enhance the UK’s resilience and drive growth—making it essential for organisations to act,” added Chichester.
“Organisations should implement strong cyber security practices across AI systems and their dependencies and ensure up-to-date defences are in place.”
AI tools can be exceptional at testing cyber resilience. Just as it can be used to brute force IT systems, it can also be highly effective for simulating automated attacks to point out weaknesses and provide insights on how to secure them.
“Businesses must go beyond adopting new tools—they need a robust cyber resilience strategy built on real-time visibility, proactive threat detection, and the ability to isolate compromised devices at speed,” added Ward.
“As AI continues to reduce the window between vulnerability and exploitation, the ability to respond in real time is no longer optional—it’s foundational.”
There are plenty of resources out there, including the AI Cyber Security Code of Practice released earlier this year by the government, as well as ongoing guidance from the NCSC.
The post Are businesses ready for AI-enabled cyber threats? appeared first on UKTN.