Cybercrime has been on the rise over the past several years, exploiting the ever-expanding attack surface we create as we take more aspects of our lives online. One of the most recent and potentially impactful developments to come out of this surge in digital transformation is AI.
AI has been a blisteringly hot topic lately, with discussions ongoing about how it will affect jobs, education, and even the future of humanity itself. While we can’t speculate on whether we’ll be bowing to robot overlords anytime soon, what we can say with certainty is that AI is having a massive effect on the world of cybersecurity.
Like any powerful and transformative technology, AI has the potential to bring countless advantages into our lives—but in the wrong hands, it can also be used to do damage.
A potent tool that can be abused in a multitude of ways by hackers and cybercriminals, AI is already being used to scale up the substantial volume of cyber-attacks we face every day, as well as to equip bad actors with new methods of manipulating and deceiving us online.
To find out how the cybersecurity industry is shoring up its defences against these new threats, we asked 250 cybersecurity leaders from across the UK to share their thoughts. Let’s take a look at a few of our key findings.
The greatest threat to security today
Despite the rise of organised cybercrime and ongoing geopolitical turbulence, 80% of the security leaders we surveyed believe AI to be the biggest cyber threat to their business.
And there’s good reason for that. AI gives cybercriminals the power to do everything they were doing previously faster and on a much larger scale. Take password cracking as an example. Plenty of algorithms already exist for guessing passwords.
But with AI in their corner, cybercriminals are developing new techniques that leverage machine learning to work through huge sets of passwords far faster than traditional brute-force methods allow. Machine learning models can also be trained to predict common password variations, making passwords easier to guess.
Hackers are also using AI to help them find vulnerabilities in websites and software. Automated vulnerability scanning can search for holes in digital defences faster and more thoroughly than humans.
And by decreasing the time it takes to find a way into systems, AI vastly increases the sheer quantity of attacks hackers are able to execute. It can also be used to inject malicious code or components into digital products, causing widespread disruption to critical services.
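As a minimal illustration of why automation matters here, the sketch below (in Python, with entirely made-up site responses) shows the kind of simple, repeatable check an automated scanner can run across thousands of sites in the time it takes a human to review one. The header names are real HTTP security headers; everything else is hypothetical.

```python
# A toy automated check: which standard HTTP security headers is a
# response missing? Scanners run checks like this at massive scale.

SECURITY_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_headers(response_headers):
    """Return the security headers a response failed to set."""
    present = {name.title() for name in response_headers}
    return sorted(SECURITY_HEADERS - present)

# Simulated responses from two hypothetical sites
site_a = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=63072000",
}
site_b = {"Server": "nginx"}

print(missing_headers(site_a))  # ['X-Content-Type-Options']
print(missing_headers(site_b))  # all three are missing
```

A real scanner does far more than this, of course, but the principle is the same: codify a check once, then apply it everywhere at machine speed.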
Among the most headline-grabbing AI-powered cyber-attacks is the use of deepfake technology. Already used to appalling effect to create illicit, non-consensual images or political misinformation, deepfake technology uses AI to generate extremely convincing but entirely bogus audio or visual media.
It has so far been used in social engineering scams to trick people into sending money or disclosing information by impersonating a trusted figure. It’s also commonly employed in blackmail schemes, where manipulated images are used to extort cash from victims.
Complexity on the rise
As well as supporting an increase in the scale of attacks, AI tools are enabling hackers to craft more sophisticated methods, and improve tried-and-tested attack types. A massive 61% of our survey respondents said they’d seen an increase in cyber-attack complexity due to AI.
With AI at the helm, common modes of attack like phishing are becoming more refined, with criminals using generative AI tools like ChatGPT to create credible-sounding emails and messages to trick victims.
We’ve all spotted a potentially malicious email because of poor spelling and formatting, but with AI, such red flags may become a thing of the past, making phishing attacks harder to spot.
In fact, ChatGPT-like tools built specifically for this purpose have already hit the internet. While legitimate generative AI tools have safeguards built in to prevent them from being used for nefarious purposes, tools such as WormGPT have no such scruples.
Designed to generate believable and persuasive phishing emails, WormGPT is helping criminals manipulate employees into handing over sensitive data—and making it harder for security tools to differentiate them from genuine emails.
What makes these AI-generated emails even more likely to be opened and engaged with than manually written messages are the extra elements of authenticity that AI can sprinkle in. These messages can be combined with stolen personal data or information scraped from the web by AI bots, such as social media posts. This ability to source relevant data and parse it into spear phishing content can fool even the most savvy tech user.
AI is the latest weapon to be thrown into the cybersecurity arms race. For decades, criminals and security professionals have battled to get the upper hand: the technological advantage that will allow them to outsmart the enemy.
Unfortunately, the AI cat is now well and truly out of the bag, and the power it offers is accessible to everyone regardless of their objectives or moral standpoint.
The cybersecurity arms race is set to continue, and most cybersecurity leaders are not optimistic about their ability to get ahead. While 69% are investing more into cyber protections against AI, 85% of respondents expect that AI advancements will outpace cyber defences.
But that doesn’t mean that significant advances aren’t being made. AI has already been infused into cybersecurity processes, with tools like facial and fingerprint recognition and CAPTCHAs using AI to separate humans from bots and genuine users from potential hackers.
As with the adversarial side of cybersecurity, one of AI’s primary benefits in protecting assets is scalability. Continuous monitoring is crucial to maintaining a secure digital environment, and AI can scan systems and networks around the clock, detecting threats in real time and covering far more ‘ground’ than even an army of human cybersecurity professionals ever could.
Another advantage AI brings to cybersecurity is its capacity to learn. Legacy cybersecurity tools such as rule-based SIEM systems can detect known threats well, but they need to be told exactly what to look for upfront.
Any deviation or new type of threat the product doesn’t know about yet won’t be flagged. But AI-powered products can recognise patterns and proactively detect possible security events based on historical data, allowing them to identify previously unknown potential threats.
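The difference can be sketched in a few lines of Python. The numbers below are made up for illustration: a fixed, SIEM-style threshold set in advance misses a spike it wasn’t told about, while a baseline learned from an account’s own history flags it as anomalous.

```python
import statistics

# Hypothetical hourly failed-login counts for one account.
history = [3, 5, 4, 6, 5, 4, 7, 5, 6, 4, 5, 6]  # normal activity
new_observation = 42                             # sudden spike

# Static rule: a fixed threshold chosen upfront. If the threshold
# wasn't set with this attack in mind, the spike slips through.
STATIC_THRESHOLD = 100
static_alert = new_observation > STATIC_THRESHOLD

# Learned baseline: flag anything more than three standard
# deviations above the mean of this account's observed behaviour.
mean = statistics.mean(history)
stdev = statistics.pstdev(history)
learned_alert = new_observation > mean + 3 * stdev

print(static_alert, learned_alert)  # False True
```

Production systems use far richer models than a three-sigma rule, but the principle carries over: behaviour learned from historical data can catch deviations no one thought to write a rule for.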
All of these AI-powered cybersecurity developments are protecting us against those using the same technology for unsavoury ends. And perhaps most importantly, they’re freeing talented cybersecurity professionals from manual tasks, allowing them to focus on the real threats and critical security work that shield our most valuable assets from harm.
Download the full report
For more insight into how cybersecurity professionals are tackling the risks that come with AI, download the full AI Unleashed: Navigating Cyber Risks report.