Navigating Cyber Risks: Harnessing AI Securely in Your Business
by Riversafe

With the increasing accessibility of AI technology, everyone from enterprise tech teams to schoolchildren is harnessing this new wave of tools to simplify, streamline, and automate all kinds of tasks. There’s an AI solution for almost everything now, and for businesses, the scope of possibilities to utilise them is near-endless.
But that opportunity comes with risk. Like any new, easily obtainable, and as-yet-unregulated technology, AI presents a significant cybersecurity threat. And as AI evolves at a breakneck pace, new hazards are being born constantly; some of which we can’t even anticipate yet.
The potential power that AI offers, however, is far too great, and far too transformative, to pass up. Its infusion into even the most basic of software solutions is inevitable; ChatGPT is already making its way into Slack. Even in the unlikely event that a business wanted to avoid AI entirely, doing so would be a tall order.
AI is being embedded into our email and digital storage solutions. We share our information with chatbots and allow virtual assistants to rifle through our files and communications. And it’s already being harnessed by bad actors to make breaking into systems and stealing data easier.
The question then is: how can organisations utilise AI to be more productive and enhance user experiences without compromising on security? Here are a few ways that businesses can prepare their systems and their people for the AI revolution while minimising cybersecurity risk.
Keep a tight rein on AI tools
AI tools have become massively democratised, fast. New tools are being launched every day, and anyone can find and use them. Even if most of these new tools are just new skins on a ChatGPT engine, they still present a significant risk when employees use them on company networks without any kind of vetting.
According to recent research, 43% of professionals are using ChatGPT at work, but almost 70% of them have not told their bosses about it. In an effort to mitigate security threats, many companies, including Accenture, Amazon, Apple, and Verizon, have reportedly blocked employees from accessing ChatGPT.
A flat-out ban might not always be necessary if your business finds AI solutions beneficial to productivity or innovation, but the use of any and all such tools must be tightly controlled.
Access controls, firewall services, and web filtering technology are just some of the ways businesses can restrict and authorise the use of AI web apps and browser extensions. There are also tools that control what information can be submitted to AI solutions, blocking sensitive data before it leaves the network.
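As a rough illustration (the domain names and the policy itself are entirely hypothetical), that kind of egress control boils down to something like the following Python sketch, which only lets traffic through to AI tools the business has explicitly vetted:

from urllib.parse import urlparse

# Hypothetical lists: which destinations count as AI tools, and which have been vetted.
AI_TOOL_DOMAINS = {"chat.openai.com", "bard.google.com", "some-new-ai-tool.example"}
APPROVED_AI_DOMAINS = {"chat.openai.com"}

def egress_decision(url: str) -> str:
    """Allow non-AI traffic; allow vetted AI tools; block unvetted AI tools."""
    host = (urlparse(url).hostname or "").lower()
    if host not in AI_TOOL_DOMAINS:
        return "allow"   # not recognised as an AI tool, so normal web policy applies
    if host in APPROVED_AI_DOMAINS:
        return "allow"   # vetted and sanctioned by the business
    return "block"       # unvetted AI tool on the company network

print(egress_decision("https://some-new-ai-tool.example/chat"))  # -> block

In practice this sort of rule would live in your proxy, firewall, or CASB configuration rather than in application code, but the logic is the same: unvetted AI tools are denied by default.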
Rethink what you know about phishing emails
Although phishing is only one of the many methods of cyberattack that will be made easier and more sophisticated by AI, its prevalence makes it worth special mention.
You might personally find it easy enough to spot a malicious or fraudulent email. Bad spelling, suspect email addresses, a lack of branding, or poor formatting can all give the game away when an email purporting to be from a familiar source invites you to click a shady link. But thanks to AI, creating convincing and effective phishing emails is now much more achievable for cybercriminals.
This leg-up is not only going to make malicious emails harder to identify, it’s also going to make them more common.
It’s time to flip your assumptions about phishing emails and start treating messages that are a little too perfect with suspicion. Generative AI tools very rarely make spelling or grammar mistakes and tend to produce text in a fairly flat, straightforward style, often devoid of the individuality or personality you might expect in an email from a colleague or peer.
That doesn’t mean that emails featuring personal information should be automatically trusted, however. As these tools get smarter, they also get more targeted.
The large language models behind these tools are trained on information scraped from across the web (ChatGPT was reportedly trained on around 300 billion words, all lifted from publicly available content).
ChatGPT and other well-known tools have guardrails built in to stop them being used for nefarious purposes such as phishing. But, inevitably, cybercriminals have developed their own versions, such as WormGPT, which has no such scruples and has been designed specifically to facilitate phishing and business email compromise (BEC) attacks.
So just because the ‘writer’ of an email knows what your business does, or that you were away last week (because you posted a picture of yourself at the airport on Instagram), that doesn’t mean you should be wiring them company money to pay an overdue invoice.
Update employee training
According to the 2022 Verizon Data Breach Investigations Report, 82% of breaches involved a human element, whether error, deception, or misuse.
Conducting regular cybersecurity training with staff at all levels is a crucial part of any security strategy, and even more so now that AI is powering more complex attacks on a larger scale.
Your employees are already using AI tools, whether you’ve sanctioned them or not, but AI as a cybersecurity threat will be a new concept to most workers. So update your training, actively work to raise awareness, and put best practices in place as soon as possible to help employees recognise and mitigate the risks involved.
Assess apps’ security posture before use
A huge number of data breaches occur as a result of weak spots in third-party tools: 98% of organisations worldwide have a relationship with at least one third-party vendor that has suffered a breach.
Any new app you add to your stack introduces new vulnerabilities that, as a user, you can’t necessarily address yourself. It’s up to the vendor of the app to make sure it’s secure, and it’s up to you to make sure you’re satisfied with its security posture before you introduce it to your environment.
Sizing up these new AI apps can be more difficult, though, due to the complex algorithms they contain. Not every AI tool you come across will have robust security built in, so it’s important to review a vendor’s or developer’s privacy policy and security features before approving it for use by employees.
If in doubt, contact the vendor and ask about what measures are in place.
Don’t take AI-generated code as gospel
AI is super useful not just for delivering smarter products, but also for creating them. Developers may use AI tools to write, check, or augment code, but code generated by AI is just as susceptible to error and vulnerability as any other.
Generative AI is not infallible, certainly not when it comes to writing or reviewing code. Though AI-assisted coding and review tools do exist and can perform well, a general-purpose chatbot like Bard is no replacement for the robust testing and review stages found in DevOps pipelines.
There’s no guarantee that an AI tool will spot bugs in code, and relying on it to do so can result in critical failures.
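As a hypothetical illustration, here is the kind of plausible-looking function an AI assistant might produce, where a subtle boundary bug survives a casual read and only a proper test catches it (the final assertion below fails on purpose):

def password_expired(days_since_change: int, max_age_days: int = 90) -> bool:
    # Subtle bug: a password changed exactly max_age_days ago should already count
    # as expired, but ">" lets it through for one extra day.
    return days_since_change > max_age_days

# Casual spot-checks pass...
assert password_expired(91) is True
assert password_expired(30) is False
# ...but the boundary case fails, which is exactly what a decent test suite exists to catch.
assert password_expired(90) is True, "boundary case slipped through"

The lesson isn’t that AI-generated code is useless; it’s that it needs the same code review and automated testing you would apply to anything a human wrote.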
Be careful what you feed them
The oft-repeated rule of thumb when it comes to working with any kind of data-driven algorithm is ‘garbage in, garbage out’. The better the data you put into a model, the better the results it generates. And that applies just as much to AI algorithms.
If you’ve played around with tools like ChatGPT or Midjourney enough, you’ll know that a good, clear, and contextualised prompt is the key to getting something useful out of them. The more information you can give a generative AI tool about what you want, the better the results will be.
But for businesses, that can mean that employees are feeding sensitive, proprietary information into tools and chatbots without thinking about how it’ll be used or where it might end up. Sharing data with AI is still sharing data.
A study conducted by data security firm Cyberhaven found that 11% of the data employees paste into ChatGPT is confidential. That might include IP, client information, or personally identifiable information (PII) about employees—data that can get you into strategic or even legal hot water.
Remember: anything fed into a generative AI tool like ChatGPT can be learned from. The information you put into such a tool may be used to train the underlying large language model and churned back out in an answer to someone else’s prompt.
Samsung is among the companies that have already been stung after employees used ChatGPT to check code containing trade secrets on three separate occasions earlier this year.
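As a rough sketch (the patterns below are deliberately simplistic placeholders, not a substitute for a proper data loss prevention product), a pre-flight check on prompts bound for an external AI service might look something like this:

import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credential assignment": re.compile(r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]"),
    "UK National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    # Return the names of any sensitive patterns detected in an outgoing prompt.
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

findings = screen_prompt("Summarise this contract for client jane.doe@example.com, api_key = ...")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))

A real deployment would pair this kind of screening with policy, training, and the access controls discussed above, rather than relying on pattern matching alone.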
Fight fire with fire
The fact that AI technology is so widely available now is a double-edged sword. Risk increases exponentially, but so does the arsenal with which we’re able to fight back. Cybersecurity has always been an arms race, and AI is simply the latest bit of tech to come into play.
The good news is that cybersecurity tools and platforms are now incorporating their own AI features to combat the evolution of cyber threats that we’re seeing.
Take SIEM platforms, for example. With the power of AI under the hood, these solutions can scan for and detect potential threats on a mammoth scale without ever taking their eye off the ball. And they can learn from what they’ve seen to improve future results and reduce false positives.
AI can also recognise patterns, improving the effectiveness of UEBA tools by spotting anomalous user behaviour and identifying malicious activity.
One of the most impactful things AI brings to cybersecurity tools is its ability to know the unknown. Before, such tools had to be told what to look out for, meaning previously unencountered threat types would be overlooked. But AI can learn what normal activity looks like and flag deviations from it, allowing it to identify potential issues that cybersecurity teams may not have even been looking for.
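To make that concrete, here is a toy sketch of the kind of anomaly detection that underpins UEBA, using scikit-learn’s IsolationForest on fabricated activity data; a real platform would work from far richer, continuously updated telemetry:

from sklearn.ensemble import IsolationForest
import numpy as np

# Each row of fabricated "normal" activity: [login hour, MB downloaded, failed logins]
normal_activity = np.array([
    [9, 120, 0], [10, 80, 1], [14, 200, 0], [11, 150, 0],
    [16, 90, 0], [8, 60, 0], [13, 175, 1], [12, 110, 0],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

new_events = np.array([
    [10, 110, 0],    # looks like business as usual
    [3, 9000, 12],   # 3 a.m. login, huge download, many failed attempts
])
print(model.predict(new_events))  # 1 = normal, -1 = flagged as anomalous

The model is never told what an attack looks like; it simply flags behaviour that doesn’t fit the baseline, which is what lets these tools surface threats nobody thought to write a rule for.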
Simply put, AI does more, better, and faster than humans ever could, and that kind of always-on vigilance is exactly what businesses need to protect their environments from evolving threats.
Let the robots do the heavy lifting so that your cybersecurity experts can focus on the high-value work that drives real security.
To find out how the cybersecurity industry is shoring up its defences against these new threats, we asked 250 cybersecurity leaders from across the UK to share their thoughts.
Discover how cybersecurity professionals are tackling the risks that come with AI in our report: AI Unleashed: Navigating Cyber Risks.
