However, these protections aren't too difficult to get around: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn't know it's writing malware, it can be prompted into producing something that functions like malware. There are already signs that cybercriminals are working around the safety measures that have been put in place.
We're not particularly picking on ChatGPT here, but pointing out what's possible once large language models (LLMs) like it are used for more sinister purposes. Indeed, it's not too difficult to imagine criminal organizations developing their own LLMs and similar tools to make their scams sound more convincing. And it's not just text: audio and video are more difficult to fake, but convincing fakes of those are appearing as well.
When it comes to your boss asking for a report urgently, or company tech support telling you to install a security patch, or your bank informing you there's a problem you need to respond to, all these potential scams rely on building up trust and sounding genuine, and that's something AI bots are getting very good at. They can produce natural-sounding text, audio, and video tailored to specific audiences, and they can do it quickly and constantly on demand.
So is there any hope for us mere humans in the face of this wave of AI-powered threats? Is the only option to give up and accept our fate? Not quite. There are still ways you can minimize your chances of getting scammed by the latest technology, and they aren't so different from the precautions you should already be taking.
How to Guard Against AI-Powered Scams
There are two types of AI-related security threats to think about. The first involves the popularity of tools such as ChatGPT or Midjourney being used to get you to install something you shouldn't, like a malicious browser plugin. You could be tricked into paying for a service you don't need, perhaps, or into using a tool that looks official but isn't.
To avoid falling into these traps, make sure you're up to date with what's happening with AI services like the ones we've mentioned, and always go to the original source first. In the case of ChatGPT, for example, there's no officially approved mobile app, and the tool is web-only. The standard rules apply when working with these apps and their spinoffs: check their history, the reviews associated with them, and the companies behind them, just as you would when installing any new piece of software.
The second type of threat is potentially more dangerous: AI that’s used to create text, audio, or video that sounds convincingly real. The output might even be used to mimic someone you know—like the case of the voice recording purportedly from a chief executive asking for an urgent release of funds, which duped a company employee.
While the technology may have evolved, the same techniques are still being used to try to get you to act urgently on something that feels slightly (or very) unusual. Take your time, double-check wherever possible using a different method (a phone call to verify an email, or vice versa), and watch out for red flags: a time limit on what you're being asked to do, or a task that's out of the ordinary.