Bad Actors Are Exploiting AI Technology to Design Harder-to-Detect Scams

The latest generation of Artificial Intelligence (AI) tools is impressive, especially in how closely these tools can sound like real people. ChatGPT, now powered by GPT-4o, is conversational not just by text but also by voice.

AI chatbots can now create text that you'd never guess was written by a program. They can do it quickly and with hardly any input from us beyond a prompt.

So, it’s not shocking that cybercriminals are using AI chatbots to make their devious activities easier.

Authorities have identified three main ways criminals use chatbots for malicious activities.


1. Better Phishing Emails

This is where it really gets unpleasant. Phishing emails are already the number one way ransomware enters a business. Even with awful spelling and grammar, these emails are designed to trick you into clicking a link that downloads malware, steals your data, or infects your computer with ransomware. With AI-written emails, the criminals are much harder to catch because the writing seems legitimate, with the obvious mistakes now removed.

Bad actors can now make each phishing email unique, making it tougher for spam filters to flag dangerous content. Really bad news.
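To see why unique emails are such a headache for filters, here is a minimal sketch (a simplified illustration, not how any specific spam filter actually works) of a naive signature-based blocklist. It catches an exact copy of a known phishing message but misses an AI-reworded variant with the same intent:

```python
import hashlib

def fingerprint(message: str) -> str:
    """Hash the full message body -- a simple way a filter might identify known spam."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

# A known phishing template, already on the blocklist (hypothetical example text).
known_phish = "Dear customer, your account is locked. Click here to verify."
blocklist = {fingerprint(known_phish)}

# The same template re-sent verbatim is caught...
print(fingerprint(known_phish) in blocklist)   # caught: True

# ...but an AI-reworded variant with identical intent produces a
# completely different hash and slips straight past the blocklist.
ai_variant = "Hello! We noticed unusual activity on your account. Please verify via this link."
print(fingerprint(ai_variant) in blocklist)    # caught: False
```

Because every AI-generated variant hashes differently, exact-match signatures stop working, which is why modern filters have to rely on behavioural and content analysis instead.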


2. Spreading Misinformation

Imagine telling an AI to “write ten social media posts accusing the CEO of ABC Corporation of having an affair, add some news outlets”. Spreading fake news might not seem like a big deal, but it can lead to your employees falling for scams, clicking on malware links, or hurting your business's reputation.


3. Creating Malicious Code

Unsurprisingly, AI is pretty good at writing computer code and keeps getting better. Bad actors can use it to create custom malware. The AI doesn’t know any better; it’s just following orders. But until there’s a reliable way to stop this misuse, it’s going to be a growing threat.

It’s hard to hold the makers of AI tools responsible for criminals misusing their software. For example, OpenAI, the creators of ChatGPT, are working hard to prevent their tools from being used for malicious purposes. However, where there is a will, there is a way. It’s no secret that bad actors have found ways to “jailbreak” the safeguards built into these tools.

These three trends show why it’s vital to try to stay ahead of cybercriminals whenever we are online. This is why we work closely with our clients to keep them protected from criminal threats and informed about emerging ones.

If you’re worried about your team falling for increasingly hard-to-spot scams, train them on how these scams work and what to watch out for.

Contact us if you’d like to know more about how to protect your team.