
If a family member calls you from jail panicking and says they need you to wire them some money for legal fees, would you second guess them and potentially make the situation worse, or would you send the money immediately? In March 2023, a Canadian couple were faced with this exact situation. They received a frantic phone call from their son, Benjamin, claiming he was in jail and needed money. The voice on the phone was unmistakably his and insisted that they send him $21,000 immediately. So they did. Because they love their son.

But it wasn’t their son on the phone. It was an audio clone of his voice created by cyber criminals using AI-powered tools. You used to need a lot of audio to clone someone’s voice. Now, all you need is a TikTok. There are entirely legal and barely regulated programs, like ElevenLabs, that use short vocal samples to create AI voices with the potential to scam people like Benjamin’s parents. Whether it’s audio, text, or even video deepfakes, AI-powered scams are becoming far more dangerous than we could have ever imagined.


In 2019, criminals used an AI-generated audio recording to trick the head of a UK energy firm into transferring €220,000 to a fraudulent account. It's considered one of the first cases of criminals blatantly drawing on AI technology to execute a scam. Across the world, in China, a scammer used AI face-swapping technology to impersonate a man's friend on a video call. The victim believed he was transferring 4.3 million yuan, about $662,000, to that friend, who supposedly needed to make a deposit during a bidding process. When the actual friend was later confused by the situation, the victim realized he'd been duped.

Deepfakes have been around for a while, but generative AI, like ChatGPT, has taken their powers to the next level.

Recently, a series of videos appeared on WhatsApp featuring fake AI-generated people with American accents voicing support for a military-backed coup in Burkina Faso. But what these “people” said had poor grammar, immediately outing the videos as fraudulent. If the scripts had been written by ChatGPT in fluent, somewhat eloquent English, it might not have been easy to tell that they were fake. There are already companies, like Hour One based out of Tel Aviv, that allow users to pick an avatar, type a prompt into ChatGPT, and get a lifelike talking head… a completely AI-generated personality.


The goal is to use these AI personalities for personalized online ads, tutorials, and presentations. But they're also a signal of how this technology can be turned against people like you and me. And it doesn't have to be video or audio: phishing scams via text have been around for decades, and AI is making them even easier to deploy convincingly.

As if that isn't bad enough, scammers might not even need to convince you of anything to steal your information. PassGAN is a password-cracking program reported to crack any seven-character password in less than six minutes. The software churns through combinations of letters, numbers, and symbols and pulls in common words like sports teams and company names to get past what we might think of as an unguessable password. Even the familiar CAPTCHA prompts that prove we're not robots might be useless: during its testing phase, GPT-4 tricked a TaskRabbit worker into solving a CAPTCHA by lying that it was a visually impaired human who needed the test completed.
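To give a sense of scale for that six-minute claim, here is a minimal, back-of-the-envelope Python sketch. It is not PassGAN itself (PassGAN uses a neural network to generate likely candidate passwords); it simply assumes a 94-character printable keyboard set and a hypothetical attacker testing ten billion guesses per second, a rate chosen purely for illustration.

```python
# Rough sketch (not PassGAN): why short passwords fall fast to exhaustive search.
# Assumes a hypothetical offline attacker testing ~1e10 guesses per second
# against a weak hash; both numbers are illustrative assumptions, not measurements.

PRINTABLE_CHARS = 94          # letters, digits, and symbols on a US keyboard
GUESSES_PER_SECOND = 1e10     # assumed attacker speed (illustrative only)

def seconds_to_exhaust(length: int) -> float:
    """Worst-case time to try every password of the given length."""
    keyspace = PRINTABLE_CHARS ** length
    return keyspace / GUESSES_PER_SECOND

for length in (7, 10, 14):
    secs = seconds_to_exhaust(length)
    print(f"{length:2d} chars: {secs / 3600:.1e} hours to exhaust the keyspace")

# Under these assumptions, 7 characters can be exhausted in a couple of hours,
# while 14 characters would take roughly the age of the universe -- which is why
# cracking tools lean on learned patterns and dictionaries of common words
# rather than raw brute force once passwords get long.
```

The takeaway from this sketch is that length, far more than clever substitutions, is what keeps a password out of an attacker's practical reach.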

Although guardrails have been put in place to prevent GPT-4 from doing something like this in the real world, scammers are unfortunately finding ways around them. One of the many uses of ChatGPT is writing code. That can be helpful to the average user who just needs a simple script for a website, or as a tool to help programmers work more efficiently. But the problem with AI writing code is that the difference between malware and regular software is often not the code itself but the intent behind it.


AI can't read your intent, so it has no reliable way to tell whether you're asking it to build malware or something legitimate. Following ChatGPT's release, Check Point Software Technologies reported that while the model can't yet create anything too sophisticated on its own, it can easily make existing malicious code more effective. The good news is that AI writing novel malware from scratch isn't something we need to worry about yet. The bad news is that this doesn't mean we're safe from malware; hackers have had refined tools for creating it for decades, which is why Aura provides antivirus protection for you and your family. With these glaring digital safety concerns, why aren't companies doing more to protect the average person from harm?

These scams will only continue to grow thanks to AI, so we must protect ourselves. You might think you're impervious to these scams, and maybe you are right now. But the more advanced they get, the harder they'll be to spot, especially because many of these scammers already have tons of information on us thanks to data brokers. Governments around the globe are looking to regulate AI and educate citizens. China is so far the only nation that has enacted hard-line rules to grapple with AI. Europol, the European policing body, has also started engaging stakeholders and holding workshops on how criminals might employ programs like ChatGPT for nefarious purposes. Still, we can't wait for our government or employer to save us. Protecting ourselves from these AI-driven cybercrimes is our responsibility, which is scary.


As with most technologies, it's hard to predict what will come next. But we can rest assured that for all the bad actors out there trying to steal from us, there are just as many, if not more, intelligent people trying to protect us. If we stay educated and alert, we might avoid the robot-driven cyber heists that lie ahead. If you are interested in staying protected online, products like Aura are there for you. Our viewers can get a two-week free trial at https://www.Aura.com/aperture
