Can You Spot a Deepfake? Probably Not

A new study from iProov finds that only 2 in 2,000 people, or 0.1%, can accurately differentiate between real and deepfake media. The study exposed 2,000 participants in the U.S. and the U.K. to a mix of real and deepfake images and videos. Only two of those participants could reliably tell the real stimuli from the fakes.

Older adults emerged as particularly vulnerable to AI-generated deception. Approximately 30% of participants aged 55-64 and 39% of those over 65 reported having never heard of deepfakes.

In all, 22% of consumers, more than one in five, had never heard of deepfakes before participating in the study. Many participants also showed significant overconfidence in their detection skills: over 60% believed they could identify deepfakes, even though most performed poorly. This false sense of security was especially prevalent among younger adults.

So, What Is a Deepfake?

Deepfakes are AI-generated media: audio, video, or images. An AI model is trained on a specific subject using source material of that person, whether images, audio of them speaking, or video footage. With enough source material, the model can generate a realistic deepfake of the subject.
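To make that concrete, here is a minimal, hypothetical sketch in Python (PyTorch) of the shared-encoder, per-identity-decoder design behind many classic face-swap tools. Everything in it is illustrative: the layer sizes are arbitrary, and random tensors stand in for the thousands of aligned face crops a real pipeline would train on.

```python
# Conceptual sketch only: a shared encoder learns identity-agnostic
# features (pose, expression, lighting), while each decoder learns to
# render one specific face. Swapping decoders at inference time is
# what produces the "deepfake".
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # shared by both identities
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
)

def make_decoder() -> nn.Sequential:          # one decoder per identity
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Stand-ins for aligned 64x64 face crops of two different people.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):  # a real model trains for many thousands of steps
    optimizer.zero_grad()
    # Each decoder learns to reconstruct only its own subject.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode person A's face, render it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The key design choice is the shared encoder: because both subjects pass through the same feature space, person B's decoder can render person A's pose and expression with person B's face.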

Deepfakes began with people creating funny videos of celebrities and politicians and, on the darker side, illicit images and videos of celebrities and ordinary people. The technology was once crude, but it has improved to the point that anyone, even with little coding experience, can create realistic deepfakes.

In 2023, deepfake images falsely showing former President Trump being arrested were widely circulated online. In 2024, sexually explicit AI-generated deepfake images of pop superstar Taylor Swift spread widely across social media platforms.

Deepfakes are increasingly used in cybercrime. Many cases have been documented of parents and grandparents receiving phone calls from scammers who used deepfake audio of a loved one's voice to convince them the loved one had been kidnapped and would only be released once a ransom was paid.

In 2024, scammers targeted a finance worker in Hong Kong, using deepfake audio and video to impersonate the company's chief financial officer (CFO). They even held a video conference call with the employee in which the deepfaked CFO appeared alongside deepfake video of other employees. The scam worked: the employee sent roughly $25 million to the scammers.

It has been reported that over three hundred US-based companies, including dozens of Fortune 100 firms, have inadvertently hired North Korean operatives posing as US employees. A leading cybersecurity awareness company even admitted to hiring a North Korean actor, discovering the mistake only after the new employee began behaving suspiciously inside the company's network.

The North Korean actors use deepfake audio, video, and images to bypass hiring checks and land remote jobs. Once hired, the new "employee" is sent a company laptop, which a mule then forwards to the worker, who typically resides outside North Korea, often in China. Interestingly, some of these workers legitimately do the job, collecting a paycheck and sending the money back to North Korea, where the standard of living is much lower. In other cases, the scammers are after money or sensitive information from the companies.

Why Is This Important?

Synthetic media is beginning to flood the internet. According to the UK government, AI-generated deepfakes are scaling rapidly in the UK: a projected eight million deepfakes will be shared in 2025, up from 500,000 in 2023.

Learning to navigate a world flooded with AI-generated deepfake content will become paramount in our personal and professional lives.

Below are three ways you can protect yourself and your organization from deepfakes:

Get Educated: Stay current on the latest in deepfake technology and artificial intelligence. Add articles, videos, and education about the technology to your daily content consumption. Also, ensure your team members regularly attend cybersecurity awareness training that covers AI-generated cybercrime.

Don’t Be Overconfident: It is easy to say, “I would never fall for that; only stupid people do.” That is just not true. As the technology improves, deepfakes become more realistic and more prevalent. When you get cocky, that’s when you are in the most trouble.

For Pete’s Sake, Don’t Spread Them: We all know someone like this: someone who sees a deepfake of a politician they despise and immediately hits the “like” and “share” buttons, spreading it to their social media followers because they find it funny or it stirs their emotions. Don’t do it! Spreading deepfake content only makes the problem worse. Don’t share deepfakes unless it is for educational purposes, such as training your team on how best to spot them. Be part of the solution, not part of the problem.

Deepfakes will only become a deeper problem. We are entering a world where what traditionally helped us distinguish truth from fiction, what we see and hear, is no longer a foolproof guide to what is real. As a leader and digital steward, you must learn to recognize deepfakes, understand their significance, and know how they can impact you, your loved ones, and your organization.