
The Rise of Synthetic Content and The Future of Social Engineering

Recently, a bot was discovered posting comments on Reddit and interacting with human commenters, and it took a full week for it to be unmasked.

GPT-3 is a language model designed by OpenAI that writes text strikingly similar to that written by humans. The model has been used to write an editorial for The Guardian and a blog post that reached the number one spot on Hacker News. That is on top of its dive into the bowels of Reddit.

GPT-3 uses deep learning to produce human-like text. It, and technologies like it, are fed millions upon millions of text samples and return their best attempt at a comprehensible sentence.
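To make the mechanics concrete, here is a minimal sketch of how such a model generates text. It uses the openly available GPT-2 model through the Hugging Face transformers library as a stand-in for GPT-3, which is gated behind OpenAI's API; the prompt and settings are illustrative only.

```python
# Minimal sketch: text generation with an open language model (GPT-2),
# used here only as a stand-in for GPT-3. Assumes the Hugging Face
# `transformers` library is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The most important thing to understand about social engineering is"
result = generator(prompt, max_length=60, num_return_sequences=1)

# The model simply predicts plausible next words, one after another, based on
# patterns learned from enormous amounts of internet text.
print(result[0]["generated_text"])
```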

These types of bots are far from perfect. AI language models have been shown to be susceptible to the deep, dark cesspools of the internet. A famous example was Tay, an AI chatbot Microsoft released in 2016 that fell victim to the racist, misogynistic language of Twitter. After less than 24 hours perusing the platform, the bot was parroting the vitriolic language ever-present on the social media site.

Even OpenAI researchers were surprisingly frank about the biases still present in GPT-3. For instance, the researchers admitted that the bot associates women with words like “beautiful,” “gorgeous,” “naughty,” and “tight.” The reason for this is relatively simple. The internet is filled with appalling content about female appearance. These bots are trained on this content. The technology, therefore, inherits these biases.   

Currently, OpenAI limits access to the technology, granting it only to selected individuals and licensing the underlying software exclusively to Microsoft. However, this type of technology will improve and become more widely available. It will continue to evolve. It will help search engines more quickly identify and answer user queries. It will be used more frequently by customer service organizations to streamline interactions. It will improve the dialogues of AI assistants and make conversing with them more pleasant and natural.

It could also be used in a more nefarious way. 

An even more worrying technology is that of deepfake, or synthetic media, generators. Deepfakes began late last decade as people started plastering the faces of famous actresses onto the bodies of porn stars, because the internet can be an awful place. Deepfake creators have also turned Elon Musk into James Bond, Steve Buscemi into Jennifer Lawrence, and Nicolas Cage into Indiana Jones.

Deepfakes use a generative adversarial network (GAN), in which two competing neural networks work to improve image quality. A generator attempts to create a realistic image, while a discriminator flags the generator's mistakes so it can improve the image. This back and forth refines the image until it becomes startlingly realistic. The approach has made "face-swapping" seamless and rather effortless, and open-source models available in places like GitHub make creating a deepfake possible without ever writing a line of code.
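For the technically curious, here is a minimal toy sketch of that generator-versus-discriminator loop, written with PyTorch. Real deepfake systems train large image models on faces; this example uses flat random vectors purely to illustrate the adversarial back and forth, and every name and number in it is illustrative.

```python
# Toy sketch of the adversarial training loop behind GANs (and thus deepfakes).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # placeholder for real training images
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) The discriminator learns to tell real samples from fakes.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # 2) The generator learns to fool the discriminator: the discriminator's
    #    feedback is the "alert to mistakes" that makes the fakes improve.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()
```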

In 2020, Dutch researchers created a deepfake video of a Dutch politician and showed it to 287 people. They reported that the video was "unquestionably accepted as genuine by most of the participants in our experiment."

In an era of misinformation, one in which people believe the world is flat, the Holocaust never happened, and a former first lady ran a pedophile ring in the basement of a pizza parlor, it is clear that it doesn't take much to manipulate someone into believing something.

Already synthetic audio technology is being used to social engineer unwitting organizations. In September of 2019, a British energy company was duped into wiring $240,000 to cybercriminals after a managing director was fooled into believing his boss was on the other end of the telephone. The criminals reportedly used voice-mimicking software to imitate the executive’s speech. 

In February, security researchers at Symantec reported three cases of audio deepfakes being used against private companies by impersonating the voice of a CEO. According to Symantec, the criminals trained machine learning engines on audio obtained from conference calls, YouTube, social media updates, and TED Talks to copy company bosses' voice patterns.

Voice cloning technology is improving rapidly, reducing the amount of audio needed to create a passable deepfake. Where once hours of tape were required, now only minutes are.  

Language models could be used to create spearphishing email campaigns that are more effective and scalable. Synthetic audio can make wire fraud scams more realistic. A deepfake video of an executive could be used to tank an organization's stock price or blackmail an executive. These types of technologies may represent the future of social engineering.

While researchers and hackers alike are exploring ways to use these technologies in both positive and negative ways, others are racing to develop tools to nullify their ability to deceive. It is the great cat-and-mouse game that has defined cybersecurity since at least the days when Clifford Stoll hunted down a German hacker and chronicled the pursuit in his 1989 book The Cuckoo's Egg.

For a leader, recognition is the first step and awareness is the key. As has always been the case with social engineering, the most important thing you can do is understand what social engineering attacks look like and make sure all of your team members do too. This is why social engineering awareness training is the most fundamental part of cybersecurity. A robust firewall is ineffective against an employee who will wire out hundreds of thousands of dollars because he believes the CEO has asked him to.

It is critical that you conduct cybersecurity awareness training periodically for all employees. This training doesn’t need to be onerous. It can be fascinating. I mean, for all of its inherent potential for evil and misuse, learning how a deepfake works can be rather interesting. Not only can employees understand how to protect the organization more effectively, but they can also learn how to defend themselves in a world of echo chambers, misinformation, and Silicon Valley behemoths who want nothing more than all of their personal data.  

Cybersecurity awareness training is even more critical today, in an environment where most people are working from home. Because of this, people have less real-time interaction with fellow team members and leadership, and very little face-to-face interaction at all. This exacerbates the potential for successful spearphishing campaigns and wire fraud. It is much more comfortable, or at least more convenient, to rely on an email from a colleague when it is impossible to poke your head into her office and ask her what's up.

Your team needs to be trained to understand that sharing personal, sensitive, or financial data must be bolstered by a second form of authentication. It should be clear that this type of information sharing should only happen after a second check of authenticity, whether that be a text message, a phone call, or even a Slack or Teams message if the initial request came via email. Don't trust, verify!
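As a sketch of what such a rule can look like in practice, the hypothetical policy check below (the names and fields are ours, not any real product's API) refuses to act on a sensitive request until it has been confirmed over a second, independent channel.

```python
# Hypothetical "don't trust, verify" check: a sensitive request made over one
# channel is not acted on until it is confirmed over a different channel.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    requester: str
    channel: str        # e.g. "email"
    sensitive: bool

def may_proceed(request: Request, confirmed_channel: Optional[str]) -> bool:
    """Allow a sensitive request only after out-of-band confirmation."""
    if not request.sensitive:
        return True
    # The confirmation must arrive over a different channel than the request
    # itself (phone, text, Slack/Teams), so a spoofed email alone never suffices.
    return confirmed_channel is not None and confirmed_channel != request.channel

# Example: an emailed request for payroll data is held until the requester
# is reached by phone.
req = Request(requester="cfo@example.com", channel="email", sensitive=True)
print(may_proceed(req, confirmed_channel=None))     # False: do not act yet
print(may_proceed(req, confirmed_channel="phone"))  # True: verified out of band
```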

The level of verification should increase with the scope of the request. If the request is to wire out hundreds of thousands of dollars, that request should likely be authorized by two people…think nuclear launch codes in a '60s Cold War movie. Mistakenly wiring hundreds of thousands of dollars out of your organization would be akin to Dr. Strangelove activating the Doomsday Machine. It's game over!
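Extending the same hypothetical sketch, a simple threshold rule can encode the two-person requirement; the dollar figure below is illustrative, not a recommendation.

```python
# Hypothetical two-person rule: large transfers need two distinct approvers.
from typing import List

TWO_PERSON_THRESHOLD = 100_000  # illustrative figure only

def wire_approved(amount: float, approvers: List[str]) -> bool:
    """Require two distinct approvers for large transfers, one otherwise."""
    required = 2 if amount >= TWO_PERSON_THRESHOLD else 1
    return len(set(approvers)) >= required

print(wire_approved(250_000, ["controller"]))           # False: needs a second key
print(wire_approved(250_000, ["controller", "cfo"]))    # True: both keys turned
```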

We are relying more heavily today on technology for communication and data sharing. That will only continue. This makes social engineering much easier and more effective. When the techniques become so authentic as to seem almost tangible, they will be capable of tricking even the most discerning of us. That's why recognition is critical. We cannot protect our organizations unless we understand what is coming. You likely won't stare down a deepfake video of yourself or field a vishing call "from a colleague" featuring synthetic audio tomorrow. However, considering the dangers you may face in the future can help keep you and your organization safer today.