Earlier this year, the FBI warned of the increasing threat of cybercriminals using artificial intelligence in fraud. The warning articulated an already apparent trend: the rise and availability of generative artificial intelligence tools over the last few years has fueled an explosion in cybercrime.
“As technology continues to evolve, so do cybercriminals’ tactics. Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike,” said FBI Special Agent in Charge Robert Tripp. “These sophisticated tactics can result in devastating financial losses, reputational damage, and compromise of sensitive data.”
The average cost of a data breach reached an all-time high of $4.45 million in 2023, in part because of the rise of generative artificial intelligence tools in the hands of bad actors. Cybersecurity professionals have also seen a sharp rise in cyberattacks that can be attributed to these tools.
Artificial Intelligence (AI) is beginning to touch every industry and facet of our personal lives, and this is only the beginning. As AI capabilities and pervasiveness increase, so too does its potential to be exploited by cybercriminals.
Here are some of the ways that cybercriminals are utilizing AI tools to commit fraud and cybercrime:
Automated Phishing Attacks: Traditional phishing attacks required manual effort, which limited their scale. AI allows for the automation of highly sophisticated phishing attacks. Gone are many of the telltale signs of phishing, such as unnatural English and easily recognizable typos. AI can instead create more realistic phishing emails devoid of these errors.
Automated Spearphishing Engines: AI also allows cybercriminals to scour the internet for publicly available and leaked information about individuals and use it to craft targeted spearphishing emails that people are more likely to fall for. These tools can analyze social media profiles and online behavior to tailor phishing messages, making them far more difficult to distinguish from legitimate communications.
Deepfake Audio and Video Cloning: AI tools allow for the creation of synthetic audio and video clones of potentially any individual. With only a few seconds of someone’s voice, AI tools make it easy to recreate that person’s voice and make it say anything. We are seeing a rise in cases where people receive calls that appear to come from loved ones claiming to be held hostage and demanding a ransom for their release. The voices are synthetically generated, but when you hear someone you love apparently in danger and pleading for help, you are more likely to react in a way you will later regret.
Meanwhile, deepfake videos are harder to create, but they are becoming easier to manufacture and more realistic. Free or low-cost tools can produce a deepfake video from only a small amount of footage of a person.
When audio and video are synced together, the deception can be remarkably convincing. Earlier this year, a finance worker wired $25 million to fraudsters after attending a video call with a deepfake of their company’s chief financial officer. In September, it was revealed that dozens of Fortune 100 companies had unwittingly hired North Korean IT workers, many of whom had used deepfake audio and video to bypass interviews and other hiring safeguards. KnowBe4, a leading security awareness training company, announced that it too had inadvertently hired a North Korean operative posing as an American employee. If it can happen to a leading cybersecurity company, it can happen to anyone.
Know Your Customer (KYC) Bypass: KYC checks, typically used by financial institutions, are meant to ensure that an institution is interacting with a legitimate person. But, as the cases above show, knowing who you are interacting with is becoming increasingly important in other areas as well, such as hiring and payroll.
AI tools now allow for the creation of high-quality fake IDs. One tool available on the deep/dark web lets anyone instantly generate a startlingly realistic fake ID for only $15. With other personal information, such as Social Security numbers, physical addresses, and dates of birth, often easily accessible through data breaches, creating a fake persona backed by legitimate details has never been easier.
Malware Evasion: Cybercriminals are using AI to create malware that can adapt and evolve to avoid detection. These AI-powered malware programs can learn from each failed attempt, making subsequent attacks more likely to succeed. They can also identify vulnerabilities in software more efficiently, exploiting them before patches can be applied. These tools also lower the barrier to entry, letting less skilled cybercriminals create malware that can be deployed against networks and network defense tools.
Now that you understand some of the ways cybercriminals are using AI to turbocharge fraud, how do you best combat these new and increasingly sophisticated attacks? Here are five ways:
Improving the Human Firewall: One of the most effective defenses against AI-powered phishing attacks is a well-informed workforce. This includes regular cybersecurity awareness training and testing focused on new AI-based types of cyberattacks. Employees should be trained on how to spot and avoid these attacks. Additionally, regularly testing employees with simulated phishing and spearphishing campaigns can help you identify who needs additional cybersecurity awareness training. Reach out to TriCorps if we can help you implement a cybersecurity employee awareness program. Finally, it can be helpful to regularly distribute articles and other informative content so that team members can learn about AI-based cyberattacks and how to avoid them.
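As a rough illustration, simulated-phishing results can be rolled up per employee to flag who needs refresher training. This is a minimal sketch only: the CSV layout (employee and clicked columns) is a hypothetical export format rather than any particular vendor’s, and the 25% click-rate threshold is an arbitrary placeholder.

```python
# Sketch: aggregate simulated-phishing campaign results to flag repeat clickers.
# Assumes a hypothetical CSV export with "employee" and "clicked" columns.
import csv
from collections import defaultdict

clicks = defaultdict(int)     # simulated phish links clicked, per employee
received = defaultdict(int)   # simulated phish emails received, per employee

with open("phishing_sim_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        received[row["employee"]] += 1
        if row["clicked"].strip().lower() == "yes":
            clicks[row["employee"]] += 1

# Flag anyone who clicked more than a quarter of the simulated phish they saw.
for employee, total in sorted(received.items()):
    rate = clicks[employee] / total
    if rate > 0.25:
        print(f"{employee}: clicked {clicks[employee]}/{total} ({rate:.0%}) - assign refresher training")
```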
Multi-Factor Authentication (MFA): Implementing MFA adds another layer of security, making it harder for cybercriminals to gain unauthorized access to systems and data. Even if a password is compromised, the second factor can thwart an attack. MFA, along with good password management, should always be a fundamental part of organizational cybersecurity.
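To make the second factor concrete, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator-app MFA. It assumes the open-source pyotp library; the account name and issuer are illustrative placeholders.

```python
# Sketch: TOTP enrollment and verification with pyotp (pip install pyotp).
import pyotp

# Generated once per user at enrollment and stored server-side; the user
# loads the same secret into an authenticator app, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI an authenticator app can consume as a QR code.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login: even if the password was phished, the attacker still needs the
# current six-digit code, which rotates every 30 seconds.
code = input("Enter the 6-digit code from your authenticator app: ")
if totp.verify(code, valid_window=1):  # allow one step of clock drift
    print("MFA check passed")
else:
    print("MFA check failed")
```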
AI-Powered Network Defense: Having network defenses in place is critical to combating AI-based cyberattacks. Network defense will look different for every organization, but implementing some level of detection and response is necessary. This could mean deploying an Endpoint Detection and Response (EDR) solution or engaging a Managed Detection and Response (MDR) provider. Many of these solutions now employ artificial intelligence to detect intrusions more rapidly and combat them effectively.
At the very least, every organization must have a patch management program in place. Regular patching matters even more in the age of AI, because AI tools allow cybercriminals to compress the time between the discovery of a vulnerability and its exploitation. Ensure you are patching your software regularly, prioritizing critical and high-severity vulnerabilities.
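One lightweight way to spot patchable software is to compare installed versions against a public advisory feed. The sketch below queries the OSV vulnerability database (https://osv.dev) for each installed Python package; it is a minimal example for one ecosystem, not a full patch management program, and assumes the requests library is available.

```python
# Sketch: check installed Python packages against the OSV advisory database.
from importlib.metadata import distributions
import requests

OSV_QUERY = "https://api.osv.dev/v1/query"

for dist in distributions():
    name, version = dist.metadata["Name"], dist.version
    resp = requests.post(
        OSV_QUERY,
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name} {version}: {len(vulns)} known advisories ({ids}) - patch needed")
```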
Incident Response Planning: AI is increasing the number of organizations facing cyberattacks. Planning for a ransomware attack, data breach, third-party vendor compromise, or other type of incident is necessary, because the likelihood that you will face a successful attack is growing. A good incident response program involves building playbooks that guide you through an incident. Playbooks allow you to move quickly when an incident, or a potential incident, occurs, because you have already planned your response and can be proactive instead of reactive. Playbooks should be tested regularly through tabletop exercises, which let your executive team and key stakeholders walk through a realistic scenario in a safe environment so that each participant understands what they are responsible for during an incident. Tabletops also give you opportunities to improve the playbooks and your overall incident response.
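To illustrate the idea, a playbook can be kept as structured data and stepped through during a tabletop. The steps and owners below are hypothetical placeholders; a real playbook would reflect your own org chart, tooling, and legal and notification obligations.

```python
# Sketch: an illustrative ransomware playbook walked step by step in a tabletop.
RANSOMWARE_PLAYBOOK = [
    {"step": "Isolate affected hosts from the network", "owner": "IT Operations"},
    {"step": "Preserve logs and forensic images before remediation", "owner": "Security"},
    {"step": "Notify executive team and legal counsel", "owner": "CISO"},
    {"step": "Engage incident response retainer / outside counsel", "owner": "CISO"},
    {"step": "Assess backups and begin restoration planning", "owner": "IT Operations"},
    {"step": "Prepare internal and external communications", "owner": "Communications"},
]

def run_tabletop(playbook):
    """Walk the team through each step, confirming each owner knows their role."""
    for i, item in enumerate(playbook, start=1):
        print(f"Step {i}: {item['step']}  [owner: {item['owner']}]")
        input("Discuss, capture gaps, then press Enter for the next step... ")

if __name__ == "__main__":
    run_tabletop(RANSOMWARE_PLAYBOOK)
```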
Organizational Dark / Deep Web Monitoring: Part of proactive cybersecurity is monitoring the deep and dark web for your organizational information. Monitoring for company name, domain, executives, credentials, and other important keywords can help you understand if any of your organizational information is available on the darker corners of the internet.
You can also monitor critical third-party vendors. This way, if a vendor is the victim of a cybersecurity incident, you can proactively assess what information of yours it holds and how critical that information is, and take the necessary steps to stay ahead of a third-party incident that might affect you. TriCorps can help you develop a deep and dark web monitoring program if you do not have one in place.
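One building block of this kind of monitoring is checking key accounts against known breach corpora. The sketch below uses the Have I Been Pwned v3 API (https://haveibeenpwned.com/API/v3), which requires a paid API key; the watchlist addresses and key are placeholders, and broader dark-web keyword monitoring would be layered on top of checks like this.

```python
# Sketch: check a watchlist of organizational accounts against Have I Been Pwned.
import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"
API_KEY = "YOUR-HIBP-API-KEY"  # placeholder; HIBP requires a paid key

WATCHLIST = ["ceo@example.com", "payroll@example.com", "it-admin@example.com"]

for account in WATCHLIST:
    resp = requests.get(
        HIBP_URL.format(account=account),
        headers={"hibp-api-key": API_KEY, "user-agent": "org-breach-monitor"},
        params={"truncateResponse": "false"},
        timeout=10,
    )
    if resp.status_code == 404:  # HIBP returns 404 when no breach is known
        print(f"{account}: no known breaches")
    else:
        resp.raise_for_status()
        names = ", ".join(b["Name"] for b in resp.json())
        print(f"{account}: found in breaches: {names}")
```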
AI is transforming the landscape of cybersecurity for both defenders and attackers. While the rise of AI-powered cybercrime is alarming, organizations and individuals are not helpless. By adopting a comprehensive and proactive approach to security, leveraging AI-powered network defense tools, and fostering a culture of cybersecurity awareness, organizations can effectively combat these advanced threats.
Staying ahead in the cybersecurity arms race requires constant vigilance, continuous learning, and the willingness to adapt to new challenges. By understanding the capabilities and limitations of AI, organizations can adapt and thrive against cybercriminals while protecting their most valuable assets.