Unveiling Cybersecurity Trends: Emerging Threats and Regulatory Responses

The cybersecurity landscape is evolving rapidly, with emerging threats such as AI-driven cyberattacks and the proliferation of deepfakes in social engineering schemes. Hackers are exploiting novel data sources for extortion, challenging organizations worldwide. Regulatory responses are ramping up, with new rules mandating swift reporting of cybersecurity incidents, shaping the future of cybersecurity defense.

Trend #1: Explosion of Generative Artificial Intelligence (AI) Based Attacks

In late 2022, OpenAI released ChatGPT to the public. The generative artificial intelligence (AI) platform quickly became one of the most widely used applications ever created. While it has been used for a wide variety of legitimate purposes, it has also been used for much more nefarious aims. After its release, hackers began using it in cyber-based attacks. 

Tools such as WormGPT and FraudGPT soon followed, built on similar large language model technology. These tools were created for the sole purpose of aiding cyber attackers, and they function much like ChatGPT but without its ethical guardrails.

Reports show that security professionals are seeing a dramatic rise in attacks driven by generative AI. The technology lowers the barrier to developing sophisticated cyberattacks and makes those attacks harder to spot.

Generative AI can help attackers in various ways, including crafting spearphishing emails free of the traditional telltale signs of a scam, such as misspellings and broken English. Generative AI tools can also assist with developing the code and malware used in these attacks.

Trend #2: Deepfakes Being Used in Social Engineering Campaigns

In February of this year, the world learned of a brazen scam using deepfake technology. 

A multinational company in Hong Kong was scammed out of $25 million after an employee attended a video conference call with multiple deepfake recreations of the company’s executives and other employees. 

It appears the scammers recreated the individuals using publicly available footage. An employee in the company’s finance department joined a video conference call after receiving a phishing email from someone appearing to be the company’s CFO requesting a transaction. On that call, every other participant appeared to be a known executive or employee of the company but was in fact a deepfake; the employee was the only genuine attendee.

During and after the call, the scammers reportedly instructed the employee to make as many as 15 financial transfers.

Deepfakes are digitally altered or synthetic media, whether video, audio, or images, created to deceive viewers or spread false information.

Sadly, many deepfakes are used to harass and target women, both celebrities and non-celebrities, by creating abusive videos and pictures of them. However, they are also being used more and more in fraud, politics, and cybercrime. 

People have used voice cloning technology to send voice messages pretending to be a friend or loved one in a precarious situation. There have been several real-world examples of this. In Saskatchewan, Canada, in 2023, an elderly couple received a call from someone impersonating their grandson and claiming he needed money. Similarly, also in 2023, an Arizona mother received a scam call using an AI clone of her daughter’s voice, claiming the daughter had been kidnapped.

In 2022, an executive at Binance, a cryptocurrency exchange, claimed attackers had created a deepfake of him and used it on videoconference calls to try to trick would-be investors. The executive only found out about it after people emailed him to thank him for meeting with them, indicating that at least some recipients were duped by the deception.

What are some of the ways that deepfake technology could be, or likely will be, used in social engineering scams?

  • They could be used in business email compromise attacks to bypass current prevention procedures, such as call-back protocols. It could be as simple as following up a well-crafted email with a convincing voice message, or requesting a video call on a system the company does not normally use, to trick an employee into moving money, as in the recent Hong Kong scam.
  • They could be used to show an executive in a compromising video, or to make them appear to say something that could tank a company’s stock or scuttle an important merger.
  • They could be used to hurt a brand’s reputation with customers and business partners.
  • They could be used to trick banking systems or identity-verification technology designed to prevent fraud or money laundering.
  • Ultimately, they could cause business interruption: managing the disruption can derail normal business activity and lead to financial loss and unexpected costs.

Trend #3: Hackers Will Find Novel Data Sources to Attack and Extort

In December, hackers began emailing patients of Integris Health, demanding $50 in Bitcoin to keep their data from being published on the dark web. In short, the attackers were extorting patients directly after breaching Integris and stealing their data. The attackers claim they stole data from more than two million people in the breach of the Oklahoma-based healthcare network; Integris reports that data from 2.4 million patients may have been affected.

The attack on Integris was eerily similar to another recent attack on a healthcare provider. In December, cancer patients from the Fred Hutchinson Cancer Center in Seattle received extortion emails after the center was breached in November. The attackers even threatened to “swat” patients. “Swatting” is when a fake emergency call is placed to law enforcement, forcing police to respond. Swatting has, in some cases, led to innocent people being killed during the event. At the very least, it causes unnecessary trauma for those forced to endure it. Luckily, no swatting events appear to have occurred in connection with this incident.

These two cases highlight a disturbing trend: attackers are finding new sources of data to extort. The vast majority of us have had our Social Security numbers and other personally identifiable information (PII) published on the dark web. It has become so commonplace that most of us barely flinch when we receive a notice from a provider that our personal information has been compromised. However, receiving an email from an attacker threatening to make our medical information public unless we pay an extortion demand is on a different level.

Late last year, genetic testing provider 23andMe confirmed that hackers stole health reports and raw genotype data of customers affected by a data breach that went unnoticed for five months. The breach affected nearly 7 million people. The company said the attackers may have accessed certain health reports derived from the processing of genetic information, including health-predisposition reports, wellness reports, and carrier status reports. Threat actors may also have accessed self-reported health condition information.

Could hospitals, clinics, therapy centers, and genetic testing providers be an untapped resource for extortion? Time will tell. However, attacks on these organizations are unlikely to slow down.

Trend #4: Regulatory Pressure Will Increase

New regulatory rules adopted in 2023 require public companies to report material cybersecurity incidents to the U.S. Securities and Exchange Commission (SEC) within four business days of determining that an incident is material. These reporting requirements have already affected businesses such as Clorox, MGM Resorts, Caesars Entertainment, and many others that faced cybersecurity incidents after the rules took effect.
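To make the four-business-day clock concrete, the sketch below estimates a filing deadline from the date the clock starts. It is a simplification for illustration only (the function name is hypothetical, and it skips weekends but not federal holidays, which the real filing calendar would also exclude):

```python
from datetime import date, timedelta

def sec_reporting_deadline(start: date, business_days: int = 4) -> date:
    """Estimate the SEC filing deadline: four business days after
    the reporting clock starts.
    Simplified sketch: skips weekends only, not federal holidays."""
    deadline = start
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return deadline

# Example: clock starts on a Thursday
print(sec_reporting_deadline(date(2024, 3, 7)))  # -> 2024-03-13, the next Wednesday
```

Note how a weekend in the window pushes the deadline out: a Thursday start yields a Wednesday deadline, not a Monday one.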

Meanwhile, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 will soon take effect. The Cybersecurity and Infrastructure Security Agency (CISA) is due to issue a Notice of Proposed Rulemaking (NPRM) by March 2024, meaning the law should become active this year.

The new regulation requires critical infrastructure organizations to report cybersecurity incidents within 72 hours of discovery. The definition of critical infrastructure is nebulous (and may be clarified by CISA’s NPRM), but it encompasses 16 sectors, including financial services, commercial facilities, food and agriculture, and communications, on top of what is easily identifiable as critical infrastructure (dams, nuclear reactors, emergency services, transportation, and so on). This means supermarkets, banks, public buildings, healthcare providers, defense contractors, and many more could potentially fall under critical infrastructure.
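Unlike a business-day rule, a 72-hour window runs in continuous clock hours, weekends and holidays included. A minimal sketch (the function name and example timestamp are hypothetical):

```python
from datetime import datetime, timedelta

def report_deadline_72h(discovered: datetime) -> datetime:
    """72 hours of continuous clock time from discovery;
    weekends and holidays do not pause the clock."""
    return discovered + timedelta(hours=72)

# Discovery at 5 p.m. on a Friday leaves a deadline of 5 p.m. Monday
print(report_deadline_72h(datetime(2024, 3, 8, 17, 0)))  # -> 2024-03-11 17:00:00
```

A Friday-evening discovery therefore consumes the entire weekend, which is part of why the 72-hour requirement puts real pressure on incident response teams.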

These new reporting requirements have been met with both positive and negative sentiment. Regardless, they undoubtedly ramp up the pressure on security professionals, who already work in a high-pressure environment.

In November, the SEC charged SolarWinds and its chief information security officer (CISO) with fraud and internal control failures stemming from their response to a 2020 breach. This followed the sentencing of Uber’s former chief security officer to three years’ probation over the cover-up of a 2016 breach.

New regulatory concerns and potential punitive threats may make the job of a security professional harder and less appealing in a world where these types of professionals are increasingly important.