

Protecting Your Digital Network from the Rise of Generative Artificial Intelligence 

You have almost certainly been introduced to ChatGPT. If you have not, you should get acquainted as quickly as possible. The generative artificial intelligence (AI) revolution is here, and it will touch every facet of organizational and personal life. That statement may sound cliché at this point, but it is no less true.

Generative AI burst onto the scene in the latter half of 2022. ChatGPT became the fastest-growing app, measured by users, in history. People have used the tool for everything from writing college essays to creating recipes and workout plans. Generative AI images from tools such as DALL-E and Midjourney are flooding the internet, whether created by marketing professionals to beautify a blog post or, in more sinister cases, used to harass or to spread misinformation and disinformation.

Generative AI has been used in other ways as well. One recently released report found that four-fifths (80.3%) of respondents had received AI-generated email attacks or strongly suspected they had.

IBM X-Force recently released a study comparing the success of human-engineered phishing campaigns with AI-generated ones. The AI-generated phishing emails drew an 11% click rate, while the human-written emails drew a 14% click rate.

While human-generated phishing attacks were slightly more successful than AI-generated ones, human-crafted campaigns can take hours to develop; AI-generated campaigns can take only minutes. Moreover, generative AI has been in the mainstream for less than a year. The technology will almost certainly improve, and the realism of these campaigns will improve along with it.

While most mainstream generative AI platforms at least attempt to put guardrails in place to keep users from creating social engineering campaigns, other tools have been built specifically for that purpose. WormGPT and FraudGPT are two examples.

These are products sold on the dark web that can be used to create content that facilitates cyberattacks. They are marketed as tools for creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, and more. Because they are trained on English-language text, they can help non-native English speakers craft polished spearphishing emails free of the tells of spearphishing emails of yore, such as bad grammar and typos.

Like ChatGPT, these tools are built on large language models (LLMs), which gives them much of ChatGPT's basic functionality minus the ethical boundaries. They have already been shown to produce convincing phishing and spearphishing emails, and they are part of the reason for the rise in attacks created with generative AI.

Yet even ChatGPT, guardrails and all, can still be used to create social engineering campaigns. IBM X-Force research shows that with only five simple prompts, researchers were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes.

Another cliché you hear a lot in cybersecurity is that humans are your greatest weakness. Cliché, yes, but also no less true. Unless, that is, they are properly prepared to protect your organizational network. Then they can become your greatest asset.

October is Cybersecurity Awareness Month, so it is a good time to revisit some old friends when it comes to network protection.

As Benjamin Franklin once said, “An investment in knowledge pays the best interest.” 

Investing in cybersecurity knowledge for your team members may be the most important thing you can do to protect your organizational data and intellectual property.

Spearphishing, business email compromise, and ransomware remain the most effective attack vectors for cybercriminals. These usually involve socially engineering a team member, whether that means clicking a link, downloading an attachment, or giving out financial or sensitive information they should not.

There are countless examples of this; a recent one is this autumn's cyberattacks on hotel and casino giants MGM and Caesars. The attacks, attributed to a group called Scattered Spider, used novel social engineering techniques. By calling the help desk and convincing workers to reset passwords, the group got help desk employees to bypass multifactor authentication on targeted accounts. The attackers then moved laterally through the network and deployed a ransomware payload.

Training users to avoid social engineering techniques is paramount; they cannot avoid what they do not understand. Team members should therefore be required to attend regular cybersecurity awareness trainings that cover the latest techniques from bad actors. This should be coupled with regular testing of users by sending them simulated phishing and spearphishing emails, with additional training required for those who fail these tests.
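The test-then-retrain loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's API: the `SimulationResult` type and `needs_remedial_training` function are made up for this example, and a real phishing-simulation platform would track far more detail.

```python
# Hypothetical sketch: after a phishing simulation, flag users who failed
# (clicked the lure or submitted credentials) for remedial training.
from dataclasses import dataclass

@dataclass
class SimulationResult:
    user: str
    clicked_link: bool
    submitted_credentials: bool

def needs_remedial_training(result: SimulationResult) -> bool:
    """A user fails the test if they clicked the link or entered credentials."""
    return result.clicked_link or result.submitted_credentials

results = [
    SimulationResult("alice", clicked_link=False, submitted_credentials=False),
    SimulationResult("bob", clicked_link=True, submitted_credentials=False),
]

retrain = [r.user for r in results if needs_remedial_training(r)]
print(retrain)  # ['bob']
```

The point is the policy, not the code: every simulated campaign should feed directly into who gets assigned the next round of training.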

Specialized training should also be conducted for administrative users and those who are likely to be the targets of more specialized social engineering attacks. These types of team members include executives as well as representatives from human resources and payroll, who are targeted with business email compromise campaigns because of their access to sensitive financial information and often the ability to change that information if asked to do so. 

Controls should be put in place for changing account information or wiring out money. Administrators should be cautioned about changing financial information or handing out credential information to users unless they have received a physical confirmation from either the user or the user’s direct supervisor about the necessary change. This could be done either in person or by a videoconference call where the user requesting the change in information can be physically witnessed asking for the change. 
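The control above amounts to a simple gate: no financial or credential change proceeds without a recorded out-of-band confirmation. As a hedged sketch, assuming a made-up `ChangeRequest` record and `approve_change` check (neither is a real product API):

```python
# Hypothetical sketch of a change-control gate: a sensitive change is approved
# only if identity was confirmed in person or on a video call, per the policy
# described above. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

VALID_CONFIRMATIONS = {"in_person", "video_call"}

@dataclass
class ChangeRequest:
    requester: str
    field: str                    # e.g. "direct_deposit_account"
    confirmation: Optional[str]   # how identity was verified, if at all

def approve_change(req: ChangeRequest) -> bool:
    """Approve only when the requester was physically witnessed asking."""
    return req.confirmation in VALID_CONFIRMATIONS

# An emailed or phoned-in request alone never passes the gate.
print(approve_change(ChangeRequest("eve", "direct_deposit_account", None)))  # False
```

Whether the gate lives in software or in a written procedure, the design choice is the same: the default answer is no until the confirmation requirement is met.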

With the rise of generative AI, social engineering attacks will only grow in number and become harder to spot. Regular training and testing of users on these attacks is therefore one of the most important things your organization can do. By doing so, you can turn your team members from a potential weakness into a strength and make your organization safer in a rapidly evolving digital environment.