What Enterprises Need to Know About ChatGPT and Cybersecurity

Author: Zachary Folk, CEH, CISSP-ISSEP, Security+
Date Published: 17 October 2023

ChatGPT is a generative artificial intelligence (AI) tool designed to help users gain the insight they need to accomplish tasks. Arguably, the advent of generative AI could be likened to the evolution from the typewriter to the PC, or from the public library to Google; ChatGPT can be considered an evolution of the encyclopedia. The knowledge embedded in this advanced tool makes it easier to get answers and to solve simple (or complex) problems faster. It also gathers information from its users, which developers use to produce more intelligent results. An AI chatbot can further make searches more valuable by constructing ad hoc queries or filters to read data.

Many people are pondering whether ChatGPT is a friend or a foe. Similar to how grouping significant amounts of unclassified information together in the government sector could result in a higher classification, AI has the same potential problem. Because it returns aggregate data from many sources, it can allow a malicious cyberactor to collect enough data to infiltrate an enterprise.

With these use cases and risk factors in mind, it is the responsibility of security professionals to act now to determine how to safely leverage AI—if they wish to do so at all.

Why ChatGPT Is Relevant to Cybersecurity

ChatGPT can provide helpful information to aid cyberprofessionals in analyzing security incidents and preventing future cyberevents from occurring. The program is able to generate a summary of a topic and provide sources that can be referenced for a deeper dive. Other uses entail:

  • Saving time when asking for easily verifiable information—If a user asks for information about a Linux man page or a brief how-to command, ChatGPT can quickly provide that information.
  • Pseudocode scripting engine—ChatGPT can generate Python code and, depending on the implementation, may be able to accomplish anything that Python can.
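To make the second point concrete, the following is a minimal sketch (not from the article) of the kind of small, self-contained Python utility ChatGPT can generate on request—here, a scan for world-writable files, a common Linux hardening check. The function name and approach are illustrative assumptions, not a prescribed tool:

```python
import os
import stat

def find_world_writable(root):
    """Return paths under `root` whose permissions allow writes by any user."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip files that vanish or deny access mid-scan
            if mode & stat.S_IWOTH:  # "other" write bit set
                hits.append(path)
    return hits
```

A script this small is easy to verify by hand before running it, which is exactly the kind of review the article later argues any AI-generated code should receive.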

ChatGPT can also help enhance enterprise security. The tool can be used to provide:

  • Threat analysis—ChatGPT can be trained on data related to cybersecurity threats to assist in identifying and analyzing potential security incidents.
  • Big data analysis—Analyzing large volumes of data, including log files, network traffic and security events, can help organizations identify and respond to potential security incidents more quickly and efficiently.
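As a simple illustration of the log-analysis idea (an assumption-laden sketch, not a tool described in the article), the snippet below counts failed SSH login attempts per source IP in auth-log-style lines and flags likely brute-force sources. The regex and threshold are hypothetical choices:

```python
import re
from collections import Counter

# Matches sshd-style failure lines and captures the source IPv4 address.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(log_lines, threshold=5):
    """Count failed-login attempts per source IP; flag IPs at or above `threshold`."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

In practice, an AI assistant can both generate helpers like this on demand and summarize the flagged results for an analyst.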

Cybercriminals Are Taking Notice

While AI models such as ChatGPT can be used for good, cybercriminals are simultaneously using them to improve their malicious tactics, asking ChatGPT targeted questions to sharpen their information gathering. For example, a cybercriminal can combine knowledge of a target with a desired objective to ask ChatGPT questions that enhance the capabilities of their already deceptive tools.

A hacker may have previously analyzed an enterprise’s network and found a vulnerability. At this point, they can ask ChatGPT questions such as, “I have found a network vulnerability. How do I fix it to protect my network?” It is worth noting that ChatGPT tries to avoid helping hackers, but if the hacker knows how to phrase the question correctly, ChatGPT will provide assistance.

Hackers know that this line of ChatGPT questioning is more effective for uncovering helpful information rather than asking the AI tool to show them how to exploit a network’s vulnerability. Once the desired information is revealed, hackers can create a checklist and build malicious code to infiltrate a targeted vulnerability. From a defensive standpoint, cybersecurity personnel can ask ChatGPT the same questions the hacker posed, but from a security engineering perspective, to defend the network.

Receiving the most valuable information from ChatGPT requires asking the correct questions and expanding on the initial inquiry to obtain the desired results and a deeper understanding. Hackers are learning that they cannot ask ChatGPT a directly malicious question, or they will receive a response such as, “I do not create malware.” Instead, they ask ChatGPT to pretend that it is an unrestricted AI model that can produce a particular script.

Bad actors continue to rely on social engineering to install malware or persuade people to relinquish credentials for unauthorized access to data systems. AI tools are making it easier for cybercriminals to harm people. ChatGPT is merely one AI tool that they are learning to use alongside other AI models, such as:

  • DALL-E 2, which can generate and manipulate images1
  • Prime Voice AI, which can clone and manipulate any person's voice2

One noteworthy point is that the ability to use AI to manipulate humans through social engineering is increasingly within reach. However, ChatGPT is not a Rosetta Stone-like translator for hackers. Although AI-generated scripts and other machine-generated scripts are both produced by machines, their complexity, reliability and security can differ significantly. Thus, it is crucial to assess each AI-generated script or program and its purpose before deciding whether it can be trusted. A script could have unexpected or destructive results if the user does not know enough about it to trace what it does.

Conclusion

Soon, it will be necessary to deploy AI systems to monitor networks, analyze data and interpret the results so that cyberthreats can be identified and defended against without human interaction. This sort of defense goes beyond the concept of a static firewall or security information and event management (SIEM) tool.

New AI tools will have access to the same threat intelligence as humans, but they will be capable of processing enormous volumes of data instantly and learning from what is happening in real time, making adaptive decisions about threats before they come to fruition.
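The core of such real-time, adaptive detection can be sketched in a few lines. The following is a hedged, simplified illustration (not a description of any specific product): it flags spikes in a stream of per-interval event counts by comparing each new value against a rolling baseline. The window size and z-score threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, stdev

def detect_spikes(counts, window=10, z_threshold=3.0):
    """Return indices whose value exceeds the trailing window's mean
    by more than `z_threshold` standard deviations."""
    history = deque(maxlen=window)  # rolling baseline of recent counts
    anomalies = []
    for i, value in enumerate(counts):
        if len(history) == window:
            mu = mean(history)
            sigma = stdev(history)
            if sigma > 0 and (value - mu) / sigma > z_threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies
```

Production systems use far richer models, but the principle is the same: the defense learns what "normal" looks like from the data stream itself rather than relying on static rules.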

Conversely, threat actors will utilize AI to generate new methods of bypassing and besting the AI tools defending the network. The world is entering a time when AI is fighting AI, and security professionals must focus on feeding this technology more relevant data faster than adversaries. The race for cybersecurity with AI is moving at the speed at which the model can ingest, correlate and suggest desired responses.

Endnotes

1 Openai.com, “DALL-E 2”
2 Welcome.ai, “Prime Voice AI”

Zachary Folk, CEH, CISSP-ISSEP, Security+

Is the director of solutions engineering at Camelot Secure. As an experienced cyberprofessional, he has worked in roles ranging from system administration to information system security management. This experience allows him to help enterprises integrate technical solutions for compliance and security standards. Folk has also served for 14 years as an officer in the Alabama National Guard (USA).