ISACA AI Survey Results: What Do Infosec Professionals REALLY Need to Know?

Author: Raef Meeuwisse, CISM, CISA, Author of Artificial Intelligence for Beginners
Date Published: 25 October 2023

It is quite hard to find quality information about artificial intelligence (AI) right now, especially when it comes to how AI should be, and is being, approached in the field of cybersecurity. Thankfully, ISACA has just completed a survey of thousands of members of the global infosec and digital trust community.

While many are willing to venture educated guesses about how AI will progress and what enterprises need to do to incorporate or defend AI systems within their digital landscape, there is an uncomfortable truth: nobody really knows how the use of AI in enterprises will unfold.

  • Could we face an uptick in infosec resource needs because of AI-driven threats or a downturn because of automation?
  • How far can and should enterprises go in embracing AI or safeguarding against it?
  • What policies should (and do) enterprises have regarding their use of AI?
  • What security measures can and should be placed around “in-house” use of AI?
  • … and can you really stop employees from using AI tools anyway?

AI presents so many variables, possibilities and threats that one of the few reliable options is to crowdsource how information security professionals themselves are perceiving, using or feeling threatened by AI. This ISACA survey provides an opportunity to understand how the situation is currently playing out across our community.

In this blog post, we will focus on just some of the key findings highlighted by this survey:

There is a High Degree of Uncertainty Around Generative AI

In the broadest sense, generative AI is any technology with at least one human-like skill that can be applied to creating (the "generative" part) content of one form or another. Whether it is used to create written work, images, video, customer service responses or other content, generative AI can autonomously perform at or beyond a human level from simple prompts or other inputs.
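For readers who prefer a concrete illustration, the prompt-in, content-out pattern can be as simple as the sketch below. This is a minimal example assuming the OpenAI Python SDK (v1.x) and an API key in the environment; the model name and prompt are illustrative only, and any generative AI service with a comparable API would serve the same point.

    # Minimal prompt-to-content sketch, assuming the OpenAI Python SDK (v1.x).
    # The model name and prompt are illustrative, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{
            "role": "user",
            "content": "Draft a three-bullet summary of our acceptable use policy for AI tools.",
        }],
    )

    # The generated written content comes back as plain text.
    print(response.choices[0].message.content)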

Most ISACA members in the survey were familiar with the term generative AI (73%), although only 25% felt they were very or extremely familiar with it. The 48% who described themselves as somewhat familiar were probably exercising a level of caution, wanting to understand the implications and nuances of the technology before declaring any level of comfort with the term.

However, when asked whether the technology is permitted under their organizational policies, 45% say it is not, 28% say it is, and 26% are unsure. Only 10% of respondents state that their organization already has a comprehensive policy in place, and more than one in four say their organization has no plans to develop one.

AI Security Training: Generative AI Becomes the New Shadow IT

Whether or not an organization has policies, over 40% of employees are using generative AI. According to the survey results, where it is used, it is used for creating written content (65%), which can include programming code, as well as for increasing productivity (44%), automating repetitive tasks (32%), customer service (29%) and improving decision-making (27%).

In fact, there were dozens of uses listed by respondents.
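Given that this usage happens with or without policy approval, a pragmatic first step for many security teams is simply to measure it. The sketch below is one illustrative approach, not a prescribed control: it tallies outbound requests to well-known generative AI services from a forward-proxy access log. The domain watch list, log path and log format are all assumptions you would adapt to your own environment.

    # Illustrative shadow-AI visibility sketch: count proxy-log requests to
    # well-known generative AI endpoints. Domain list, log path and log
    # format are assumptions; adapt them to your environment.
    import re
    from collections import Counter

    GENAI_DOMAINS = ("chat.openai.com", "api.openai.com",
                     "gemini.google.com", "claude.ai")
    URL_HOST = re.compile(r"https?://([^/\s:]+)")

    def count_genai_requests(log_path: str) -> Counter:
        hits: Counter = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                match = URL_HOST.search(line)
                if match and match.group(1).lower().endswith(GENAI_DOMAINS):
                    hits[match.group(1).lower()] += 1
        return hits

    if __name__ == "__main__":
        for domain, count in count_genai_requests("access.log").most_common():
            print(f"{domain}: {count} requests")

Even a rough tally like this tends to confirm what the survey suggests: the tools are already in use, so visibility and policy need to catch up.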

While the ISACA survey shows that generative AI is being widely used for various purposes, it also highlights a glaring gap in ethical considerations and security measures. Only 6% of organizations are providing comprehensive AI training to all staff, and a staggering 54% offer no training at all. This lack of training, coupled with insufficient attention to ethical standards (41% say not enough attention is paid), can lead to increased risk, including exploitation by bad actors. Perhaps not surprisingly, 57% of respondents are very or extremely worried about generative AI being exploited by malicious actors.

Is AI Risk Management a Neglected Priority?

The survey results indicate that fewer than one-third of organizations consider AI risk an immediate priority. This is particularly concerning given that 79% believe adversaries are using AI as successfully as, or more successfully than, digital trust professionals. The top five risks identified were misinformation/disinformation (77%), privacy violations (68%), social engineering (63%), loss of intellectual property (58%) and job displacement (35%). These risks are not just theoretical; they have real-world implications that can severely impact an organization's security posture.

The Future of Jobs and the Industry Outlook

Interestingly, while 45% believe AI will eliminate a significant number of jobs, 70% think it will have some positive impact on their jobs.

This reminded me of a very insightful observation made by my wife (the brainier half of our relationship): “Everybody seems OK with whatever AI can do, right up to the point where it can replace what they can do—then it’s gone too far.”

The overall outlook is optimistic, with 85% seeing AI as a tool that extends human productivity. Yet, it is crucial to note that 80% say they will need additional training to retain their job or advance their career.

In Conclusion: Should Enterprises Join the AI Wave? Is There Any Other Choice?

The ISACA survey paints a complex picture. While there is optimism about the potential benefits of AI, there is also a glaring lack of preparedness and understanding. The rapid adoption of AI technologies by employees, often without organizational approval or oversight, is a ticking time bomb.

On one hand, as Microsoft, Google and others weave AI into their technologies and infrastructure, there is almost no choice but to figure out how to safely integrate this technology into every digital landscape. On the other, there is a need for caution and training to defend against AI-driven adversarial attacks that may leverage deepfakes or field rogue AI models operating at the pace of an unprecedented army of human hackers.

The absence of comprehensive AI security policies and training programs exacerbates the risks, including the potential for ethical lapses and exploitation by malicious actors.

As we navigate the complexities of AI in the cybersecurity landscape, one thing is clear: AI is bringing more change, more rapidly, than we have ever seen before. It is high time for organizations to catch the wave strategically rather than be swept away by it.

The ISACA survey serves as a wake-up call for the infosec community to proactively address the challenges and opportunities presented by AI. Failure to do so could mean missing out on the benefits while exposing the organization to unnecessary risk.

Editor’s note: Find more insights from the survey and AI resources from ISACA here.

Author’s note: Follow Raef on Bluesky @RaefMeeuwisse.bsky.social
