A new poll of global digital trust professionals reveals a high degree of uncertainty around generative artificial intelligence (AI), a scarcity of company policies governing its use, a lack of training, and fears about its exploitation by bad actors, according to Generative AI 2023: An ISACA Pulse Poll.
Digital trust professionals from around the globe, including more than 660 based in Asia—who work in cybersecurity, IT audit, governance, privacy, and risk—weighed in on generative AI—artificial intelligence that can generate text, images, and other media—in a new pulse poll from ISACA that explores employee use, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.
Diving in, even without policies
The poll found that many employees at respondents’ organisations are using generative AI, even without policies in place for its use. Among respondents in Asia, only 32 percent say their organisations expressly permit the use of generative AI, only 11 percent say a formal, comprehensive policy is in place, and 30 percent say no policy exists and none is planned. Despite this, more than 42 percent say employees are using it anyway, and the true figure is likely much higher, given that an additional 30 percent are unsure.
These employees based in Asia are using generative AI in a number of ways, including to:
– Create written content (67%)
– Increase productivity (41%)
– Provide customer service, such as chatbots (30%)
– Automate repetitive tasks (28%)
– Improve decision making (23%)
Lack of familiarity and training
However, despite employees quickly moving forward with use of the technology, only five percent of respondents’ organisations are providing training to all staff on AI, and more than half (52 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 23 percent of respondents indicated they have a high degree of familiarity with generative AI.
“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organisations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO at Crypto.com. “With greater alignment between employers and their staff around generative AI, organisations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk.”
Risk and exploitation concerns
The poll also explored the ethical concerns and risks associated with AI, with 29 percent saying that not enough attention is being paid to ethical standards for AI implementation. Twenty-five percent say their organisations consider managing AI risk an immediate priority, 31 percent say it is a longer-term priority, and 29 percent say their organisation has no plans to consider AI risk at the moment, even though respondents cite the following as the technology’s top three risks:
– Misinformation/Disinformation (65%)
– Privacy violations (64%)
– Social engineering (48%)
Almost half (45 percent) of respondents indicated they are very or extremely worried about generative AI being exploited by bad actors. Sixty-five percent say that adversaries are using AI as successfully as, or more successfully than, digital trust professionals. “AI training and education is imperative for digital trust professionals, not only to be able to understand and successfully leverage the technology, but to also be fully aware of the risks involved,” said RV Raghu, ISACA India Ambassador and director, Versatilist Consulting India Pvt Ltd. “As quickly as AI has evolved, so have the ways that the technology can be misused, misinterpreted or abused, and professionals need to have the knowledge and skills to guide their organisations toward safe, ethical and responsible AI use.”
Impact on jobs
Examining how current roles are involved with AI, respondents in Asia believe that security (52 percent), IT operations (46 percent), risk (44 percent) and compliance (42 percent) teams are responsible for the safe deployment of AI. Looking ahead, almost one in four (24 percent) Asia-based respondents say their organisations will open job roles for AI-related functions in the next 12 months. Fifty-seven percent believe a significant number of jobs will be eliminated due to AI, but digital trust professionals remain optimistic about their own jobs, with 71 percent saying AI will have some positive impact on their roles. To realise that positive impact, 86 percent of Asia-based respondents think they will need additional training to retain their job or advance their career.
Optimism in the face of challenges
Despite the uncertainty and risk surrounding AI, 84 percent of respondents in Asia believe AI will have a positive or neutral impact on their industry, 84 percent believe it will have a positive or neutral impact on their organisations, and 83 percent believe it will have a positive or neutral impact on their careers. Eighty-four percent of Asia-based respondents also say AI is a tool that extends human productivity, and 76 percent believe it will have a positive or neutral impact on society as a whole.