The rapid adoption of generative AI technologies, like ChatGPT, among non-IT users is driven by an extremely low barrier to entry. Free access, simple interfaces, multiple suppliers, and integration into everyday applications mean AI tools are now at the fingertips of all staff within organisations. This access presents incredible opportunities for new use cases, efficiency, and augmented workforces. However, it also presents new risks for IT executives and their peers.
Unprecedented access to this new technology introduces risks, including data privacy and security, regulatory and compliance issues, bias and discrimination, and ethical considerations. It also introduces the risk of “Shadow Everything Else”. Shadow IT has long been a risk for IT executives to manage. With this new technology, however, employees can use generative AI for tasks such as seeking professional advice on legal and financial matters, bypassing formal channels and creating new business risks akin to shadow legal and shadow finance. This creates new dynamics for broader business executives to contemplate. The problem is more complicated than controlling access to the tools; it also requires considering the context of use cases and how outputs are used.
A PwC CEO Survey from January 2024 found that 60% of Australian CEOs believe generative AI will significantly change the way their company creates, delivers, and captures value over the next three years. However, the same survey found that only 25% of those CEOs had started to act on that change.
Organisations will seek to leverage this new technology to support task automation, enhance decision-making, and drive innovation, all with low technical barriers. Part of the challenge, however, is that generative AI tools are already likely in the hands of staff, accessible through work and personal devices, and soon embedded in operating systems themselves. This widespread availability means AI tools are quite possibly already being used within the organisation to support work-related activity.
The reality of the situation is shown in research released by Veritas in February 2024, which found that more than two-thirds (68%) of Australian office workers were using generative AI tools such as ChatGPT at work, and that around a quarter (24%) of those workers admitted to inputting information that could be considered sensitive (e.g. customer information, employee information, and company financials) into these tools.
The research further found that 76% of Australian employees want their employers to provide guidelines, policies, or training on using generative AI.
Without clear policies and governance, the risk of individual misuse of AI – intentional or otherwise – becomes significant. Employees may inadvertently or deliberately use AI tools in ways that compromise data security, privacy, ethical standards, and other company policies. They might generate outputs that are inaccurate or biased, leading to potentially harmful decisions and actions.
Now is the time for IT and business leaders to develop robust policies and governance frameworks that educate and inform staff and manage these technologies effectively.
IT executives must work closely with their peers to:
- Define a Strategic Vision: Communicate how AI supports and enhances the organisation’s strategic objectives.
- Develop Policy and Governance Frameworks as part of the IT Operating Model: Address ethical use, data privacy, and security, ensuring oversight and adherence to policies across all departments, not just IT.
- Create Guidelines for Use Cases: Clearly define where AI can and cannot be used within the organisation, who may use it for specific use cases, and how outputs should be reviewed and applied.
- Communicate with and Empower Employees: Provide tools, training, and support to help employees effectively use AI, fostering a culture of continuous learning and innovation.
Generative AI technologies present both opportunities and challenges for organisations. IT executives must lead the charge in integrating these technologies into their operating models and work with their peers to develop robust policies, governance frameworks, and clear guidance for employees early, even if the full strategy around AI’s use is not yet defined.
At evince Consulting, we specialise in guiding organisations through Strategy and IT Operating Model development, helping you understand and mitigate risk while developing frameworks to capitalise on this new technology.
Contact evince Consulting today to learn how we can help you.