Unless you’ve been avoiding the news, you know about generative AI – and you’re likely using it yourself. While smaller companies and individuals have embraced this cutting-edge technology quickly, larger enterprises have been more hesitant due to data privacy and security risks. This is starting to shift with the release of products like ChatGPT Enterprise and increasing pressure from boards to investigate incorporating the technology across the organization. Harvard Business Review has even coined the term “generative AI-nxiety” to describe the increased pressure on leaders.
We were so intrigued by this that we decided to dig deeper; we surveyed 300 enterprise board members across four countries to learn how they feel about generative AI and how involved they are in shaping policy, strategy, and implementation at their companies.
Here’s what we found:
Generative AI is a high priority for board members.
We first wanted to investigate what percentage of board members work with companies experimenting with generative AI, and how much of a priority it is for the board. Our results show that generative AI is top of mind for most board members: 46% stated that generative AI is currently their “main priority above anything else.”
Of the board members surveyed, 76% said that their company is currently using generative AI in some way; 35% are actively implementing larger generative AI programs in business areas, and 41% are experimenting with generative AI in certain projects or departments.
The numbers suggest that the board’s keen interest is one of the forces behind the sudden rise of generative AI as an enterprise focus.
Despite varying levels of knowledge, board members don’t hesitate to make strategic decisions about generative AI.
Board members of companies using generative AI were confident about their skills and knowledge of generative AI and the underlying technology; 67% of respondents using generative AI rated their understanding as either expert (28%) or advanced (39%). Also, board members who rated their understanding of generative AI as “expert” were much more likely to prioritize generative AI above anything else at the board level.
Not surprisingly, board members not yet using generative AI were much more conservative when rating their knowledge; only 3% of non-users rated their understanding as expert, and 13% rated their knowledge as advanced.
Digging into this question a little more, we found that certain geographies and industries were more likely to rate themselves as generative AI experts. Board members in the United States were much more likely to rate themselves as experts (33% vs the global average of 22%). Conversely, board members in the UK were much less likely to rate themselves as experts and more likely to rate themselves as beginners (25% vs the global average of 15%).
Those working in retail were also much more likely to rate their understanding of generative AI as expert (38% vs the global average of 22%), and those working in financial services and banking were more likely to rate their understanding as advanced (38% vs the average of 33%).
This self-reported level of experience and knowledge is inconsistent with what organizations like McKinsey are finding; McKinsey reports that its conversations with board members “revealed that many of them admit they lack this understanding.”
In the same vein, the majority of respondents using generative AI (70%) felt confident that their board has a sufficient level of understanding and knowledge to make informed strategic decisions for their organization.
So, how are board members who report lower levels of knowledge closing the gap that gives them the confidence to make these strategic decisions?
Board members look to a Head of AI or Chief Technology Officer for advice.
Even with high levels of confidence in the board, 43% of board members consulted experts regarding generative AI: 21% consulted other parts of the business internally to inform decision-making, 18% consulted external sources of expertise, and 4% consulted both internal and external sources.
Who did they consult? Interestingly, users and non-users of generative AI varied quite a bit in which internal experts they reached out to. Users of generative AI were most likely to reach out to the Chief Technology Officer (45%), VP or Head of IT (23%), or Chief Information Security Officer (23%) with questions about generative AI. Non-users relied more heavily on internal counsel than users did and tended to consult more sources; the roles most consulted by non-users included the Head of AI (68%), Chief Technology Officer (45%), Head of Business Intelligence (45%), CEO (39%), and Head of IT (35%).
Of those who sought external counsel, most (52%) engaged with industry experts, while 45% sought the guidance of external technology partners. Additionally, 29% consulted academic advisors, and 19% relied on their professional networks and communities for advice.
Generative AI policies are common, but there’s room for growth with enforcement.
While board members are prioritizing generative AI, they aren’t doing so without considering its risks and challenges. The top risks reported included job displacement (49%), security (44%), and unaccountable processes (41%). Board members currently using generative AI also reported challenges such as over-reliance on generative AI and its possible impact on job displacement (30%), governance (23%), and data privacy concerns (23%).
To mitigate these risks and challenges, most companies reported having policies to address them; 79% of policies address privacy and security, 77% address ethical concerns, and 76% include fairness and bias. Additionally, 75% of policies have guidance on transparency and trust, while 62% provide social impact and accountability guidance.
However, companies were less likely to go beyond policy creation and take active steps to ensure proper governance and ethical use of generative AI technology. Board members reported governance enforcement at lower rates, including conducting regular audits of policy adherence (56%), establishing clear lines of responsibility for generative AI management (51%), and training staff on generative AI ethics (45%).
For companies looking to elevate their risk mitigation strategies around generative AI, these additional approaches may be an excellent opportunity to stand out.
Board members need to see costs decrease — and skills increase.
Overall, board members have a significant amount of optimism surrounding the future of generative AI; 82% expect generative AI to be fully (35%) or partially (47%) integrated into organizational processes within the next 3-5 years. Less than 1% expect not to use generative AI in that time.
But what will it take for generative AI to be successful? According to board members, operational considerations are key: 40% listed cost efficiency as a top priority, 31% listed technical expertise, and 25% believe that user-friendliness is crucial for success.
What areas will generative AI have the most significant positive impact on? Board members predict that regulation and compliance (76%), innovation (73%), data governance (73%), and public reputation (73%) will all see positive changes thanks to generative AI. These predictions overlap with the potential beneficial use cases for generative AI that the Harvard Law School Forum on Corporate Governance outlined in a memo of cautions and considerations for boards.
The Next Steps for Generative AI
The board’s perspective on generative AI strategy offers a revealing view of the market forces propelling this technology forward. It’s clear that generative AI is a top priority for board members, who are confident in their ability to make decisions for their companies. While we don’t know what the future holds, the future of generative AI adoption appears to rest, at least partially, in the hands of board members.