Inside The Governance Puzzle Of Generative AI Adoption

Generative AI is surging across industries, yet most organizations remain unsure how to govern it responsibly. A new peer-reviewed study offers a framework for understanding why, moving beyond hype to expose the tangled web of technological, organizational, and environmental factors shaping real-world adoption.

Researchers from Swansea University, Cardiff Metropolitan University, and the University of Liverpool analyzed how companies implement generative AI (GenAI) using the Technology-Organization-Environment (TOE) model. The mixed-methods study, published in the International Journal of Information Management, combines interviews and survey data from more than 300 industry decision-makers worldwide. The results illuminate a core governance challenge: technology moves faster than institutions can adapt.

Complexity Meets Capability

In the TOE framework, technology refers to an organization’s readiness and infrastructure, organization to its internal structures and skills, and environment to the external pressures of regulation, competition, and market trends. By layering GenAI adoption through these lenses, the researchers identified a pattern of mismatched expectations between leadership and technical teams.

According to lead author Laurie Hughes, “Complexity and uncertainty around generative AI are not simply technical issues. They are deeply entwined with organizational learning and governance structures.” The study found that perceived complexity was the single strongest barrier to adoption, surpassing even financial cost or data access concerns. Firms that lacked clear accountability models for AI decision-making showed the lowest confidence in their own implementations.

To illustrate, the authors describe a typical boardroom dilemma: executives eager to deploy AI tools face teams warning of data privacy risks and compliance uncertainty. Without cross-departmental collaboration, GenAI remains siloed or superficial, used for experimentation rather than strategic transformation.

Bridging The Policy Gap

The paper’s environmental analysis focuses on regulation and ethics, identifying gaps between global policy discourse and organizational practice. While new laws such as the EU AI Act and U.S. executive orders emphasize transparency, many companies interpret compliance narrowly, treating it as a checklist rather than a culture shift.

Co-author Yogesh Dwivedi explained, “We found that even in highly digitized sectors, decision-makers lack confidence in aligning GenAI governance with national and international standards. This creates what we call a policy-implementation lag.” The team warns that this lag can widen digital inequality, as well-resourced firms rapidly scale AI systems while smaller organizations hesitate, fearing reputational or legal exposure.

Interestingly, the study also found that organizations with stronger internal training programs were more likely to develop ethical AI practices voluntarily. Employee education, it turns out, functions as an informal governance mechanism when external rules remain unclear.

The authors propose a three-tier roadmap for responsible GenAI adoption: build internal AI literacy, formalize oversight roles (including data stewards and ethics leads), and integrate environmental scanning into regular business reviews. These steps, they argue, can help institutions bridge the gap between innovation and accountability.

As the generative revolution accelerates, this research offers a rare empirical foundation for what has too often been a theoretical debate. By quantifying how organizations perceive and manage complexity, it reframes governance not as a constraint but as a competitive advantage.

International Journal of Information Management, DOI: 10.1016/j.ijinfomgt.2025.102982
