HTEC Chief Marketing Officer Alex Rumble on a hidden risk in your organization, and how to turn it into a competitive advantage.
AI literacy has emerged as a foundational business risk. Many leadership teams still treat AI as a siloed IT concern rather than what it truly is: a strategic organizational imperative.
AI is the engine behind increased business productivity. It accelerates decision-making, automates low-value tasks across operations, personalizes experiences at scale, and simulates outcomes. This translates into faster innovation and reduced time to market for new products. But this is just a fragment of what AI can do.
AI-first organizations are embedding AI across workflows, upskilling their people to work alongside it, and building competitive advantage through smarter execution. But without a workforce that understands AI’s capabilities, risks, and limits, companies risk underutilizing their investments, or worse, falling behind more agile, AI‑ready competitors.
Why workforce AI literacy still isn’t where it needs to be
AI has quietly moved into everyday workflows, while most organizations operate under the assumption that employees can adapt to it on their own. The infrastructure to support that expectation is largely missing.
According to The Adecco Group’s 2025 Business Leaders report, 60% of business leaders expect employees to proactively upskill themselves for AI. Yet, 34% of organizations admit they have no formal policy for AI use at work (no guidance, no safeguards, and no plan). This reactive mindset is particularly dangerous when combined with another critical blind spot: lack of visibility into workforce capabilities. As highlighted in a World Economic Forum article, only 2% of companies are truly prepared to adopt AI at scale—most lack not only training infrastructure, but also the foundational workforce intelligence required to make informed, strategic decisions.
Bridging the leadership gap
The risk here isn’t just about falling behind in AI adoption. It’s about widening performance gaps, misallocating talent, and creating internal friction between AI-driven goals and a workforce that hasn’t been trained to deliver on them.
This mismatch between expectation and enablement is further compounded by weak leadership modeling. According to KPMG’s AI Pulse survey, 93% of business leaders believe their investments in generative AI have improved their company’s competitive position, and 70% say AI will shift how value is delivered. Yet when it comes to personal engagement, only about a third of leaders have taken steps to actively build their own AI capabilities over the past year.
Many companies are increasing their AI budgets, with global AI spending projected to reach more than $300bn in 2026. But budgets alone won’t close the gap. Without leadership modeling, clear training structures, or internal guidance, employees are left to navigate this transformation largely on their own, often without the skills, context, or confidence to do so effectively.
What happens if you don’t train your people
On paper, most leaders understand the importance of AI risk management. The great majority of leaders interviewed in an IBM study say they’ve established clear AI governance frameworks, but fewer than 25% of companies have actually implemented and consistently reviewed systems to mitigate core AI risks like bias, transparency failures, and data security. In short, the guardrails exist in theory, but not in practice.
This gap in oversight reveals a deeper issue: a lack of functional AI literacy across the organization. What’s missing isn’t more AI but better-informed people. Companies need employees who can make smart, responsible decisions about when and how to use AI, who understand its limitations, and who can critically evaluate its outputs. It remains to be seen how the absence of these capabilities will affect the corporate responsibility and brand image of the companies concerned.
Gen Z employees, for example, who now make up a growing share of the global workforce, are demanding transparency, ethical use, and upskilling from their employers. Surveys show that Gen Z believes AI literacy should be a mandatory part of education, and nearly half worry that overreliance on AI could harm their own critical thinking skills. In other words, they value empowerment over automation, and they’re watching how companies respond. Without structured AI education programs and clear ethical guidelines, companies may struggle to attract and retain the very people they need to power long-term digital transformation.
Why AI literacy is the bridge between pilots and real value
According to new research from BCG, only 22% of companies have moved beyond the proof-of-concept phase and begun generating any value from AI. Even more striking: just 4% are creating substantial business value at scale.
This chasm between promise and payoff is rarely a technology problem. It’s a people problem. Pilots may prove that AI can work, but scaling it requires a workforce that understands how to integrate it into real-world decisions, processes, and outcomes. Organizations that train employees across roles, from executives to frontline staff, treat AI literacy the way they treat cybersecurity: mandatory, recurring, and with curricula tailored to each function.
Competitive edge or existential risk?
The gap between AI ambition and AI readiness has never been more visible. The numbers tell a sobering story. Here’s a snapshot of the current state of AI readiness across organizations:
| Indicator | Finding |
| --- | --- |
| Formal AI use policy at work | 34% of organizations have no formal policy |
| AI adoption at scale | Only 4% are creating business value at scale |
| Systems to mitigate core AI risks | Fewer than 25% of companies have implemented systems to mitigate AI risks like bias, transparency failures, and data security |
| CEO belief in AI impact | 70% expect value transformation |
| Leadership on GenAI ROI | 93% see competitive value |
| Implementation beyond POC | 22% of companies have moved beyond the proof-of-concept phase |
Where we go from here
As pressure mounts for all of us to unlock AI’s value at scale, the lack of coordinated, organization-wide empowerment could quickly become a critical point of failure. AI literacy should be understood as corporate infrastructure rather than a tech initiative. Educated, trained, and ethically aware AI-ready workforces will lead the pack.
At HTEC, we’ve embedded this principle into action with a company-wide AI enablement program designed to achieve full AI literacy across our global workforce. Led by our AI Center of Excellence, the program provides tailored learning paths for non-engineers, non-AI engineers, and AI engineers, ensuring every employee gains the skills to use AI effectively, spot opportunities in client work, and deliver next-gen solutions.
We’re also extending this expertise beyond our walls.
I recently had the opportunity to host HTEC’s inaugural AI-First Executive Dinner in London, where we explored a central question: What does adopting an AI-first approach really mean? Together with industry leaders, we discussed how AI is transforming businesses not only through technology, but in the way it reshapes projects, empowers the workforce, and influences organizational culture.
Building on that conversation, we took the discussion to New Jersey, bringing together healthcare leaders from Bayer, Baxter, Amgen, Verizon, NJII Innovation, and others to examine how AI is driving the next wave of innovation and complexity in healthcare.
With the success of our internal AI enablement program, we’re already extending this approach to selected clients, helping them build the same AI literacy, governance, and applied skills that are now embedded across HTEC. It’s a powerful next step, ensuring that together we can harness AI responsibly, effectively, and at scale.