When Intelligence “Fails” Us: The Missing Context for Enterprise AI

By Sanja Bogdanović Dinić, Head of Data and AI Strategy

I vividly remember the moment the room went quiet. We had built a model to predict construction project status transitions. On paper, it scored 90% accuracy on validation. Then the domain experts ran their own checks and put it at 56%. When I asked them to walk me through their real-time thinking, they used completely different criteria than they had described in interviews, and most of their actual decision-making signals were not captured in any digital system.
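To make that gap concrete, here is a minimal, hypothetical sketch. The labels and numbers below are toy data for illustration, not the project's actual evaluation; the point is that the same predictions can score well against what a system of record logged and poorly against what experts judge to be true.

```python
# Toy illustration of the evaluation gap: one set of predictions,
# two different "ground truths" to score them against.

def accuracy(truth, predictions):
    """Fraction of cases where the prediction matches the given truth."""
    return sum(t == p for t, p in zip(truth, predictions)) / len(truth)

# Ten hypothetical project-status predictions from the model.
predictions   = ["on_track"] * 9 + ["delayed"]
# What the digital system of record logged for those projects.
logged_labels = ["on_track"] * 10
# What domain experts say actually happened, applying their real criteria.
expert_labels = ["on_track"] * 5 + ["delayed"] * 5

print(f"against logged labels: {accuracy(logged_labels, predictions):.0%}")  # 90%
print(f"against expert labels: {accuracy(expert_labels, predictions):.0%}")  # 60%
```

A validation score is only as honest as the labels behind it. When the labels come from a system that never saw the experts' real signals, the metric measures agreement with the record, not with reality.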

That experience shaped everything that came after. We are building enterprise AI on incomplete foundations because the systems don’t see the data we actually use. 

There is a version of your organization that exists in documents. It has policies, process maps, org charts, and handbooks. It is clean, consistent, and largely fictional.  

Then there is the version that actually runs. The one where everyone knows that official processes have real exceptions, that numbers from certain teams need to be read a certain way, that some decisions require approvals that no policy formally mandates. Your AI systems, whether you call them agents, copilots, or workflows, only have access to the first version. 

This is the problem enterprise AI is quietly running into, and the scale of what's at stake makes it hard to ignore for much longer. With the decision intelligence market projected to reach up to $88 billion by 2034, even a modest failure rate translates to tens of billions in sunk costs and under-realized business value. The technology is not failing; the systems are operating on an impoverished version of reality. They have data. They don't have context.

What context actually is 

Context is not metadata. It is the accumulated situational understanding that allows someone — or something — to interpret information correctly in a specific environment. When an experienced analyst reads a set of numbers, they are not just reading the figures. They are reading them against everything they know about how those figures were produced, what shaped them, and what the person asking actually needs versus what they said they needed.

That interpretive layer is what separates a response that is technically accurate from one that is genuinely useful. AI systems do not have this by default.

The gap between what the systems say and what the organization knows is precisely where AI goes wrong: not dramatically, not in ways that are obviously broken, but subtly and confidently, in ways that quietly mislead anyone who acts on its answers.

The knowledge that was never meant to be data 

I think of this as a problem of three distinct intelligence levels. The first is factual intelligence: explicit, measurable data that AI handles well. The second is experiential intelligence: context-dependent patterns of expert judgment that are hard to articulate but shape every significant decision. The third is irreducibly human intelligence: values, ethics, and moral judgment that should not be automated at all. Most enterprise AI systems capture only the first layer, while the second is where genuine competitive advantage is won or lost.

That second layer is what most people mean by tribal knowledge, but the term undersells it. It includes the exceptions that became de facto policy without ever being codified, the historical context without which a decision looks irrational, the tacit standards that experienced people apply without being able to fully articulate them. None of this was ever captured because it never needed to be. It transferred through proximity, through apprenticeship, through lived experience. The moment you try to augment that loop with a machine, you discover the machine has no access to any of it.

I call this the illusion of completeness: the data looks complete, the dashboards look polished, the accuracy metrics pass, but the system is answering a narrower question than anyone realizes. 

I saw this play out with painful clarity at a healthcare company. After investing over three million dollars across two years, they achieved 95% technical data accuracy, yet delivered less than 10% of the anticipated business value. The system could identify the right contacts. It could not capture what seasoned professionals instinctively knew about relationships, influence, and timing. When pharmaceutical clients asked, "Which oncologists should we target to maximize early adoption?" or "Who actually influences treatment decisions?" the room went silent. Sitting in that review meeting, I realized we were not simply missing data; we were betraying those who trusted us to understand their world.

The harder questions nobody is asking 

The logical response is to capture it: extract the decision logic and encode it into the system. But there is almost always a gap between what people say they use, what they actually use, and what they wish they had. The breakthrough came when I stopped asking experts what they did and started watching them work. What they described in interviews was methodical and logical. What they actually relied on was intuitive, contextual, and often brilliant in ways they could not articulate.

And even setting aside the limits of extraction, there are questions about what should be captured at all. Experiential knowledge is not neutral. It carries assumptions, historical decisions, and power dynamics. Automating on top of it does not clean it — it scales it. The system does not know it learned something wrong. It just applies it. This is where data strategy frameworks are not yet equipped to go — into the irreducibly human layer of decisions that carry enough ethical and moral weight that they should never be handed to a system in the first place. 

Where the real value is 

The organizations that get genuine value from AI are not necessarily those with the best infrastructure. They are the ones that treat institutional knowledge as a first-class data asset, are honest about what can and cannot be captured, and make principled decisions about where machines should act and where humans should remain. When we build AI that helps humans feel seen in their expertise rather than replaced by it, we close the gap between technical success and business value. 

Last year, I had the privilege of exploring these questions alongside twenty of the most remarkable women working in AI today. The result was Shaping AI Without the Hype: How Women Turn Technology Into Real-World Impact — less playbook, more honest account of how transformation really happens, and who is doing the work. 

But you don’t need a book to start. You need a different question. Not what data do we have, but what do we need to understand — and who in this organization already knows it, but has never been asked. 

Not sure where to begin? Head over to HTEC.ai or schedule a call with Sanja — we’ll help you find the right questions. 
