By Marydee Ojala
In his closing keynote for the 2026 Data Summit conference, John O’Brien, principal advisor and industry analyst, Radiant Advisors, shared three insights he’s gained from his initial analysis of data gleaned from a market study on AI-readiness in enterprise data architecture.
The annual Data Summit conference returned to Boston, May 6-7, 2026, with pre-conference workshops on May 5.
The survey was conducted by DBTA and Unisphere Research in the first quarter of 2026, and O’Brien received the survey data in April, so it’s very fresh. This is important because, as he pointed out, the needle for studying AI moves extremely quickly. His main takeaway concerned the interesting contradictions the data revealed.
The survey itself has six main sections: AI Maturity and Outcomes; Agentic AI and Orchestration; AI-Enabling Data Infrastructure; AI Trust and Governance; Outcomes, ROI & Measurement; and Challenges and Strategic Needs. O’Brien concentrated on the three insights he’s identified so far.
The first insight: AI-readiness does not predict AI success, which is not what the industry is telling us. Although 74% of survey respondents rate themselves mostly or fully AI-ready, 52% reported that half or fewer of their AI initiatives succeeded. Why the discrepancy? Fully 71% of AI failures trace right back to data quality. Confidence is high, but delivery is not. The disciplines are not new; what they are being asked to support, however, has changed. A data quality framework built for monthly reporting is not the same as one supporting real-time inference. A governance framework for audit trails is not the same as one supporting model explainability. A semantic layer designed for BI consumption is not the same as one feeding context to agents.
O’Brien thinks that data professionals should focus on outputs, and he suggested that a redefinition of what we mean by AI-readiness is necessary. The biggest gap is data security, followed by unstructured data, then semantic definitions and trust scoring.
His second insight concerns AI pilots versus AI production, a divide he likened to AI purgatory. He cautioned that AI pilots should not simply become production (and that’s a good thing). This isn’t about new tools; it’s about operational foundations. Unlike pilots, production has committed AI budgets, fully traceable AI model outputs, semantic definitions, security, data quality, automated AI-readiness assessment, and operational data governance approaches.
Pilots figure out whether AI technology works. Production requires AI that works reliably. An experimental AI budget is not the same as a committed production budget. A governance framework in development is not the same as one partially operational. A partially traceable AI model is not the same as a fully traceable one. Pilots and production are different and require different mindsets, one concentrated on experimentation and the other on operationalizing. The gate between them is not automatic, and it can be a trap.
Agentic AI architecture is O’Brien’s third insight. The most important aspect of successful agentic AI architecture is attributes around the data foundation. Trust-oriented attributes are the next most important, and, somewhat surprisingly, AI architecture itself is the least important. A committed budget is critical. Successful agentic AI is not being built on quick fixes or policy-based trust; it’s built on foundational data. A new protocol such as MCP carries enterprise security and control concerns. A RAG pipeline aims to improve context but is not a substitute for trustworthy data quality. Governance compliance documents trust; operational signals and scoring demonstrate it.
O’Brien urged the audience to check their blind spots. AI-readiness means trust in your data foundation and your AI architecture. Looking ahead, companies need AI agents to be as good as their best employees. He reminisced about work he did with data warehouses, where trust in the data in the warehouse was validated by trusted individuals. This parallels the current situation with AI. How do we know that AI-generated data is not hallucinated? That often requires human validation. It’s the institutional knowledge that feeds semantic layers that gives a competitive advantage, since everyone is essentially buying the same LLMs.
What successful AI efforts do differently is threefold: they define readiness by what AI needs, they plan and budget with a production mindset from the beginning, and they build AI on a foundation of data and trust. Overall, have (or develop) an AI mindset, which differs from our previous approaches to data.
Many Data Summit 2026 presentations are available for review at https://www.dbta.com/datasummit/2026/presentations.aspx.