Executives are betting on agentic AI to drive growth, yet many enterprises still struggle to move from pilot to production. The hard part has never been the models; it is building trust in the underlying data assets, human-AI workflows, and outcome accountability.
Cal Al-Dhubaib, principal technologist at Rubrik, opened the first official day of Data Summit 2026 with his keynote, “Engineering for Trust in the Era of Agentic AI.”
The annual Data Summit conference returned to Boston, May 6-7, 2026, with pre-conference workshops on May 5.
“One theme throughout my career has been how do we inherently take these systems that aren’t right all the time and make them work,” Al-Dhubaib said.
AI incidents are on the rise, driven by hallucinations and malicious actors using these tools to exploit people and gain information. AI failures can cause PR nightmares, and unchecked agentic actions have an even larger blast radius. The gap between expectations and reality is widening, he explained.
“We’re treating AI like any other software instead of treating it like the probabilistic technology that it is,” Al-Dhubaib said.
Al-Dhubaib introduced Trust Engineering—the emerging discipline needed to consistently scale AI beyond pilots. It brings together the trust infrastructure required to support agentic AI in production, design patterns that improve human-AI decision making in operational workflows, and best practices for creating a culture of accountability around AI-driven outcomes.
The first of the four pillars is decision design. Not all errors carry equal cost, he explained.
“What we’re trying to do here is cost tolerance,” Al-Dhubaib said. “We’re trying to design workflows around these errors.”
The next pillar is expectation management, he noted. Minimize open-endedness by prompting humans. Empower users to trace and verify information.
The third pillar is the trust infrastructure, which falls into three categories: governance, monitoring and observability, and remediation.
“With trust infrastructure it’s also really important to determine what is out of scope,” Al-Dhubaib said.
Lastly, trust assurance forms the governance strategy. Define the ethical nightmares and determine where human oversight is needed.
“This is your toolkit for enabling AI and human insight,” Al-Dhubaib said.
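The decision-design pillar, designing workflows around the cost of errors, can be pictured as a routing rule that decides when an agent may act on its own. The sketch below is purely illustrative; the `AgentAction` class, the `route` function, and the confidence threshold are assumptions for this example, not anything presented in the keynote:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    confidence: float  # model's self-reported confidence, 0.0-1.0
    error_cost: str    # business cost if the action is wrong: "low", "medium", "high"

def route(action: AgentAction) -> str:
    """Route an agent action based on error cost and confidence.

    High-cost actions always require human approval; low-cost,
    high-confidence actions run automatically; everything else
    is flagged for review.
    """
    if action.error_cost == "high":
        return "human_approval"
    if action.error_cost == "low" and action.confidence >= 0.9:
        return "auto_execute"
    return "review_queue"
```

The point of a gate like this is that tolerance for error becomes an explicit, auditable part of the workflow rather than an implicit property of the model.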
What Makes Enterprise Data Ready for AI Agents?
As AI agents take on more responsibility in the enterprise, many teams are confronting a hard truth: AI is only as reliable as the data behind it.
In large organizations, data is typically distributed, inconsistently governed, and difficult to interpret, making it risky for AI agents to operate with confidence.
During his keynote, Kiyu Gabriel, field CTO, IBM, looked at what it truly means to make data ready for AI agents, beyond simple access or scale.
AI agents need AI-ready data, he explained. Without this data, AI projects fail.
Across internal and external use cases, companies run into three problems: fragmentation, context, and governance and security.
He explored how context, lineage, and governance help agents discover, understand, and safely use enterprise data.
“[IBM] work[s] very closely with all the clouds, we work with whatever foundation you’re working on,” Gabriel said. “Because AI costs are so high, you need to bring in the products that are specialized to do it.”
A New Path Forward: Reimagining Data Management Through AI
AI is reshaping what’s possible in data management. Today’s data leaders must drive rapid AI adoption while ensuring data remains trusted, contextual, and high-value.
Achieving this requires a control layer that delivers transparency and trust by design—supported by tight alignment between the control plane and data plane.
During her keynote portion, Susan Laine, field CTO, Quest Software Inc., shared how AI is accelerating data delivery, enabling faster and more effective data products, and elevating data trust across global organizations.
“Now is the time to reimagine and really think about the processes you’ve been dealing with for so long with your tools and now AI,” Laine said.
Clients face three problems with their data management, she said: the trust gap, speed and scale, and lack of interoperability.
Silos and handoffs create a people, process, and tools nightmare, she explained.
“We’re working on speed and trust, so you don’t have to sacrifice quality or safety when it comes to AI,” Laine said.
The Quest Trusted Data Management Platform provides an end-to-end, unified platform to industrialize data product creation, she noted.
“This is designed for trust, you have to design trust for your data,” Laine said.
She introduced data products: each a curated, self-contained, and reusable set of data, metadata, semantics, and templates designed to solve a specific business problem.
This contains the trust companies are looking for in one place, she stressed. The tangible benefits include reusability, value, trustworthiness, discoverability, accessibility, and composability.
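The data product concept Laine described can be sketched as a simple descriptor that bundles data sources, metadata, and semantics in one place. The class and fields below are illustrative assumptions for this article, not Quest's actual platform API:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A minimal data product: data plus the context needed to trust it."""
    name: str
    owner: str                       # accountable team or person
    data_sources: list[str]          # upstream tables, feeds, or APIs
    metadata: dict = field(default_factory=dict)   # e.g. lineage, freshness, quality scores
    semantics: dict = field(default_factory=dict)  # business definitions of each field

    def is_discoverable(self) -> bool:
        # A product with no owner or no semantics cannot be trusted or reused
        return bool(self.owner) and bool(self.semantics)
```

Packaging semantics and ownership alongside the data itself is what makes the product reusable and discoverable rather than just another dataset.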
Your Data Is Your Moat. Your Agents Are The Bridge
Enterprise AI agents are only as intelligent as what they retrieve—yet most organizations are investing heavily in models while ignoring the layer that determines whether those models reason accurately, act on current knowledge, and deliver distinctive results.
During his keynote, AJ Meyers, principal solutions architect, Elastic, revealed why retrieval is the decisive competitive variable in agentic AI, exposed the organizational and architectural gaps most teams don’t know they have, and offered a concrete framework to assess where organizations stand.
“You need to have more data on hand in order to do what you need,” Meyers said.
Data needs to be relevant, trusted, and open for agents to access it, he explained.
“The retrieval layer is the one thing you fully control,” Meyers said.
The data moat only works if the agent knows where the boundaries are—and respects them.
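One way to picture a retrieval layer that enforces those boundaries is a scope check applied before any document reaches the model. This is a hypothetical sketch of the idea, not Elastic's implementation; the document shape and scope names are assumptions:

```python
def retrieve(query: str, user_scopes: set[str], documents: list[dict]) -> list[str]:
    """Return matching documents, filtered by the caller's access scopes.

    Each document is a dict with 'text' and 'scope' keys. Access control
    is enforced at the retrieval layer, so out-of-scope content never
    enters the agent's context window in the first place.
    """
    return [
        doc["text"]
        for doc in documents
        if doc["scope"] in user_scopes and query.lower() in doc["text"].lower()
    ]
```

Filtering at retrieval time, rather than trusting the agent to ignore what it should not see, is what keeps the boundary enforceable.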
Many Data Summit 2026 presentations are available for review at https://www.dbta.com/datasummit/2026/presentations.aspx.