The roles of information professionals are being redefined in the digital age, seamlessly bridging the gap between traditional knowledge management (KM) practices and cutting-edge AI applications.
At Data Summit 2026, Fleur Levitz, principal consultant at FDL Consulting NYC LLC, former data governance executive on Wall Street, and former senior management consultant and data governance practice lead at IBM, led her session, “Information Professionals in the Age of AI,” examining several classical approaches to organizing and rationalizing human thought, namely catalogs, classification schemes, and taxonomies.
The annual Data Summit conference returned to Boston, May 6-7, 2026, with pre-conference workshops on May 5.
AI is not just a technological shift. It is a transformation in how information is created, interpreted, and trusted, Levitz explained.
She highlighted how information professionals, and librarians in particular, are stepping into pivotal roles within the AI-driven KM space—not just as knowledge managers but as ethical custodians. Their focus on transparency, inclusivity, and information integrity makes them indispensable partners in shaping responsible AI systems.
Most organizations are asking: “How do we build better AI?” But that’s the wrong question, she said.
The better question is: “How do we govern intelligence responsibly?” Because the real challenge isn’t capability. It’s control, trust, and accountability.
To understand where this is all going, we must look back to the past, she noted. For thousands of years, societies have relied on people to manage knowledge—
scribes, archivists, librarians. Each of them shaped what was recorded, what was preserved, and ultimately—what was known.
Control of information has always meant control of decisions. Information has always been power, she explained. Who controls knowledge… controls decisions. Institutions such as libraries and archives didn’t just store information—they shaped access, visibility, and truth. Now AI is scaling that power in ways we’ve never seen before.
In the 19th century, something important happened. Information work became a profession—with standards, systems, and ethics.
That’s when we started formalizing ideas such as:
- Fair access
- Intellectual freedom
- Responsible stewardship
These ideas are not new. They’re just being rediscovered in the AI era.
Information systems laid the foundation for AI and as information grew, humans built systems to manage it: metadata, indexing, and search. These weren’t just technical innovations—they were ways of structuring reality. And they laid the foundation for data science.
Responsible AI is a governance response. This is where Responsible AI emerges, not as a trend—but as a response to real harm.
And the principles are familiar:
- Fairness
- Transparency
- Accountability
- Privacy
These are not engineering concepts; they are governance concepts.
The EU AI Act is the first major attempt to regulate AI at scale. It introduces a risk-based framework. At the highest level, this means:
- Some AI systems are completely prohibited.
- Others are classified as high-risk and heavily regulated.
- And the rest fall into lower-risk categories with lighter requirements.
The key idea is simple: The higher the risk to people, the stricter the rules.
Even if you’re not operating in Europe, this regulation will shape how AI is built and governed worldwide, she said. Much like GDPR did for data privacy, the EU AI Act is likely to become a de facto global standard.
Levitz asked, “If AI generates information…who is responsible for its truth?” These systems require governance, and keeping a human in the loop is key to that responsibility.
At the highest level, this is about governance. Information professionals should lead Responsible AI frameworks, participate in decision-making bodies, and educate organizations. They are uniquely positioned to bridge technical systems and human values.
She recommended three best practices:
- Elevate information professionals.
- Build governance into AI systems.
- Invest in AI literacy across the organization.
You don’t need new capabilities; you need to activate the ones you already have.
“Responsible AI is an information problem, not a technical problem,” Levitz said.
We’re entering a new era that is not defined by who builds AI…but by who governs it responsibly, she stressed. And the organizations that understand this first will lead what comes next. Those who govern intelligence will lead.
“AI is a data use case, and I like that because data is the heart of it,” Levitz said. “AI governance starts to become a part of everyone’s job because of all the tools we’re using.”
Many Data Summit 2026 presentations are available for review at https://www.dbta.com/datasummit/2026/presentations.aspx.