By Stephanie Simone
As organizations embrace AI technologies, they face growing challenges related to fairness, privacy, and unpredictable system behavior.
At Data Summit 2026, Nicole Janeway Bills, CEO and founder of Data Strategy Professionals, led the session, "AI Risks & Risk Mitigation Strategies," using real-world examples to help attendees recognize 10 critical AI risks, detect emerging issues early, and implement practical measures to mitigate them while maximizing business value.
The annual Data Summit conference returned to Boston, May 6-7, 2026, with pre-conference workshops on May 5.
She presented her session in the new Data + AI Leadership Forum, an exclusive space for business and technical leaders to explore strategy, governance, responsible AI, and value realization.
She began her session by citing statistics from McKinsey's recent report on the state of AI. Despite the increasing usage of AI, only 43% of organizations have an AI governance policy in place.
“If you’re not governing it, you’re more likely to run into security issues,” Bills said.
According to Bills, the top 10 categories of AI risks include:
- Data privacy and confidentiality
- Bias and discrimination
- Misinformation and hallucinations
- Intellectual property risks
- Security vulnerabilities
- Lack of transparency
- Over-reliance on AI
- Operational and strategic risks
- Regulatory and legal risks
- Reputational risks
Mitigation Strategies
- For data privacy and confidentiality, implement data minimization and anonymization, conduct privacy impact assessments, establish clear governance guidelines, and more, she explained.
- For bias and discrimination, train models on diverse and representative datasets, implement demographic parity constraints during optimization, embed bias audits and fairness testing, and more.
- For misinformation and hallucinations, implement retrieval-augmented generation (RAG) to fetch relevant external documents before generation, use chain-of-thought prompting to break tasks into steps, require source-grounded prompts (e.g., "according to [source]"), and more.
- For intellectual property risks, create a proactive, comprehensive AI governance policy that outlines permissible uses, output review process, and employee training on IP laws. “Make sure you’re protecting your trade secrets and understand how these GenAI tools are creating their outputs,” Bills said.
- For security vulnerabilities, sanitize training data, scan plugins for malware or potential vulnerabilities, run adversarial testing and red teaming, and more.
- For lack of transparency, choose explainable models where appropriate, provide model cards detailing data sources, adopt explainable AI using techniques such as LIME/SHAP, and more.
- For over-reliance on AI, include uncertainty signals, educate users on AI limits, enforce human-in-the-loop review for high-stakes decisions, use overreliance metrics, and more.
- For operational and strategic risks, implement robust data and AI governance, conduct ROI modeling and ethical audits, and more.
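Some of the mitigations above can be operationalized directly in code. As a minimal illustrative sketch (not from the session), the bias audit mentioned under bias and discrimination can be implemented as a demographic parity check, comparing a model's positive-outcome rates across groups; the function name and the 0.8 threshold (the common "four-fifths rule") are assumptions for illustration.

```python
# Sketch of a demographic parity bias audit: flag a model whose
# positive-outcome rate for one group falls well below another's.
from collections import defaultdict

def demographic_parity_audit(records, threshold=0.8):
    """records: iterable of (group, predicted_positive) pairs.

    Returns (rates_by_group, passes), where passes is True when the
    lowest group rate is at least `threshold` times the highest.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        if positive:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    lowest, highest = min(rates.values()), max(rates.values())
    passes = highest == 0 or lowest / highest >= threshold
    return rates, passes

# Example: group B is approved at half the rate of group A.
preds = ([("A", True)] * 8 + [("A", False)] * 2 +
         [("B", True)] * 4 + [("B", False)] * 6)
rates, ok = demographic_parity_audit(preds)
# rates == {"A": 0.8, "B": 0.4}; ok is False (0.4 / 0.8 = 0.5 < 0.8)
```

In practice, a check like this would run as part of the fairness testing Bills describes, on held-out predictions before each model release.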
“Governing the use of AI models has been cited as the most common challenge of implementing AI,” Bills said.
Many Data Summit 2026 presentations are available for review at https://www.dbta.com/datasummit/2026/presentations.aspx.