As an AI Risk and Governance management consultant, I’m often met with confusion: “What the heck is AI risk?” Most AI industry participants are focused on the innovation side. Don’t forget about AI risk management.
While there are new and novel risks in AI development, most companies should start with the basics.
1) Deploy an employee Acceptable Use Policy. Your employees are already using AI, often without any guidance or rules. The risks here run from intellectual property leakage to hallucinations.
2) Update your third-party risk management (TPRM) policies and process to incorporate AI.
My focus today is on #2.
AI supply-chain risks are both similar to and distinct from cybersecurity supplier risks. Your AI tools touch your data, infrastructure, and operations. They pose additional risks beyond security.
Risk #1) Data rights: Training.
Review your supplier contracts, Terms of Service and Privacy Notices focused on AI risks. Do your suppliers use your data for training their AI? Are you sure?
Here’s the risk in play: Slack “…To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information)…”
Risk #2) Bias and/or discrimination.
Who has the liability? It might be you. If you use a tool that is biased, your business can be held accountable for the resulting decisions. Example: employment discrimination.
What about Indemnification and warranties?
Risk #3) Vendor lock-in and exit strategies.
AI tools move fast. Much faster than traditional software, particularly if they are "learning" along the way. Don't get stuck with an obsolete tool.
Risk #4) Data rights: Outputs.
Do you own the outputs of your tools? The metadata? Contract transparency is vital.
Risk #5) Data rights: Training data provenance.
Do your suppliers own the rights to the data they've used to train their AI? Do they have "consent" for any PII in the training data? What are their privacy policies?
Next up: AI development and deployment – training data risks.