Today’s most significant AI regulations come from the GDPR and the EU AI Act. Many businesses would prefer to optimize for innovation with limited regulation, but the regulations exist whether we like them or not. Ignoring them is a risk in itself, and beyond compliance, the underlying risks are real. Any company that values its reputation and customer trust is well served by considering the potential AI risks.
Here’s my summary of guidance from the French data protection authority (CNIL) on ensuring your AI system development complies with the GDPR (with the EU AI Act in mind). The full article is linked in the comments.
AI risks considered in a DPIA (Data Protection Impact Assessment)
When processing personal data in AI systems:
🔴 Confidential personal data being extracted from the AI system
🔴 Harm to data subjects whose data is in the training set, through internal employee misuse or a breach
🔴 Automated discrimination caused by bias in the AI system
🔴 Generating false or fabricated content about a real person
🔴 Automated decision-making risk: human oversight must be able to verify the system’s performance and take a contrary decision
🔴 Users losing control over their data published online
🔴 Known, AI-specific attacks (e.g., data poisoning)
🔴 Systemic and serious ethical risks related to the AI deployment
Even if you aren’t subject to regulation yet, an AI risk assessment (which is broader than the DPIA above) is foundational to ethical and responsible AI.