I was on a conference call yesterday with a lobbyist from one of the big hyperscaler tech companies. He advocated strongly against premature regulation of AI. No surprise there, but he made some really solid points. His primary position was that state-level AI laws should be narrow or non-existent, and that whatever AI regulation is put in place should come at the federal level.
His point about a patchwork of regulations varying from state to state is well taken. For a global cloud and AI provider, this is obviously problematic.
His more interesting position is that AI is not fundamentally unique. States (and the federal government) already have tort laws, privacy laws, criminal laws, etc. Rather than regulate AI in a duplicative fashion, simply apply these existing laws to AI applications. Of course, there’s a bit of a question of whom to apply these laws to:
- The company that developed the AI model
- The company that trained the model
- The company that is using/purchasing/applying the model
Here are some examples:
- AI applied to resume and candidate screening.
Civil Rights Act, Equal Pay Act, Age Discrimination in Employment Act, Americans with Disabilities Act: presumably all of these, and their state-level equivalents, still apply to AI. Ultimately, it’s the person or company using the AI for employment screening that would be liable for violations.
- AI applied to financial/risk screening for loans or insurance.
Equal Credit Opportunity Act (ECOA), Fair Credit Reporting Act (FCRA), Fair Housing Act (FHA), etc., would similarly still apply when AI is used.
- Other AI applications impacting privacy, involving PII, PHI, etc.
Again, the FTC regulates unfair or deceptive trade practices, which would directly apply to AI. Similarly, state-level UDAP laws would apply within each state. Tort laws would apply as well.
Of course, there are some obvious areas where ambiguity remains, such as intellectual property rights (on both the input and output sides of an AI model). But those are governed by federal law, not state law.
What occurs to me is this: look at the actual harms caused by AI and consider whether any of them are unique to AI, as opposed to harms already covered by existing regulation. Where we find such gaps, develop appropriate regulations. Until then, let innovation move quickly.