Did US courts just green light AI innovation?

United States Supreme Court Building in Washington DC, USA.

The US government might have found a way for companies to protect their AI innovation, but there’s some bad news: we may need to give up using ChatGPT.

While the new Section 1993 bill hasn’t yet passed into US law, its stated goal is to remove Section 230 protections for AI companies, a move that some commentators say might strangle the AI baby in its cradle.

What are these Section 230 protections? The controversial regime essentially immunises social media companies from liability for the content their users post on their websites.

The law was enacted because the US government feared social media companies would quickly go bankrupt if they had to police every single comment under threat of legal action. The solution was to classify social media sites as “platforms” rather than “publishers” so they could continue operating.

The new Section 1993 bill goes in a different direction.

By refusing to treat AI companies as platforms or publishers, AI is effectively being squeezed into a third box that doesn’t yet have a name because no one is certain who would truly be at fault for the content AI systems generate. Would it be the person who types the query or the AI that generates the answer?

This is a complicated question. Liability should arguably fall on the person who asks an AI to generate something defamatory and then publishes that content on X/Twitter, but that person would also have plenty of plausible deniability. After all, they only asked the AI to invent a lie about Tom Cruise. It was the machine that said Tom Cruise likes to barbecue cats. The AI wouldn’t have told a lie if it hadn’t been asked.

Blaming the AI for defamation is a bit like blaming the Zippo company because an arsonist used one of its lighters to burn down a house. In that scenario, courts consider the arsonist the one with agency, and therefore responsible for the fire. Zippo is just a company that makes lighters, so it can’t be held responsible for how those lighters are used.

However, this new law assumes that AI systems do have some agency – at least enough to “create” the final product – and therefore courts want the AI companies to be subject to defamation laws just like a human would be. Legal scholars and technologists will no doubt argue about this.

In the meantime, there is a bigger question at stake: if AI bears some responsibility for the negative things it produces, then surely companies can own the positive things it produces?

If that’s true, it would have immediate, and potentially lucrative, consequences for the entire US technology sector.

First, it’s hard to see public large language models (LLMs) like ChatGPT surviving if Section 1993 passes. It will simply be too risky for AI companies to continue operating these services.

Given the high likelihood of abuse by users asking the AI dodgy questions that could result in lawsuits, the only real options for US-based companies like OpenAI would be to either close these systems off to the public or include onerous clauses that would provide indemnity in instances where abuse occurred.

But the thing is, there are plenty of other countries. What’s stopping OpenAI from setting up in Malta or Liechtenstein, where regulations aren’t so punitive? If that happens, public-facing AI systems like ChatGPT will probably survive; they just won’t be in the US.

This would also mean that nothing would change from Washington’s perspective. People could still ask a Malta-based AI to write something false and then publish it on X/Twitter with impunity. Social media would still be flooded with potentially defamatory content, while the US would lose some highly profitable AI companies as they depart for other countries.

If that was the only consequence of these proposed new regulations, then it would be a major backward step for the US.

Yet there’s a second implication that could be enormously lucrative for the entire tech sector: If AI is responsible for what it produces, then it follows logically that AI companies should be able to lay claim to and protect whatever content and innovation those models produce.

That’s speculative at this point, but it’s worth pondering. That single change could be worth trillions of dollars for the US economy as AI may be unleashed to invent all kinds of product ideas, fresh content and perhaps even entirely new businesses.

We’ll have to wait for the final wording of the Section 1993 legislation. However, the US government appears to be taking this issue seriously and seems to understand the huge potential of AI for the economy – if it can get the laws right.
