No, Google, AI is not just like humans

This month it was the turn of the giant AI companies to have their say on whether they should be allowed to use copyrighted material to train their systems.

What’s the problem? Simply put, the US Copyright Office has no idea what to do, so it asked the public to comment on the many possible paths for setting policy about AI and copyright.

A handful of companies, including Meta, Microsoft, Stability AI, Anthropic, Adobe and Apple, submitted brief proposals.

But the most interesting submission was from Google, which has its own AI system called “Bard.”

Google’s comments are curious because of how misguided they are. A business this large, one that has played a core role on the internet since almost day one, really shouldn’t be getting such basic things wrong.

So, if Google is misguided about the issue of AI, into which it has poured more than $US200 billion over the past decade, it’s worth pondering why.

Here’s what the (presumably human?) team at Google wrote to the US Copyright Office:

“If training could be accomplished without the creation of copies, there would be no copyright questions here. Indeed, that act of ‘knowledge harvesting,’ to use the Court’s metaphor from Harper & Row, like the act of reading a book and learning the facts and ideas within it, would not only be non-infringing, it would further the very purpose of copyright law.”

The mistake here should be obvious. Is AI “reading” in the same way that a human reads a book? Not at all. To say such a thing is to fundamentally misunderstand how an AI model operates.

It is highly deceptive to pretend that AI models are doing anything like what human beings do when they “read” information. Machines don’t look at words. They don’t have eyes. They don’t see anything. An AI model works by compiling statistical predictions about which words follow which, then applying a mathematical model to those statistics to generate text.
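To see why the “reading” analogy fails, it helps to look at what those statistics actually involve. Below is a deliberately crude sketch, a toy bigram model in Python. This is my own illustration, not anything from Google’s filing, and production systems like Bard use neural networks with billions of parameters rather than a frequency table, but the process is the same in kind: count patterns in the training text, then sample from them.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for "training data". Real models ingest billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Pick the next word by sampling from the observed frequencies."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generate" text: no eyes, no reading, just weighted dice rolls.
word = "the"
output = [word]
for _ in range(8):
    if not follows[word]:
        break  # no observed continuation for this word
    word = next_word(word)
    output.append(word)

print(" ".join(output))
```

The program never comprehends a sentence. It counts which words follow which and rolls weighted dice. Scale that up by many orders of magnitude and you have the essence of generative text AI; at no point does anything resembling “reading” occur.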

A person doesn’t analyse data and deploy mathematics when they read a book. That’s not how human creativity works. People study a page and wonder, “How did the writer do that?” They examine the specific argument in an article, piece together the technique, revisit the original text to compare it with their own attempts, and then apply what they’ve learned to create new sentences and paragraphs.

There is a massive difference between the effort it takes one human to do this and the effort it takes a machine to do it and then make the results available to everyone. A human writer can’t spin up extra bandwidth to boost their mental processes, but generative AI can. This fact alone proves that AI is not comparable to human thought. AI is a tool, not a mind.

Current copyright laws never anticipated that a tool like this would exist. They have always assumed that humans, with their limitations, would be the ones “copying” and “learning.” That is no longer the case. We have no precedent for AI on this scale; nothing like generative AI has ever happened before.

What Google misses is that there is a vast gap between “an individual studying hundreds of works by several artists to learn technique” and “a computer program ingesting millions of artworks from hundreds of thousands of artists, concerned only with making a profit.”

Copyright laws were written around the strengths and limitations of humans. They cannot simply be applied to superhuman AI tools that appear to emulate human creative processes (like “reading”) but which, in reality, process billions of artists’ works and then produce unlimited new works in almost no time.

Saying that AI learns as humans do, and that therefore nothing needs to be done, does not serve the public. Google’s fundamental misunderstanding highlights how difficult it will be for regulators to fit ownership frameworks dating back to the 18th century onto generative artificial intelligence.

Regulators should be cautious about disingenuous positions from AI companies like Google and keep one thing in mind: copyright laws are for protecting society, not individuals or companies.

An AI system is not a person. Machine learning has no more rights than a hammer. Google seems to have conveniently forgotten that an AI system never “does” anything on its own. Even with Google’s Bard, there is always a human prompting the AI to act, and it is humans whose conduct copyright law exists to govern in the first place.

It is good that governments are trying to see the big picture as they contemplate drafting new laws or adjusting existing ones. But it is crucial that they don’t get caught in a web of mischaracterisations that paint these companies’ programs as just super-powerful humans.

AI might appear to “read” and “learn” just as humans do, but it is, and will remain, a tool. Any new framework for copyright law should reflect that. Everyone should watch this debate closely, because it will have far-reaching implications for creative work for years to come.
