What did we learn about AI in 2023?


This time last year the business world was panicking about the introduction of something called “ChatGPT” and its consequences for, well, the human race.

Even in those early few months, the breathless articles sounded a bit too hyperbolic, but they certainly got the clicks. At least they convinced CEOs to begin thinking through what might happen to their companies should artificial intelligence (AI) systems disrupt their status quo.

A year down the track, what have we learned about the consequences of AI and where might this exciting technology trend be leading?

The biggest lesson about AI is that it isn’t, actually, artificial intelligence. Not yet, anyway. Unless the big AI companies have some secret software looming behind closed doors, all we’ve seen until now is a series of machine learning tools and large language models (LLMs).

While these are certainly impressive, LLMs are not technically “thinking” like humans. LLMs are wicked fast at predicting which word should be placed next so that a sentence makes sense, but this isn’t how humans process information. The machines are still machines.
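To make "predicting which word should be placed next" concrete, here is a deliberately toy sketch in Python. A real LLM learns probabilities over tokens from billions of examples using a neural network; this hard-coded table is purely illustrative of the prediction step, not of how any production model is built:

```python
# Toy next-word predictor. A real LLM learns these probabilities from
# vast training data; here they are hand-written for illustration only.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt, steps):
    """Greedily extend the prompt by picking the most likely next word."""
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])        # condition on the last two words
        candidates = next_word_probs.get(context)
        if not candidates:
            break                          # no prediction for this context
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the cat", 4))  # → "the cat sat on the mat"
```

The point of the sketch: at no stage does the program "understand" cats or mats. It only ranks what word is statistically likely to come next — which is the sense in which the machines are still machines.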

That doesn’t mean AI is useless. Far from it. A lot of companies saw incredible value in tasking these AI tools to help them create all kinds of intangible assets such as high-quality content, improvements in big data analysis, better customer engagement, brand enhancement and even entirely new inventions.

Over the year, as the use of AI systems entered the mainstream, the effects of AI were felt at each level of the business world – including at the legislative level. As usual with technology, innovations quickly outrun the legal headlights.

Courts in the US and around the world are dealing with complaints from artists worried that AI companies are using their copyrighted works to “train” AI models without any compensation. But by the time the courts unpack the complaints it is likely to be too late. The major AI companies will already have stolen the protected works, trained their models and released the software to the public.

This leaves courts in the awkward position of responding to a new status quo rather than preventing it. The AI horse has already bolted, so to speak. Any legal decision about copyright will only put a bandage on the IP bleeding. So, the first big result of widespread AI is that life will be much harder for creatives from now on.

At the business sector level, the effects of AI might be a bit more positive.

For example, as we wrote in a previous article, the US Congress has introduced a bill (S. 1993) that would refuse to treat AI companies as “platforms” in the way social media companies are treated. This matters because, under current law, social media companies can’t be held liable for anything a user publishes on their “platforms.”

Allowing AI companies to have the same legal privileges would mean users can generate all types of false or defamatory content and the developers of these AI systems wouldn’t be held responsible. US lawmakers rightly thought this was unfair since AI systems do a lot of the heavy lifting for creative works.

The proposed law in the US (other countries are watching closely) treats AI systems as actively involved in the creation of content, not simply as tools. A user could still get in trouble for defamation, but so might the AI company. That raises a big question about whether it’s worth the risk for AI companies to keep offering their systems to the public.

The upshot of this law change? It could be an important green light for companies. After all, if AI systems are legally recognised as active creators, it may become possible to copyright AI-generated innovation, and that could have lucrative implications for businesses across many sectors.

On a business level, dozens of anecdotes point to companies already deploying LLMs and other AI tools to optimise their processes. Some business workflows have reportedly been completely overhauled, cutting staffing requirements by 30-50%.

Other firms have gone the other way and increased headcount, since humans are still (for now) required to operate the AI. The logic is that more people driving the systems means higher productivity. So far, this seems to be accurate, but we will see how long the arrangement lasts.

At the individual worker level, employees are using basic tools like ChatGPT to speed up their tasks so they can get back to doing more important – and valuable – work. One of the most common personal uses is as a “better Google,” since asking ChatGPT certainly beats trawling search engines for hours on end.

However, we also quickly learned that generative AI models like ChatGPT are trained on material drawn from across the internet. And, as you might know, the internet contains plenty of false information, which means those falsehoods end up embedded in these AI models alongside the true information.

This led to some curious situations.

In the first few months of 2023, when AI models like ChatGPT were new, people who really should know better were using AI-generated information without checking its veracity. As you might expect, they got themselves into terrible legal messes.

(As an aside, if you’re worried about AI hallucinations and false training data, there are ways to mitigate these risks: your prompt must contain your version of the truth for the output to be truthful. Don’t ask the AI to seek the truth within its training data. It won’t find it, and that’s not the point of an LLM anyway.)
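The "put your truth in the prompt" idea can be sketched in a few lines of Python. The helper function and the sample facts below are hypothetical, invented purely for illustration — the point is the pattern of embedding trusted facts in the prompt rather than relying on the model's training data:

```python
# Hypothetical sketch of "grounding" a prompt: supply the facts yourself
# instead of asking the model to recall them from its training data.
FACTS = [
    "Our Q3 revenue was $4.2m.",            # example fact, invented
    "The product launch is on 12 March.",   # example fact, invented
]

def build_grounded_prompt(question, facts):
    """Embed trusted facts in the prompt so the model answers from your
    version of the truth, not from whatever it absorbed in training."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using ONLY the facts below. If the answer is not in the "
        "facts, say you don't know.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("When is the product launch?", FACTS))
```

The resulting prompt would then be sent to whichever AI tool you use; the instruction to answer only from the supplied facts is what keeps the model from improvising.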

As for the transformation this new technology will have on society, AI appears to be following much the same trajectory as the internet did in the late 1990s.

People said back then that the internet would be a force for good and make everything better. But in the end, while the internet had its upsides, most workers weren’t disrupted.

AI will probably have about the same impact. It will create some efficiency gains at various layers of the business world making a few companies insanely rich, but the rest of us will just use it to create cat images (or anything else that comes to mind).

Nevertheless, AI is already helping to shine a brighter light on the importance of intangible assets since a lot of them will be made – or enhanced – by AI. We’ll certainly be watching this space in 2024.
