Lies, Damn Lies and ChatGPT

Nothing seems to be going right

This article really should be required reading for anyone with “C” and “O” in their title.

Artificial intelligence (AI) is not – and likely will never be – a full replacement for humans. Story after story is emerging warning companies that ChatGPT is not smart enough to be trusted 100% of the time.

The intangible assets of your human employees are still needed to help drive AI systems, and to pick up on their mistakes.

For example, earlier this month a lawyer with 30 years of experience was found to have used ChatGPT to help write a brief. That might have been acceptable in other circumstances, but the lawyer’s goal was to cite precedents showing that his client’s suit shouldn’t be dismissed on the grounds that the statute of limitations had expired. So he asked ChatGPT to find some historical cases.

The problem was that none of the legal cases he submitted to support his argument were real. ChatGPT invented every single one of them.

This isn’t the first time AI systems have offered up false information. ChatGPT has invented books and studies that don’t exist, publications that professors never wrote, fake academic papers, false legal citations, non-existent retail mascots and technical details that don’t make sense.

At one level, this is to be expected since ChatGPT isn’t “thinking” at all. ChatGPT is a Large Language Model (LLM) and, as such, it lacks the concept of “truth.”

Let me explain.

Very simply, when you ask ChatGPT a question it strings together tokens (fragments of words) that its algorithm statistically determines to be the best option for the next portion of the response. ChatGPT is essentially one giant autocomplete system, stitching patterns from its training data into an intelligible output.
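To make the autocomplete point concrete, here is a toy sketch in Python. To be clear, this is not OpenAI’s code and the probability table is invented purely for illustration (a real LLM learns billions of parameters rather than a lookup table), but the principle is the same: pick the statistically likely next token, with no notion of whether the result is true.

    import random

    # Toy "next token" probabilities. In a real LLM these come from billions
    # of learned parameters; here they are made-up numbers for illustration.
    next_token_probs = {
        ("the", "statute"): {"of": 0.9, "was": 0.1},
        ("statute", "of"): {"limitations": 0.95, "frauds": 0.05},
        ("of", "limitations"): {"had": 0.6, "has": 0.4},
    }

    def generate(prompt, steps=3):
        tokens = prompt.split()
        for _ in range(steps):
            context = tuple(tokens[-2:])        # look only at the most recent tokens
            options = next_token_probs.get(context)
            if not options:                     # nothing statistically plausible to add
                break
            words, weights = zip(*options.items())
            # Pick a likely continuation. Nothing here checks whether it is accurate.
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the statute"))              # e.g. "the statute of limitations had"

Run it a few times and it always produces fluent-sounding output; whether that output happens to be true is decided by a dice roll, not by checking any fact.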

In a way, ChatGPT is a bit like having Google summarise the top search results without saying where the information came from. Worse, ChatGPT has no mechanism for establishing whether its sentences are accurate, only that they “fit” the user’s question or query. Even OpenAI (the maker of ChatGPT) can’t fully explain why its model said a particular thing.

Said differently, if the by-product of ChatGPT’s normal operation is making things up and those falsehoods hurt someone – but OpenAI continues to make ChatGPT available anyway – that comes very close to being negligence.

It’s not good enough that ChatGPT is right 95% of the time. After all, you don’t want to be in the “unfortunate 5%” who have their life ruined because ChatGPT stitched a sentence together from some ironic thread about a person with a similar name to yours.

OpenAI could argue that responsibility for misuse rests with the user. But any victim could then argue that the user could not have caused the harm without OpenAI’s tool. The bottom line is that the risks of using AI systems without human oversight can be serious.

That’s why CEOs and managers must read this article.

Many leaders incorrectly assume AI is ready to put customer support teams, trainers, documentation writers and other low-level employees out of work – today.

Because they think there is genuine intelligence in something like ChatGPT, these leaders will, consciously or unconsciously, devalue the skills of humans doing the “same” job. This is already leading to redundancies.

But if ChatGPT and other AI systems make things up all the time, and there’s no way for these systems to double-check themselves, then it’s probably not a good idea to put these machines in charge of customer service just yet, for example.

Humans still have an important intangible asset that machines don’t have: a mind.

When that mind is coupled with industry expertise (another key intangible asset), internal policies, guidelines, good training based on quality materials and a slew of well-designed incentives, machines will continue to work best alongside humans as a team. Humans are the ones who know how to swing the hammer, no matter how complex the hammer becomes. CEOs and boards should remember this.

Like any tool, if the operator is not properly trained on it and does not understand its attributes and limitations, the tool will not function properly and may even cause harm.

Deploying AI tools without human oversight is a bit like trying to solve Plato’s Cave with the Wisdom of the Crowds and expecting the one blind guy to learn how to read shadow puppets.

While it is getting impressive, AI is still artificial (fake) and cannot be trusted. Real intelligence (people) must be checking the output at every step. If we forget that humans still have important intangible assets, things could get very messy, very quickly.

Having said that, focusing on the limitations of the current batch of LLMs would be like pointing at the Model T Ford and declaring that automobiles will never catch on. There may come a day when machine learning is 100% reliable and can fully replace human actors.

But that is not this day.

Originally published in The Business Times
