Ten years ago, every C-suite was under siege from an IT department begging its directors to be more careful with customer data so it wouldn’t be stolen.
Headlines about major cyberattacks made their way out of the geek press and onto the business pages of mainstream media, which only reinforced what the IT teams were warning about.
As a result, new regulations were introduced, key business processes were reframed and IT budgets were increased.
Change took a while to have an effect, but by the start of this decade most serious businesses had a good grasp of the dangers of cyberattacks and the importance of keeping watch over their precious data.
Then OpenAI released its “ChatGPT” system late last year and everyone lost their minds.
Suddenly, those same serious companies were salivating over the prospect of artificial intelligence (AI) models cutting headcount to save on labour costs. They assumed there were huge risks in being the last company in their sector to catch the boat, and that fear spooked many CEOs into engaging third-party AI companies.
Just pause here and recognise the regressive mindset that this sudden shift represents.
It took years for companies to understand that their data was valuable and deserved better protection. Yet after just a few months (and a lot of social-media fireworks), everyone was happy to package up their most valuable intangible asset and ship it to an AI company on a whim.
Obviously, each engagement with an AI contractor to build a model for a company, or simply to have access to a model, should be wrapped in a thorn bush of NDAs, ethics agreements and legal red tape. And from a basic business perspective, it would be highly unwise for that AI contractor to do anything nefarious with the data being entrusted to them.
But think about it. AI tools use all the data entered into their system to enhance the model’s performance. Even if the data is “deleted” after the contract has been fulfilled, can you really be sure the lessons learned by the AI are deleted too?
It makes no sense to have cybersecurity policies to protect data, only to turn around and send all that data to another company, or enter it into a publicly accessible AI tool, on the vague promise of earning a bit more revenue.
Was everyone joking when they claimed to understand the risks that cybersecurity guards against?
Maybe the disconnect arises because data, as an intangible asset, rarely appears on balance sheets, so it can be tough for the average CEO to visualise its value. As a result, cybersecurity policies tend to be hand-wavey at best and reactive at worst. The widespread carelessness with which companies hand their precious data to AI contractors confirms this attitude.
But it gets worse.
Companies have funnelled millions into training staff to respect their individual role in keeping data secure. After all, a business can have the best cybersecurity systems in place, but it’s all for naught if an errant email finds its way into the wrong hands because of an employee slip-up.
The arrival of ChatGPT was a good test of whether people had absorbed the warnings about digital hygiene. Unfortunately, it’s not just the CEOs who aren’t listening.
For example, earlier this year employees in Samsung’s semiconductor business were quick to use ChatGPT to assist with their work.
The engineers made the not-unreasonable assumption that no one outside their chat session could see what they typed into the AI. So they pasted in sections of source code from unreleased Samsung products for debugging purposes. They also entered confidential meeting notes to help build slide decks for an upcoming internal presentation.
Little did they know that although their session with ChatGPT was private, it certainly wasn’t secure. Perhaps if they’d read the terms and conditions (who does?) they would have noticed that information typed into ChatGPT may be fed back into the AI model to help “train” it.
That means, should some future user of ChatGPT ask the bot pointed questions about semiconductor technology, the answers they receive may draw directly on Samsung’s confidential information.
Samsung gave up some of its most precious intangible assets for an extra bit of productivity.
One would have thought senior tech engineers might avoid simple mistakes like this. After all, the first lesson of the internet is to never type anything into any website that you wouldn’t want to see on the front page of the New York Times.
But the reality is that even the most talented among us still struggle to grasp the true value of intangible assets like data.
It seems to be disturbingly easy to forget the years of corporate messaging inculcating deeper respect for cybersecurity and data. The moment we saw dollar signs over the word “AI,” we threw out all those lessons to chase some hypothetical profit from a new technology.
Either data is an intangible asset, or it’s not. If it is, then surely it deserves the same care as would be applied to a trade mark or brand. Wouldn’t it be horrible to discover that a third-party manufacturer you had used for years had quietly registered your brand for themselves in their home country?
Well, something similar might be happening right now with the company you hired to build your AI model, or with the data you are feeding into an AI platform.
It is vitally important with any new technology to keep an eye on data as an intangible asset. The last thing you want is to wake up one day and see your crown jewels disappear because a marketing pitch convinced you to hand your data over to an untrusted partner.