Apr 5, 2023

The AI threat is to our privacy, not our lives.

I need your clothes, your motorcycle...and your privacy. 

Cliches abound when it comes to discussions about the dangers of rampant artificial intelligence (AI). Whether it’s images of a mechanoid Arnold Schwarzenegger tearing up downtown Los Angeles in The Terminator, or the ominous red ‘eye’ of HAL in 2001 ejecting a few unwitting astronauts toward an early astral grave, we’ve just about heard and read it all. 
However, what was once the stuff of science fiction is now real. Very real. AI is a technology we quite simply will have to integrate into our businesses and our lives. But despite what the naysayers and the clickbait headlines suggest, that doesn’t have to be something to fear. 

So where are we up to with AI?

Artificial intelligence and machine learning have been around for some time, but ChatGPT has completely changed the game thanks to its sheer utility. It’s a chatbot like Siri or Alexa, but it’s far smarter and can converse in a natural way. 

The technology behind it was first developed by San Francisco-based tech firm OpenAI, which released the original GPT (generative pre-trained transformer) model back in 2018. The ‘Chat’ part is just the name of the app. GPT itself is an AI language model that predicts the next word based on the sequence of words it’s been given.
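
For the technically curious, here is a rough sketch of that ‘predict the next word’ idea. It uses the openly available GPT-2 model (an earlier, much smaller relative of the models behind ChatGPT) through the Hugging Face transformers library, so treat it as an illustration of the principle rather than ChatGPT’s actual code.

```python
# A minimal sketch of next-word prediction using the small, public GPT-2 model.
# This is an illustration of how GPT-style models work, not ChatGPT itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I need your clothes, your motorcycle and your"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every word in its vocabulary at each position
    logits = model(**inputs).logits

# Take the scores for the word that would come next and print the five likeliest guesses
next_word_scores = logits[0, -1]
top_five = torch.topk(next_word_scores, 5).indices
print([tokenizer.decode(token_id) for token_id in top_five])
```

Run on a longer prompt, the same loop of ‘score every possible next word, pick one, repeat’ is what lets these models produce whole paragraphs of fluent text.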

The ChatGPT app was launched in November 2022 to relatively little fanfare but quickly became one of the most popular internet applications ever developed. 

Its raw capabilities are extraordinary on their own, but what makes it remarkable is its ability to analyse both the content and context of what it’s told. This enables it to hold flowing conversations rather than pump out generic, one-off responses. It has also been trained on massive datasets of text, from which it has learned about culture, language and general knowledge topics. 

When it’s put to work it’s no slouch either. ChatGPT can write code and articles, and has even passed law exams. It already has Ivy League universities running scared of plagiarism, and coders and journalists fearing for their future job prospects. 

However, as yet there’s no sign of it trying to take over the US military in an attempt to wipe us all off the face of the Earth. So what’s the real problem? 

Well, in addition to plagiarism and the impact on jobs, many are worried about its use of our personal data. Italy recently became the first Western country to ban ChatGPT for that very reason. AI needs training data, information fed into it so it can learn, to refine its abilities. But the Italian data-protection authority claimed there was no legal basis to justify what it called ‘the mass collection and storage of personal data for the purpose of “training” the algorithms underlying the operation of the platform’. OpenAI has said it complies with privacy laws. 

Indeed, privacy has been one of the biggest issues raised in recent years about how we use our other go-to tech, such as social media and search engines, and ChatGPT has many of those same terms and conditions built into its user agreement, something many fledgling users may not yet realise. 

All this also came not long after some of the biggest names in tech, including Elon Musk, called for the development of powerful AI systems to be paused, due to fears that the race to improve them was spinning out of control and that the consequences were not yet fully understood. 


So, do we have anything to fear? 

It depends. Musk and the other tech boffins could be right in the sense that the speed of this development hasn’t given us a chance as a society to take stock. That doesn’t mean ChatGPT and similar technology is going to be a bad thing, certainly not Hollywood-bad, but it does mean we’ll have to think about how we use it and what we use it for. 

The way it gathers and uses our data is also a very real concern. By its nature it could be given access to vast amounts of information, and that raises serious problems for our privacy. So far, there have been no reports of ChatGPT telling the Italian data-protection authority ‘I’ll be back!’, but who knows what’s around the corner? 

Going forward, strong regulatory measures will be needed to ensure public data is protected if tech like this is to reach its full potential without causing undue harm. 
