Is ChatGPT worth the hype?

As the AI revolution gathers pace, we will need to balance its enormous potential with a new generation of harms built around its power to predict, persuade and mimic. Meet Chat Generative Pre-trained Transformer, the advanced language model that can help you automate invoicing or make embarrassing mistakes.

15 Mar, 2023

ChatGPT fervour has few rivals among other cultural phenomena. Since its release to the public in November 2022, it has set a new record for the fastest application to reach 100 million users. And those users, including reviewers and detractors, have collectively produced 239 million articles and posts about the disruptive language-model technology. Unofficially, the chatbot is bigger than Elvis.

The bona fide user and media frenzy has been fuelled by familiar economic tropes: the first is the belief that tech innovation is inherently progressive, the second that technology fuels indefinite productivity growth. However, there is growing evidence that we need to lace our wide-eyed hope with a healthy dose of systems thinking and some unsettling, but widely available, data.

According to the McKinsey Global Institute, the current disruption caused by new exponential technologies is 300 times larger in scale and 10 times faster than the disruption caused by technologies during the Industrial Revolution in the 18th century. The impact of these technologies on society is estimated to be 3,000 times greater than in the past.

Even so, their impact on economic growth and productivity has been negligible for well over a decade. This is not least because companies like Google or Microsoft, which has invested US$13 billion in OpenAI, primarily innovate for profit (rather than for productivity). Microsoft is entitled to 75 per cent of the profits generated by OpenAI's products.

As the recent media hype around ChatGPT attests, the tech industry’s ability to generate interest in its profit-making ventures is unparalleled. It uses the very technologies it sells to understand how to tap into human curiosity and distract us from critically assessing its impact.

The ubiquitous ChatGPT will, for instance, deny any knowledge of a marketing ploy to make it famous. When it spews out false information or miscalculates your car loan repayments, as it may, its digital poker face will remain unchanged. If caught, it will apologise profusely, eliciting sympathy. This could be the reason OpenAI used the marketing slogan 'Too dangerous to release' to launch ChatGPT's predecessor, GPT-2.

 

How ChatGPT works

GPT-3, on which ChatGPT is built, is trained to generate human-like responses to users' prompts by analysing massive amounts of text from books and the internet. Much like an autocomplete function, it generates each subsequent word in a response by estimating the likelihood of certain words appearing after others – patterns it has internalised through its training.

For instance, when asked about the ways to restrict financial losses in a company, it quickly produces blocks of text discussing budgeting, forecasting and auditing. It does so not because it understands the question, but because it has acquired knowledge that certain terms like ‘budget’, ‘forecast’ and ‘audit’ are likely to be used with each other, and with other words such as ‘financial losses’ or ‘business enterprise’.
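The autocomplete analogy above can be sketched with a toy word-pair (bigram) model. This is a drastic simplification – real GPT models use transformer networks trained on billions of tokens, not simple word-pair counts – and the corpus and function names here are purely illustrative:

```python
from collections import Counter, defaultdict

# A hypothetical miniature corpus standing in for the books and
# web text a large language model is trained on.
corpus = (
    "the budget informed the audit "
    "the budget reduced losses "
    "the budget improved the forecast"
).split()

# Count which word tends to follow which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most likely word to follow `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "budget" – its most frequent successor
```

The model has no idea what a budget is; it simply reproduces statistical regularities in its training text – the same principle, at vastly greater scale, behind ChatGPT's fluent but sometimes inaccurate answers.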

Ask ChatGPT a conceptual question and it gives you sentences that may be well phrased, but with questionable factual accuracy. The system is trained to model language, not develop knowledge. Put simply, ChatGPT does not understand the concepts about which it generates linguistic responses, but it can sound like it does.

An accountant’s clients may not notice or care that a newsletter is written by a deepfake Dostoyevsky. Instead, they will care if their automated invoices are correct and timely. They will care if the accountant’s communication is clear and accurate. And alongside fairly rudimentary automations, accountants need the ability to trust AI to accurately analyse large chunks of data, make financial predictions or identify trends.

A recent global survey by KPMG and Australian researchers found that more than 50 per cent of people believe AI cannot be trusted in the workplace, with confidence lower in western countries. A majority, 71 per cent, believe regulation is necessary, and most named national universities, research institutions and defence organisations as the bodies they have most confidence in to govern, develop and use the tools. Government and business are the least trusted to develop, use and govern AI.

The mistrust is likely to wane as more users start to engage with the chatty bot. Our brains' critical design flaw is to anthropomorphise technology, just as we do with anything else we create – from stuffed animals to nicknamed cars. Nowhere is this more pronounced than in the way we project human features and hopes onto all things artificial and supposedly intelligent.

If you ask ChatGPT to describe itself, it will come up with ‘an engaging and convenient way for people to interact with technology’. It is not a search engine or a junior analyst who is having a real conversation with us. But our interaction with it might fool us into thinking so, as our desire for human-like interaction can trigger an emotional response. Relationships, after all, are the most transformative technology we have.

Coincidentally, the first iteration of GPT was released on Valentine's Day 2019. Its successor, ChatGPT, carries the promise of a perfect companion – part pet, part servant, part secret weapon – who happens to be a better conversationalist than your family dog. This scenario has all the ingredients of a great and potentially disastrous love story.

 

ChatGPT and productivity

Tristan Harris and Aza Raskin, founders of the Center for Humane Technology, cite a core problem of humanity identified by biologist EO Wilson – the combination of 'palaeolithic emotions, medieval institutions and God-like technology' – as opening a gap between the complex issues technology is creating and our ability to deal with them.

Technology, Harris and Raskin believe, is making us less – not more – productive and able to deal with reality. Synthetic relationships have already begun to transform us due to AI’s ability to mimic and persuade, and our human inability to switch off emotions.

The outcome of the AI-human partnership will be partially determined by our starting ideological position – be it an optimistic or a pessimistic one. Do we believe AI business models can be trusted to close the complexity gap created by tech-innovation? Can we curtail AI’s gigantic CO2 emissions? Should societal well-being take precedence over technology-driven productivity gains, which have been at their lowest level since the 18th century in some OECD countries? And, last, but not least, can we trust AI to shape our culture?

The publishers of a slew of academic journals say no, banning the use of ChatGPT in submitted articles. Others have turned their attention and their use of the AI tool to the question of productivity in research, treating it as a low-cost research assistant (however error-prone).

So far, the revolutionary cyber scribe sounds more like a mega-fibber that generates human-sounding text than a Noam Chomsky in a cyberchat. Chomsky himself called the chatbot a high-tech plagiarist, questioning the ability of labour-saving technologies to replace people.

 

ChatGPT, sustainability and regulation

A bigger elephant in the ChatGPT room is sustainability – how power-intensive AI systems are. A Karma Metrix analysis by Chris Pointon estimates that ChatGPT could emit about 3.8 tons of carbon dioxide equivalent (CO2e) every single day. Training ChatGPT alone produced emissions equivalent to a 700,000-kilometre car ride.

The traffic from AI’s exponentially growing user base can only exacerbate, rather than help us solve climate problems. The Internet, as Pointon notes, is the largest coal-fired highway on the planet. So long as our human ecosystem is worth less to us than AI-driven entertainment, profit or productivity, prospects for improvement are going to remain grim.

So, what kind of economy can be built on the business-AI machine alliance? For starters, it might not be a sustainable one. Over the next few years, there are going to be thousands of start-ups trying to profit from chatbots and actively shaping humanity’s socio-economic transformation. In some cases, AI agents might become primary economic drivers, a business panacea.

Either way, regulation will need to match the speed at which the technology is moving. The question of who will regulate AI is a pressing one, as emerging AI models will need to adhere to international principles of economic and climate justice. Yet most AI regulation is likely to be sovereigntist in practice.

So far, only China and Europe have been hard at work trying to rein in artificial intelligence by tightening their grip on the technology sector and the way its products are used.

For now, we’d be wise to at least refrain from pronouncing ChatGPT’s name in French, as its very transcription – ‘sha-jeu-peh-teh’ – translates to a serious threat: “Cat, I farted”.
