Humans in the loop?


By Jake Plenderleith, 17 November 2025

Can AI think?

It’s a question currently occupying the minds of leading scientists and academics engaged in AI development and analysis.

The answer depends on who you ask. Some claim that AI has already crossed the threshold of human-like cognition. Others, while acknowledging AI’s exceptional information processing ability, are unconvinced about its potential for aping human thought.

But there is growing evidence that what AI is now capable of doing is something like thinking. It is hard to be sure – not least because how large language models actually work is still poorly understood.

Regardless, even if a thinking AI is not quite here yet, it is possible it could get there in the future. The consequences, should AI prove capable of cognitive functioning anything like that of the human brain, would be stupefying.

Such an AI model would quickly go beyond carrying out the more mundane, time-consuming, repetitive tasks once assigned to it and assume a new role, the nature of which is hard to define.

At that point, an AI model might cease to be a passive crutch on which we lean for support and might just become a more active participant in the workplace – albeit one that won’t get sick or take annual leave. 

Human input

All of which might, to those accustomed to using AI for more menial, prosaic purposes, seem far-fetched, even risible. 

Indeed, there is often an element of schadenfreude in reactions to AI mistakes, a barely concealed delight when an AI model – which, with its speed and efficiency, seems to make a mockery of human effort – trips up over something simple.

In this respect, AI makes it easy for us. None of us would struggle to find examples of AI howlers (the chatbots dispensing bad, even illegal, advice, or the reams of so-called AI slop that litter the internet).

Such errors are often cited by those who claim that AI is nothing but a gimmick. But AI’s headline-grabbing mistakes are misleading, because such blunders are in fact atypical. More often than not, AI gets it right. 

This is clear when we look at how AI affects workplace tasks.

One programmer, a self-confessed former AI-sceptic, recently wrote that tasks that once took him a month now occupied only a single evening. The AI models he used could ‘digest, in seconds, the intricate details of thousands of lines of code’.

The programmer’s conversion from sceptic to believer is instructive, in that he took up AI use ‘fearing that if I didn’t I would fall behind’.

Left out of the loop

The programmer’s confession goes to the heart of current AI rhetoric. Others in his field were using AI, and he didn’t want to be left out.

The desire not to be left behind might explain the behaviour of those who are a little too eager to criticise AI models. Might it be that some of the mudslinging in AI’s direction is in fact a coping mechanism born of the recognition that AI is actually rather good at what it does, and is getting better?

The balm proffered to those concerned about AI development is the idea of the ‘human-in-the-loop’: the principle that AI models will have, at some point in their functioning, the benefit of human input or oversight. Such oversight ensures that any errors can be corrected, statements can be checked for factual accuracy and the model’s overall performance can be measured.
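To make the idea concrete, here is a minimal, purely illustrative sketch of what a human-in-the-loop checkpoint might look like in software. The function names and the review step are hypothetical, standing in for whatever model and workflow an organisation actually uses:

```python
# A purely illustrative human-in-the-loop checkpoint (hypothetical names).
# An AI drafts an answer; a human reviews it before it is used.

def generate_draft(task: str) -> str:
    # Stand-in for a call to an AI model producing a first draft.
    return f"AI-generated draft for: {task}"

def human_review(draft: str) -> str:
    # The human oversight step: approve the draft or supply a correction.
    print("Review requested:")
    print(draft)
    verdict = input("Approve as-is? [y/n] ").strip().lower()
    if verdict == "y":
        return draft
    # The human corrects the error the model could not catch itself.
    return input("Enter the corrected version: ")

def run_task(task: str) -> str:
    draft = generate_draft(task)
    # The loop only closes once a human has signed off.
    return human_review(draft)

if __name__ == "__main__":
    print(run_task("summarise the quarterly figures"))
```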

We are still in the early stages of establishing exactly what a good human-in-the-loop role looks like. What is the nature of that human oversight? How thorough is it? At what stage in an AI process should it occur? How frequently? And how labour-intensive is it for a human to check, for instance, the accuracy of the numbers AI produces?

How to make it work

Already there are concerns that the prevalence of, and our reliance on, AI is blunting our ability to think for ourselves.

To put it plainly, just as AI is starting to look like it is ‘thinking’, we are thinking less.

For the human-in-the-loop to work, a great deal of thinking is going to be required on our part.

First, in order to spot that AI has got something wrong, one has to be sufficiently knowledgeable to recognise that an error has occurred in the first place.

Second, the purpose of the human-in-the-loop is to assist the AI in reaching the desired outcome. That ‘assisting’ demands critical-thinking skills and creative ingenuity.

Third, to ensure that the AI is working correctly, a human-in-the-loop will need to analyse what the AI produces and verify that it has done what it was tasked with carrying out.

In other words, no matter how well-functioning an AI model becomes, human input is still going to be required: not only to guide it at the beginning of a project, but to oversee it as it carries out the work and to check the quality of its final output.

A good example is the airline pilot who, though they may delegate much of the actual flying of an aircraft to the autopilot, is still overseeing that function and can step in at any time to override it.

However much we rely on technology, there is always a need for human intervention.

Which takes us back to the question of whether AI is thinking. Even if AI begins to ‘think’ in a manner reminiscent of human cognition, human supervision will remain critical.

As a species we have always rightly sought the highest reward for the minimum amount of effort. That is the story of human development: the slow, arduous slog of building and improving technology and, gradually, over time, making our lives easier.

AI is merely the latest chapter in that story. And as in the chapters that preceded it, human agency will continue to be the protagonist. 
 
