The human side of AI
Hype and fear of missing out are driving rapid AI developments, but we should be thinking more about our own behaviours and thought processes to really benefit from AI

Artificial Intelligence is, seemingly, nearly everywhere now. But distinguishing hype from reality remains challenging. Here I cover a few recent (non-technical) AI developments or issues that look beyond some of the hype.
On the one hand, AI was a critical aspect of both the 2024 Physics and Chemistry Nobel Prizes. AI is also increasingly aiding pharmaceutical and healthcare research & development. And more and more businesses are adopting AI systems.
The proposed half-trillion-dollar investment in US AI infrastructure is a sign of the anticipated economic significance. Meanwhile, DeepSeek’s surprising (alleged) cost and computational efficiencies challenge the view that bigger models and more computing power are better, and that the US will remain the unrivalled AI leader.
In New Zealand, a small AI Forum survey of organisations found that two-thirds are using AI. The IT, construction and manufacturing sectors had the highest levels of adoption, although most sectors are taking up AI platforms. Half reported a “positive financial impact”, and nearly all (96%) found worker efficiency improved. Job losses were reported to be minimal.
In fields like conservation, AI is being rapidly adopted at large and small scales to improve pest management. Automated visual and audio identification of predators, pests, and endangered species is providing a step change in the effectiveness and efficiency of pest control.
A recent paper by two US economists suggests that AI may have disruptive effects on labour markets similar to those of “general purpose technologies” such as steam power and electricity, though such impacts take decades to unfold. Their assessment is confounded by the fact that they really consider digital developments in general, rather than AI specifically, so the impacts on the workforce remain speculative.
But do expectations exceed reality?
An analysis by Goldman Sachs suggests that current investments in AI may yield only modest financial returns, since in business settings they often aren’t used to reliably solve complex problems. Generative AI appears to have improved the productivity of some tasks, but has had limited effect on how we work.
“Jim Covello, [Goldman Sachs’] head of global equity research, argues that unlike transformative technologies of the past – such as e-commerce, which immediately offered cheaper solutions to existing problems – AI remains prohibitively expensive while struggling to handle even basic tasks effectively. The firm estimates that in the next decade, AI might boost U.S. productivity by just 0.5% and GDP by less than 1%, a far cry from the revolutionary impact its proponents promise.” [Source: Quartz 9 December 2024]
This assessment may change if DeepSeek’s approach, or those of other AI companies, does enable much cheaper computation that can be applied to more complex problems. But what also shouldn’t be neglected is the need to have the right human capabilities to use AI well. Organisations can also currently struggle to scale AI to get the most benefit.
Technology commentator Vaclav Smil has also called out the Silicon Valley hyperbole of AI “saving the world”, pointing to the current limitations of AI models and applications:
“When Marc Andreessen, a general partner of a leading US venture capital firm, says that “AI will save the world” and that it “can make everything we care about better”, does he mean that, or does this hyperbolic claim apply only to information management? If the former, then I urge readers to make their own short lists of such “save” and “care” measures and ask what AI will do for them in five or 10 years. My list, with inclusions guided by their overall potential to save lives, would include the complete elimination of nuclear weapons, the economic development of Africa and the end of malnutrition. I do not see the vaunted large language models (LLMs) and generative AI (Gen AI) triggering fundamental transformations in society, crime or politics. And if not, what then is that “everything we care about” which AI will do for us? Writing personalised rejection letters or drawing cartoons in Picasso’s style?”
To Smil’s list you can add reducing environmental impacts. Significant increases in greenhouse gas emissions are linked to the growing development and adoption of artificial intelligence, with some companies eager to recommission old nuclear power plants or strike deals with coal or gas plants. Longer term, AI-associated emissions may decrease as (or if) computational processes improve, but currently “saving the world with AI” is pure hype.
In a similar vein to Smil, Yann LeCun, a leading AI researcher, challenges linear thinking about artificial intelligence:
“The idea that somehow intelligence is kind of a linear scale is nonsense,” he said. “Your cat is smarter than you are on certain things, and you're smarter than it on certain things. A $30 gadget that you can buy, that can beat you at chess, is smarter than you at chess. So…the idea that somehow it's a linear scale, that at some point it's going to be an event when we reach AGI, is complete nonsense. It's going to be progressive.”
While LeCun thinks eventually AI systems will be “intelligent” in many ways that humans are, there won’t be a sudden tipping point. He also suggests that generative AI will only be relevant for another few years, with other methods replacing it.
AI FOMO
Proponents of AI see endless opportunities, as well as a fear of missing out on both technological advances and adoption. Since 2023 tech companies have rapidly retreated from acknowledging the importance of discussing safety and introducing safeguards, even as speculative and fanciful existential risks of AI overshadowed more pressing concerns.
While Europe remains more cautious, the US and UK are concerned that their ambitions for AI dominance will be hindered by regulations that seek to promote safety, equity, and ethical uses.
“The AI future is not going to be won by hand-wringing about safety.” US Vice President, 11 Feb. 2025
As biochemist Paul Berg predicted in 2008 in relation to new technologies:
“… there is a lesson in Asilomar for all of science: the best way to respond to concerns created by emerging knowledge or early-stage technologies is for scientists from publicly-funded institutions to find common cause with the wider public about the best way to regulate — as early as possible. Once scientists from corporations begin to dominate the research enterprise, it will simply be too late.”
The rapid retreat of AI firms from their “commitment” to improving understanding of, and addressing, AI risks illustrates this.
The safety, trust, and ethical issues associated with AI currently get the most attention. These discussions tend to focus on the developers and the technology. Less consideration has been given to the users, but this is starting to change.
Move beyond “intelligence” and think about achievement and human cognition
Much of the current AI discussions focus on the technology, rather than us – our ways of thinking, and the practices and systems we currently use.
I think that we’ll need to focus less on intelligence and more on “achievement.” As with humans, being very intelligent doesn’t necessarily mean being more capable, consequential, or acting more appropriately, reliably, and ethically. How good are AI systems at achieving their goals, and how confident can we be in how they do that?
There is growing concern about “cognitive offloading”, where critical thinking about how AI is used, and about the rigour and robustness of its “answers”, diminishes. Microsoft researchers report this too. But their finding is based on self-reporting rather than independent clinical assessment, so it needs further study.
“Metaphorical offloading” (my term) could be a problem too. Recent research illustrates this through the metaphors people use to describe AI. Cheng et al., in an as-yet non-peer-reviewed paper, found that when people labelled an AI application a “friend”, “assistant”, or “genie”, their levels of trust were higher even when that trust wasn’t justified. In contrast, and not surprisingly, labelling it as just a “tool” can generate more circumspection about accuracy and reliability. Women and non-binary people, non-white participants, and older people were more likely to use the more personal terms. Males tended to use more functional terms, like “search engine” or “synthesiser”.
The researchers also found that “anthropomorphic and warm perceptions”, which ascribe human-like qualities to the technology and can improve trust and adoption, have rapidly increased. But the authors noted that studies elsewhere are needed to establish whether this is just a US phenomenon.
As scientists are figuring out, effective use of tools like AI requires asking the right questions, and that can still be challenging:
“When it comes to molecules, we are entering a world where we can do almost anything we want,” said Gevorg Grigoryan, chief technical officer and co-founder of Generate Biomedicines. “But to translate to clinic, we need to know what questions to ask. There’s a lot of biology we don’t understand, and we often don't know what questions to ask this magical molecular compiler.” [Source: Nature Custom Media]
The decline of human consultants?
One final issue. Many consultants, like me, may fear losing their livelihoods thanks to developments such as OpenAI’s Deep Research. Why pay a consultant handsomely when an AI agent can write “similar” reports for a fraction of the cost and time?
Not so fast, potential clients: don’t be deceived by the “Deep” label. Sure, some consultants just compile existing information with no real insights or other added value. However, at the moment at least, such programs lack judgement and critical analysis; they don’t question assumptions or fairly take account of different perspectives. They often fail to understand what is important, can miss key recent developments, can generate false information, and fail to distinguish reliable from unreliable sources.
So, I hope that I have a few years of paid, interesting, and stimulating analytical projects to look forward to. I also hope that we can make good decisions, for the many not the few, about the control and use of our rapidly improving computational tools.