This isn't AI's Asilomar moment
Calls for a pause on some AI developments seem too little too late when compared with the 1975 debate on genetic modification
(Image created using DALL-E 2)
Generative artificial intelligence applications, like ChatGPT (now powered by GPT-4), have been impressing many over the last few months: answering questions and queries like a human, crafting essays, and creating images and videos. These tools are being quickly adopted as part of existing services and applications, such as search engines, content creation, and research.
Inevitably, there has been hype and hysteria: “Artificial General Intelligence (AGI) is just around the corner”; “Many white-collar jobs will disappear.”
Still, ChatGPT isn’t very good at solving Wordle.
But recent advances have led to a prominent call to pause some AI experiments.
Is ChatGPT revolutionary?
There have also been some more composed analyses and assessments. Yann LeCun, a leading AI researcher and Meta’s Chief AI Scientist, observed that ChatGPT isn’t that revolutionary: it’s built on some older techniques, but it is well put together.
Gary Marcus, another leading figure in AI, has argued that deep learning (which is how GPT [generative pre-trained transformer] systems operate) provides “rough and ready results” because it takes a statistical approach to pattern recognition.
Ali Minai likens ChatGPT to a “stochastic parrot” – providing rote responses, albeit with some originality. It is trained on text and limited human feedback, and because it lacks direct experiences of the real world he says it can’t be considered intelligent. ChatGPT’s inferences are “sporadic, inconsistent and, on occasion, absurd”, and Minai also notes that it has an “inherently ambiguous relationship with the truth.”
Chomsky and colleagues (writing behind the New York Times paywall) point out that ChatGPT and other machine learning approaches are good at description and prediction, but do not suggest the causal mechanisms or physical laws that underpin their results.
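To make the “statistical pattern prediction” point concrete, here is a deliberately toy sketch in Python (my illustration, not how GPT-4 actually works): the “model” is nothing more than a table of learned next-word probabilities, and generation is just sampling from that table, with no step that checks the output against reality.

```python
import random

# Hypothetical, hand-written probabilities standing in for what a trained
# language model would estimate from its training corpus.
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Mars": 0.03},
    "The moon is made of": {"rock": 0.60, "cheese": 0.25, "basalt": 0.15},
}

def generate(context: str) -> str:
    """Sample a continuation purely from learned word statistics."""
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(generate("The moon is made of"))  # occasionally "cheese": fluent, but untethered from truth
```

Real systems use billions of learned parameters rather than a lookup table, but the basic character is the same: plausible continuations are produced from patterns in the training data, which is why the output can be fluent, inconsistent, and occasionally absurd all at once.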
AI concerns and dangers
What isn’t disputed is that ChatGPT is a significant advance. But it also poses three serious societal dangers:
Difficulty in distinguishing truth from falsity (or “hallucinations” in AI speak);
The potential for spreading dis- and misinformation; and
Outsourcing more human knowledge functions to unreliable and opaque systems.
Gary Marcus has also highlighted the error of focusing on “super intelligent” AI. In the short term he is more concerned about MAI (mediocre AI) than about AGI (artificial general intelligence):
Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world.
He notes that “power” (access), rather than intelligence, is usually the more important factor in how much influence an AI system has:
We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs [large language models], and what, if anything, we might do to stop them.
Call for a pause on some AI experiments
Concerns about how such forms of AI can be intentionally or unintentionally misused have prompted some computer scientists, and others, to sign an open letter calling on AI labs to “pause for at least 6 months the training of AI systems more powerful than GPT-4”. If they do not, the letter argues, governments should step in and introduce a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
Disagreements among signatories to the letter are already emerging, as are criticisms that the letter conflates hype with more realistic short-term advances.
Many probably agree with the sentiments of the letter. But six months seems an arbitrary period. And why just “giant AI experiments”, as the letter’s title suggests?
Some of the concerns about different types of artificial intelligence need to be addressed at earlier stages, not just when they scale up.
Copyright and intellectual property infringements for the images and text used in training datasets are already a concern, even for small-scale projects. The refusal of labs and companies to share their training data and model details is another existing issue that undermines trust and confidence. The letter is vague about what should and shouldn’t be examined.
Revisiting Asilomar
The plea for a pause reminds me of the 1975 Asilomar International Congress on Recombinant DNA Molecules. Then too, a call for caution (about genetic engineering) was raised by practitioners themselves and debated over several days at the Congress. It resulted in a voluntary moratorium on some types of laboratory experiments until additional safeguards were put in place.
Paul Berg, one of the leaders of the initiative, reflected on it in 2008:
Scientists around the world hotly debated the wisdom of our call for caution, and the press had a field day conjuring up fantastical 'what if' scenarios. Yet the moratorium was universally observed in academic and industrial research centres. Meanwhile, the public seemed comforted by the fact that the freeze had been proposed by the very people who had helped to develop the technology.
In addition to establishing a safety regime, he thought that the Asilomar Congress also helped build public trust. By having open discussions of the benefits and risks associated with the research:
… I feel that scientists were able to gain the public's trust — something that is now much more difficult for researchers working in biotechnology. Because some 15% of the participants at Asilomar were from the media, the public was well informed about the deliberations, as well as the bickering, accusations, wavering views and ultimately the consensus. Many scientists feared that a public debate would place crippling restrictions on molecular biology, but the effort encouraged responsible discussion that led to a consensus that most researchers supported.
The AI open letter seems to suggest something more constrained, focussing only on certain aspects. And how publicly open the discussions will be isn’t clear, though these days blogs, Twitter, and other social media enable ideas and discussions to be widely disseminated. It is significant, too, that those running the large AI experiments are not the ones who instigated or signed the letter. Unlike at Asilomar, not all the key participants are involved yet.
Paul Berg noted that Asilomar was probably so successful because most of those involved in the field in 1975 were researchers in the public sector rather than in private companies.
… there is a lesson in Asilomar for all of science: the best way to respond to concerns created by emerging knowledge or early-stage technologies is for scientists from publicly-funded institutions to find common cause with the wider public about the best way to regulate — as early as possible. Once scientists from corporations begin to dominate the research enterprise, it will simply be too late.
In contrast to genetic modification in 1975, AI applications have already left academia and the lab. There are huge financial opportunities for companies to deploy them, so many may not be interested in pausing, or may accept only minimal constraints. Some tech companies have already gotten rid of their AI ethics teams.
So, a “pause” to address some concerns will be useful, but is unlikely to be as effective as Asilomar in creating a common approach to managing risks, or in building public trust.
Beyond ChatGPT
With or without a pause, AI will continue to develop and be used in many ways, without certainty of the impacts.
But it is unwise to assume that from now on there will be a smooth and vertiginous rise in artificial intelligence developments.
Over the last few years progress in artificial intelligence has been, to many, spectacular. However, this came after what has been called an “AI Winter” in the 1980s and 90s, when progress and funding declined. AI has gone through several cycles of alternating rapid and sluggish advancement. Another winter (or autumnal phase) could arrive soon.
Gary Marcus has argued that statistical approaches like deep learning are hitting a wall (a view shared by LeCun). Other approaches, such as symbolic AI methods (used, for example, by DeepMind and Wolfram|Alpha), may be the future of AI.
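As a rough illustration of the distinction Marcus is drawing, here is a toy contrast (my own sketch, not code from DeepMind or Wolfram|Alpha): a symbolic system applies an explicit rule it can justify for any input, while a purely statistical one can only echo the patterns it has already seen.

```python
# Symbolic: an explicit, inspectable rule.
def symbolic_double(n):
    """Doubling defined by a rule that holds for every integer."""
    return n * 2

# Statistical: answers recalled from previously observed examples,
# with no guarantee beyond the data that has been seen.
observed_pairs = {1: 2, 2: 4, 3: 6}

def statistical_double(n):
    """Return the answer memorised for this input, if any."""
    return observed_pairs.get(n)  # silently has no answer for unseen inputs

print(symbolic_double(1000))     # 2000: the rule generalises
print(statistical_double(1000))  # None: nothing memorised for this case
```

Real deep learning systems interpolate far more cleverly than a lookup table, but the underlying question, whether pattern statistics alone can ever deliver reliable reasoning, is exactly what is in dispute.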
Whatever methods are used, we’ll still need to think carefully, and openly, about how, when, and why they are used.
One trap in thinking about the (un)intended consequences of AI applications (or any other new technology) is to focus mostly on policing and enforcement. This only considers how the current system in which the technologies are used can continue to operate (reasonably) fairly or safely. For example, how to stop students cheating with ChatGPT.
But as an article on AI lessons for education from the UK highlights, such developments should also make us reconsider how to redesign current practices, not just how we can control new tools. For example, we should already be considering new ways of assessing students, and reimagining education (or healthcare, transportation, office work, etc.).
With or without a pause on some AI developments, we can already imagine how more advanced computing could change our lives for better and worse. Thinking about how we adapt current practices and systems needs as much attention as how we can control new technologies.