Future Salience #1 - Four errors in futures
Looking to the future shares three common errors with looking backwards, plus an additional one
Image: Jean-Marc Côté ca. 1900
Looking to the future frequently suffers from the same problems as how we look back - we are usually a poor judge of what is important and why.
Stephen Davies in his Works in Progress Newsletter discusses the tendency of histories to concentrate on politics and wars, overlooking or minimising technologies and ideas. (Though there have been some great historical accounts written about, for example, how cod and the shipping container have shaped the world).
Davies suggests that the understanding of history (or a particular history) is often incorrect for three reasons:
It places emphasis on the wrong events.
It judges the relative importance of events incorrectly.
It ultimately misunderstands which events had the most transformative effects on human life.
A lot of futures-related writing, opinions, and presentations suffer from the same weaknesses. Instead of politics, though, it is technologies, especially digital ones, that tend to dominate many considerations of the future.
Ironically, Davies falls foul of his own three reasons by asserting that technology and ideas are more influential than politics, leaders, and wars. It’s not an either/or dichotomy. Different factors will be more or less influential for particular circumstances and context. The same applies to looking ahead.
Both historians and futurists can be misled, or mislead, by crafting “grand narratives” of the past or future. These can tell a very good story, but it doesn’t mean they are accurate. Life is usually way more complex, and we’ll never have a perfect appreciation of how events unfolded, or could unfold, as Graeber and Wengrow argue.
Then why bother?
What’s the point of considering the future then? As with history, it is about developing a better understanding of the causes and context of change. But while history aims to reconstruct and explain events and their consequences, the objective of good futures work isn’t to construct the one true future. It is often most useful as an aid to examining our current biases and assumptions, and to exploring alternatives, rather than as a predictive tool.
A futures mindset that looks beyond your normal channels of information makes you better able to identify what may affect you in the short term, not just the long term.
The fourth weakness
Unlike history, a lot of futures speculation is focussed on the pace of change. But in many cases we are hopeless at predicting how quickly something will develop or spread, be that technological, social, political, or environmental. Don’t be seduced by those charts of exponential growth progressing smoothly into the future.
So, a fourth possible source of error needs to be added when considering future events:
4. Over- or underestimating the scale and pace of change
This was codified decades ago for technology as Amara’s Law:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Roy Amara
We need to think about the assumptions we, or others, make about the speed and nature of change, monitor trends and events continually, and adjust our assessments accordingly.
Challenging technology hype
The difference between expectations of the pace of change and reality has been illustrated by Rodney Brooks. He’s a well-respected technology insider who regularly challenges the hype from Silicon Valley. In 2018 he made predictions about progress in artificial intelligence, robotics, self-driving cars, and the space industry. These were much less optimistic than press releases and analyses from other technology commentators. Importantly, he holds himself to account, annually reporting on how his predictions are faring.
His 2023 review shows the predictions holding up well, though he admits that even he may have been overly optimistic in estimating the pace of progress in some areas.
Brooks suggests that regular services provided by fully self-driving cars are still years away, based on current performance. He thinks the current small-scale trials in some cities still have many challenges to overcome before they can provide a safe, reliable service comparable to human-driven vehicles. Flying cars continue to be a fantasy, and shouldn’t be confused with the autonomous passenger drones that a range of firms are proposing.
Brooks also considers that we aren’t yet close to an artificial intelligence “tipping point.” In contrast to much of the commentary about ChatGPT, all Brooks has to say about it is “calm down”. He doesn’t think that there is much else worth saying. Yes, it can be impressive, but it isn’t close to generalised artificial intelligence.
“We neither have super powerful AI around the corner, nor the end of the world caused by AI about to come down upon us. And that pretty much sums up where AI and Machine Learning have gone this year. Lots of froth and not much actual deployed new hard core reliable technology.” Rodney Brooks
Brooks thinks that rather than speeding up, the technological developments he is monitoring may go even slower than he thought five years ago.
As a side note, the collapse of Silicon Valley Bank is likely to further shake confidence in the technology sector. As Noah Smith commented, the bank’s collapse fits within the broader picture of what he calls the “tech bust” that has been going on for the last year. Such falling confidence, and tighter access to funding, is likely to further delay some of the technological developments that Brooks monitors.
Of course, Brooks may be wrong, but he is one of the few making quite specific predictions and consistently monitoring them, so it is worthwhile following what he writes.
Questions of salience
But it’s important not to get too hung up on the pace of change. “What if?”, “So what?”, and “Why?” questions can be more helpful in futures projects than “When?” ones.
So, these are the types of issues that Future Salience will be looking at. These will help us think more critically, constructively, and hopefully about change.