How to Think about Progress: A Skeptic’s Guide to Technology
Nicholas Agar, Stuart Whatley & Dan Weijers
250 pages, Springer, 2024
We wrote How to Think About Progress: A Skeptic’s Guide to Technology during the COVID-19 pandemic, amid another wave of hype about the potential of new technologies to solve humanity’s biggest problems. Our goal was to offer principles for thinking about the future. To handle the challenges before us, we will need a fresh generation who can think wisely about a wide range of issues unknown to earlier generations. That means rejecting overly optimistic views of our ability to advance technology quickly, tempering our expectations for how technological breakthroughs will affect our well-being even if they do happen, and considering individual, social, and political solutions alongside technological ones.
Thinking appropriately about progress is something we should all engage in. But it isn’t always easy. When people venture ideas about an essentially uncertain future, they should be aware that they risk being wrong. But this makes them no different from many of the self-appointed experts of our age. We need the contrasting and competing visions of the many, including young minds that enter universities with an eagerness to solve our species’ problems.
Elon Musk’s billions embolden him to venture forecasts about the future without fear of being falsified. People who lack his means but share his desire to describe and shape humanity’s future should be similarly empowered. We need more, not fewer, ideas about an essentially uncertain future. It would be both tragic and dangerous if we failed to see an opportunity that was within the considerable imaginative range of our species, simply because we feared being wrong, or because the prognostications of the rich and powerful had crowded it out.—Nicholas Agar, Stuart Whatley & Dan Weijers
* * *
For most of human history, people, relying on agriculture, have prayed for rain. Today, we, relying on technology, pray for continued scientific breakthroughs. Whether we are seeking solutions to climate change and disease or pursuing lofty ambitions such as the colonization of Mars, even many pessimists and skeptics tend to hold out hope that someone, somewhere, will come up with something sooner or later.
After all, that has been the modern creed at least since the Enlightenment. We are confident that there are no limits to human ingenuity. The doctrine of progress, the fin de siècle historian J.B. Bury observed, is one of the central ideas of Western civilization. And why shouldn’t it be? Our commitment to the pursuit of innovation has been rewarded in the form of ever-higher material gains. It stands to reason that we would expect continued progress of this kind. Past performance does not guarantee future returns, of course; even so, one can hardly fault our eagerness for ever more material progress. It is this anticipation that underpins the sprawling field of academic and pop futurism, where expert prognosticators of all stripes extrapolate from current trends to offer a portrait of what’s next.
For centuries, we have convinced ourselves that every problem is solvable with enough brain power, particularly when applied through new technologies. And with the rapid development of COVID-19 vaccines, this conviction received a new breath of life. Commentators have heralded a “Great Acceleration” and the onset of a “New Roaring ‘20s”—a reference to the decade of rapid economic growth that followed the influenza pandemic of 1918.
Before the COVID-19 pandemic, the prevailing narrative had been one of disappointment, characterized by Peter Thiel’s complaint that while we wanted flying cars, we got Twitter instead. Best-selling economists warned of a “Great Stagnation” and the end of growth, owing to technology’s inability to pack the same punch as it once did. But following the fast arrival of COVID-19 vaccines, many commentators began to welcome the dawn of a new era of progress in biotechnology, digital currencies, transportation, energy, and—now—AI.
Still, we as a global civilization are struggling to muster a sufficient response to challenges like climate change. With the arrival of COVID-19, we learned that big, societal problems don’t patiently wait in line, giving us time to solve each in turn, while never confronting us with more than we can handle.
Pandemics and climate change can both be understood as civilizational problems, which is not the same thing as an existential threat. The latter implies human extinction. While COVID-19 was a nightmare, its total death toll barely put a dent in the global population. Similarly, climate change is very unlikely to eradicate humanity; even the worst-case scenarios do not envision Earth turning into Mercury, with its daytime temperatures of 430° Celsius. But pandemics and the climate crisis do both threaten our way of life on a global scale. And as we confront these problems, we should reflect on the attitudes that guide our responses to them.
Civilizational problems, by definition, affect or implicate everyone. They are the kinds of challenges that inspire a “wartime mobilization.” By declaring war on things like poverty, climate change, cancer, and COVID-19, we set these issues apart from others. By setting out to colonize Mars, we tap into a much longer tradition of humankind’s conquest over nature. But when thinking about civilizational problems—“Big Cs” like cancer, climate change, COVID-19, and colonizing Mars—we should be mindful that our secular faith in technological solutions is precisely that: a leap of faith. We are constantly waiting for the next big technological fix to arrive in the nick of time, because we implicitly accept that with enough information, practically anything is possible.
But complicating the picture implied by such optimism is what we call the horizon bias: the modern propensity to believe that anything we can envisage accomplishing with technology is therefore imminently in reach. The horizon bias not only leads us to think that ambitious targets are closer than they really are; it also subjects us to an endless treadmill of dashed hopes. It is an enduring feature of an age in which futurism has become a booming industry.
Futurism is the offspring of the eighteenth-century doctrine of Progress. Early works in the genre, such as Louis-Sébastien Mercier’s 1771 novel L’An 2440—one of the first works of utopian fiction set in the future—extrapolated from the technological advances of the present to consider what plausibly might come next. These Enlightenment-era works represented a new future-oriented consciousness—and a new, widespread sense of expectation for what industrialism would bring next.
Over the past 250 years, there have been ongoing efforts to turn the art of forecasting into something resembling a science. One of the most famous and influential examples of this was H. G. Wells’s 1902 book Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought, which offered “a rough sketch of the coming time, a prospectus, as it were, of the joint undertakings of mankind in facing these impending years.” Wells declared that he was going beyond the realm of futuristic fiction to offer “quite serious forecasts and inductions.”
Wells’s project was continued in earnest after the massive upheavals of the two world wars. In another famous futurist work, the 1970 book Future Shock, Alvin and Heidi Toffler (though the latter was not originally listed as an author) set out to consider not only the technologies that were likely to come, but also their social and psychological implications. The goal of the book was to make readers more future-conscious. This objective would appear to have been achieved. It is no exaggeration to say that the culture of Western advanced economies today harbors an obsession with the future.
Anticipations—and hype—about technology are a major engine of the news cycle and the foundation of a massive venture-capital industry. Futurism is the VC sector’s bread and butter. VC firms like Prime Movers Lab now publish roadmaps to the year 2050, anticipating, among other things, commercial nuclear fusion plants operating by the 2030s; near-Earth asteroid mining by the 2040s; and the ability to slow or partially reverse human aging by 2050.
By focusing entirely on the pursuit of vaguely defined outcomes that technology might deliver, we end up on an endless journey—staring at an ever-receding point over the horizon, when we could be reflecting more deeply on our immediate surroundings. Moreover, by giving us too much confidence in our own technological abilities, the horizon bias leads us to exhibit a potentially dangerous preference for technological solutions to complex problems.
The horizon bias applies to our present and future, but it is rooted in a selective memory of our past. Technology’s many headline successes—eradicating smallpox, putting a man on the moon—dwell permanently in our collective memory, offering strong inductive evidence for the power of human ingenuity. At the same time, we often forget (or are oblivious to) all the times that technology promised to solve some problem but didn’t.
Remember the early years of the internet, when countless commentators and politicians from Al Gore to Newt Gingrich proclaimed a new age of freedom and democratization? The problem wasn’t just that they were being naïve, notes historian Howard P. Segal, it was that they were effectively repeating the same claims made by many previous generations of optimists who looked forward to revolutions in communication and transportation technologies. Yet even if one accepts that “freedom” has expanded in some places over time, it is a stretch to argue that specific technologies—rather than, say, social and political developments—were the cause.
One reason for our serial forgetting is that we tend to fixate on certain key facts about new technologies, rather than the feelings that earlier instances of techno-hype stirred in the public. In advanced, industrialized societies, schoolchildren learn about the many inventions that made the modern age: the steam engine was followed by the internal combustion engine, which was followed by precision engineering, and so on. That story is simple and straightforward. More complicated is the story of our subjective experiences. We generally forget (or were not around to hear) what exactly was promised by those offering technological solutions at any given time in the past. With less memory or knowledge of all the hopes that were disappointed, we are left with a picture depicting only the hopes that were met.
To be sure, some past successes far exceeded the hopes of even their most ardent proponents—and these we remember fondly. Basic “low-hanging fruit” like flush toilets and indoor plumbing radically changed public health, allowing for an unprecedented increase in average lifespans. But, again, these subjective dynamics cut both ways. Plenty of other past successes might still qualify as such, but they would be viewed as disappointments by those who originally conceived them (as in Thiel’s observation above).
Indeed, this has been a recurring feature of the War on Cancer. Cancer researchers in the early 1970s were so confident in the trajectory of new chemotherapies that they expected the disease to be defeated within five years. They would be dismayed to see that these treatments have made hardly a dent in the overall cancer mortality statistics 50 years later. Insofar as they represent an improvement on what came before them, chemotherapies qualify as a success; yet measured against what their earliest proponents promised, they are a failure.
Just as history is written by the victors, the story of technological progress features only the breakthroughs that actually panned out. But only by cutting out the warp and weft of subjective experiences can one arrive at the simple conclusion that Technological Man consistently accomplishes what he sets out to achieve.
By regularly forgetting the hopes that we (or previous generations) were originally encouraged to entertain with respect to technology, we are like those who frequent casinos. Casino operators have long known that to keep someone gambling, it is better to offer mystery prizes than prizes that are described in full. Since the mystery prize can be anything, it will on very rare occasions be something great, like a new Porsche. But when it turns out to be a free meal in the casino steak house, the gambler will soon forget his initial Porsche-level excitement. He’ll take his steak and then rush back eagerly to listen for the next mystery prize.
The untold part of our grand technological story is replete with visions of Porsches that never materialized and then quickly faded from our memory. When it comes to our past hopes and expectations for technology, our memories are no more retentive than those of a dog pleading for a scrap of food, with no apparent memory of the five morsels she has already received.
By remembering past expectations alongside past achievements, we can examine our own present hopes for technology more fully. In thinking about civilizational threats, technology obviously will have an important contribution to make. But insofar as we are led by the horizon bias to prioritize technology-driven solutions, we risk ignoring strategies in which new technologies play only a subsidiary role.
These biases matter. Over the course of his career, a racially biased police officer will doubtless arrest genuine criminals. But to the extent that he is biased, he is a worse cop, because he will be more likely to make wrongful arrests, and less likely to make arrests that he should make. The same applies to our bias in favor of technological solutions. Sometimes, we will be right to rely fully on technology. Yet over time, our preference will prevent us from even considering many possible responses to our most pressing problems.
By understanding this predilection, we can start to rebalance our responses to systemic threats, removing some of the marketing sheen from the technologies that we can see ourselves deploying against climate change and future pandemics. The temptation to imagine technological solutions to these problems can easily lead to overindulgence when we start fantasizing about what these technologies could be, and how they could work in the real world. This is Porsche-thinking in action.
For example, suppose we could enhance our immune systems to ward off any member of the coronavirus family, present or future. We could then declare victory even against the common cold. Or suppose we could create technologies that would efficiently and cheaply remove carbon dioxide from the atmosphere at the flip of a switch. We could then maintain or even double down on our current way of life without having to worry about the climatic effects. Just thinking about such straightforward solutions offers a hit of dopamine. And when these technologies fail to materialize, we promptly forget about that source of hope as we line up for the next bold idea.
What’s the problem with imagining a future in which technological advances fix our most pressing challenges? It is that the more time we spend obsessing over what technology could do for us, the less attention we give to social and political imaginaries. We tend to prefer engineered solutions because these make fewer awkward requests of us.
By contrast, social and political advances usually involve difficult changes that a significant cohort of people would avoid if given the option. Why think about what it would take to reduce meat consumption when we could be thinking about technologies to make ethical, low-carbon lab-grown burgers and steaks? Never mind that the availability of abundant ethical lab-grown meat would come with costs of its own; namely, by encouraging even more consumption of foods that are associated with a wide range of negative health outcomes.
The difficult scenarios that we would prefer not to think about each entail some level of self-sacrifice, trust in others, or old-fashioned hard work. It is perfectly understandable that we would seek to avoid these by ceding the floor to those offering free lunches.
In the absence of technological fixes, the next-best thing we have is metaphor. For example, we have an abiding tendency to wage metaphorical wars on hard problems. In 1971, US President Richard Nixon declared war on cancer, whereas today one often hears arguments for a wartime mobilization against climate change or COVID-19. The idea of declaring war on atmospheric carbon or on a microscopic viral particle seems quixotic. But there is a reason to retain this way of thinking, at least until our civilization acquires serviceable replacements. Wars have typically demanded organized responses, and marshaled the talents and ingenuity of the broader population.
Of course, Susan Sontag famously rejected the metaphorical war on cancer and hoped for new metaphors to direct our engagement with that disease. But we see the war metaphor’s frequent use as an indicator of its cultural power. If it can muster an organized resistance to climate disasters or pandemics without requiring too much explanation, we should avail ourselves of it. It is, in this sense, a powerful ideological technology—a widely understood idea that we have fashioned to make the most of our social natures and capacity to work together against common enemies. Perhaps a truly enlightened future age will look back on the use of war metaphors as primitive and passé. But, in the meantime, they are what we have to work with.
Moreover, war metaphors imply a collective response that differs from the pursuit of technological quick fixes. Technology has certainly played a key role in all past wars, but it has not been the whole story. When we recount the victory over fascism in World War II, we can focus on Spitfire fighter planes, Sherman tanks, and the A-bomb, but we tend to prefer the human stories, such as that of the “Band of Brothers” who worked together and risked everything for their country and each other. These are good reminders of what it really takes to face down a truly hard problem. If a problem can be solved solely with technology, it perhaps was not so hard to begin with.