Invited to compose a message for posterity to be buried in a time capsule at the 1939 New York World’s Fair and opened five thousand years later, Albert Einstein sounded a dour tone: “Anyone who thinks about the future must live in fear and terror.”

His gloom must have disappointed the sponsor, the Westinghouse Electric Corporation, which was promoting the fair’s theme, “The World of Tomorrow,” alongside other paragons of American industry. The Ford Motor Company featured the road of tomorrow, the Borden Dairy Company had the dairy world of tomorrow, and, most popular of all, General Motors presented Futurama, where visitors lined up for an eighteen-minute ride on a conveyor belt across an imagined landscape said to represent the marvels to come in the year 1960. Life magazine said it was “full of a tanned and vigorous people, who in twenty years have learned to have fun.” As they left, each visitor received a badge that read, “I have seen the future.” They really hadn’t.

Einstein was thinking about the looming war, of course, as was Thomas Mann, whose time capsule message was, “We know now that the idea of the future as a ‘better world’ was a fallacy of the doctrine of progress.” Awkward, considering that progress was on display from more than a thousand exhibitors. The whole enterprise celebrated futurity. Participants claimed to be “selling ideas,” not just products. As Glenn Adamson frames it in his insightful new history, A Century of Tomorrows, they were engaging in a “kind of futurology.” Their crystal ball was rose-colored; their vision utopian. Nowadays the utopians are, to put it mildly, out of fashion.

The World’s Fair told a white story. Black Americans were invisible, implicitly omitted from the “tanned and vigorous” and explicitly excluded from the fair’s workforce except as maids and porters. The white press did not remark on this, but Black organizers did, and they counterprogrammed an American Negro Exposition in Chicago, taking note of the seventy-fifth anniversary of emancipation. The Black World’s Fair, as it was known, projected a contrasting view of the future, rooted in a different knowledge of the past. Highlighting Black artists from slavery to the present, the Exhibition of the Art of the American Negro emphasized a social realism that “goes beneath the jazzy, superficial show of things,” as the writer and philosopher Alain Locke put it. It represented hunger; it represented lynching. It reminded its visitors that the future is not a destination awaiting our arrival but rather, as Adamson writes, “a perpetual battlefield of ideas.”

The future shown at the New York World’s Fair was a future of technology as humanity’s helpmeet. It embodied “the presumption that a well-designed, well-oiled machine, once up and running, cannot help but produce a better world,” Adamson writes. At the alternative exposition in Chicago, the organizers were voicing another kind of futurology, one that developed alongside mechanistic thinking, counterbalanced it, and to some extent even contradicted it. A machine is autonomous, defined by its own internal operations, self-regulating and self-propelling. Step back a bit, though, and what looks like a marvel begins to seem monstrous.

Writing the future’s story in advance has never seemed more precarious. In a roiling present, dread and unease have dimmed the utopian spirit. Obsessed as we are with peering ahead, we’ve seen visions of the future fragment and transform overnight. “We need to examine not just the emotions that accompany the future as a cultural form,” the anthropologist Arjun Appadurai has written, “but the sensations that it produces: awe, vertigo, excitement, disorientation.”

Adamson takes that to heart. He is a former director of the Museum of Arts and Design in New York City who usually identifies himself more as a curator than as a historian, and A Century of Tomorrows is a departure for him. His previous book, Craft: An American History (2021), examined (and subverted) the conventional distinction between artisan and artist; like this book, it told an eclectic and not always linear story.* Cultural ferment, political upheavals, spiritual awakenings and reawakenings have all relied on visions of the future. Adamson connects techno-optimism to the psychedelic enchantments of the Sixties and the Afrofuturism of the Nineties, all with their prophets and prophecies. He leaps freely among scientific, religious, and fantastical modes of forecasting—which influenced one another more than their practitioners realized. None of the futurologists has a claim to certainty. The futures they describe are products of collective imagination, continually regenerated and revised. They matter because they reshape the present.

So A Century of Tomorrows is not an examination of The Future; it is an examination of examinations of the future. Adamson tells a story about a special class of storytellers, the future-telling people: “We can call them futurologists: those who peer ahead and attempt to discern what is to come.” There have always been futurologists of one kind or another, which already says something about humanity. In ancient times they were oracles, prophets, soothsayers, and astrologers. They discovered the future in entrails and tea leaves. No matter how often they were discredited, the need remained.

Our own futurologists predict election results and hurricanes. They augment imagination with science, which makes them respectable. They accept uncertainty and model probabilities. The pace of modern life makes their work lucrative and necessary. Every major corporation employs futurologists, whether they call them market researchers, trend analysts, or cool hunters. Science fiction writers are futurologists, too—often running ahead of the scientists. Which forecasters to trust, which to follow, is the special challenge of our time.

“Futurology,” with its pretentious suffix, is a relatively new coinage. The OED credits Aldous Huxley in 1946; Huxley probably meant the word as ironically as he did the book title Brave New World. When H.G. Wells invented time machines a half-century earlier, he considered his own interest in futurity to be exceptional. Most people, he told the luminaries of the Royal Institution in 1902, never even think about the future—except “as a sort of blank non-existence upon which the advancing present will presently write events.” A relative stasis had been the norm for previous generations. Science was changing that, bringing an awareness of geological layering and biological evolution, driving faster technological change and speeding the pace of life itself.

Just a few years before Wells, a Massachusetts journalist, Edward Bellamy, conjured a utopian future in his novel Looking Backward: 2000–1887. Its hero sleeps into the future Rip Van Winkle–style and finds that hunger, war, poverty, and unemployment have been abolished. Money is obsolete; every man and woman is issued a “credit card” sufficient to their needs. Manufactured goods flow out from a central warehouse through pneumatic tubes. Likewise, musical entertainment comes to every home by way of electric wires (a mind-boggling new technology when Bellamy was writing). Everyone belongs to “a vast industrial partnership, as large as the nation, as large as humanity”—“the sole employer, the final monopoly in which all previous and lesser monopolies were swallowed up.” Utopia is notably rigid and rule-bound; nothing changes; progress is complete, because perfection has been attained. Everyone seems content. As a huge international best seller, Looking Backward inspired politicians and activists. Bellamyites formed Bellamy Clubs across the United States.

Previous fictional utopias had been displaced not in time but in space; Thomas More’s original (Utopia, 1516) was a remote island in the New World. Placing utopia in the future may seem obvious now, but it was Bellamy’s innovation. “This resulted in a sort of temporal montage,” Adamson writes,

in which an actual present and an imagined future come in and out of focus. The future could now be conceptualized as a shifting landscape, with multiple time horizons interacting with one another.

Time travel did that, too.

Leaving aside fortune tellers and charlatans, the business of prophecy had long belonged to religion, organized and otherwise. The first modern futurologists set themselves up as secular prophets, asserting a rational claim to truth. Since the future arrives piecemeal, they had to earn credibility. What first brought data-driven predictions into daily life was weather forecasting, a scientific innovation of the nineteenth century (messaging by telegraph was a prerequisite). Adamson notes that the basic conceptual breakthrough was a trick: “The future was contained within the present. That is, a lot of tomorrow’s weather is already here today; it’s just somewhere else, usually a little farther west.” Like all futurology, meteorology was, and is, notoriously error-prone, but even probabilistic weather forecasts had great value, so national governments, starting with Britain, established weather bureaus, and newspapers began printing forecasts. “Older vernacular methods—almanacs, moon observation, fingers held up to the wind—were rendered useless,” Adamson writes. “It must have seemed like magic.”

Where else could the magic be deployed? One domain was marketing—specifically, advertising—which relied on increasingly sophisticated ideas about predicting consumers’ desires before the consumers had formed them. Fashion changed like the seasons, with occasional storms. The pioneering J. Walter Thompson agency, which at the turn of the century placed more than half the advertising in the US, declared in 1909 that “the chief work of civilization is to eliminate chance, and that can only be done by foreseeing and planning.” Industrial designers like Norman Bel Geddes, who later created the Futurama pavilion for General Motors, advised corporations to “modernize” their products, streamline them, not only to embrace the new age but to drive it forward. So-called color forecasters, described by Adamson as another “emergent profession of futurologists,” did the same for fashion and home furnishing. In the Twenties and beyond, Margaret Hayden Rorke’s influential Textile Color Card Association predicted what colors would come into fashion—standardized them and promoted them—not just for the clothing trades but also for automobile design and the nascent movie business. They were creating “a new idea of futurology itself,” Adamson writes, “recasting it as a job for skilled technicians.” Never mind that their prophecies were self-fulfilling.

These future-tellers reveled in change for the sake of change, the faster the better. “To-day, speed is the cry of our era, and greater speed one of the goals of to-morrow,” said Bel Geddes. Knowingly or not, he was echoing Filippo Tommaso Marinetti, the Italian proto-fascist who declared in his 1909 “Manifesto of Futurism,” “We affirm that the world’s magnificence has been enriched by a new beauty: the beauty of speed.” He specifically meant racing cars, which he fetishized. Futurism inspired counterparts in other countries, avant-gardes looking only forward, never back, striding into the future by freeing themselves from the past. “We are on the extreme promontory of ages!” Marinetti said. “Why look back since we must break down the mysterious doors of Impossibility?” His was typical of the seething political movements born in the young century: not just reactions to the past but theories of the future.

The extreme case was the one in Russia. The Bolsheviks’ revolution in 1917 was a gong heard around the globe, announcing that the future—revelation and transformation all at once—had arrived. “All Russia plunging dizzily into the unknown and terrible future” was John Reed’s description in Ten Days That Shook the World. After a trip there in 1919 the muckraker Lincoln Steffens famously declared, “I have seen the future and it works.” To produce the future in orderly fashion, the Soviet regime embarked on a series of five-year plans, one after another, national and centralized much like the all-powerful state corporation of Looking Backward. All necessary data ran through the Central Statistical Directorate.

The illusion of perfect forecasting and perfect control led to catastrophe: forced industrialization and collectivization caused one of the century’s deadliest famines, which killed five million or more in Ukraine, Kazakhstan, and elsewhere. Afterward the linguistic theorist Roman Jakobson, one of the original Russian Futurists, wrote: “We lived too much for the future, thought about it, believed in it; the news of the day—sufficient unto itself—no longer existed for us.” The Kremlin did not lose faith in the power of five-year plans, however. Their use spread to other countries and continued in the Soviet Union through the 1980s, the future perpetually arriving and receding on a five-year schedule.

This represented a fatally monolithic brand of forecasting, blind to the diversity and complexity of real societies. “Prophetic futurology is like a lens,” Adamson writes, “bringing some things into intense focus and clarity while distorting others, and dramatically limiting the field of view. It’s for good reason that crystal balls are standard tools for fortune tellers.” Nonetheless, futurologists only gained in influence as the century continued. Frank Lloyd Wright promised a version of utopia by dubbing his small private homes “Usonian.” The techno-optimist Buckminster Fuller called his (imagined) houses “Dymaxion” and branded himself as the kind of visionary futurist that appealed to early computer pioneers. Futurologists’ statistical methods got better; in the computer era, they developed formidable technical powers; and they seemed to see what ordinary pundits could not.

Scientific forecasting reaches its ultimate perfection only in fiction—notably Isaac Asimov’s Foundation novels, in which a mathematical system called “psychohistory” reduces human behavior to equations—as if human behavior obeyed laws analogous to the laws of physics, as if human interactions could be modeled like atomic interactions. Nonfictional social scientists aspired to that. The Harvard sociology professor Daniel Bell foresaw a “post-industrial society” led, for better and for worse, by technical elites, and attempted to systematize different forecasting methods in his 1964 essay “Twelve Modes of Prediction.” A less obvious futurologist, Adamson suggests, was Robert McNamara, the president of Ford who got his start as one of the “whiz kids” of the company’s Statistical Control group. Then, as secretary of defense, he applied his predictive methods to the United States, relying heavily on the systems analysts of the RAND Corporation. His forecasts and theirs led to disaster in the Vietnam War, but the RAND Corporation is bigger than ever. Its new area of interest is artificial intelligence.

The current proliferation of future-telling—a glut, even—is a special case of information overload. Bombarded as we are with forecasts, the problem is knowing which to believe. The twenty-first century has seen growing distrust of prophets, the technocratic along with the spiritual. In the run-up to the election last year, journalists, who by now should know better, yet again treated political polls as actual news about the future. This always fickle branch of futurology dominated the news for a year until its inevitable crashing end on election night. All the forecasts of pollsters and pundits came with a hard sell-by date; some guessed right and some guessed wrong; and their collective value fell to zero the Day After. The effort would have been better spent understanding the present.

On the other hand, in the realm of climate science, unjustified mistrust has undermined what should have been a triumph of computer-modeled prognostication. The most alarming climate forecasts have proved right, again and again. Much of the doubt was politically motivated, generated by petroleum interests, but not all. Some skeptics remembered the wave of overpopulation panic that spread in the Sixties and Seventies, epitomized in the long-running best seller The Population Bomb (1968; written by the Stanford University scholars Paul and Anne Ehrlich, though credited only to Paul). Most of the organized environmental movement accepted its thesis and promulgated it: that exponential population growth would inevitably doom humanity to global starvation. “The battle to feed humanity is already lost,” they wrote, when the world population was three billion. They recommended that governments urgently reduce their nations’ birth rates, to return the population to two billion or less. It’s eight billion now, and the worst driver of starvation and poverty is economic inequality, not a lack of resources.

In 1970 another influential best seller was Future Shock (written by Alvin and Heidi Toffler and credited to Alvin). The title catchphrase encouraged panic about change itself, especially technological change, which was causing “shattering stress and disorientation.” Their brand of futurology did not age well. Like the Ehrlichs, writes Adamson, “the Tofflers made breathtakingly bold predictions on the basis of selective anecdotes and wholly imaginary scenarios.” They proposed immediately training “cadres of young people” for relocation to colonies under the ocean and in outer space. Still, the stress and disorientation were real enough. Twelve years after Future Shock came Megatrends: Ten New Directions Transforming Our Lives, by John (and Doris) Naisbitt, a pastiche of then-current thinking about globalization, decentralization, networking, and related buzzwords. It predicted the auspicious rise of a booming postindustrial economy and sold 14 million copies. Adamson calls it “a truly bad book,” significant mainly for encouraging “many other equally dumbed-down books about the future…a publishing phenomenon that continues to this day.”

None of them, as of 1990, could see the thing that was about to happen: the emergence of a notional place, distinct from the “real world,” where some approximation of all humanity would meet and interact at light speed, with instantaneous access to some approximation of all human knowledge. Science fiction writers saw that first. William Gibson, in Neuromancer (1984), dubbed it “cyberspace” and “the matrix”: “bright lattices of logic unfolding across that colorless void.” It hadn’t quite arrived, and he idealized it: “a vast thing, beyond knowing, a sea of information coded in spiral and pheromone, infinite intricacy that only the body, in its strong blind way, could ever read.” Many spend time there now, living in a mode of continuous connection afforded by “phones” that are not really phones.

In science fiction, of course, the denizens of cyberspace include not only humans but “artificial” intelligences, and here they come. The AIs are not only the new favorite topic for prognosticators, they are also seen as potential replacements. It’s all too tempting to approach them as oracles in themselves: our new blind seers.

Alan Turing predicted in 1950 that the advent of thinking machines was near. Ever since, some scientists have endeavored to create them while also warning of the day they would render humans superfluous. Before they eliminate us, they might merely imitate us, as “replicants” do in Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968) and Ridley Scott’s brilliant adaptation, Blade Runner (1982). Even the replicants don’t know for sure whether they are human or machine. The possibility of confusing one with the other has been a fear since Turing made it the linchpin of his famous test for intelligence. It’s a live problem now, as generative AI writes student essays and ersatz books, and AI-powered bots mingle with the humans on social media.

The most alarming prognosis for artificial intelligence is the one known as the Singularity. That’s when AI becomes self-aware and self-sustaining—a powerful new life-form—and human history ends. AI takes control, and we are supplanted or exterminated (take your pick). The idea of the Singularity, with a capital S, was first popularized by the science fiction writer Vernor Vinge in 1993: “It is a point where our old models must be discarded and a new reality rules…. The passing of humankind from center stage.” He expected the moment to come between 2005 and 2030. Tech people loved it. The self-described futurist Ray Kurzweil promoted the idea in The Singularity Is Near (2005), declaring confidently, “I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” For Kurzweil, the Singularity was good news: a version of immortality, humanity transcended. He organized a series of yearly “Singularity Summits” and has now produced a sequel, The Singularity Is Nearer, published last June. He doesn’t think much of plain, unaugmented humans: “We are far from optimal, especially with regard to thinking.”

This is futurology at its silliest. At least the Singularity would mean the end of futurology, as Adamson notes:

The Singularity is like an astronomical black hole, swallowing all possibility of future speculation into its gravitational field. When we are surpassed by AIs, the veil will descend…and the machines will sit in judgment over us, new Gods that we ourselves have enthroned.

Others have mocked the Singularity as “the Nerd Rapture.” The resemblance to Christian eschatology is unmistakable—worshipers drawn to the promise of end times, the zero hour, the Last Judgment. Both Raptures imply a moral reckoning: the chosen move on while the rest are left behind. Both also represent a form of escapism. Why worry about earthly problems like climate change and economic inequality when superhumans are about to achieve immortal transcendence?

The reliable forecast about computing machines is that they get better, faster, and smaller. Otherwise, the industry that makes them has been notably poor at predicting the future of its own products. Intelligence itself remains slippery and ill-defined; entrepreneurs are prone to hype, and humans have a known tendency to anthropomorphize shiny objects, particularly when we can talk to them. But it’s no use asking OpenAI’s ChatGPT, Google’s Gemini, or the one actually named Oracle AI to tell us what the future will bring. They don’t even have knowledge of the present; they have only reams of preexisting text and algorithms for manipulating and rearranging it.

Adamson, having exposed various strains of failed futurology, suggests nonetheless that we will and should continue to make our best guesses, competing with one another, always remembering that every prediction is a statement about the present: “They can’t be constructed so as to cancel one another out, but must be mutually legible and compatible. This, it seems to me, is the work that futurology still has before it.”

For the last fourteen years, Wikipedia has included a forward-looking entry titled “Timeline of the Far Future,” continually growing. At present it begins, “While the future cannot be predicted with certainty, present understanding in various scientific fields allows for the prediction of some far-future events, if only in the broadest outline.” The project is understood to be provisional and in flux. An editor responsible for one recent addition justified it with the comment, “Adds a bit of hope.” A different editor deleted it a few seconds later.