Are We Doomed? Here’s How to Think About It
June 3, 2024
In January, the computer scientist Geoffrey Hinton gave a lecture to Are We Doomed?, a course at the University of Chicago. He spoke via Zoom about whether artificial intelligence poses an existential threat. He was cheerful and expansive and apparently certain that everything was going to go terribly wrong, and soon. “I timed my life perfectly,” Hinton, who is seventy-six, told the class. “I was born just after the end of the Second World War. I was a teen-ager before there was AIDS. And now I’m going to die before the end.”
Most of the several dozen students had not been alive for even a day of the twentieth century; they laughed. In advance of Hinton’s talk, they had read about how A.I. could simplify the engineering of synthetic bioweapons and concentrate surveillance power into the hands of the few, and how a rogue A.I. could relentlessly pursue its goals regardless of the intentions of its makers—the whole grim caboodle. Hinton—who was a leader in the development of machine learning and who, since resigning from Google, last year, has become a public authority on A.I. threats—was asked about the efficacy of safeguards on A.I. “My advice is to be seventy-six,” he said. More laughter. A student followed up with a question about what careers he saw being eliminated by A.I. “It’s the first time I’ve seen anything that makes it good to be old,” he replied. He recommended becoming a plumber. “We all think what’s special about us is our intelligence, but it might be the sort of physiology of our bodies . . . is what’s, in the end, the last thing that’s better,” he said.
I was getting a sense of how Hinton processed existential threat: like the Fool in “King Lear.” And I knew how I processed it: in a Morse code of anxiety and calm, but with less intensity than I think about my pets or about Anna’s Swedish ginger thins. But how did these young people take in, or not take in, all the chatter about A.I. menaces, dying oceans, and nuclear arsenals, in addition to the generally pretty convincing end-times mood over all? I often hear people say that the youth give them hope for the future. This obscures the question of whether young people themselves have hope, or even think in such terms.
Are We Doomed? was made up of undergraduate and graduate students, and met for about three hours on Thursday afternoons. Each week, a guest expert gave a lecture and fielded questions about a topic related to existential risk: nuclear annihilation, climate catastrophe, biothreats, misinformation, A.I. The assigned materials were varied in genre, tone, and perspective. They included a 2023 report by the Intergovernmental Panel on Climate Change; the films “Dr. Strangelove,” by Stanley Kubrick, and “Wall-E,” by Pixar; Ursula K. Le Guin’s novel “The Dispossessed”; Max Brooks’s “Germ Warfare: A Very Graphic History,” published with the Bipartisan Commission on Biodefense; and chapters of “The Precipice: Existential Risk and the Future of Humanity,” by the philosopher Toby Ord.
Daniel Holz, an astrophysicist, and James Evans, a computational scientist and sociologist, co-taught the course. Evans looks like he’s about to give a presentation on conceptual art, and Holz like he’s about to go hiking; both wear jeans. Holz is boyish, brightly melancholy, generous, and gently intense, and Evans is spirited, fun, and intimidatingly well and widely read. Evans and Holz taught Are We Doomed? once before, online, in the spring of 2021. “As difficult as the pandemic was, my mood was better then,” Holz told me in his office, where the most prominent decoration was a framed photograph of a very tall ocean wave. He had conceived of the course after making a series of thrilling research breakthroughs on black holes, neutron stars, and gravitational waves. “I fell into a postpartum depression of sorts,” he said. “I wanted to do something that felt relevant.” In addition to heading an astrophysics research group, Holz is the founding director of the Existential Risk Laboratory (XLab), at the University of Chicago, which describes itself as “dedicated to the analysis and mitigation of risks that threaten human civilization’s long-term survival.” In college, the other path of study that tempted Holz was poetry.
Evans’s research is focussed in part on how knowledge is built, especially scientific knowledge. He is the founder and director of Knowledge Lab, also at the University of Chicago, which uses computational science and other tools to make inquiries that can’t be made by more traditional means. Evans and a co-author recently published an article in Nature which, following the analysis of tens of millions of papers and patents, suggested that the most cited and impactful work is produced by researchers working outside their disciplines—a physicist doing biology, to give one example. Evans also studies complex systems, focussing on what leads them to collapse. He likes, basically, to be surprised, and to be open to surprise. “It was important to Daniel and me that there be a sense of play in the course, that there be a level of comfort with uncertainty and ignorance and being wrong,” Evans told me. It’s hard to envision what the future will look like, he said, because “today just feels like it did yesterday. It doesn’t feel like it’s any different. But there’s the potential for really nonlinear negative outcomes.” “Nonlinear” was a word that became as familiar as toast while I was observing this class—the idea of little changes that, at some threshold, lead to tremendous, possibly catastrophic, shifts.
On the first day of class, Holz told a story that is famous among scientists, though accounts of it vary. About five years after the end of the Second World War, during a visit to Los Alamos, the physicist Enrico Fermi was walking to lunch with a few colleagues. Scientists there were trying to develop a hydrogen bomb, a weapon easily a hundred times more powerful than the atomic bombs that devastated Japan. One of the scientists brought up a New Yorker cartoon that showed aliens unloading Department of Sanitation trash cans from a spaceship. The conversation moved on to other topics. Then Fermi asked, “But where is everybody?” They all laughed; somehow everyone understood that he was talking about aliens. Surely there existed alien life that was sufficiently advanced to say hello, and yet humanity had received no such greeting. How could that be?
The “Where is everybody?” problem came to be known as the Fermi paradox. One of the more compelling responses to the paradox is to ask, Can a civilization become technologically advanced enough to contact us before blowing itself up? For Fermi and his colleagues, the prospect of nuclear annihilation required no imaginative leap.
The average age of the people who worked on the Manhattan Project at Los Alamos was twenty-five, which is not much older than the students in the class at Chicago. The energy and conviction of youth is a superpower, for better and for worse. But young people live on the highest floors of the teetering tower of our civilization, and they will be the last ones to leave the building. They have the most to lose if the stairwells start to crumble.
On a sunny February afternoon, midway through the course, I spoke with some of the students in a conference room on the fourth floor of the building that houses the department of astronomy and astrophysics. The room overlooks a polymorphous Henry Moore sculpture (from different angles it looks like a skull, an army helmet, or a mushroom cloud) and the glass-domed university library, where robots retrieve your books from stacks that run fifty feet down.
Lucy, a senior majoring in math, deadpanned that she was taking the course because it wasn’t math. “And, also, I have an unrealized prepper soul,” she said. Olivia, a senior who designed her major around the question “How do we agreeably disagree?,” had previously taken a class on the history of the bomb. She thought that her interest also had to do with family background. “When you have people in your family who have survived the Holocaust, the question of ‘Are we doomed?’ is a really serious one,” she said. Audrey and Aidan, both physics majors, were especially interested in nuclear risk. Isaiah, a sociology major, said that he valued thinking about problems over the long term, on both a personal and a societal level. Mikko, a graduate student in sociology, had two relatives who worked in the nuclear field, which made him feel close to the topic; he was also invested in how the course related to sustainability. (Later, he told me that his own work was on a very different topic: it was about “shitty food porn” and the online communities in which people post photos of unappetizing food.)
The students were talkative, confident, buoyant, very much at ease, and clever. Isaiah, for example, pointed out that “doom” was a pre-modern fire-and-brimstone term, quite different from “risk,” which was tied to modern ideas of chance and probability. In various ways, the students declared the class to be a form of social therapy. Although most described themselves as “pretty pessimistic” or “not a fatalist but not an optimist,” they seemed, as a group, to intuitively inhabit, and occasionally switch, roles: the pragmatist, the persuadable, the expert. But Mikko, who had long hair and black-painted fingernails, and often wore a trenchcoat, was the designated class naysayer. He argued that the question “Are we doomed?” was unproductive, because it obscured a progressive future for climate change. He found it problematic that the A.I. conversation was driven by its makers rather than by the people most affected by the technology. “I’m a natural-born hater,” he said, acknowledging that his fellow-students sometimes looked at him as if he were wearing spurs on a shared life raft.
I was more than charmed by the students, I admit. Their temperaments were brighter than my own, their thoughts more surprising. It was a tiny, unrepresentative group, but they didn’t resemble “young people” as they are portrayed in popular culture. When I asked them whether concern about the environment or other risks was likely to affect their decisions about having families, they looked at me as if I were a pitiable doomer—no, not really.
Holz is the chair of the Science and Security Board of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock each year. The Bulletin was founded in 1945 by scientists in Chicago who had worked on the Manhattan Project and wanted to increase awareness, they wrote, of the “horrible effects of nuclear weapons and the consequences of using them.” (The first controlled and self-sustaining nuclear chain reaction—led by Fermi—had taken place beneath what was then the University of Chicago football field and is now a library.) The cover of the Bulletin’s first issue as a magazine was a clock set at seven minutes to midnight. The time was chosen, Holz explained, largely because it looked cool. But it was a powerful image; a ticking clock is a classic narrative device for a reason. “The farthest from midnight it ever went was seventeen minutes before midnight, at the end of the Cold War,” Holz said. A humble physical version of the clock—made of what looks to be cardboard and showing only a quarter of a clock face—is kept in a corner on the first floor of a building on the Chicago campus that houses the School of Public Policy and the Bulletin. Currently, the clock shows ninety seconds to midnight, the same as last year, and the closest to midnight it’s ever been.
Holz’s days often include listening to the detailed worries and assessments of non-agreeing experts who devote their lives to thinking about biothreats, nuclear risk, climate change, and perils from emerging technologies. It must, I imagine, feel like being pursued by a comically dogged black cloud. “It’s insane that one person can destroy civilization in thirty minutes, that that is the setup,” Holz said, in passing, while we were waiting for an elevator; no one can veto an American President who decides to launch a nuclear weapon. Yet, if you ask Holz anything about astrophysics, the sun returns. “Black holes are a beacon of hope and light,” he said, visibly pleased by the wordplay. (His papers have titles such as “How Black Holes Get Their Kicks: Gravitational Radiation Recoil Revisited” and “Shouts and Murmurs: Combining Individual Gravitational-Wave Sources with the Stochastic Background to Measure the History of Binary Black Hole Mergers.”) “Cosmology is a consolation, in part because it puts a positive valence on our smallness,” he explained. The universe is magnificent and more than immense, and we’re extremely minor and less than special—and then there are all those civilizations we keep not meeting. Somehow the vast, indifferent cosmos makes Holz feel more inspired to work to give humanity its best chance. “It’s the opposite of nihilism,” he said. “Because we’re not special, the onus is on us to make a difference.”
The students also had their own emotional weather systems. When I spoke to Lawton, a graduate student in international relations and a policy wonk, he said that he was “probably one of the most optimistic people here.” He wanted to work in government, and told me that he was counting on humanity’s desire to survive—that this desire, ultimately, would steer us from disaster. He also told me that he felt pretty different from the other students at Chicago, in part because he had attended a small college in Lakeland, Florida, and was working three part-time jobs, one of which was editing videos—work that, he pointed out lightheartedly, he would presumably soon lose to A.I. As a child, Lawton thought school was fantastic in every way; home was not a great place to be. He said that it was odd to have someone ask his opinion—he hated talking about himself and generally avoided it. When I asked him his age, he replied that he was born in 2000, the Year of the Dragon. I’m a Dragon, too, I told him. That reminded me that I was twice his age. I didn’t feel two Chinese Zodiac cycles older than him—but I did grow up thinking that the microwave was the end point toward which technology had been heading for all those years.
I was curious to learn the students’ first memories of the idea of an end-time. Mikko remembered as a kid seeing a trailer for a reality-TV show on the Discovery Channel, in which contestants battled for survival in faux post-apocalyptic environments. Isaiah recalled losing electricity during Hurricane Sandy. “I remember playing Monopoly by candlelight—at first, it was kind of novel, this lack of technology, but then it was just very depressing, so I think that was kind of when I had the sense that climate change can affect everyone,” he said. He went through a phase in middle school of being very interested in preppers and going deep into related Reddit threads. “Not much happened,” he said, smiling. “I didn’t have an allowance.”
At the start of the sixth week of class, Holz announced a linked film series that would screen at the Gene Siskel Film Center: “Godzilla,” “WarGames,” “Don’t Look Up,” “Contagion.” The visiting guest that week was Jacqueline Feke, a philosophy professor at the University of Waterloo. She guided students through the etymology of “utopia,” a word invented by the philosopher and statesman Thomas More, who was decapitated for treason. “Utopia” is the title of More’s book, from 1516, about an imagined idyllic place—speculative fiction, we might say today. More’s neologism suggested a place (from the Greek topos) that is nowhere (from the Greek ou, meaning “not”). The readings, which included E. M. Forster’s “The Machine Stops” and excerpts from Plato’s Republic, were less harrowing than those of other weeks, when students read chapters from “The Button: The New Nuclear Arms Race and Presidential Power from Truman to Trump,” by William J. Perry and Tom Z. Collina, and “The Uninhabitable Earth,” by David Wallace-Wells.
Imagining utopias, imagining dystopias—how do we get to a better place, or at least avoid getting to a much worse one? During the discussion, Mia, a graduate student in sociology who had experience in the corporate world, brought up “red teaming,” a practice common in tech and national security, in which you ask outsiders to expose your weaknesses—for example, by hacking into your security system. In this manner, red teaming functions like dystopian narratives do, allowing one to consider all the ways that things could go wrong.
But hiring people to hack into a system also lays out a road map for breaking into that system, another student argued. Thinking through how humans might go extinct, or how the world might be destroyed—wasn’t this unreasonably close to plotting human extinction?
“Yeah, it’s like ‘Don’t Create the Torment Nexus,’ ” someone called out, to laughter. This was a meme referring to the idea that if a person dreams up something meant to serve as a cautionary tale—for example, Frank Herbert’s small assassin drones that seek out their targets, from “Dune,” published in 1965—the real-life version will follow soon enough.
“Like, there’s a way that dystopian fiction is a blueprint—”
“It can be aspirational—”
“We’ll end up having a Terminator and a Skynet,” someone else said, in reference to the Arnold Schwarzenegger movies. The discussion was cheerfully derailing, with students interrupting one another.
“So are we thinking that we need to regulate dystopian fiction?” Holz asked sportively.
Evans pushed the logic: “Plato’s Republic says we can’t play music in minor keys because it’s too painful—do we want that?”
No, nobody wanted that, though the students had trouble articulating why.
“Maybe we need to stop teaching this class right now,” Evans proposed. The class laughed. “But we won’t.”
H. G. Wells, in his essay “The Extinction of Man,” writes of the possibility that human civilization might be devastated by “the migratory ants of Central Africa, against which no man can stand.” Wells chose an example that would be difficult to imagine, in part to point out the feebleness of human imagination. Although the term “existential risk” is often attributed to a 2002 paper by the philosopher Nick Bostrom, there is a long, unnamed tradition of thinking about the subject. Among the accomplishments of the sixteenth-century polymath Gerolamo Cardano is the concept that any series of events could have been different—that there was chance, there was probability. It was an intimation—in a time and place more comfortable with fate and God’s will—of how unlikely it was that we came to be, and how it’s not a given that we will continue to be. (Cardano’s mother supposedly tried to abort him; his three older siblings died of the plague.) A more modern formulation of this thinking can be found in the work of the astrophysicist J. Richard Gott, who argues that we can make predictions about how long something will last—be it the Berlin Wall or humanity—on the basis of the idea that we are almost certainly not in a special place in time. Assuming that we are in an “ordinary” place in the history of our species allows us to extrapolate how much longer we will last. Brandon Carter, another astrophysicist, made an analogous argument in the early eighties, using the number of people who have existed and will ever exist as the measure. These and similar lines of thought have come to fall under the umbrella of the Doomsday Argument. The Doomsday Argument is not about assessing any particular risk—it’s a colder calculation. But it also prompts the question of whether we can steer the ship a bit to the left of the oncoming iceberg. The biologist Rachel Carson’s 1962 book, “Silent Spring,” for example, can be said to grapple with that question.
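(Gott’s “delta-t” reasoning can be compressed into a short calculation. This sketch is an editorial gloss, not part of the course material: if the moment at which you happen to observe something is a random point within its total lifespan, then with ninety-five-per-cent confidence you are somewhere in the middle ninety-five per cent of that lifespan, which bounds the future by the past.

\[
\text{If } \frac{t_{\text{past}}}{T} \in [0.025,\, 0.975] \text{ with probability } 0.95, \text{ then } \]
\[
P\!\left(\frac{t_{\text{past}}}{39} \;\le\; t_{\text{future}} \;\le\; 39\, t_{\text{past}}\right) = 0.95,
\]

since \( t_{\text{future}} = T - t_{\text{past}} \). Gott famously applied the rule to the Berlin Wall, which he first saw in 1969, when it was eight years old; the formula gave it between roughly two and a half and three hundred and twelve more years, and it fell twenty years later.)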
Jerry Brown, the two-time governor of California and three-time Presidential candidate, was set to speak to the class on a winter afternoon. One student was eating mac and cheese and another was drinking iced tea from a plastic cup with a candy-cane-striped straw. Holz entered the classroom while on a phone call. Brown’s voice could be heard on the other end, asking if “this generation” would know who Daniel Ellsberg was, or would he need to explain? Holz said that the students would know.
When Brown’s face was projected onto the classroom screen, he was red-cheeked and leaning in to the camera. “I don’t see the class,” he said, his voice on speaker. “There’s no audio here.” One of the T.A.s adjusted something on a laptop. Then Brown got going. He had plenty to say. “You’re young. The odds of a nuclear encounter in your lifetime is high,” he told the students. “I don’t want to sugarcoat this.”
Brown, eighty-six years old, spoke with the energy of someone sixty years his junior who has somehow had conversations with Xi Jinping and is deeply knowledgeable about the trillions of dollars spent on military weapons globally. “We’re in a real pickle,” he said. He brought up Ellsberg, a longtime advocate of nuclear disarmament. Ellsberg, who died last June, thought that the most likely scenario leading to nuclear war was a launch happening by mistake, Brown said. There are numerous examples of close calls. In June, 1980, the NORAD missile-warning displays showed twenty-two hundred Soviet nuclear missiles en route to the United States. Zbigniew Brzezinski, Jimmy Carter’s national-security adviser, was alerted by a late-night phone call. Fighter planes had been sent out to search the skies, and launch keys for the U.S.’s ballistic missiles were removed from their safes. Brzezinski had only minutes to decide whether to advise a retaliatory strike. Then he received another phone call: it was a false alarm, a computer glitch—there were no incoming missiles. In 1983, a Soviet early-warning satellite system reported five incoming American missiles. Stanislav Petrov, who was on duty at the command center, convinced his superiors that it was most likely an error; if the Americans were attacking, they wouldn’t have launched so few missiles. In both instances, only a handful of people stood between nuclear holocaust and the status quo.
“A world can go on for thousands of years, and then all hell can break loose,” Brown observed. Nonlinear. He spoke of the Gazans, the Ukrainians, the Jews in Germany in the nineteen-thirties. He spoke of the Native Americans. It wasn’t just a matter of worst fears being realized—it was a matter of catastrophes that had not been foreseen. It was only luck, Brown said, that we had gone seventy-five years without another nuclear bomb being dropped in combat.
The conversation shifted to student questions. What about the nuclear-arms package that Congress had passed? Was there a way to talk about nuclear disarmament without quashing nuclear energy? What did Brown think about the idea that with existential risk there’s no trial and error? How can predictions be made if they aren’t based on events that have happened? The time passed quickly, and Holz asked Brown if he was up for five more minutes. “I’m up for as long as you want,” he said. “We’re talking about the end of the world.”
Nuclear destruction had also been the topic of the first class of the term, when Rachel Bronson, the C.E.O. and president of the Bulletin, was the guest lecturer. In that first class, more than half the students had listed climate change as their foremost concern. By the end of the course, nuclear threats had become more of a concern, and students were speaking about climate change as “a multiplier”—by increasing migration, inequality, and conflict, it could increase the risk of nuclear war.
Toby Ord, who has systematically ranked existential risks, believes that A.I. is the most perilous, assigning to it a one-in-ten chance of ending human potential or life in the next hundred years. (He describes his assessments as guided by “an accumulation of knowledge and judgment” and necessarily not precise.) To nuclear technology, he assigns a one-in-a-thousand chance, and to all risks combined a one-in-six chance. “Safeguarding humanity’s future is the defining challenge of our time,” he writes. Ord arrived at his concerns in an interesting way: as a philosopher of ethics, his focus was on our responsibility to the poorest and most vulnerable. He then extended the line of thinking: “I came to realize . . . that the people of the future may be even more powerless to protect themselves from the risks we impose.”
“I think about the Fermi paradox literally every day,” Olivia told me near the end of the course. “When you break down the notion that it’s not going to be aliens from other planets that will be the end of us, but instead potentially us, in our lack of responsibility . . .” But she wasn’t fearful or anxious. “I’d say I’m more interested in how we cope with existential threats than in the threats themselves.”
Finals week arrived. It’s like the world stops for finals, one student said, of the atmosphere on campus. Evans was doing downward dogs during a break in class; Holz was drinking a Coke. Both seemed discreetly tired, like parents nearing the end of their kids’ school sports tournament. The class had been a kind of high for everyone. And soon it would be over. The students had been working on their final projects; the assignment was to respond creatively to the themes of the class. In the 2021 course, a student wrote and illustrated a version of the children’s classic “Goodnight Moon” which was “adapted for doom.” (“Goodnight progress / And goodnight innovation / Goodnight conflict / Goodnight salvation.”) One group made a portfolio of homes offered for sale by Doomsday International Realty: a luxury nuclear bunker, a single-family home on the moon.
Lucy and three classmates were putting together syllabi that imagined what Are We Doomed? classes might look like at different points in time: the Enlightenment, the Industrial Revolution, and the year 2054. The majority of Lucy’s contributions had been to the Industrial Revolution syllabus. Alexander Graham Bell was the guest lecturer on technology and society, and the readings for his week included works by John Stuart Mill, various Luddites, and Thomas Carlyle. Lucy spoke of how Carlyle wrote with alarm, in “Signs of the Times,” about what had been lost to mechanization, the decline of church power, and how public opinion was becoming a kind of police force—observations that, she pointed out, are still relevant. Everything was going to hell, and always has been. A question that came up repeatedly in class discussions was whether our current moment is distinctively risky; most experts argue that it is.
Lawton was working with two friends on a doomsday video game, in which a player makes a series of decisions that move the world closer to or farther from nuclear destruction. “You have three advisers: a scientist, a military chief of staff, and a monocled campaign manager who is focussed entirely on getting you reëlected,” he said. After facing these decisions, each with difficult trade-offs, the player receives an update on how various dangers—nuclear war, climate change, A.I., biothreats—have advanced or receded. If your decisions lead to nuclear annihilation, the screen reads “The last humans cower in vaults and caves, knowing they are witnessing their own extinction.”
Mikko, too, had incorporated a game into his final project. Holz had asked the class to think about how effective the Doomsday Clock was in drawing attention to existential risk. Mikko and his project partner wanted to develop graphics that would better communicate the idea of climate change as a progressive existential threat. “We are already knee-deep, and it’s about mitigation and adaptation,” he said. He thought that the Doomsday Clock, while effective, had a nihilistic feel: even though the time on it can be changed in either direction, our human experience is of time ceaselessly moving forward, which makes nuclear Armageddon feel like a foregone conclusion. The game Snakes and Ladders was an inspiration for one of the graphics, which included a stylized ladder. “More rungs can be added to the ladder or removed from it,” Mikko said, explaining that this made it focussed on action. With climate, he feels that it is not only counterproductive “but also a kind of cowardice” to give up. We can never go back to what we had before, he said, but that was “a prelapsarian ideal about being pushed out of the Garden of Eden.” In his own way, the nay-saying Mikko sounded like what most of us would call an optimist.
I decided to rewatch “La Jetée,” by Chris Marker, a short film from 1962 that was on the syllabus for the week of “Pandemics & Other Biological Threats.” In “La Jetée,” the protagonist is part of a science experiment that requires time travel to the past. But he must also travel to the future, so that he can bring back technology to save the present from a disastrous world war, left mostly undetailed, that has already occurred. The protagonist prefers returning to the past, where he has—as one does in French films from the nineteen-sixties—become close with a beautiful woman whom, before the time-travel experiments, he had seen only once.
I remembered being perplexed and bored by the film when I watched it years ago. Isaiah had made it sound interesting again. “What was so compelling was that the main story wasn’t exactly whatever the disaster was, or what the future was like,” he said. The way the character was stuck in the past, even as the future kept proceeding without him, reminded Isaiah of the pandemic, of how he felt stuck in a “liminal state.” He remembered feeling as if he needed to be told, as happens to the character in the film, to go to the future.
The students were so much less daunted or flattened by reflecting on the future than I was—than most people I speak with are. I wondered, Do we have less equanimity because we know or feel something that the students don’t, or because we don’t know or feel something that they do?
Mikko described a change of sentiment that he had experienced in the final weeks of the course. “I was thinking about the nature of being doomed, on a personal level and on a societal level,” he said. Being doomed is connected to a lack of autonomy, he had decided: “You’re fated to a negative outcome—you’re on rails.” On a societal level, he said, he doesn’t think we’re doomed. But, on an individual level—the majority of people probably are doomed. “And that sucks.”
He said that the course had made him think about people throughout time who believed that their world would soon end. “The last week of discussion, I wrote about the cathedral-building problem,” he said. How could people who faced such uncertain lives build cathedrals, the construction of which could go on for lifetimes? “The argument I made was that the people who built cathedrals were people who believed in Revelations, who were sure they were doomed.” He digressed for a moment: “It’s astonishing how many end-of-the-world myths there are, almost as numerous as creation myths.” Then he returned to the cathedral builders, or maybe to himself. “It’s a weird feeling—to be certain that the world will end,” he said. “But also not certain about the specific hour or day of when it will happen. So you think, I may as well dedicate myself to something.” ♦