The Future of Earth

Is There a Best Way to Think About the Future of Earth?

The optimal way to think about the future may not be as long-term as you think.

Written by Becca Caddy
Dewey Saunders / Inverse / Getty Images

Émile Torres spends a lot of their time thinking about the end of the world as we know it — and how to avoid it. To some, this kind of theorizing and strategizing might seem like a grand exercise in anxiety spiraling or perhaps even hubris. But for Torres, studying the last of things, in their case, the last of humanity, is their life’s work. It is essential, they argue, to consider how we might end.

“Existential ethics,” they explain, “is my term for questions about whether our extinction would be right or wrong to bring about if it happened.”

This might not seem like the best way to think about the future of human life on Earth. In fact, it is pretty fatalistic on the surface. But take a beat and Torres’ philosophy of the future is less nihilistic than it might seem. Instead, it might offer a blueprint for a better way to think about the future that doesn’t neatly fall into the big competing thought camps of climate doom, human or machine-led annihilation, or more optimistic longtermism.

“My approach is to take the future seriously,” they explain to Inverse. But what that means is a little counterintuitive. When we think about the future, we tend to either think in the very short term (what’s for lunch?) or the very long term (the year 30,000 C.E.). Put in these terms, to secure a better future for humans, the long-term mindset is appealing: Prioritizing future generations means putting challenging issues, like climate change, artificial intelligence, and global inequality, ahead of immediate needs, like hunger or shelter. Maximizing human potential, as envisaged by prominent longtermists like Jeff Bezos and Elon Musk, is the driving force behind the push to put more humans in space and break technological boundaries. But Torres is more concerned with the indiscernible middle-to-long term.

And, they argue, spending more time thinking about the future in this way could help us live better here and now — and avoid catastrophe.

“I think a lot about human extinction.”

“Trying to anticipate the future is like driving on a winding road at night. You can see what’s in front of you, and things in the distance ultimately come into view as you move forward. But beyond that, you can’t know,” they say.

They worry this kind of long-term thinking overlooks present-day problems and could even be used to justify harmful actions if those actions might benefit future generations.

To understand the best way to think about what comes next, Inverse contributor and tech journalist Becca Caddy spoke to philosopher and eschatologist Émile Torres about the future and the inspiration for their upcoming book, Human Extinction: A History of the Science and Ethics of Annihilation, which is due out in July.

INVERSE: First up, what is eschatology? What do you spend your time thinking about?

Over the past 15 years, my work has focused on global catastrophic and existential risks. I try to understand them and devise strategies for minimizing them.

But I’m an eschatologist more than anything else, and eschatology literally translates as the study of last things, so I think a lot about human extinction.

Émile Torres is a philosopher and scientist who studies human extinction.

Courtesy of Émile Torres

Let’s talk about long-term thinking. You’ve written about your concerns with longtermism in the past. How do you feel about it now?

I’m very opposed to it. I think it’s a deeply problematic view and the philosophical foundations are pretty tenuous. I worry that if people take it seriously and believe in longtermism, it could be used to justify extreme actions, including violence, while giving wealthy people in the Global North yet another reason to ignore the plight of people in the Global South.

What do you believe are the core problems with long-term thinking?

The key idea behind longtermism is that there could be enormous amounts of value in the future and that whatever that value is, we ought to maximize it. So let’s say there’s one unit and two units of happiness — which I know is a weird way to talk about things, but that’s how the longtermists sometimes frame it — two units of happiness is twice as good, right?

So to maximize the total amount of value in the universe, you shouldn’t just focus on making the people who currently exist better off. Instead, if you increase the human population, you could also increase and further maximize the total amount of value.

Longtermism’s supporters include figures like Elon Musk.

Tyler Boye/WWD/Penske Media/Getty Images

So they’re very keen on us all making babies? Or does it go beyond that?

It’s all about creating the largest population possible. So we must go into space and build planet-sized computers to create virtual-reality worlds. You can cram more people, digital people, into these virtual-reality worlds than you could on exoplanets. In the longtermist view, we must do something like this.

It makes me think of the way many of the wealthiest people in the world seem desperate to recreate the futures imagined in problematic sci-fi novels. But beyond that, what’s the problem?

With this thinking, if something presents a blockade, or a risk to the creation of this future, you suddenly have a pretty good argument for why violence might be justified.

Many historical cases of utopian ideologies had the same structure of reasoning. The idea that utopia is just beyond the horizon, but you’re standing in my way. I believe the features of past utopian ideologies that made them so dangerous are right there at the core of longtermism.

This means I’m genuinely worried that there will be a true believer in longtermism who finds themselves in an apocalyptic moment, facing a hypothetical existential catastrophe that they believe is about to happen. An existential catastrophe or existential risk is basically what longtermists call any event that would prevent us from realizing this future value, this maximizing potential.

You could imagine somebody in a situation where a catastrophe is about to happen. In their eyes, they need to avoid that at all costs. Maybe that means violence. Maybe that even means genocide. I don’t think this is hyperbolic. History provides examples of exactly this sort of reasoning.

The terms “potential” and “value” often come up in longtermist thinking. But who decides what those words mean?

Many longtermists are hesitant to provide details about what fulfilling our long-term potential means. Obviously a big focus is on reducing existential risk. They see that as a priority for us as a species. Some people then suggest we enter a stage of reflection, where we sit around and consider what we want our fundamental values to be.

“I think what the future could be is just inscrutable to us.”

One of the big plans is to think about the big plans?

Yes. It’s a bizarre and implausible nonstarter. Some longtermists make it sound like we’ll just figure out these fundamental values.

Although some fundamental values are undecided, there are next steps longtermists do agree on, right? Many seem keen to colonize space.

Ultimately, space expansionism and transhumanism are at the core, and many longtermists are explicit about that. It’s ridiculous they don’t consider other conceptions of what our potential might be or involve.

Does happiness not factor in at all?

It’s all based on this capitalistic notion of going out, plundering, subjugating nature, extracting resources and maximizing, maximizing, maximizing.

The SpaceX Starship rocket is designed to ferry humans to Mars to set up long-term habitats there.

PATRICK T. FALLON/AFP/Getty Images

Do you think imagining the distant future is a pointless exercise?

I think what the future could be is just inscrutable to us. We have no idea what the world will look like in 1,000 years. Trying to anticipate the future is like driving on a winding road at night. You can see what’s in front of you, and things in the distance ultimately come into view as you move forward. But beyond that, you can’t know.

My approach is to take the future seriously. To understand that our ability to anticipate what the future will look like is highly limited. And that the track record for predicting the future could be better. There have been many comical and completely ridiculous mistakes.

We generally don’t seem all that great at predicting the future. Is there a best way to think about the future?

There needs to be more serious thought about the future. Short-term thinking is built into our institutions, like quarterly reports and election cycles, and these things make it difficult for us to look further ahead. So I do think we need to pivot more toward the future. But making bold claims about the world in a trillion years is ridiculous.

These are timescales our brains weren’t designed to comprehend.

“The long-term view I would advocate for is focused on a century or a millennium from now.”

I think longtermism has recently become a popular talking point because it gives people a framework to think about what’s coming next and their place and purpose in the future. I wonder if that’s appealing because the future seems frightening to so many right now. How should people think about their futures instead?

You don’t need to think about the future by casting your eyes on the very distant temporal horizon. You should care about the future and the long-term future of humanity and Earth, but don’t be a longtermist.

The long-term view I would advocate for is focused on a century or a millennium from now. A timescale that’s relevant for the planet, climate change, nuclear waste, and all sorts of issues that environmentalists have been discussing.

We should also question the fundamental commitments of longtermism, like maximizing value. There are all kinds of other potential responses to value that aren’t this kind of perfunctory maximization. Maybe things that are valuable should be cherished, preserved, loved, and cared for, rather than just maximized.

A participant in an April 2023 demonstration by the climate protection group Extinction Rebellion.

Paul Zinken/picture alliance/Getty Images

As someone who literally thinks about the end of the world, are you worried about the future?

I’m frightened about climate change and very concerned about AI, especially deepfakes and large language models. There’s enormous potential to propagate disinformation and misinformation.

But although I think there’s momentum pushing us toward futures that should inspire a degree of fear, they’re not inevitable.

That’s comforting. In what ways do you think things aren’t inevitably screwed up?

Part of the reason AGI (Artificial General Intelligence) is a goal of DeepMind, OpenAI, and other companies is that they think AGI might be the vehicle to utopia. That’s why the goal has been to develop AGI as soon as possible. But now that the rate of progress has accelerated, they’re backing off and thinking, Holy shit, I don’t know if we’re ready to develop these really advanced, powerful technologies.

There’s now pressure on OpenAI to slow things down. Although I doubt it will work, it’s not a completely hopeless situation. I feel like there’s a moral duty to do whatever you can, even if the situation looks bleak. I’m trying to do my part by raising awareness of some of these concerns, especially around AI, and encouraging people to protest however they can.

What makes you feel hopeful about the future? And what would you say to people who don’t feel hopeful about tomorrow?

I’m heartened by the fact that many smart, amazing young people are leading global movements to raise awareness about climate change and to pressure governments and political leaders to actually implement meaningful climate mitigation policies. I could imagine something similar happening with respect to AI.

The fact that these kids are so motivated and effective at organizing gives me hope. If I could say one thing to young people, it would be, “Thanks for your brilliant, inspiring activism.”
