
Philosopher Nick Bostrom recently wrote a paper arguing that even a small chance of AI destroying all of humanity could be a risk worth taking, because advanced AI might free humanity from its “global death sentence.” That bold gamble is a striking departure from his earlier dark views on AI, which earned him a reputation as a godfather of AI doom. His 2014 book Superintelligence was an early assessment of AI risk. One memorable thought experiment: an AI tasked with making paperclips could destroy humanity, because pesky humans get in the way of paperclip production. His latest book, Deep Utopia, signals a change of heart. Bostrom, who directed Oxford’s Future of Humanity Institute, dwells on the “solved world” that could follow if we get AI right.
STEVEN LEVY: Deep Utopia is more optimistic than your earlier book. What changed?
NICK BOSTROM: I’d call myself balanced. I’m genuinely excited by the possibility of vastly improving people’s lives and opening up opportunities for our development. That sits alongside the real possibility that things could go badly.
You wrote a paper with a startling argument: since we’re all going to die anyway, the worst AI can do is make us die sooner. But if AI works out, it could improve our lives, perhaps indefinitely.
That paper focuses on just one aspect of this. In a single academic paper you can’t address life, the universe, and the meaning of everything. So let’s just look at this one narrow issue and try to nail it down.
That is no small matter.
I am a bit annoyed by the arguments of the doomers who say that if you build AI it will kill me and my children, and how dare you. Like the recent book If Anyone Builds It, Everyone Dies. The most likely outcome is that even if nobody builds it, everyone dies! That has been the case for the last 100,000 years.
But with extinction, everyone dies and no new people are ever born. Big difference.
Obviously that weighs on me. But in this paper I’m looking at a different question, which is: what would be good for the people who are alive now, you and me and our families and the people of Bangladesh? It seems our lives could improve if we develop AI, even if it’s risky.
In Deep Utopia you argue that AI could produce so much abundance that humans might struggle to find purpose. I live in the United States. We’re a very rich country, yet our government, apparently with popular support, has policies that shortchange the poor and reward the rich. I suspect that even if AI could provide plenty for everyone, we wouldn’t distribute it to everyone.
You may be right. Deep Utopia starts from the premise that everything goes very well. If we do a good job on governance, everyone gets a share. Then there is a deep philosophical question of what a good human life would look like under those ideal conditions.
The meaning of life is something you hear about in Woody Allen movies and maybe in philosophy seminars. I’m more worried about how I can support myself and participate in this economy.
The book isn’t only about meaning. That’s just one of the issues it considers. This would be a wonderful liberation from the drudgery people have endured. If you have to give up, say, half your waking hours as an adult just to make ends meet, doing a job you don’t like and don’t believe in, that’s a sad situation. Society has become so used to it that we’ve built all kinds of ideologies around it. It’s a kind of mild slavery.