Channel / Source:
TEDx Talks
Published: 2013-03-26
Source: https://www.youtube.com/watch?v=P0Nf3TcMiHo
I want to talk to you today about existential risk. The first slide I will show depicts some of the greatest catastrophes of the last century, so the squeamish among you might want to look away. It shows the net effect of the two world wars, the Stalinist purges, the Holocaust, the Rwandan genocide, and the Spanish flu. I think the technical term is that they don't even show up: the total number of human lives lived on this planet hasn't really been much affected by even these worst disasters that we have experienced. If one wants to consider some event that would actually show up in a graph like this, we have to go back further, say to the Middle Ages, where something like the Black Death would have made a dent in this kind of population growth.
But even that kind of catastrophe is not what I want to talk to you about today. Existential risk is something different.

Derek Parfit, a philosopher here at Oxford who wrote the book Reasons and Persons back in 1984, came up with a simple thought experiment that helps bring out what is at stake. He asked us to consider three different scenarios. One, A, is that nothing happens: there is peace, and things continue as normal. Another, B, is that there is a nuclear war that kills ninety-nine percent of the world's existing population. The third scenario, C, is that there is a nuclear war that kills everybody. Now, if we are asked which one of these we would prefer, obviously we would prefer A. If we had to choose between B and C, we would say B is also horrible, but C is even worse. So the rank order here is pretty clear: A is better than B, which is better than C.

But then Parfit asked us to consider a different question: how big is the difference between these scenarios? If we ask how big the differences are in terms of the number of people killed, it is clear that the difference between C and B is much smaller than the difference between B and A. The difference between A and B is, in today's terms, almost seven billion people, whereas the difference between B and C is just one hundredth of that, some seventy million people. However, a different question is more relevant to our decision making: how big is the difference in the badness of these different scenarios?
Here the order is reversed. Parfit argues that the difference between how bad C is and how bad B is is far greater than the difference between how bad B is and how bad A is, because if C comes to pass, a hundred percent of everybody dies, and it is not just that a massive number of people are killed; it is also that the entire future is lost. With B, by contrast, humanity might eventually climb back up, and you might have as many people living in the future as would have lived anyway. This goes to the heart of why I think existential risk is a particularly important and relevant category to consider.
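In numbers, as a rough check using the talk's own figure of roughly seven billion people alive today:

\[
A - B \approx 0.99 \times 7\times10^{9} \approx 6.9\times10^{9}\ \text{lives},
\qquad
B - C \approx 0.01 \times 7\times10^{9} \approx 7\times10^{7}\ \text{lives},
\]

so the step from B to C involves only about one hundredth as many deaths as the step from A to B, yet on Parfit's view it is by far the worse step.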
Here is another way to bring it out. We can consider different types of catastrophe and draw two axes. On the Y axis is the catastrophe's scope: how many people are affected. It could range from a personal catastrophe, something that affects one person, up to a local, a global, or a trans-generational catastrophe, something that affects not just the currently existing people but all generations to come. On the other axis we can plot severity: how badly affected is each affected person. So in the lower left corner here we might have an imperceptible personal risk, the loss of one hair; it is a very small harm, and I have suffered a lot of those harms in recent years. As we go towards the right and up in the diagram, we get increasingly severe catastrophes. We can delineate roughly a class of global catastrophic risks, which are ones that are at least global in scope and at least endurable in intensity. And up there in the upper right corner we have the special category of existential risks. An existential risk is one that would have crushing severity, which means death, or something in the ballpark of being as bad as death, something that radically destroys the potential for a good life, like maybe severe permanent brain injury or lifetime imprisonment, that kind of thing; and that is trans-generational in scope, that is, affecting all generations to come. So we can define an existential risk as one that threatens the premature extinction of Earth-originating intelligent life, or the permanent and drastic destruction of its potential for desirable future development.
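A rough sketch of the grid being described, filling in only the features the talk names:

  Scope (Y axis):    personal -> local -> global -> trans-generational (all generations to come)
  Severity (X axis): imperceptible (e.g. the loss of one hair) -> endurable -> crushing (death, or comparably bad)

  Global catastrophic risks: at least global in scope and at least endurable in severity.
  Existential risks (the upper right corner): trans-generational in scope and crushing in severity.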
Let us consider the values that are at stake when we discuss this kind of risk. It is possible that the Earth might remain habitable for at least another billion years. Suppose that one billion people could live sustainably on this planet for that period of time, and that a normal human life is a hundred years. That means that ten to the power of sixteen human lives of normal duration could be lived on this planet if we avoid existential catastrophe. This has the implication that the expected value (that is, the value multiplied by the probability) of reducing existential risk by a mere one millionth of one percentage point, a reduction so small as to be barely noticeable, is at least a hundred times the value of a million human lives. So if you are thinking about how to actually do some good in the world, there are many things you can do: you can try to cure cancer, or dig wells in Africa. But if you could reduce existential risk by a mere one millionth of one percentage point, then arguably, on this line of reasoning, that is worth more than a hundred times the value of saving a million human lives.
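The arithmetic behind that claim, using the assumptions just stated (one billion people sustained for one billion years, at a hundred years per life, with one millionth of one percentage point being \(10^{-6}\times10^{-2}=10^{-8}\)):

\[
\frac{10^{9}\ \text{people} \times 10^{9}\ \text{years}}{10^{2}\ \text{years per life}} = 10^{16}\ \text{lives},
\qquad
10^{-8} \times 10^{16}\ \text{lives} = 10^{8} = 100 \times 10^{6}\ \text{lives in expectation},
\]

which is where the figure of a hundred times the value of a million human lives comes from.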
So this is my point. Now, the values can get even bigger if we really manage to survive for a very long time and develop more advanced technologies. Maybe our descendants will one day colonize the galaxy and beyond; maybe they can find different ways of implementing minds, and so forth. If one runs some calculations there, using less conservative assumptions, a much, much larger number of possible lives could result in our future if everything goes well, which would make the expected value of reducing existential risk vastly greater still. This suggests that one might simplify action that is motivated by altruistic concern. If you really want to make the world better, insofar as you aim to do that, you can simplify your decision problem by adopting this maxipok rule, which is to maximize the probability of an okay outcome, where an okay outcome is any that avoids an existential catastrophe. Because all the other good effects, like helping people here and now, even where they do not affect existential risk, will have an expected value that is trivial compared to even the slightest reduction in existential risk.
A final point is that this is not reflected in the current priorities of academic research, where we see that human extinction is a rather neglected area. There is more research on zinc oxalate than there is on human extinction, more on snowboarding than there is on zinc oxalate, and much more on the dung beetle than there is on all of the others combined. I think we sometimes have this phenomenon where a topic is too big, too important and too enormous, for it really to fall within the microscopic lens of academic research. Perhaps there might be other explanations as well, but there seems to be a misallocation of attention: attention is not always directed to what is most deserving of it, to what is most important.
Now, maybe this could be defensible if the probability were so negligible that, even though the values at stake would be enormous, it simply could not happen and there would be no reason to worry about it. But that just does not seem to be sound. It is difficult, or impossible, to rigorously assign a particular probability to the net level of existential risk this century, but the people who have looked at the question, who have written books or examined aspects of this, typically assign a substantial probability. We had a conference here in Oxford a couple of years ago where we brought together experts in different risk areas from around the world, and at the end of it we made an informal poll. The median answer to the question of how likely it is that humanity will be extinct by the end of the century, among this group of experts, was nineteen percent. That is roughly in line with what other people have said and written about this. Now, it might be more, it might be much less, but either way it seems that we do not have any solid evidence that would enable us to assign, say, less than a one percent chance of this happening in the next century.
And of course, if we consider longer time scales, the probability increases. What we currently take to be the normal human condition is really, arguably, an anomalous condition. In space, life exists only in this very rare crumb; most of the universe is just vacuum, inhospitable to life. And in time, the modern human condition is a very unusual one on geological, evolutionary, even historical time scales. The longer the period in the future we consider, the greater the chance that humanity will break out of this human condition, either downwards, by going extinct, or upwards, by perhaps developing into some kind of post-human condition.

Now, what are the major existential risks? Well, we have only limited time here, so we cannot go into all the details,
but there are different ways in which one could classify or carve up the spectrum of existential risks. Here is one way. Notice that human extinction is one kind of existential risk, but it is not the only one. Remember that an existential risk was defined as one that threatens to destroy our entire future, including our potential for desirable development. So another type of existential risk would be permanent stagnation. Another would be flawed realization, where we do develop all of the technological capabilities that we could develop, but then fail to use them for any worthwhile purpose. And you can also consider a fourth category, where we initially develop all the technologies and initially use them for good, but then something goes wrong. So it is worth bearing in mind that, in addition to extinction, there are these other ways in which we could permanently lock ourselves into some radically suboptimal state, and that might be in the ballpark of being as bad as extinction.

There are other ways in which you can carve up the spectrum of existential risks. You could begin to look at particular risks, maybe particular risks from technology. You could say, well, what about bioengineered weapons, what about nanotechnology, what about artificial intelligence?
That can be informative for certain purposes, but I just want to highlight one type of risk here, in the interest of saving time. Consider this model of how humanity behaves: we have a big urn of possible ideas, possible inventions, possible discoveries, and we put our hand into this urn by doing research and experimenting and being creative, and we pull out the ideas and try new things in the world. So far we have made many discoveries, we have invented many technologies, and none has killed us yet. Most of them seem to have been pretty good, like the white balls here, and some have been mixed; nuclear weapons technology, for example, has perhaps been a dark shade of grey. But so far we have never extracted from this urn a black ball: an invention such that it would, for example, make it possible for an individual to destroy humanity. Consider nuclear weapons, which are quite destructive but really hard to make: you have to have highly enriched uranium or plutonium, which are very difficult resources to get, and big industrial facilities to make them, so they are hard. But before we had discovered nuclear weapons, how could you have been sure that there was not a simpler way, something like baking sand in a microwave oven, that could make some destructive capability available? Obviously nuclear weapons do not work like that, but if we keep making these inventions, maybe eventually we will stumble on one of these black balls: a discovery that makes it easy to wield enormous destructive power, even for individuals with few resources. And once we have made a discovery, we do not currently have the ability to un-discover it; we have no way of putting the ball back into the urn. So one class of existential risk is of this type: because we have very weak global coordination, and because we cannot un-invent things we have invented, if we keep pulling balls out of the urn, maybe eventually we will be unlucky and discover something really destructive.
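To put that last step in a toy formula: if each new draw from the urn had some small, fixed, independent probability \(p\) of being a black ball (a simplifying assumption; real discoveries are neither independent nor equally risky), then after \(n\) draws

\[
P(\text{at least one black ball}) \;=\; 1 - (1-p)^{n} \;\longrightarrow\; 1 \quad \text{as } n \to \infty, \ \text{for any } p > 0,
\]

so even a very small per-draw risk accumulates toward near-certainty if we keep drawing indefinitely and cannot put balls back.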
So what I would suggest is that, rather than thinking of sustainability as an ideal that involves some kind of stasis, that is, rather than aiming towards some condition which is stable in the sense that we could then remain in that condition for a very long time, we should perhaps think instead of a dynamic notion of sustainability, where the goal is to get onto a trajectory that is sustainable: a trajectory along which we can continue to travel for a very long time, or indefinitely.

To use a metaphor, consider a rocket that has been launched and is in mid-air. Now suppose we want to make this rocket more sustainable. What could we do? Well, one thing we could do is reduce the fuel consumption of the rocket so that it goes slower, and in that case it could hover in the air for a bit longer, but in the end it is going to crash down. The other thing we could do is keep the engines roaring and maybe try to achieve escape velocity, and once we are out in space, the rocket can go on indefinitely. In this second strategy you would actually temporarily decrease sustainability, you would burn fuel at a faster rate, but in order to get onto a more sustainable trajectory. And it might be that humanity needs to think similarly, in terms of some trajectory that might involve, at some point, taking more risks in the short term in order to reduce risk in the long term.
This graph here suggests that one might think in terms of three different axes. We have, say, technology on one axis; we want more of that, ultimately. Insight is on another axis; we also want more of that. And coordination: we want to be able to collaborate and cooperate better. Ultimately we want to have perhaps the maximum of all of these; that is the way to realize humanity's potential in the long run. But that still leaves open the question of whether, in the short term, it is always better to have more technology, or more coordination, or more insight. Maybe we need to get more of one before we get more of the other. Maybe we need to have a certain level of global coordination before we invent things like really powerful new weapons technologies, or dangerous discoveries in synthetic biology or in nanotechnology, or something like that. So you might now wonder, what can we actually do to reduce existential risk? Well, that is of course a topic for another day, but I think that even getting to the point where we start to seriously ask ourselves that question is an excellent way to…
