Transcript
OCNH3KZmby4 • "Life Will Get Weird The Next 3 Years!" - Future of AI, Humanity & Utopia vs Dystopia | Nick Bostrom
If at one extreme you had an AI that was exactly functionally identical to a human, that lived for 80 years, that had a human-like body, human-like memories, an artificial brain structured very much like a biological brain, I think in that case there would be a very strong moral case that we should treat it as a moral subject as well, that it would be wrong to mistreat it and be cruel to it, etc.

Nick Bostrom, welcome to the show.

Happy to be here.

All right. Written language gave rise to nation states, because they could track things like laws and taxes. The printing press gave rise to religious persecution and wars. The internet gave rise to decentralized media and the age of conspiracy. What will AI give rise to?

I think there are several possibilities there. One is that the future is just shaped by and dominated by AI minds that have ultimately disconnected themselves from their human origination, in roughly the same way that we've disconnected ourselves from, I don't know, the great apes, or the Neanderthals. But if we imagine a kind of human society with these AI tools, there are certainly dynamics that could increase centralization and make centralization more extreme. Right now, if you have a totalitarian system, a dictator, the dictator can't rule on his own. Even if you're a dictator, you still need the buy-in of some fraction of the population, at minimum the security forces, the military, some key families maybe. So maybe you need 10% support or something like that to rule. But with automation of police forces and military forces, you could imagine an even tighter concentration of power, and better abilities to surveil what is going on out in the land, to keep track of what everybody's opinion is about the ruler, their
sort of political sentiments. So that could enable increasing levels of centralization of power. That's one possible dynamic. Another is a sort of AI amplification of the current dynamics in our memetic environment, that they just become more powerful: that we develop hyper-stimuli that hijack our minds, as it were, super memes, or virtual reality worlds that are so compelling that people kind of check out of real reality to spend all their time there. We kind of already do to a significant extent, with television, with people spending hours in front of their social media feeds, and this could be kicked to the next level if you had a higher level of technology doing it. So those would be some of the negative dynamics that one could worry about.

So you wrote a book called Deep Utopia. It's a very interesting exploration of what happens if AI goes right. When I encounter the ideas, though, I do start to ask whether I would want to live in that kind of utopia, and whether utopia would be a positive thing at all. Can you define utopia before we get to deep utopia, so we can contrast that with this idea of a deep utopia?

Yeah. Generally in utopian literature, it's usually people coming up with a blueprint for the ideal society. They have some vision of what a perfectly fair society would be like, where everybody has enough and everything is nice. And this kind of utopian literature has fallen a little bit into disrepute, partly for good reasons, in that oftentimes people with these social visions, if they actually gained the power to implement them, have created a trail of havoc and misery. And so people became a little bit skeptical, in the second half of the last century in particular, after the Soviet experiment and the Nazi experiment and some other regimes that ran with crazy ideologies and resulted in massive human
tragedies. People decided, well, now maybe this grand vision for society thing is actually really dangerous. So that's kind of what utopia traditionally means. Now, with Deep Utopia, I really just use the word because I am interested in the philosophical question of what a great human life would be like if you abstract away from a lot of the constraints that currently limit what we can do. So imagine you had super advanced technology: you have AIs and robots who can do all the work; you have super advanced biotechnology that gives us unprecedented control over our bodies and minds and psychological states. And let's suppose also that somehow governance works really well, so we don't have wars and oppression. Just wave the magic wand and imagine a really great society. All right, so then, under those conditions, what would a really great human life look like? And that actually brings us in, and I'm glad you started thinking about this question, would I actually want to live in this kind of world? Because once you think about it, a lot of what we base our sense of dignity and worth on, and fill our lives with, is there because our efforts are currently needed. So you might pride yourself on being a breadwinner, or on making a positive contribution to the world at large, or maybe just within your family: you're a valued person who contributes something. And to the extent that we define ourselves by our ability to make some instrumentally useful contribution, then in this world where AIs can do everything better than we can, that is a kind of threat to our sense of self-worth. We would certainly need to rethink a lot of the fundamentals of our values if we're moving closer to this world of human redundancy.

Do you think those values are malleable, or are they an echo of what I'll call an evolutionary algorithm
running in our brains? Our values are what they are. But I think you can maybe distinguish superficial values from the deeper values underneath that justify or underpin them. And so my hope is that although a lot of the superficial values we have we'll maybe need to give up, because they no longer make sense in this kind of solved world, there are deeper values that could be more fully realized than is possible today if we remove some of these constraints. So you can go through a bunch of different candidate values that people might have and see whether they would be instantiable in this kind of solved world. Take the simplest value first: purely hedonistic subjective well-being, pleasure, let's say, having fun in the sense of being in a certain subjective state. That would be trivially easy to realize in a solved world, in that you would have very advanced neurotechnologies. If all you really wanted was pleasure, you'd have a super drug without side effects that could give you as much pleasure as you want, or maybe more direct ways of interfacing with the human brain. So you could check that box: if what matters is actually enjoying life and feeling good about it, that's a checkmark right off the bat. You can then go through a bunch of others. Where it starts to get more problematic is when we get to values like meaning, purposefulness, significance. If all problems can be better solved by machine, then what would give us purpose in our lives, if there's nothing we need to do? So to the extent that you think your life goes better if it has meaning or purpose in it, then maybe to that extent it would actually be a worse life in this solved world, because you wouldn't be useful. Now, here you have to distinguish between the subjective and the objective sense of, say, purpose. The subjective sense is feeling imbued with a sense of motivation and drive, and there
is something you're striving towards that you really want and that energizes you. That, again, would be trivial if you have these very advanced neurotechnologies. But some people think there is an additional element, what you might call objective purpose: not just that there is something you feel you want to do, but that there is something that actually needs doing. And it is a lot less clear whether you could have that in a solved world, because (a) most problems would be solved, if it's really a utopia, and (b) the problems that remain would be better solved by machines than by you, in a technologically mature society.

All of my anxiety around AI hinges on one idea that I believe there's no way for humans to get around unless we rewire our biology, and that is: it isn't about the pursuit of happiness; happiness derives from pursuit. I think, from an evolutionary standpoint, we have been designed over God knows how many hundreds of thousands of years, over generations, to have to do very hard things in order to survive. And evolution only has pleasure and pain to get us to do those things. So I think that when you work hard in pursuit of something that's valuable not only to you but to other people, and you feel like you're about to be successful, even before you're actually successful, that moment is the greatest thing that life has to offer. And I think that when people are in pursuit of something, and it gives them the meaning that you were talking about, man, that's when it feels good. And the second you're either not working hard, you're just being given things, or you are working hard but it doesn't matter, because everybody already has everything that they need and there is literally nothing that you could contribute that would make the group better off, then you ask a fundamentally corrosive existential question, which is: why do I matter? Why exist at all? And I think if we end up with a social structure that
leaves people asking that question, we are really in trouble. Do you see a flaw in that fundamental base assumption?

I think there is some truth to that as a psychological observation about our current minds, that some of the malaise in modern society might come from the absence of certain kinds of survival pressures, or opportunities, that were always there in our evolutionary past. Just as obesity is probably a function of refrigerators and plentiful food and fast food, it wasn't really a problem when you were a hunter-gatherer. So there is a kind of mismatch between our physiology and our current circumstances, and I think that's also true mentally to some extent, although it is amazing just how adaptive humans have been, that we can still thrive as much as we can in environments so different from the ones we evolved in. But I think a key question here is whether we value this purposeful striving you describe because it creates mental health and good feelings, or whether we value it for its own sake. Right now it's not important to differentiate these, because the only way you can get the mental health and well-being is by actually doing these hard things that you describe and then feeling the satisfaction. But in this hypothetical that I explore in Deep Utopia, these two elements could come apart. You would have the perfect drug that could induce exactly the same sense of satisfaction and fulfillment and, you know, energized relaxedness, or whatever it is that hard effort produces, but without actually having to make any effort. Another way to get at it is to ask to what extent you think an artificial purpose would be a good enough substitute for the natural thing. So real purpose might be: you're being chased by a tiger and you really have to run as fast as you can, otherwise you get eaten, and there are very real stakes there,
right. It's not something you just randomly make up, that you happen to want to run away from the tiger. Once the tiger is there chasing you, you know what you have to do. It's a given, and it's a very real purpose. But contrast that to somebody who is playing a game. Maybe they really want to win, but in some sense the game itself is an artificial purpose. If you're playing golf, there's no reason why the ball has to go into this sequence of holes, other than that we just decided, let's try to do this. We make up this random goal, and then once you accept the goal, you have the purpose of trying hard to achieve it, and people can work, you know, decades to try to perfect their golf game and find a lot of satisfaction. But in some sense the whole thing is kind of made up. It's artificial, arbitrary.

It is. However, run the thought experiment: imagine if you could play golf but no one would ever know about it. No one would ever see, no one would ever know that you were better than they are. Golf, I think, is a proxy for social status: you are trying to rise within a hierarchy. And so the question becomes, well, there are really two things here that we have to put on the table, because there's a huge hurdle before what I'm about to get to. But let's say that everything's taken care of. Utopia is here, as you defined it earlier. We don't want for anything; everything's equal. Now, will people be interested enough in status games that that will be fun? Or will they be like, but this is all just a status game, and so there's a deep emptiness? I think we have clues right now that answer that question. Right now you have video games that are unbelievable. As somebody that discovered Minecraft in my 40s, I will just tell you, that game is unbelievable. I cannot believe that kids get to grow up in a world where that game just exists. But nonetheless, if I'm playing it and it doesn't feel like it's going anywhere other
than I'm playing it, it does have an emptiness, which I think is a small part, this is a huge problem, but a small part, of the sense of male meaninglessness that certainly is sweeping across the West. Okay, so that's the question I'm putting forth. I think we have enough of an inkling to say it probably doesn't pan out as well as we would want it to. But I think there's probably a more important question that has to be asked before we even get to that, which is: humans with their current value set, their current brain wiring, are they going to accept, or radically push back against, AI when they see that it will lead us to a world where they are irrelevant by today's standards?

If I had to guess, I think it's more likely we'd be seduced by it. And the displacement, in part, is economic: people losing their jobs, maybe, or downward pressure on wages as automation advances. Interestingly, in this case it might initially hit certain kinds of white-collar work. Traditionally, automation mostly affected lower-skilled workers, but the current language model technology seems to hit right at mid-level white-collar work, people who summarize documents for a living and things like that. Interestingly, too, if AI succeeds at automating a wide range of jobs, there will also be massive economic growth, which to some extent might offset the economic impact of unemployment. If you have a booming economy, there are more tax revenues, there's more demand in other sectors that haven't yet been automated, people can spend more money on, you know, hiring gardeners or nurses for their grandparents or whatnot. But it still leaves this question about meaning and purpose and social status. I'm wondering about social companions in this context, like AI social companion bots.
This might become another kind of...

Is that a very nice way of saying sex bot?

Well, it would encompass that, but it could also just be friends, and fans, and all the different elements of social interaction, maybe even some sort of fake status. Like you were saying, if people want to feel that they are high status in the real world, maybe they aren't, and it's just a frustrating experience. Rather than work for years and decades to get one notch up on the status hierarchy, by, I don't know, stressing yourself out in the gym to get a slightly better body, or educating yourself, all these achievements that are hard and take a long time, imagine if you could instead tap into a virtual world where you have perfectly realistic virtual characters, and where you are playing, you're the king or something like that, and you have these admiring digital characters. If that becomes good enough, it would be extremely compelling to people, for the same reason drugs are really compelling, and often in cases where your real life is the less attractive alternative to some opiate or something. So I think this whole AI social companion technology will advance very rapidly over the next couple of years.

Do you see that being like online dating, where at first people are weird about it and then it just becomes the norm?

Yeah, I wonder. On the one hand, there is something slightly dystopian-seeming about it, if you imagine a world where more and more of our time is spent not with real people but interacting with these bots. On the other hand, it might be one of those generational things, where I'm the old fuddy-duddy grandpa who doesn't get it, and the people who grew up with this are like, yeah, of course, this AI bot is much wittier and
really pays attention, and these humans are kind of a drag, and we're all just using these AI bots now. Is that, if that happens, because this whole generation will have made a big error? Or is it just that they would have more familiarity with this, and on its merits chosen to spend their life that way rather than hanging out as much with their fellow humans? It's easy to have opinions about these things, but it's hard for those opinions to actually be grounded in some kind of objective truth, as opposed to merely reflecting your own personality or upbringing.

Agreed. I think there's certainly value in both. But one of the more interesting things about you and the way that you've approached these problems is the anthropic principle, and finding ways to at least ground things in probabilities. I'd like to talk about your probabilistic look at what AI does in the next three to five years, and then what AI does in the long run. Obviously we're talking probabilities here, but as you've used that approach to great effect, I'd love to see how you think through it using anthropic principles.

Well, I don't know about using anthropic principles, but I do feel AI timelines appear relatively short from this point on. I mean, we are really far along the path towards AI already. The things that are now possible, if you had asked people 20 years ago, I think many people would have assumed, well, if you can do these things, if you actually can have an AI that can have a conversation with you in ordinary language, and you can't even tell whether it's an AI or a human unless you're really an expert who knows exactly how to probe it, that seems like AGI. Computers that can write code at the level of maybe an entry-level programmer. We have this tendency, I think, with each advance in artificial intelligence, to move the goalposts and to immediately
discount and take for granted each new capability. When Deep Blue beat Garry Kasparov in chess, before that, people saw chess as this great game of the human mind, the most complex thing the human mind could do was to learn to play chess at a high level, really deep logic, and then after computers could do it, we said, ah, it's just a game of chess, there are simple rules. Then the same happened with Go, and then when AIs could look at pictures and actually visually understand what's in the picture, and now with natural language. I feel there are not that many of these steps left before you have AI that can do all kinds of AI research better than humans can, at which point you have, I think, an intelligence explosion, because then you have the AI research being done by digital minds at digital timescales. And then you get a very rapid feedback loop: with each subsequent improvement, the force that is doing the improving gets stronger, and you might then have some kind of singularity. Now, exactly how many years away that is is hard to tell, but I think we are no longer in a position where we can be confident that it couldn't happen even within some very short period of time, like a year or two. I'm not saying it will, but we are not in a position where we can be really sure that it won't. It might just be that somebody makes some other breakthrough at the level of the Transformer architecture, and applying that to the already really large models we have, maybe that will be enough to unlock a lot of latent potential. Or maybe there will need to be two or three more of these advances, or more scaling up of the size of the data centers. We just don't know exactly, but we are, I think, close enough that we can't be confident that it couldn't happen at any time.

What would you advise to somebody that is a junior in high school now? These are American terms, but junior in high school: now they've got to get really serious about where they're going to go to college, or whether they're going to go to college, what they're going to study. How can, if we are, and I heard you, this
is not a guarantee, but if we are potentially within a year of AGI, how can somebody even plan for the future? It just seems like such a big question mark.

Yeah, though to be clear, it could also be 10 years or 15 years. I'm always wary of giving general advice to everybody. I feel that's like giving advice on what's the best shoe size: what's good for one person isn't good for another. Some people maybe are too hard on themselves, and good advice to them might be to, you know, ease up a little bit, go easy on yourself. And for other people that might be exactly the wrong advice; they might actually need a stern message: you really need to pull yourself together here, you're just wasting your time, discipline. So the same message might be completely right for one person and wrong for another, depending on how they are currently going wrong. And similarly with career advice: it depends a lot on what your talents are and what your passions are in life.

More than looking for something specific, I'm looking for a guiding philosophical principle. I know that you used to run the Future of Humanity Institute, so I'm sure you've thought a lot about where we go and how we deal with it. Yes, we're not going to say you should be a dentist, but I'm guessing you have a framework that people would benefit from in terms of facing such a rapidly changing environment.

Yeah, I mean, it depends. There's a small fraction of people who might actually be looking to directly contribute as researchers or AI scientists and such; that's one avenue. I think in general it's probably useful to familiarize oneself with the current tools and the next generation, so that you know roughly where things are and what they can and cannot do, to be adaptable. But for other people, who are not really technically minded, it might be that going in
the opposite direction, really developing your skill with people, makes sense. I think there are enormous needs for various care professionals, say, with elder care. If we just had more resources, in theory every old person should have their own full-time personal aide; that would be great, some younger person who could live with them full-time and just help them up if they fall. We can't afford that, but in principle the need there is almost unlimited. I would also say: don't forget to actually enjoy life right now. I wouldn't plan on a 40-year career and make big sacrifices now, for 10 or 20 years, in the hope of it then paying off when you're in your 50s and 60s, because, you know, maybe the future doesn't exist at that point. I would maybe focus a little bit more on short-term strategies.

When you say the future may not exist, what do you mean?

Well, I mean several different things at the same time, actually. One thing I meant was that if this AI revolution happens within the next five or ten years, then these long-term investments in human capital that we might make now, with a payback time of 20 or 30 years, might not pay off, because by that time maybe human capital will have depreciated as a result of AIs supplanting us across the board. So that's one sense in which the future would not exist. There are other senses in which it might not exist as well, related to the simulation hypothesis, which we don't need to get into. But yeah, enjoy things now. Also with college education: if you would really enjoy your time at college, that's one thing, then maybe do it. But if it's just something you have to drag yourself through for the sake of getting a diploma, I would maybe seriously consider whether there are not ways to cut out those three or four years and get straight to what you want
to do. And similarly with PhD programs, which in the US can take five or six years. That's a long time, and I think in many cases it may be too long to be worth it these days, just because the rate of change is so accelerated and the timelines might be shorter. I mean, suppose you had the view that there was a 10% chance every year that the world would blow up and be destroyed. Then you wouldn't make 20-year investments, really; you'd focus on things that have a shorter payback time. So having a de facto higher interest rate, or hurdle rate, for your own long-term investments maybe would make sense in this picture. Now, I would hedge a little bit, because this could all be wrong, and if the AI thing doesn't happen, or if it's, you know, banned, or it stalls out, you don't want to end up completely dry either, where you have nothing, you're 30 years old, you lived for the day, planning that the AI revolution would happen, and somehow it fizzled out, there was a global ban, and now you're a 30-year-old with no skills, no job, no nothing. So, depending on what your social safety net is, you might want to hedge your bets a little bit there.

Now, I know that a lot of people have what they call a p(Doom) number: how likely you think we are to basically blow up the world, whether with AI or something else. As I'm listening to you, it raises the question: what is your p(Doom) number?

Yeah, I don't actually have a specific number, but maybe one way to think about it is to divide it up. There are ways in which things could go really badly, where we, you know, blow up the world, or end up in some dystopia. Then there are the more utopian scenarios, where everything clearly goes well: we cure a lot of diseases, wonderful prosperity. So each of those would have some probability, but I think
there's a third bucket in the middle, which is perhaps actually the most probable: that the world is such that even if we could actually see what would happen, even if you had a little binocular so you could look at the future and study it, you wouldn't really know whether to count it as a success or a failure. It would maybe be very different from the current condition, better in some ways, worse in some ways, strange. There would be some kind of minds doing stuff there; they're not exactly human minds, but they're sort of doing a little bit the same things. So do you count that as there being humans around, or are we all dead and just replaced, or did we sort of grow into this new life form? I think it's not obvious that the future would be such that, if we could see it, we would necessarily know even what to think of it. You can think of it in terms of an individual life. Right now we have children, say a four- or five-year-old, who eventually becomes a 25-year-old, and the 25-year-old is quite different from the five-year-old in many ways. Mentally they have different interests; they're no longer interested in the toy train; they are interested in their romantic partner, or their job prospects, or US politics, or whatever it is, the Roman Empire. So in many ways, what was there at age five is all gone, and yet we don't think it's bad for the child to grow up. In fact, most of us would probably think it would be something sad and unfortunate if a five-year-old never grew up to become a 25-year-old, if they remained at the level of a five-year-old. So I wonder if there is a similar thing where we now are basically children, in that none of us ever gets the chance to truly grow up, because we just biologically develop for 20 years and then stagnate, and then we sort of rot away and die after a few more decades. Just biologically, we can't live for 500 years
continuously growing and expanding and learning new things; we are kind of cut short. And maybe 80 years is just not enough to really, fully realize our inherent potential; we are kind of sapped by our rotting biology. So there might be different kinds of lives that would become possible if you could live for a million years and if you could gradually upgrade your capabilities. That might be really wonderful, but it would maybe change us as much as the five-year-old is changed when he or she grows into a 25-year-old, or more. What that perhaps suggests is that, especially when we're zooming out and thinking about these more radical scenarios, we should not really focus so much on comparing two states, the current state and some later state, but maybe think more in terms of trajectories leading out from the current state, and then evaluate how desirable those are. Maybe it's fine if ultimately we end up in a very different, weird, posthuman condition 10,000 years from now, but if we went there slowly, and we had a chance to grow into it properly, that kind of trajectory might, I think, be more attractive than one where we just remain humans and keep doing the human-like things for 500,000 more years, I don't know, five million more years. At what point is enough? At some point you'd want to maybe unlock the next level, right? It's like playing the same level of a computer game: at some point you need to move on. And maybe similarly, the kinds of values and lives that can be lived with our current human physiology are a limited set of all the possible values. Maybe we haven't yet exhausted it; we might want to spend some more time and go slowly through the level rather than just skip to the final level. That might be another mistake. But still, thinking in terms of a trajectory that eventually leads to greater forms of development, including ones that
ultimately take us out of the human role. Okay, you're playing with a lot of ideas here, and I want to start pinning some of them down. One is the idea of trajectories, and I think people today are going to care a lot about that. Through regulation, through what people end up pursuing as entrepreneurs, we're going to have a tremendous amount of influence over what gets developed and what doesn't. And so I think that's the big question of today: what trajectory do we want to see this go on? I'm very curious to hear your take on how much we can control the trajectory, and do you see an ideal trajectory? Most of the uncertainty about how AI pans out is uncertainty about how hard the challenge is that we will confront, rather than uncertainty about the degree to which we will get our act together and make a good effort. We don't know how hard this is. We've never had a machine intelligence transition before, right? We haven't studied a million other planets where some human-like species developed AI, where we could study the statistics. We're coming to this afresh; we have no idea whether it's relatively easy or fiendishly hard. Obviously we can at least nudge the odds in a better direction: if we really make a good effort, we work on this collaboratively, we are really smart about it, we study hard and are careful, then we can improve the odds a bit. But most of it is, I think, baked in. So in that sense I'm kind of fatalistic. You could say I'm a moderate fatalist: the "moderate" coming from the fact that we can still affect the odds at least a little bit on the margin by getting our act together, but "fatalist" in the sense that, for the most part, it's probably just baked into our situation and the technology itself. When you say it's baked in, what do you mean? That it's going to happen, the outcome for humanity? For example, whether we end up destroying ourselves, like AI
kills us all, or we achieve alignment and manage to align it to human values. I think some of those things might be baked in, in the current situation. Elon Musk has said that he thinks of AI as a demon-summoning circle and that we should be very careful about what we wish for. I'm hearing tones of that in what you're saying now, and he said his life got a lot better when he became more fatalistic about AI. What do you think about his take? Is his level of anxiety about AI warranted or not? Yeah, it seems warranted. He is also the founder of xAI, which is an AI startup, as well as Tesla, which has major AI operations, and one of the original investors and founders of OpenAI. So I guess his attitude is complex. I think he recognizes that there will be big dangers, but it doesn't necessarily follow from that that each person should unilaterally remove themselves from the race. Okay, so when you look at this, you have a similar "this is going to happen, this stuff is baked in" view. I see a world where we end up bifurcating as a species. I consider myself wildly technologically optimistic; I have a natural bent toward "somehow things will just work out." But I also look at what I can feel brewing in culture right now, which is a massive resurgence of religious fervor: people reconnecting to and refining faith; accounts that are focused on faith, on podcasts and YouTube, are starting to dramatically increase in popularity. And I think in many ways this is a response to a hyper-technological world where even just us humans are using technology a lot, whether it's Ozempic and losing weight, whether it's antidepression medication, whether it's AI. They see this influx of what I think many will read as antihuman things, and there's this desire to connect with something traditional and certainly divine. And I see that creating a bifurcation in society, and what I predict, the timeline
gets a little bit fuzzy, because it's all going to be predicated on the rapidity with which AI disrupts our normal life. So on whatever time scale that is, I think what you will see is a group of people spring up that I'll call Puritans, who will not want to use AI. They won't engage with art created by AI; they won't support companies that use AI to create their product or their marketing. And then other people will sign up for Neuralink when it becomes available, and they will literally augment themselves. They will use AI whenever and wherever humanly possible; they will fantasize about free energy and the utopia that AI is going to bring. And I think over time those groups will end up pulling apart, especially if AI helps some people augment themselves. You could be augmenting yourself directly, or you could be augmenting your children just through genetic selection, let alone gene editing. Do you think that's plausible, likely, delusional? Yeah, I think the debate is likely to become polarized, if we're talking about the public debate about what should be done about this. We're already seeing a little bit of it: on the one hand the doomers, right, and then the e/acc, the "go forward with maximal speed on everything" crew, dividing themselves up into two different tribes that can now start to hate on one another. And I think maybe broader segments of society will be recruited into this debate as the impacts start to be more widely felt. It's interesting to think about how the speed of development might impact the degree to which this happens, as I think there are actually three different regimes. So, extremely sudden and fast: a superintelligence is invented, like, next week, it just comes out of the blue. Then there won't be much more polarization than there currently is, because people didn't see it coming in time. And so now, I think
also, maybe if it's extremely slow and happens over many decades, then it might be kind of a boiling-the-frog phenomenon, where people are using this technology, and of course every little increment makes it better. If you're going to have a medical diagnosis bot, surely you want it to make slightly fewer errors rather than more errors, and so every little step along the way will just be better. If you have a self-driving car, you want it to be slightly smarter so it crashes less often. For every application, it's clear that more capable means better, and so if you just follow that long enough, you eventually end up with superintelligence, but at no point is there a clear jumping-off point, an alarm signal. Now, in the intermediate scenario, you have a kind of turbulence, where people feel dislocated because every other month there's a new thing: now a big sector of workers were laid off, and now there is this other thing that has created these propaganda bots that are running around, and then there are deepfakes, and then some big disaster happens because the AIs were running the power grid and it all malfunctioned, or drone swarms come in and kill a bunch of people in war. In that kind of world, you can imagine that turbulence creating a kind of increased resistance. Now, I think you were also asking not just about the conversation around this, but also whether different communities will form, sort of like the Amish, that decide to only use certain technologies, and whether many people will opt out of this AI technology. Very much so, yeah. I don't know. Unless you go really hardcore about it, like some of these communities are, where you don't even want cars and such, otherwise you're pretty much integrated into the modern economy. If you're using Google,
you're using AI, right? In the future every car will have AI; the electricity grid will be optimized with AI algorithms; all these different systems that you interact with. The doctor will probably use some AI bot: you have some weird mold, they'll take a picture of it and scan it, and some skin cancer diagnosis system will look at it and flag it. They will just be everywhere. So it might not be easy to opt out unless you're really willing to completely tear yourself out of the fabric of modern society. Do you see that happening? That seems self-evident to me, that it's going to happen. Well, the question is on what scale, right? I mean, there are people who live off the grid, or who are Amish and such, but they are still a small fraction of the world population, with a higher growth rate, because fertility rates are larger. So if you imagine rolling the tape forward hundreds of years, then eventually those groups would expand and others would kind of dwindle into nonexistence unless they changed their ways. But I'm just thinking the time scales for that kind of population dynamic to play out are multigenerational, whereas the technology is moving forward year by year, and so I'm thinking there will not be enough time for these slower processes, would be my guess, to really have a big impact. So here's how I see it, and maybe you can pull me back off the brink, which I would love. Ultimately, when humans feel either emotionally distressed or financially distressed, and usually the two are intertwined, they will go all the way to killing their fellow humans with absolutely no problem whatsoever. Take the French Revolution: things got bad enough economically, they just pulled people into the streets and started beheading them. I don't think we are fundamentally
different than that version of humanity. And I think if AI begins to disrupt enough jobs and creates enough turmoil, it's not like the Industrial Revolution, where, yeah, you had a generation that had it kind of rough because they weren't able to rapidly change, but there was just such an economic boom that the people who were winning from it far outweighed the people who were losing from it, and so it ended up being fine. I think what you're going to see is a disruption that happens so quickly, and touches the one thing that, if you break it, you're going to have a real problem, which is meaning and purpose. And the only hope we have, and this is ultra-dystopian, is that we have enough entertaining things that people are numbed to the fact that they're no longer climbing, that life isn't going to be better for them than it was for their parents, that they've lost their job or whatever, and so they drink, do drugs, watch online porn, play video games, and that just becomes a get-by existence, and they just sort of give up on it. That's the hopeful outcome. But I think the more likely outcome is that this becomes a political divide, where the battle ends up being drawn along the lines I was describing before: people who utterly reject it and just want to absolutely shut down AI, put it back in the bottle, and people who want to develop it. And just another terrifying twist: if AI comes out slowly enough that we see, let's say, China make a major advance, but not a big enough advance that we would automatically lose in a war, I could see a preemptive escalation of violence to shut it down, to make sure that either we hit parity with them or we're ahead, either by elevating ourselves or by tearing them down. It's very hard to predict these kinds of sociopolitical dynamics and cultural dynamics. We don't have the kind of scientific theory that can tell us how social sentiment will change over the course of five or ten years. When you
start to... I think in the past, a lot of revolutions were driven by hunger, a lack of food to eat. That, hopefully, would be relatively easy to supply, with some degree of political mobilization, especially in these rapid-growth scenarios; so you could have bread and circuses. The meaning-and-purpose issue might be harder to remedy, but then maybe the line is: well, let's be honest here. Most people's lives today, just how grandiosely purposeful are they, really? You go in, you make a paycheck, and then you spend the rest of your hours relaxing, or having fun, or playing with the kids. Most people are not really trying to change the world, or imagining that they are some historical figure bestriding humanity to shape its destiny. That's just not reality. And so if you didn't have to go in and work for eight hours every day, doing some pretty boring stuff that you don't really want to do, and you'd rather sleep in and have fun, would that really be such a tragedy? You got the same paycheck, let us say, but without doing these chores; that seems like a win, potentially. Hopefully a lot of the energy that people put into work could then be put instead into building up leisure activities, into clubs and hobby organizations that create activities for people who now have more free time. And so there would have to be this cultural reset; that seems like a maybe better outlet for the surplus time and energy than trying to tear everything down. It's interesting. So, have you read Brave New World? Mhm. What do you think about that? Because it feels like you have dueling dystopias. You've got, on the one hand, 1984: massive suppression. You could think of this as an AI tool that's watching you all the time; if you reread 1984 with the thought of AI doing the surveillance, this suddenly becomes super
real. So you have that version of dystopia, where even wrongthink gets you punished, and then you have over here the other version: just keep taking your drugs, feeling good, being blissed out all the time. Both read as dystopias. Did you take Brave New World to be dystopic? Is there something I'm missing in that interpretation? Yeah, I think it's missing some elements that, if they were added, would make the world a lot better. It was a long time since I read it, but I think there is, for example, no real romantic love in Brave New World, no appreciation for true art and beauty at the higher level, as opposed to easy distraction and shallow flimflam. So if you imagine a Brave New World-like scenario, but where people actually had a lot of free time that they spent being with people they loved, and cultivating hobbies, and appreciating great literature, and cultivating the art of conversation, maybe taking arts classes to deepen their appreciation of great art, all kinds of things, also less cerebral things, some people might be doing more sporty things, or being into nature, or whatever it is, but a society where people were focused on developing a high culture of living well, I think could be pretty utopic. The other thing with Brave New World that I think casts a kind of dystopian pall over it is the real stratification of their society, where people are destined from birth to be in a particular class. Most people in Brave New World have various degrees of engineered mental retardation; I think they add alcohol to the fetus to deliberately brain-damage certain people, so that they would then be suitable to work as elevator operators or in menial jobs. That obviously makes it pretty horrific. If instead you imagine a society where everybody were
allowed, even encouraged, to grow to their full potential and to be full participants in a kind of democratic polity, then I think that would also brighten up that kind of future. The main thing that might then be missing is certain forms of heroics: there would be no need for heroic self-sacrifice, you know, some great warrior who sacrifices himself for the sake of his comrades by rushing the enemy with a spear, that kind of thing. Okay, so there might be a certain value in that which would no longer make any sense in Brave New World, but we can probably have a pretty good life while reducing or redefining a few of those particular values. Really fast: I want to move on now to how this could actually be amazing. I think there are some interesting implications that some people will read as religious; there are certainly value-system implications. But before we do that, or as a ladder into that, walk me through what AI will actually do, boots on the ground, that could lead to a utopia where we're able to pursue more leisure, our passions, whatever. What is AI actually going to do? Well, it could start with a lot of work that could be automated. Instead of having some guys who have to drive the garbage truck around the city every morning to collect the garbage, you could have a self-driving garbage truck with an Optimus robot that hops off, picks up your garbage can, and does all of that automatically. But then you can go through job by job, and if you really have artificial general intelligence, AI with the same learning ability as a human mind, then basically every intellectual human job, every job that could be done over a video conference link, could be automated. And then, if you have robotics to go along with that, then also all the manual work could be
automated, with very few exceptions. For example, there might be jobs for athletes, if consumers just happen to prefer watching human runners compete over robot runners; maybe that's just a root preference, that people prefer to watch humans. Then there might still be jobs for humans in those areas. Or if people just want a human priest to officiate their wedding, even if the robot priest could intone the same words just as well, you might just care that a certain task be done by a human. Those would be carve-outs, but for the rest, you could automate. This would create a potentially enormous rate of economic growth, because you can just build more and more robots and run more and more instances of these AI minds. So everybody could have a bunch of these that would be on the lookout for their interests and helping, not just to clean up the home and cook their food, but AI bots scanning for really great movies you would watch, that know all about your preferences, that could check in on your health and make sure you take your medicines, or spend hours just researching your particular health conditions to optimize everything. So everybody could, in that sense, as step number one, live like extremely rich people in terms of their access to human labor, and the cost of products would go down if you take human labor out of the equation and it's all automated. But then beyond that, and more excitingly I think, you could actually move the technology envelope outwards by accelerating the pace of new research. For example, in medicine there are a lot of conditions now that, no matter how much money you have, you can't cure, because we don't have cures for all cancers or for heart disease. So there you could imagine accelerating, and having a thousand years of medical
research progress in just a couple of years, when you have these digital minds working on it; maybe unlocking cures to reverse the aging process, etc., and thereby forestalling a huge amount of human misery and death that is currently pretty unavoidable. And in other areas, like much better entertainment, where you could have AIs making movies and artworks and virtual-reality computer games, designing experiences, and just generally organizing things in a more delightful manner. And then beyond that, I think, there are also these ways of improving the human organism itself. Our own brain is ultimately limiting how good our lives can be, so you could imagine different forms of upgrading opportunities. Maybe step one is extending your healthy lifespan; I think that's perhaps an obvious one. It's just no fun to have terminal diseases and have your body decaying and failing, all the pain; you'd just make sure you remain healthy. And then on top of that, you could start to add boosts to your well-being, to your ability to understand, your musicality, your sense of humor, your ability to form deep emotional connections to other people. And then, yeah, having a trajectory where people can continue to grow and develop and achieve ever greater levels of flourishing, and freeing up space for more spiritual practices and aesthetic experiences, as opposed to the kind of low-quality, mundane, work-related experiences that currently occupy much of our waking hours. In a solved world, which I think you define as one in which all things that we know to be technologically possible have actually been done, do you think we'll be able to perfectly manipulate our nervous system, so we could see and feel and hear and taste, and basically completely manipulate our senses to orchestrate experiences? Yeah, I think so. If we still have our organic brains, there might be some limits to that, but I think you could get pretty close by combining perfect virtual reality, so you could have whatever sensory input you choose, sound and smell and vision and so on, with, imagine, some kind of super drugs that allow you to fine-tune each emotion precisely the way you want it, without side effects or addiction potential. That already, I think, gets you some way there. Then if you imagine other kinds of neurotechnology, like Neuralink-style interfaces, or other more invasive forms, gene editing of various brain circuits and such, I think you could get even further along that path to having fine-grained, real-time control over the content of your experiences. If you go to more radical scenarios, where maybe you upload into a computer and become a digital mind, then you might have even more fine-grained control, because
those data structures would be completely accessible; you could edit every little neuron in real time, potentially. But I think even with biology you could get pretty close to that with mature technology. Okay, so that gets into what I think, if that comes true, is where we inevitably end up when I put my utopian hat on. Are you familiar at all with the Japanese genre of storytelling known as isekai? Not really, no. It's kind of like Ready Player One: you wake up inside a video game, but you're actually in the video game, and so you can have relationships with the characters; if you get killed, you actually die, there's a whole thing about that. That feels to me, minus the "you actually die" part, like, if we could completely manipulate our nervous system, or upload ourselves, the ultimate expression of life, the maximally cool way to live, would be to go to Netflix and say: ah, today I want to be a master chef from 1800s France, go. And you then go live as that person for however much time you want. You could speed time up, you could slow your perception of time down, whatever, so you could play for an hour but it felt like you played for 50 years. You could do something like, I want to travel the cosmos, and now you don't actually have to violate the laws of physics to travel somewhere billions of light-years away; it would just be an imagined simulation, but you feel like you're there, you smell everything. And I don't know, with our current way of responding to things, I don't know that people could avoid that: you would end up with a universe that is tailored to you, that you may change frequently, or you may live in one for an extended period of time. But that feels, again, with the way we are now, because obviously we could change our own biology so fundamentally through AI that you can throw this out the window, but that feels like the end
game. Yeah, I wonder, though, whether it might not be possible to do even slightly better than that. I mean, you might say, if it's either that or the current world, that does seem pretty good, because the current world has a lot of... I mean, it's not just your baby smiling at you or a sunset, right? Most of the real world is pretty much a horror show, for many people, for many animals, and so even just getting rid of the negative would be a great deal. But if you have all of that, I think one of the values that seems possibly missing from the picture you painted there, again coming back to it, is purpose, or meaning, or significance. There is some sense, I think, when people consider that, yeah, this would be a lot of fantastic experiences, it would be a lot of fun, but it seems a little shallow, or a little arbitrary, or random, or atomistic. And so one might want to think about whether you could have a lot of what you just said, but with some constraints that create more of a structure around it, within which we also can have some forms of purpose. I think there are such constraints, particularly arising from our commitments, socially and culturally, to other people and to traditions. So think of a very simple form of purpose in a technologically mature world. Suppose that person A happens to really want that person B gets what they want. So A cares about B; they really want B to get what they want; they just happen to have this desire. And then B is now in a position to give A a purpose: if B wants that A should do something on their own steam, not just outsource it to some robot, but that A should put in their own effort to do a certain thing, now A has purpose, in the sense that the only way they can achieve their goal, that B gets what they want, is by A themselves actually putting in the effort, because nothing else would actually satisfy B's
preference. And so now A would actually have a real purpose: the only way they can achieve their goal is by putting in effort themselves; no matter how advanced the technology they have access to, there would be no other way of actually achieving these preferences. Put in this very reductionistic way it seems a little hokey, with person A wanting this and person B wanting that, but I think in more complex and subtle forms it could actually be a plausible framework of constraint for these future deep utopians, where they care about various traditions, various social norms and activities, in ways that give them reason for doing things on their own steam, even if technologically they would be able to outsource a bunch of stuff. For example, if you value upholding a tradition, there might be no other way of upholding it than by humans actually continuing to do various things themselves; setting up a population of robots that enacted these past ceremonies might not count as continuing the tradition. And so maybe there would be a more ceremonial and social aspect to these future utopian lives, which would still give a lot of room for this kind of unbridled fun, but maybe there would still be more serious contours around it that give it a little more shape and structure. That could also allow the utopians to realize various forms of purpose, including natural purpose, and not just the artificial purpose we see in game-playing, where you make up some random goal just for the sake of having a goal, but where they care about each other's opinions, and maybe, like what you were saying, wanting higher social status: if the thing that gets you status is doing things on your own steam, then people have to do things on their own steam if they actually want to achieve the social status. So, something like that, which on the
upside gives maybe some scope for human effort, but on the downside has this amazing safety net, where the AI makes sure that anybody who falls seriously sick is cured, and that everybody has all the food they need and all the material conditions for a great, comfortable existence; that everybody's brain has the opportunity to experience great levels of joy. Some people are just born depressed, and their whole life sucks just because they had too few neurotransmitter receptors in some area of their brain; their whole life is basically ruined, and you could fix that. So: lift up the floor, and then at the very top you could still have these little games that people play, perhaps to make small differentials. Through your own effort you could change your life; instead of being fantastically good, you could make it super-fantastically good. Something like that could be the span, the stakes, that we fight for. If we could create AI, whether in a robot or in a simulation, that was indistinguishable from a human, do you think people would care whether they're interacting with a human or with a simulated human? They'd probably start caring less about it as they get more experience with these kinds of interactions, would be my guess. I think one important question for me would be whether this robot is conscious: whether it's not just externally hard to distinguish from a human, but whether it has the same inner psychological life as a human has, in which case I think it would be a moral patient. It wouldn't just be a thing that you can throw in the trash if you no longer want it; it would be a being who has welfare interests of its own. It matters how life goes for it, not just whether it makes humans happy; it would be like a member of
this population that we would want to have a good future. In fact, and this is maybe a little bit of a tangent, the general ethics of digital minds is going to become increasingly important. Even with the AIs we currently have, we are very soon, if not already, at the point where we can't be very confident that they don't have some rudimentary form of moral status, or awareness, or subjective capacity for feeling pleasure or pain. So it will increasingly also become important how well the future goes for these AI minds themselves. If you're saying it's already hard for us to tell for sure one way or the other, what metric will we use to know whether this is something we can just turn on or off and make do what we want? How do we know when we cross a line into something that has moral rights? It's a difficult question. There needs to be more philosophical and scientific work on it, and I'm involved with some research groups trying to do that. But at least we can say something like this: if at one extreme you had an AI that was exactly functionally identical to a human, that lived for 80 years, that had a human-like body, human-like memories, and an artificial brain structured very much like a biological brain, then there would be a very strong moral case that we should treat it as a moral subject as well, that it would be wrong to mistreat it and be cruel to it, and so on. But I think something far short of that would also qualify, just as most of us would agree that at least some animals have various degrees of moral status: it's wrong to be cruel to a dog or to mistreat a chimpanzee. Even a medical researcher working on a mouse has to anesthetize it before performing surgery, because we think it is capable of experiencing suffering. So if even a mouse has
a plausible claim to sentience and at least some simple form of moral status, then AI systems that are roughly equivalent to a mouse in their behavioral repertoire would also be prima facie candidates for moral status. But it does get more complicated, because in some cases AIs can achieve similar behavioral output using very different internal mechanisms, and then you need to think more carefully about what exactly it is that we think matters morally. These things can come apart more with AIs. With humans and animals, you have a cluster of attributes that all come together: an animal has a brain, it can squeak, it has eyes, it has a body, a whole bunch of things. With AI you don't need all of those attributes together, so you need to think more carefully about which of them are actually the weight-carrying elements. Do you think it would be morally acceptable to create a simulation, let's say one in which the AI is conscious, where a young girl dies of leukemia? I think we should avoid doing that. But I don't really feel in a position to offer some overall verdict on, say, human history, with all its good things and bad things, and how to weigh that up. It feels more like a judgment where we are too limited to really even understand what is on the scales. I'm not entirely sure I understood that answer in the context of the question; what does it have to do with human history? Well, I was imagining that where you would go with that would be to say: if we look at all of human history, it has not just one girl dying from leukemia but a whole bunch of other horrible things as well. Yeah, there are two paths before us that I'm interested in. Path number one is where you're going, which is: does that
make you think any differently about whether this is a simulation? That's something we haven't talked a lot about, but you're obviously very famous for putting forward the simulation hypothesis: that what you're living in right now could be a simulation, and in fact mathematically probably is. But if it is, it seems pretty immoral by our own standards. I guess it comes down to the question of whether we can say that the suffering is intentional, and whether that's where the moral break is, or whether we just say a set of rules, call them physics, are put in place, and then what comes, comes, and maybe it couldn't be anticipated, and unfortunately suffering is one of the things that may or may not have been anticipated. We don't know, first of all, supposing we are in a simulation, what exactly the motives were for creating the simulation, or what the alternatives would have been. We know so little about these hypothetical simulators, their world, or the choices that face them, that it's hard to come in strong with a clear view of exactly how they should do things. We've never even met them; we've never heard a sound from them. So it would be a kind of hubris to come in there loudly, bang our chests, and tell everybody what's what, when we're these idiots running around here having no clue about what's going on in the universe. That's the first thing. Also, at least theoretically, there seem to be possibilities within a simulation of avoiding, ameliorating, or remedying various kinds of ills that might occur. There might be partial simulations; there might be characters in a simulation that get continued after the simulation ends; you could imagine models in which people opt into being in a simulation. There's a whole big space of possibilities,
some of which I'm sure would be morally very bad, but also many others whose morality is harder to evaluate. Here's a morality question that's closer to home: how would you feel if an AI could write a Nick Bostrom book as well as you can? Well, it would save me a lot of time and effort, I guess. But imagine that somebody puts it out under a different name, Tim Brostrom, and wow, it's shockingly like your last book. If they had that level of AI, I'm not sure why they would constrain themselves to imitating my kind of flawed human output; they presumably would be able to do something better. It seems harder to write a book that's exactly the kind of book someone else would write than to write something better. So an AI that could, for each person, write a book in that person's style that would be as good as what that person could write, could probably also write a better book than any of those people could. If you could request that an AI not be trained on your works, would you ask for that? It's hard to really know. They're out there on the internet, so I don't know. One is always wary of giving up rights in perpetuity, forever, for any purpose, including purposes we have not yet imagined; it just seems like a big license to grant. So simply from being conservative about giving up big chunks of possibility space without knowing what's in there, I think one might not want to do that. If it were one specific AI for one specific use case, it would be easier to evaluate: limited downside, it's going to be used for this one thing, okay. But giving it up for any use, for whatever purpose, in perpetuity, through what they have in these contracts,
"throughout the universe, using known and unknown new media of communication" — I don't know if you have ever signed one of these release forms, but your mind boggles at what you're actually putting your signature to. And needless to say, nobody actually scrolls through all of these updated terms of service; you just click yes. I'm sure if somebody threw "and you hereby sell your soul to the devil" into one of these updates, we'd agree to it. It feels a little weird that each one of us currently has this kind of dictatorial power over an entire human life, namely our own. Which is kind of strange if you think about it: having complete dictatorial power over an entire human being for their entire life is a big responsibility. Are we really capable of shouldering it? I feel sometimes maybe not; it's just that there is no alternative. There is no other set of people you could trust to do a better job than each person themselves. But you do feel it when somebody screws up their life: they're 20 years old, they get into bad company, they make some bad decision, and then either they're in prison for the rest of their life, or they're in a car accident and maimed for the rest of their life. Some little dumb decision made early on has these decades-long consequences. I feel sometimes that there is a disproportion there. Maybe our choices should have consequences, but a little bad decision that you then suffer decades for? It feels like we're all a bit like 14-year-olds whose parents died and who now have to fend for ourselves in the world, like
people had to do before social welfare services existed. Maybe they can kind of do it, but they're not really fully ready to take full responsibility, and I feel maybe we're all a little bit like that currently. You made me think of Ilya Sutskever. What do you think he saw that, as I read it, scared him enough to leave OpenAI and start a new company called Safe Superintelligence? What do you think he's worried about that makes safety the number one thing in his mind? Fortunately, they haven't released any details, and it's probably better if AI labs have some form of infosec, information security, so that not all advances are immediately distributed all over the world. If at some point there is some actually dangerous capability advance, it might be better to have the option at that time to decide whether it should be disseminated, rather than forfeit that opportunity. I don't know, but probably it was some possible technological path forward that seemed able to scale more than other paths, and that seemed harder, from an alignment point of view, to get to work. I don't think it was any currently existing system, but more a research path, something like that. And what do you think about alignment? I've heard you say that we need to give AI our values. My pitch would be that we need to give AI a completely different set of values, where it does not value progress, for instance, because if it's constantly trying to progress and ever sees us as problematic in that pursuit, we've got trouble. And because humans kill so many humans, we definitely don't want them to have our values. It depends a little bit on whether you are willing to differentiate between more superficial values and deeper values. We value many things because we think they are associated with various kinds of consequences. So, you know,
maybe you value exercise because you think it will make you healthy, strong, and successful. But if you learned that you were actually wrong about that, that for you, the way your particular body works, exercise will actually harm you and make you die early because some valve will break when you put too much pressure on it, then suddenly you would no longer value exercise. In fact, you would value not exercising in that scenario, because exercising would kill you. So there you discover that exercise is not really your basic value; what you actually value is perhaps more like health, happiness, strength, or something like that. But then maybe you could ask the same question of each of those, and it's only when you dig down to your ultimate values that you have at least a candidate for alignment. If you think we should align AI to human values, maybe that's what you would align it to, rather than some of these instrumental values. So that's one observation. Another concerns human conflict, which is the source of a lot of misery. Obviously, if you align AI with one set of humans, and they are at war with another set of humans, then it might not be good for the set of humans they are at war with if the first set just gets more powerful tools to defeat their enemy. So if you care about all of humanity, you presumably would want the AI to be aligned either with all of humanity or with some entity that actually cared altruistically about all of humanity, as opposed to just some random little bit of it. But it is also possible to try not to align it in that way, and instead to have AIs with more limited goals that deliberately diverge from human goals, because maybe you just feel more confident that you could actually instill those goals. As for not desiring growth, which I think was
your example, there are problems that crop up even there: even if you don't explicitly instill a value of growth, it might just emerge as one of these instrumental subgoals. This is one of the points of the paperclip-maximizing AI example. You have an AI that doesn't want growth; it wants to make as many paperclips as possible, that's the goal you give it. But then it turns out that by growing, by getting more resources, it can make more paperclips. So you have an AI that, for instrumental reasons, wants to grow, wants to avoid being shut off, wants to increase its intelligence and accumulate more power, not because it cares about or values those things in their own right, but just because it calculates: I can actually make more paperclips if I have more power and more resources. So some of the goals you might want an AI not to have would, unless you're really careful, just be natural side effects of almost any other goal you put into the AI; these are the convergent instrumental reasons. And that's one of the reasons you might worry about scaling these systems up into very powerful agents: the catastrophe scenario doesn't depend on somebody deliberately putting in a really evil goal. The AI could just have some random, arbitrary goal, and then bad things are done in the name of that goal via the subgoals that emerge. Nick Bostrom, this is an incredibly pivotal moment in human history, and I have thoroughly enjoyed spending this time with you. Where can people follow you and hear more about your philosophy? The easiest way is just to go to nickbostrom.com, my homepage; there are videos linked and a bunch of writings. I'm not really active on social media, so just go to my homepage. We'll have to get a bot out there publishing in your voice; we'd all be better
for it. Thank you so much for joining me today. And everybody at home, if you haven't already, be sure to subscribe, and until next time, my friends, be legendary. Take care. Peace. If you liked this conversation, check out this episode to learn more. We have two futures in our world today: either a Mad Max future or a Star Trek future. Do you think that as we transition over to AI, it will take us through a valley of despair, or is this going to be a straight line to utopia? Oh no, big valleys of chaotic despair.