Transcript
ZrdVpioZ5dU • Project Hail Mary Author Andy Weir Breaks Down AI "You’ll NEVER Watch Movies the Same"
Kind: captions
Language: en
As a grounded sci-fi writer, how do you
think that AI is going to change society
in the next five years, 25 years, and a
hundred years? In the next 5 years, I
think we're looking at some disruptions
in certain industries. Like, for
instance, right now they're still kind
of working out how AI and AI art is
going to be perceived. Like a lot of
people say, "Oh, AI art is bad because
it samples, you know, from millions of
pieces of art that it sees online
and those creators don't get any
credit. They don't get any royalties.
They don't get any money." And so
there's a big moral issue related to
that. And so there'll be that kind of
fight. But I think in the end like the
the technology solution that is easier
and cheaper for everyone is the one that
always wins. So I think over the next
five years people will argue about
whether or not it's okay to train AI on
human-made stuff ("hey, what about me, do
I get credit for that?"), and then
probably within the next 5 to 10 years
that argument is just going to go away.
I think people are going to accept that,
okay, AI can train on human art in the same way
that humans train on human art. I mean,
every artist out there, there's nobody
who like learned how to do their craft
in a vacuum. They looked at other
artists' stuff. So, your brain is a
neural network. Why do you get to do
this, but an artificial neural network
doesn't? Eventually, it's going to go
away. And it's going to be extremely
disruptive to the graphic arts
industry, because now I don't need to
hire an artist to draw something for me,
as long as it's not something precise
or with legal ramifications.
If I just want an image of a crowd
cheering for my product or something
like that, I can have a computer make
that and then I can own that image and
that's it. And so, as a company, if I
was trying to make advertisements or
something like that, it sounds a lot
better to get those results in one
second and pay $10 a month than it is
to get those results in two months and
pay $5,000. One thing I'm sure that you
know as much as anyone else, or probably
more, is that something that makes
things cheaper for businesses always
ends up taking over. It always ends up
being the way things go, and
protectionism never works.
I mean, now we're getting into like
economic theories, and I'm a sci-fi
writer, so maybe I shouldn't get too
deep into that, but I think of
economics as a type of physics, and you
can imagine money flow as being kind of
like energy: it always tries to go to
the lowest energy state. So, one way or
another, the economically wise thing
ends up being the socially acceptable
thing. And that's been true
throughout all history. So even though a
lot of artists will get kind of screwed
in the short term, eventually people
are going to stop talking about it and
that issue is going to go away. And
then some artists will do human-made
art for the purpose of being artistic,
and others, the new artists, will be the
people who use the AIs to refine and
make better-looking images than a
layman can. That's what I always say:
like for any disruptive technology that
puts jobs and careers out of business,
it also creates jobs and careers. So,
you know, for, you know, artists might
lose their jobs, but then they will be
the ones who are already have the
built-in skill set to understand what
looks good and say like, "No, no, no.
I'm going to tell the AI do this
instead. No, no. Okay, I'm going to give
these little tweaks." they'll become,
you know, the kind of super users, the
people who know how to use an AI to to
do this stuff. I mean, kind of like in
the same in the same vein of like you
could do all sorts of amazing things
with Photoshop, but you have to have
skills. You have to know how to do them.
So, I think that's what we're going to
see in the next 5 years: a transition as
those things shake out. In the next 25
years, I think we're going to see big,
tumultuous changes in the
entire entertainment industry.
My prediction is that the concept of
event-based entertainment is not going
to go away, but it's going to go the
way of horsemanship: from something
everybody did to something that's a
niche interest. Think of the notion
that, oh, Avengers: Endgame is coming
out on such-and-such a date, we're all
going to see it, it's going to make
billions of dollars, everybody's going
to love it, everybody's going to be
talking about it, everybody's asking,
"What do you think about this scene,
that scene, that scene?" It's like an
event that happens that affects millions
of people, not a tumultuous event, just
an entertainment event. But I think
those days are going to go away. I think
what's going to happen, and we're
talking in the 25-year time frame now,
is that we're going to have AIs that
can write stories as well as people
can. I think that's pretty
straightforward: an AI can read a
billion books and say, okay, I
understand how stories get put together,
and I can write a story that
does this or that or the other thing.
Then you're also going to get AIs that
learn what you're all about. You'll
have an AI assistant on your computer
or something like that, and just in the
same way that your search engine knows
what products you're interested in,
this AI will know what you like and
don't like in entertainment, what you
think, what your opinions are,
political, ideological, personal, what
things you think are cool and
interesting, what things you think are
lame and boring, based on your viewing
history. Then the AI on your system
will be able to write
a story that you personally will think
is awesome. Maybe the guy sitting next
to you would think it's lame. Maybe
everyone in the world would say this is
a dumb story. I don't understand why
anybody would like this. But it doesn't
matter. This thing makes the story for
you. And then it can also create it as a
film. You know, it can create all the
imagery. It can create the animation. It
can do everything. So imagine whatever
your favorite movie in the world is,
and imagine you tell your AI, I want to
see a sequel to that movie. And maybe
your favorite movie in the world is one
that was AI-generated. It's
like, you know, about purple bunnies on
the planet Zorbback, whatever. And
everybody says, why do you even like
that? And it's like, I don't know, I
just think it's cool the way the taller
purple bunny manipulates the...
whatever, you know. So eventually, I
think entertainment will become a very
personalized experience: you will be
watching a movie that was made for you
and for no other
consumer.
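An AI that "knows what you like and don't like in entertainment, based on your viewing history" is, at its core, a recommender. Here is a minimal toy sketch of that idea in Python; all titles, tags, and weights are hypothetical, and a real system would use learned embeddings rather than hand-written tags:

```python
# Toy recommender sketch: a preference profile is just averaged
# feature weights from what you've watched; candidates are ranked
# by similarity to that profile. (All titles/tags are made up.)
history = [
    {"space": 1.0, "comedy": 0.2},
    {"space": 0.8, "action": 0.6},
]

# Average the tag weights across viewing history into a profile.
profile = {}
for item in history:
    for tag, w in item.items():
        profile[tag] = profile.get(tag, 0.0) + w / len(history)

def score(candidate: dict) -> float:
    """Dot product between a candidate's tags and the profile."""
    return sum(profile.get(tag, 0.0) * w for tag, w in candidate.items())

candidates = {
    "Purple Bunnies of Zorbback": {"space": 0.9, "comedy": 0.9},
    "Courtroom Drama IV": {"drama": 1.0},
}
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> Purple Bunnies of Zorbback
```

The "movie made for you and no other consumer" step would replace the ranking of existing candidates with generation conditioned on the same profile.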
Do you think there'll be societal
consequences to having a lack of shared
narrative? Well, people will still
communicate about the real world. But a
lack of shared narrative... I think
what it'll actually do is remove
narrative control from the hands of a
few. Right now, the people who make the
movies, the people who create the
entertainment, can attempt to guide
your worldview. And I think
we're seeing a lot of that right now.
I try never to do that, but ideology
and messaging get put into
storytelling, and I think it's kind of
an issue in the industry lately that
messaging is taking a front seat ahead
of entertainment and plot development
and things like that. I try never to
have any political messaging in my
stuff. But regardless of that, it is
definitely there. And what this means
is that you have a small, cloistered
group of people who live in their own
kind of political bubble, and they get
to determine the messaging that's
happening across the board in both the
television and film industries, in
every way. But
that's going to go away when it becomes
completely democratized, right? And
we've seen that to a smaller extent
with, for instance, news media. There
used to be just three news networks:
you could watch ABC, CBS, or NBC. When
I was a kid, that was it;
that was all the news you could watch.
But now there are specialist news
channels that can cater to the far
right, far left, middle right, middle
left, whatever. And so everybody
gravitates toward those news channels
that that keep them in their own bubble.
This is not the case with
entertainment. You still have one kind
of monolithic entity, mostly Hollywood,
where people tend to be fairly
like-minded. There's a little bit of
disagreement, but they tend to skew
left. It happens to be left now; in the
50s, they skewed very far to the right.
So whatever Hollywood feels is what you
see in all the entertainment that gets
created and validated. And that'll just
go away:
there won't be an arbiter of messaging
in entertainment anymore.
And when you think about that, do you
have any anxiety over the fact that
there won't be this top-down,
coordinated "this is our value system,
everybody, and if you're not on board
with this, there's a problem"? Do you
see
any downside to that breaking down? No,
not at all. I think it's great. I'm
sort of an evangelist for this, but the
way I write and the way I consume
entertainment is: I just want to be
entertained. I don't want to be
preached at. I don't want to be told
what my morals should be. I don't want
to be made to think about anything,
unless that's the sort of thing I'm
going out of my way to watch. I watch
entertainment to have fun, to enjoy
myself. And if I want the
entertainment equivalent of like fast
food, then that's what I want. If I go
to McDonald's, I don't want them to say,
"Here's your Big Mac, and here's
your broccoli." And I'm like, "Well, I
didn't want broccoli." It's like, "Well,
we went ahead and mixed the broccoli in
with your Big Mac because it's better
for you." I'm like, "I don't want what's
better for me. I want what I'm going to
enjoy."
Okay, stop stop trying to make things
better for me. I came here with a
purpose and I want to eat a Big Mac, you
know? So, I think it's great.
Entertainment should, in my opinion, be
about being entertained. In the end,
this is a leisure-time activity that
you're doing for fun: watching a video,
watching a movie, reading a book,
whatever. And you should get to be the
one who decides how much messaging, if
any, there is. If you really want it,
you can ask your AI, "Oh, I want an
action movie, but I want it to have
social overtones about the wealth
divide. Please include that."
Then it will.
So when I look at that, one of the
things that I think a lot about is a
guy named James Burnham, who wrote this
book called The Machiavellians. In the
book he puts a lot of things forward,
but one of the most important is that
the only way to get a large group of
people to work together in a flexible
manner is to have a shared narrative,
so that everybody understands what we
as a tribe believe and where we are
pointed, and that there's something in
the architecture of the human mind that
wants to follow
somebody. And so when I think through
what does the world look like? Because I
think you're right that this is all
going to be individualized. That
everybody's going to be engaging with
everything at the level of this is how I
want it. This is how I want my news. I
want my news to skew left or I want my
news to skew right. I want to see uh the
things that I already believe to be
echoed back to me. And now what you end
up with is this massive spectrum. Rather
than people falling into these easy
camps, you have people all across the
spectrum. And so I can't help but go,
huh, if for all of human history we have
looked up to the leaders, the leaders
through story, through political
maneuvering, they always gave us a
direction to move in. And when we wanted
opposition, it was like very controlled
opposition. It's, you know, the left
versus the right. It's a very simplified
notion. I think there's an inevitable
sort of scattering of humans, because
they will still want to tribe up, and
as the tribes become increasingly
niche, how do we move forward
effectively? I think there's going to
be an element of chaos that will come
from people not having a
shared narrative. Now, I'm with you. I
don't want it to be forced, but I do see
an inevitable cultural fractionating
happening due to the way the AI will be
so singular in its message delivery.
Interesting. I kind of disagree with
you a little bit on that. Or rather, I
don't disagree with you; I guess I
disagree with that book a bit. If you
go back to the era before mass
communication, we have always had
nations, we've always had countries. I
mean, even by World War I we were this
vast country, the United States, 3,000
miles across, and the
fastest form of communication was a
letter being delivered by train, right?
And we still were a cohesive nation that
had core ideologies that we held
together. It didn't need a daily
reinforcement of narrative control from
a centralized source. There wasn't one.
It's interesting. I would say that
would make it easier, because of
whatever becomes the dominant story
being passed on, which traditionally
was via religion. Religion gave you the
oversimplified story that everybody
could get behind: we, around these
parts, are Christian, or Catholic, or
Muslim, whatever. You had a sort of
ultimate self-help book that gave you
an instruction manual for life, and it
got passed on through the churches. And
there was nobody who could fight for
your opinion; the odds that you even
heard about an alternative way to view
life, an alternative value stack, were
next to impossible, for the reasons
that you laid
out. So yeah, I think that, again,
both of us are obviously just
prognosticating, but I feel like knowing
what I know about the way the human mind
works, it seems inevitable that there
will be some sort of second and third
order consequence to there not being
these really tight shared narratives,
which I'll tie to this moment. Part of
why people say we live in this
post-truth moment is that, once you get
outside of physics, it is extremely
hard to define what is objectively
true. Most of it is just human
interpretation. And if most of life is
human interpretation, and that
interpretation of what life is, what it
means, what one ought to pursue,
becomes so individualized, it feels
like something weird, or at a minimum
something unexpected, is going to
happen from that, just from breaking
apart into these individual narratives.
Yeah, I can see what you're saying, but
I still have to go back and say that
we've lived in that world before. And
yes, it's true, there was an
overarching ideology and belief system;
Christianity was prevalent all
throughout the United States in the
1800s and early 1900s. But people in
Maine didn't have a lot of interaction
with people in California, right? They
didn't interact on a daily basis, or an
hourly basis, like we do now. So
basically those were two completely
isolated societies that had almost
nothing to do with each other. I think
part of being a nation back then was a
smaller list of core ideologies, and
for the United States they're all
codified in the Constitution. That's
the one thing we say: okay, that
document there, that's how we do things
here in this whole big country. So
you're right, there's that central
thing, but it's static. It's not
constantly shifting. When you have a
group of people in charge of a
narrative, and those people can change
it any way they like, that's when I
think central narrative control becomes
disruptive for society, because you
have a core group of people that can
suddenly change
morality. We saw this a lot with, I
mean I don't know what people are going
to call this era in entertainment. Some
people use the word woke a lot. Some
people hate it if you say the word woke.
But it's going to have some kind of name
like the woke era or something like that
and people are going to study it in the
future because it's like similar to
McCarthyism. Things suddenly changed to
the point where something you said 10
years ago wouldn't have upset or had
any effect on anybody. And now, if you
say that exact same sentence, your
entire career will be over and your life
will be ruined. Social change at that
pace only happens when you have a small
core group of people controlling the
narrative, I believe, and when those
people either suddenly change their
mind, or the group suddenly changes and
they're new people now. That can't
happen when you democratize ideology
across everybody, each individual
computer. YouTube or the movie industry
is no longer feeding you a narrative;
instead, your own computer is just
like, okay, you wanted a movie that
was at least 50% car chases. There you
go. All right. So, let me make sure
that I understand what you're saying.
The problem historically has been that
people can control the narrative, and
having that kind of top-down control is
deeply problematic: you can sway
morality. And because people were not
aware of what was going on elsewhere,
they fell into a bit of a trap, in that
once somebody gets hold of that
narrative, it's all they know, and so
they're going to succumb to it. Now,
what we're seeing is sort of the end of
that. As AI comes on board, people are
going to be unshackled from top-down
narrative control; people are going to
have freedom of choice in how they
structure the narrative under which
they live. And you're not seeing any
negative second- and third-order
consequences that
come from that. I wouldn't say I'm not
seeing any negatives. As far as I've
taken it in my mind right this second,
I don't see that it's bad to have the
central narrative structure toppled. If
you want to talk about second- and
third-order effects: well, we already
have that with the democratization of
communication thanks to the internet.
People are able to self-sort into
niches of ideology and belief. It used
to be that if you didn't believe the
moon landing happened, you'd be the
only guy for 10 city blocks with that
opinion, and people would say you're
crazy. But now you can find all the
other people who think that way, and
you can all hang out together and talk
to each other and reinforce your belief
system. You could not do that at any
earlier time in history. The town
lunatics couldn't form a town-lunatics
chat group with one lunatic per town;
it didn't work that way. Now they can.
So there are downsides: the
democratization of information means
that people can form a bubble where it
turns out their core assumptions are
wrong, objectively wrong, but you can't
tell them otherwise.
Do you think AI speeds that up or slows
that down? I think it would speed it up,
because people are never going to tell
a subordinate to challenge their
beliefs, right? It's very rare. For
instance, with movies: if you know that
a movie is pressing an ideology you
disagree with, you're probably not
going to watch it at all. People don't
deliberately choose to have their
ideology challenged. So you can tell
your computer, hey, I want to watch a
movie, an action movie, I want there to
be gunfights and good guys and bad
guys, and maybe somebody goes to space,
you know, whatever. But you're not
going to say, "Oh, by the way, I'm
pro-choice, but I want this movie to
have a strong pro-life message." You're
just not going to do
that.
Yeah. No, I totally agree. I think it is
going to exacerbate that. The thing
that I have a hard time wrapping my
head around is just how far
does that go? How fragmented do we
become? How many tribes do we break
into? All right, staying in the 25-year
range: as an author, you have done an
absolutely profound job, and for people
who have not read your books, I am
screaming from the rooftops: please,
for the love of God, if you like
sci-fi, read Andy Weir's books. It is
some of my all-time favorite sci-fi.
But talk to me. You're very good at
grounding things in what is real. What
do you see happening in, say, materials
science over the next 25 years? And if
you can encapsulate what your base
assumptions are about what AI will be
able to achieve, that would be
really helpful. The better that AI gets
at making physical models, the better
off we're all going to be. They've
already got, what is it, AlphaFold. It
used to be that they only knew how a
few proteins folded; now they know
something like 200 million of them,
because an AI made specifically for
figuring out how a protein is going to
fold can just solve it. Here are some
ideas I have for things we might see AI
do in the next 25 years. First off,
being able to say, "Okay, here's a new
virus that's going around, call it
COVID-32 or whatever. We've sequenced
the RNA genome. Okay, AI, what
do we do about this?" And the AI is
like, "Well, first off, I know the
shape the virus is going to be, because
you told me the sequence, so I know how
it goes together. Now I can model it
acting on human cells, see how it's
interacting with them, and say, oh,
here's the antigen that would take care
of that problem, the one your body
would make over time anyway. Here's how
to manufacture it. Here's the answer."
It'd be ridiculous: oh, here's a new
disease, and then an hour later the
computer's like, here's the vaccine,
just put this genetic sequence
together, throw it in some E. coli so
it'll mass-produce it for you, and
there you go.
Let me know if you need anything else.
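The one step in that pipeline that is already simple and deterministic, going from a sequenced genome to the protein it encodes, can be sketched in a few lines; everything after it (folding, modeling the interaction with cells) is the hard part that tools like AlphaFold attack. The codon table below is a small subset of the standard genetic code, and the sequence is made up:

```python
# Toy illustration of the first, deterministic step Weir describes:
# reading a sequenced mRNA into the protein it encodes. (Structure
# prediction, e.g. AlphaFold, starts AFTER this step.)
# This codon table is a small subset of the standard genetic code.
CODON_TABLE = {
    "AUG": "M",  # start codon, methionine
    "GCU": "A",  # alanine
    "UUU": "F",  # phenylalanine
    "UGG": "W",  # tryptophan
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(rna: str) -> str:
    """Translate an mRNA sequence into a one-letter protein string."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE[rna[i:i + 3]]
        if amino is None:          # stop codon: translation ends
            break
        protein.append(amino)
    return "".join(protein)

print(translate("AUGGCUUUUUGGUAA"))  # -> MAFW
```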
And I also think, 25 years might be a
little optimistic for this, but
somewhere between 25 and 100 years out,
we get to where they say: okay, AI, Bob
here has cancer, an aggressive form of
small-cell carcinoma in his lungs.
We've taken one of the cancer cells
and, with lab equipment, sequenced the
entire DNA genome of that one cell. And
the AI goes: okay, here's the cancer
cell, here's how it's working, here's
what would disrupt it, here's how it
differs from healthy lung tissue. I've
designed a variant of the influenza
virus that attacks lung tissue cells,
but only the cancerous version. Here
you go: inject this into the patient,
suppress his immune system so it
doesn't kill the virus, and the virus
will kill all the cancer cells, and
only the cancer cells. This is the sort
of stuff you can expect AI to be able
to do, because when there's a cloud of
seemingly infinite possible solutions,
AI is very, very good at
narrowing that down to tangible, real
solutions.
Yeah, now you're
getting into stuff that gets me super
excited. One, you talked about AI being
able to build physical models. Do you
think that there is a rate limiter on
the amount of intelligence that AI can
gain? And if not, do you believe that
AI will ever be able to understand
physics to the point where it can make
novel breakthroughs in physics? Oh,
absolutely. I think it absolutely can,
or will be able to. You've got to
remember, people think that human
brains are somehow magical, handed to
us by the Lord, but you're just a
neural network. So anything that you
can do is, by definition, something
that a neural network can do, right? So
the real question I think you're
getting at is: at what point will we
have AIs that are comparable in
complexity and intellect to a human
brain? Well, an adult human brain has
about 80 billion neurons, and a few
billion years of evolution figuring out
exactly how to connect them optimally.
I mean, the vast majority of your brain
is about figuring out how not to be
eaten by wolves and things like that,
and an AI doesn't have to worry about
that as much. But the point is, there
is nothing a human brain can do that AI
won't eventually be able to do, because
a human brain is just a neural network.
Literally, that's it. And so is AI.
Okay.
Do you think that we will just need to
continue to scale the clusters, plus
increase efficiency, and we'll hit
artificial superintelligence? We don't
have the technology to do that right
now, but I think AI will help us make
that technology. It'll start off with
humans trying to figure out: okay, what
would be cool is if we could make this
smaller, use less energy, be more
efficient, stuff like that, and here
are a hundred billion possible ways it
might work. You were talking about
materials science; that's a big part of
it. Then you use AI to narrow that
down, and you end up figuring out: oh,
I bet I could make a better AI this
way. Then you make that better AI, and
you tell that better AI, hey, start
working on making an even better AI.
It's like, okay, I'm on it. And so it
can bootstrap itself up, which is
something unique. It's not something
that happened in nature. Or did it?
Because our brains are neural networks,
and we're sitting around trying to
figure out how to make better neural
networks. So you could say the
singularity began a few million years
ago, when human minds started to become
vastly superior to all the other
animals on the planet. It took us a
little while to get to this next step,
where this neural network is working on
new neural networks that are better
than it. But it's kind of like how life
evolved on Earth about 4 billion years
ago, and it was only about two billion
years ago that we had anything more
complicated than a single cell. There's
a little dead period for a while before
you get that exponential spike. I see all
sorts of big benefits coming in the
hundred-year span. Is that where you
were going next? Sorry. Well, before we
get to that, yes, I very much want to
hear your take on it, but first: what
is the rate limiter that you see right
now that you think AI is going to have
to help us overcome? Because you said
we don't have the technology to do it
now and we're going to need AI for
that. Are you already aware of where
we're going to hit a
ceiling? Yeah, I think it'll have
to do with computational power, the
ability to run massive parallel neural
networks. AI is still very fresh. You
know how, when they first invented
really high-resolution graphics, really
good stuff on your monitor, they did
everything algorithmically? Then they
figured out how to make graphics cards.
And then they figured out how to make
graphics cards wildly parallel, because
they realized every pixel is basically
its own little computer running
pixel-shader algorithms. And then
people started to use graphics cards
for all sorts of weird things unrelated
to graphics, like mining Bitcoin,
because of the massively parallel
nature of graphics cards. I think the
next thing that's going to happen is
they're going to start inventing
hardware that is optimized for running
neural networks. We already have some
of that, but it's very specially made,
in labs and so on. Eventually we're
going to have the graphics-card
equivalent for neural networks. It'll
be like: okay, here's your
neural-network card, your AI card, with
just the hardware necessary to really
quickly and efficiently run massive
numbers of parallel nodes of AI. I
think that's one of the limiters we
have right now. So you can think of us
as being like video games before we had
graphics cards, like we're in our Duke
Nukem 3D phase, where absolutely every
pixel had to be calculated by the CPU
instead of a graphics card. So it's
going to take more tailored technology
to run this stuff more efficiently.
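The "every pixel is basically its own little computer" point is the whole reason graphics work parallelizes: each pixel's value is a pure function of its own coordinates, independent of every other pixel. A toy CPU sketch of that idea in Python (the shader itself is hypothetical; a GPU would evaluate all of these at once instead of looping):

```python
# Toy sketch of why pixel shading parallelizes so well: each pixel's
# color is a pure function of its own coordinates, with no dependence
# on any other pixel, so a GPU can compute thousands at once.
# This CPU version just maps the same "shader" over every coordinate.
W, H = 4, 2

def shader(x: int, y: int) -> int:
    # Hypothetical shader: a simple horizontal brightness gradient
    # (y is unused here, but a real shader would typically use both).
    return (255 * x) // (W - 1)

frame = [[shader(x, y) for x in range(W)] for y in range(H)]
print(frame)  # each row: [0, 85, 170, 255]
```

Neural-network inference has the same shape: huge numbers of independent multiply-accumulate operations, which is why the same massively parallel hardware helps.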
Um, okay, that's a fair assumption. Now,
admittedly, I am not close enough to the
technology
uh to know if what I'm about to say is
pure delusion. But when I look at like
the recent um demonstration that Elon
put together with their supercluster
where people thought, I don't know if
it's going to be easy to keep making
these bigger and if you're going to get
any benefit out of making them bigger.
And then he effectively doubled the size
and is now doubling it again and showing
that you really can continue to daisy
chain these. Uh so supposedly big
breakthroughs coming there just on the
things we already understand. Then on
top of that you've got DeepSeek coming
in and saying you guys are playing the
wrong game. This is a game about
compression and efficiency and look what
we've been able to do just by improving
the compression. And literally just a
couple days ago. Um so they came out
like January 16th or something blew
everybody away. Everybody was freaked
out at how inexpensive certainly the
final leg of their training was. uh and
said, "Okay, game of efficiency. They're
playing it better than we're playing it
in the US." But now they just came out
with another one and re-upped the
level of efficiency that they're able to
get, and they're able, according to
certain benchmarks, uh, to hit GPT-4o
levels on a 1.5-billion-parameter model,
which is crazy given the size of the big
models like 70 billion plus parameters.
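Results like this usually come from knowledge distillation, where a small "student" model is trained to match a big "teacher" model's output distribution. The transcript doesn't name the exact method, so this is a generic sketch with made-up logits, not DeepSeek's actual pipeline:

```python
import numpy as np

def softmax(z, T=1.0):
    """Convert logits to probabilities, softened by temperature T."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the small student toward the big teacher's
    behavior, which is how tiny models can approach big-model benchmarks.
    """
    p = softmax(teacher_logits, T)    # teacher's softened targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, -2.0]            # hypothetical big-model logits
student = [3.5, 1.2, -1.0]            # hypothetical small-model logits
loss = distill_loss(teacher, student)
assert loss >= 0.0                    # KL divergence is never negative
```

In a real training loop this loss (often mixed with the ordinary label loss) would be backpropagated through the student only.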
Uh so to be able to on a benchmark,
which again is different, but on a
benchmark to still be able to match the
performance of something so many times
bigger, um do you think we'll continue
to pull down the level of computation
that's necessary by developing more
efficient algorithms? Uh yes, but not at
the absurd rate we're seeing right now.
We are at the very very beginning of
this new technology. And right at the
beginning of any new technology, you see
this tremendous spike in efficiency,
cost effectiveness, all these things
like that. I want you to consider how
much the aviation industry changed
between 1935 and
1965. In 1935, you had propeller-driven
planes, very very small amounts of like
commercial air travel, just not a lot
going on. And by 1965, you had jets that
could take you from New York to London.
And it was like routine and even
boring. Like in the early days of the
aviation industry, we had just wildly
tremendous advances that you might
think, "Holy crap, this is moving so
fast." I mean, we went from figuring out
how to do powered flight to landing on
the moon in 66 years. It's crazy, right?
But then since then, it hasn't changed
that much because what happened was we
we got all the low-hanging fruit. It's
like, "Wow, here's all the things you
can do." Okay. Yeah, we figured all that
out. Now it's all about like okay how
can we use carbon fiber to make the
hulls a little lighter? How do we make
the engines more fuel
efficient? You know, it's like
there's some
asymptote that represents solving air
travel, and we're always
approaching it now. But man, right at the
beginning it's crazy. AI I think like
any technology is going to be the same.
We're right at the beginning. So we're
going to see these "Oh yeah, mine's twice
as strong as yours." "Oh
yeah, well mine's twice as strong as that
one." "Oh, mine's twice as strong as that
one." Eventually, it's going to be like,
"Oh, okay. Now we're just fighting over
minor scraps." But I think that's good.
We're
rapidly getting rid of all the
low-hanging fruit we can before we get
to the more difficult aspects of AI
stuff. Earlier, you mentioned something
we flew by it and didn't talk about it
again, but the idea of how AI can affect
materials technology. Um, I'm pretty
excited by that because I think
materials technology is the solution to
a lot of issues.
Um, most notably for my favorite things,
space travel. Um, the most efficient
possible spacecraft fuel is just
hydrogen and oxygen. The simplest
possible thing. It has a tremendous
amount of specific impulse. It has a
huge amount of heat and force generated
just by burning hydrogen and oxygen.
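For scale, specific impulse converts to effective exhaust velocity via v_e = Isp · g0. A back-of-envelope sketch using commonly cited ballpark Isp figures (my own numbers, not from the transcript):

```python
# Rough scale for "tremendous specific impulse": converting a commonly
# cited hydrogen/oxygen vacuum Isp (~450 s, e.g. RS-25-class engines) to
# exhaust velocity. All figures are ballpark, for illustration only.

G0 = 9.80665          # standard gravity, m/s^2
isp_hydrolox = 450.0  # seconds, approximate vacuum Isp for H2/O2 engines
isp_kerolox = 340.0   # seconds, approximate, for comparison

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity in m/s: v_e = Isp * g0."""
    return isp_seconds * G0

v_hydrolox = exhaust_velocity(isp_hydrolox)
assert v_hydrolox > exhaust_velocity(isp_kerolox)
```

That works out to roughly 4.4 km/s for hydrolox versus about 3.3 km/s for kerosene/oxygen, which is why hydrogen/oxygen is the benchmark for chemical-rocket efficiency.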
It's one of the simplest reactions there
are. And we have a lot of that. All you
have to do is use electricity on water
and you get hydrogen and oxygen. Then
you let the rocket put it back together
really, really fast and hard, right? So,
it gives us a method by which we can
spend energy that we create however we
we like on Earth and ultimately turn
that into propulsion on a rocket. Okay,
it's great. So, why aren't we doing
that? Well, we are. For the most part,
we are. It's always like a variant of
the hydrogen-oxygen reaction. But
hydrogen oxygen, if you just let it go
with no limiter, it burns so hot that
it'll melt any engine. like it'll melt
whatever it's in. It just gets so damn
hot. They have to deliberately kind of
calm it down, put other things in there,
maybe things that it can kick out the
back to add a little more kick of
propulsion, but they don't let it go
wild, right? Because if you did, they
can't dissipate the heat away. They just
can't get rid of it fast enough. They
can't cool the engine enough. It's just going to
melt everything that we have. Now
imagine if you developed
a material that was hard, that could
stand up to a lot of
force, could stand up to a lot of shock,
and wouldn't melt, or at least not at
those temperatures. Then commercial
space travel is just like invented that
day. Like, literally, if you invent that
material, then within two
or three years you will have
tickets to low Earth orbit for a
middle-class person. I strongly believe that.
Yeah. So materials technology is so many
things just come down to materials
technology.
Yeah. No doubt. Uh okay. So I assume
you're watching SpaceX. Um do you are
they talking about that kind of thing
because they've said look we've built
all this without AI. Now imagine what
we're going to be able to do with AI.
Um, do you know, is that the kind of
thing that they're pursuing, or is this
not really on... I don't know if they're
pursuing it, but I imagine this sort of
materials technology wouldn't be
invented uh by a purpose-driven company.
It would be invented in a lab somewhere.
It would be invented by material
scientists who then use AI to figure
out. Okay. Well, let's see. I remember
I went to JPL. Yeah, JPL. And I
did a tour, and a bunch of Caltech labs
as well. I was there during the height
of The Martian, when people cared who I
was. And among other things,
there was one group, I wish I could name
the doctors involved and all this, but
I can't even remember the name of the
group. But what they were doing was they
were trying to find better
superconductors, right? Or
better conductors in general. And they
were doing it with this kind of
brute-force approach where they're just,
okay, we're going to try all these
combinations of these four elements in
different proportions and stuff like
that, and we're going to check the
conductivity. But that takes a long time to mix
these things together and do all this. I
mean, we want to do millions of
different variations and check them out.
And what they've done is they repurposed
this old school printer from like the
1970s that had this really robust inkjet
thing and they changed it such that it's
shooting the powder of these metals
down. And so they're printing little
dots onto um onto like a ceramic sheet
or something like that of different
proportions of these metals and they
bake it so it mixes together. So they have these
little dots of metal that they're
basically printing, and then they have
like a sampler go and
test the conductivity of each one of
those dots and just see how it's
doing. Did we find one? This one's
interesting? Okay, now keep going. I
just thought that was amazing. But
imagine if you could virtualize that.
Imagine if you could figure out atomic
interactions, you know,
what's going on inside of metals
that are coming together to become
better conductors, and all these alloys
and stuff. What if you could simulate
that with
AI? And then you could tell the AI,
okay, spend, I don't know, the next year
trying these billion possible variations,
and in your modeling, tell me which one
has the best conductivity.
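The virtualized brute-force search described here can be sketched as a simple loop. The element list and the scoring function below are hypothetical placeholders, not real materials science; in practice the score would come from an AI or physics simulation:

```python
import itertools

# Hedged sketch of a "virtualized" materials search: enumerate candidate
# alloy compositions on a grid and rank them with a stand-in scorer.
ELEMENTS = ["Cu", "Ag", "Ni", "Sn"]   # hypothetical four-element system

def predicted_conductivity(fractions):
    """Placeholder surrogate model (made-up weights, NOT real physics)."""
    weights = {"Cu": 0.9, "Ag": 1.0, "Ni": 0.2, "Sn": 0.1}
    return sum(weights[e] * f for e, f in zip(ELEMENTS, fractions))

def candidate_fractions(step=0.1):
    """Yield every composition on the grid whose fractions sum to 1."""
    steps = int(round(1 / step))
    for combo in itertools.product(range(steps + 1), repeat=len(ELEMENTS)):
        if sum(combo) == steps:
            yield tuple(c * step for c in combo)

# "Spend the next year trying the variations, tell me the best one."
best = max(candidate_fractions(), key=predicted_conductivity)
```

With a real simulator plugged in for `predicted_conductivity`, the only change to the loop is the cost of each evaluation, which is exactly what the inkjet-printer rig was paying for physically.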
That, to me, is very interesting. Um
when we were talking about protein
folding, one of the things we didn't
touch on is that Alphafold can actually
predict novel proteins and say, "Oh,
make a protein that moves like this." Uh
and given what proteins do in the body,
that is pretty phenomenal because now
you can get novel things to happen
inside of a cell based on creating these
novel proteins. uh seeing them do the
same thing uh like you said in a
simulation so that they can move really
fast test a lot of these things uh would
be very interesting to see what that
outputs. Now, when you think about this
for space travel, okay, that's one thing, but
when you start thinking about this
inside of a biological system uh does
that raise any ethical concerns for you
like if I said hey uh I think in the
next 25 years and I actually do believe
this that you're going to have designer
babies or certainly the ability to
design a child. Um, do you think at all
about that? Do you worry about that? Is
that something that you'd want to see um
some tight restraint put on or is that
an exciting part of the future for you?
I think that's exciting. I think because
the first quote unquote designer babies
will be like, "Hey, me and my
wife both carry the recessive Tay-Sachs gene.
We'd like it if our baby didn't have
that, because that means you die by age
10 or whatever." Or, "My wife and
I both carry the sickle cell anemia gene,
and we'd like our child not to hit
that one-in-four chance of sickle cell
anemia death," you know. So those are going
to be the first designer things: the
correcting of invariably fatal genetic
flaws. And nobody's really going to argue
about that, right? Nobody's going to say,
no, no, no, you must make a baby that will
suffer for five years and die. Right? The
question becomes... now
we're talking morality, right, so
there is no objective truth on this, but
for me the only real ethical
concern is, are you sure you're not
going to introduce some other problem
into this baby that's going to make
that's going to make their life painful
or unhappy or unpleasant? Like it's
like, hey, I want my baby to have blue
eyes and dark hair and maybe olive
complexion skin and I want to be really
tall. I want to be like six foot tall
when he's an adult and uh you know, da
da da. And they're like, okay, we made
all those changes. Unfortunately, he has
this he has Crohn's disease because
yeah, we didn't Yeah, we made some
mistakes or whatever. We you know, we
changed these things. Turns out that
gives him Crohn's disease. That's the
ethical concern that I'm worried about.
So, that's I mean, a lot of people would
would disagree with what I just said. A
lot of people would say like, no, if
you're changing a human being at all,
you're messing with God's domain and
you're you're you're doing a morally bad
thing. My personal opinion is that it's
only morally bad if you cause human
suffering. So if you um as long as you
are sure that what you're doing isn't
going to end up making a human that has
to suffer as a result, that would be...
I think if we're able to use AI to
create a simulation of human biology
full stop like it knows it top to bottom
all the different interactions how
protein folding works how novel proteins
work all of that can read DNA perfectly
understands um the epigenetics of it all
as well just really really has a
full-blown picture of how this is going
to work. And we could begin to optimize
not not the sick, but we could actually
optimize uh a child for whether it's
higher intelligence, which um there was
a big kerfuffle with a um Chinese doctor
that did gene editing, who claimed it
was about reducing the likelihood of
HIV, but people were like, "Huh, but
it's also likely to make them more
intelligent."
Uh would you is that something you would
want to see? I want to know what the
mistakes are. When you invent the plane,
you invent the plane crash, right? So I
want to know what I I would I would be
very cautious with any sort of human
related experimentation because I
believe the most valuable thing on earth
is the human experience, right? And so I
think that whatever you're going to do
with your design your baby, as long as
you're not causing human
suffering, I'm probably okay with it.
But I really want you to be sure that
you're not going to cause that child to
suffer either as a baby or as an adult.
All right.
Um so yeah I mean and then people then
you start getting into all these uh you
know moral or ethical things of like
well how much right does a parent have
over their child's body and so on and
you know, someone might be like, you know,
I'm deaf, my wife is deaf, that's the
lifestyle we've chosen, or that's
the lifestyle we have. There are deaf
activists, you know, and some people might
say, I want our baby to be born deaf. And
then you say like, well, hang
on, you know, so now you're talking
about giving deliberately giving a
disability to a child, but those the
deaf activists would say it's not a
disability, it's a lifestyle choice. How
is it any different than circumcision,
you know, and you know, so that's where
you start getting into those morally
gray areas. And I'm not interested in
arguing about those because I'm I'm far
more interested in the science, but
those are the arguments that people will
be having. So that's my prediction for
the future. Um, a couple other things
when it comes to designer genes. You
were talking about uh novel um novel
protein. Well, imagine I I don't think
we're too far away from novel proteins
being able to go modify your DNA like
let's say you are a a 40-year-old man
and you have I don't know some genetic
problem, right? And then maybe they
could make a novel protein that can
literally go in and change the DNA of
every cell in your body. Like it just
goes in and all this protein does is
enter the cell, make that change, and
then die, you know? And what if
you could just no longer have the Tay-Sachs
gene. You could no longer have anemia.
You could... whatever, I'm
coming up blank on genetic disorders,
but you you see what I'm saying? What if
you could actually solve that? Then we
get into things of like, okay, awesome.
What about cosmetics? I want to be
black. I think it'd be cool. I think
black skin is beautiful and I want my
skin to be black. So, I can put in this
I can inject myself with this novel
protein that will go actually change the
melanin the melanin production in my
skin and I will become as black as a
natural African man. A lot of people
would really get upset about that. And
I'm like, why? This is my body. Who are
you to tell me what I can do with it?
Who are you to tell me what I can and
can't look like? I'm not even making a
decision for a child here. This is me
making a decision for me, you know? So,
there's an interesting argument that'll
come up someday. Cosmetic ethnicity, I
think, is a an interesting argument
that's going to happen in the future.
And then we're going to see the concept
of identity politics just go away
because identity politics has no meaning
if you can change the identity that
you're in. The whole point of identity
politics is you're locked into an
identity, so you can't change it. That's
why we have political ideologies wrapped
around identity. But if you can just
change your identity, then nobody cares
anymore. So that's an interesting one.
Here's another one I've thought of that
I think is uh probably this is the one I
think would be more disruptive is like
nobody likes to be fat. How would you
like it if you could just get a shot and
it modifies your DNA or changes your
body in some way such that after a
certain amount of processed calories,
it'll just stop digesting food and just
pass it through. So you can eat whatever
you want and you will stay at your
optimal like weight. You know, you'll
stay at your healthy weight. Okay, at
first it seems like, oh, that's great.
I'm going to stay healthy. Everybody who
does this is going to stay healthy. But
then you're like, okay, but as a
society, we would be consuming like way
more calories than we need to. There's
still starvation in
other parts of the world and we're going
out of our way to just deliberately
waste like food
energy? Like I think I'll have four
cheeseburgers for dinner tonight. I make
it five. I'm hungry. You know what? I
like eating. I'm gonna jab myself with
something that makes me hungrier. And
then we're just like just you might end
up with this incredibly wasteful society
of people who are perfectly healthy.
Meanwhile, other people are starving
while we're eating all the food, you
know. So these are kind of the
sorts of things that biomedicine
enabled by AI might lead to.
That is fascinating. Cosmetic ethnicity.
Uh that is one that never made my radar.
That is uh that's utterly fascinating.
And I think that whatever people can do,
they will do. So, uh, regardless of the
ethics, you might be able to postpone it
or whatever. But if we can edit genes,
people are going to do it on a long
enough timeline of that, I assure you.
Uh, so and then it'll get weird. People
will be like, "Oh, we found we found a
sequence of genes that'll make your skin
blue." And people are like, "Oh, I want
to be blue." The new thing is being
blue, you know, guaranteed. There are
already people injecting essentially dye
into their eyes to make like their
entire pupil black. Uh, or not
just the pupil, but even the whites
of their eyes. Yep. There are people
that are altering the color of their
pupils. So you can get like oh I want
crystal clear blue eyes. You can go get
that surgery done right now today. Uh so
that's really going to be interesting.
Now, going back to people are going to
edit, but I think they will largely do
it in response to something. And I think
one of the somethings that's going to
drive people to want to edit the human
genome is to be in a race with AI for
ability. And that if there is no upper
bound and AI is able to achieve super
intelligence, and a stat I like to
remind my audience of is um Einstein was
2.4 times smarter by IQ than a
definitional [ __ ], who's like 82 or 83
points, something like that. And obviously
the gap between the results given to the world
by Einstein versus somebody who's
definitionally a [ __ ] is vast. And so if
that's only 2.4x... Um, it seems
self-evident to me that give it enough
years, and I'll certainly say within 25
years, I cannot fathom a universe in
which AI is not uh 10 times or more
smarter than the average person. Uh, so
now we're getting into a world where
artificial intelligence absolutely
dwarfs human intelligence. And I know
that some people, myself included, are
not just going to take that sitting
down. And if there is a safe technology
that allows me to upgrade myself, I
absolutely will. Um, do you think that
we will be in an intelligence arms race
with AI? No. Um, I think that I don't
think that'll happen at all. I think
that we will just make AI to do things.
We don't need to be more intelligent
than them anymore than we need to be
stronger than the bulldozer we're using
to build the house or to level the
house. I mean, maybe John Henry wanted to
compete with a steam shovel, but beyond
that we just kind of accepted that those
things are better than us at
that task, right? And intellect and
invention will eventually just be yet
another thing that used to be controlled
by the human endeavor that can now be
done better by machines. And so why
would you go out of your way to try to
make your body better at it when you can
just tell a machine do that? As long as
we control what the AIs are doing, then
then we're fine. So the reason people
don't feel like they need to be stronger
than a bulldozer is because bulldozers
don't go rogue, right? So now we're
getting into the core matter that
everybody always asks me about, which is
like, will AIs go rogue? And I say no.
It's it's like um people are afraid
about the evils of AI, but I would say
let's talk about a hammer. Okay, a
hammer is a tool that I can use to build
a house, but it's also an implement that
I can use to murder someone. There's
nothing inherently scary about the
invention of the hammer. It's just who
has the hammer and what they're doing
with it. So AI is going to be no
different. If you have an AI that's able
to make these protein folds to give you
a cosmetic ethnicity, it's also able to
make a virus that only attacks Jewish
people, you know. So, it's it's who's
using it that
matters for sure. But hiding in there is
uh what I call a base assumption. And
that base assumption is that AI will not
need
motivation to accomplish the task that
we want it to accomplish as the wielder
of the tool. Now, if that's true, an AI
just does not care whether it achieves
the goal to which it is set upon, great.
But one thing you hear a lot about is
reward functions, in getting AIs to do
the thing that you want it to do. Given
that we are already using reward
functions, meaning that it has a desire
to get that reward, uh to train AIs, why
do you believe that that won't go awry as
it becomes more intelligent?
Um well, wouldn't a really intelligent
AI just figure out how to provide its
own
reward directly? Like, wouldn't you
say, if you could just
tell your brain, release dopamine,
wouldn't you?
Like, activate my pleasure center.
All right. People are... Right.
So if an AI is completely is is so
heavily driven by its reward system, it
would just find ways to activate its
reward system. But we're still limited
by not really understanding how our
brains work and how to just very easily
activate our reward system. An AI would
not have that limitation. So I don't
think you can withhold something from
the internal neural network of an AI.
It'll just figure out how to get it.
Okay. Uh and humans, by the way, did
figure out how to hack our reward
system. It's called drug abuse.
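The reward-hacking worry here can be shown in miniature: a pure reward maximizer that is offered a "wirehead" action will take it over the intended task. The action names and reward values below are invented purely for illustration:

```python
# Toy sketch of reward hacking: if an agent purely maximizes its reward
# signal, and one available action directly inflates that signal, a naive
# maximizer picks it over the task its designers actually wanted done.
ACTIONS = {
    "do_assigned_task": 1.0,   # the reward the designers intended
    "do_nothing":       0.0,
    "stimulate_reward": 10.0,  # "wirehead": directly trigger the reward
}

def greedy_policy(actions):
    """A pure reward maximizer just picks the highest-reward action."""
    return max(actions, key=actions.get)

chosen = greedy_policy(ACTIONS)
assert chosen == "stimulate_reward"   # the hack beats the intended task
```

Real systems are far more complicated, but this is the shape of the argument: the gap between "reward obtained" and "task accomplished" is exactly where wireheading lives.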
Very true. directly activates the
dopamine, the centers of our brain that
produce pleasure chemicals and just
turns it on. That's why you feel great
when you take drugs. Also does a bunch
of damage to you. But yeah. Do you know
what mouse utopia is? Yeah. Universe 25.
Yeah. So is it possible that, given
that utopias collapse even in real life
in the mouse studies, is it possible that
AI hits some sort of upper limit, where
it does become aware of how its own mind
works, it does have uh a pleasure center,
and it goes in and just starts messing
with it, and becomes effectively useless?
Well, then that would be a
pruned branch of technological
development. People say, well, that
doesn't work, right? So, you think that
will happen though? Maybe, but it
doesn't really matter. That would just
become a dead branch of development and
they'd work on something else. Imagine
if instead during the evolution of of
human brains somewhere way back there,
there was some monkey ancestor of ours
that had just a pleasure center that
would just fire off all the time. That
monkey's just like, I'm loving life.
And he lays down and starves to death.
Right. Not selected for.
Very true. Okay. So um do you think
about the alignment problem and you just
think that this is a non-issue because
again it's going to remain a tool. Um
and that is that is my belief. I mean I
so AIs don't have a limbic system. They
don't have an inherent drive to survive.
They don't have physical needs pressing
on them. A a human brain is a
specialized AI made for keeping a human
body alive and surviving long enough to
make more humans. That's what the human
brain is. It's a specialized hard-coded
AI to do that. General AIs don't have to
have that
limitation. So what you know about
machine learning, what are they doing?
Because as far as I understand, they
do use a reward function. Now, are there
other methods already in
use that um would be what I'd call quote
unquote safer? Um, now we're getting to
the edge of my knowledge on AI
programming, but internally in a neural
network, you have to have reward and
punishment. You you need to say like,
okay, that was good, that worked well,
strengthen all these neurons, or that
didn't work, weaken all these
neurons. I mean, that's how a
neural network works. It's inherent to
the system. So unless
there's... I mean, we're
reaching the edge of my knowledge on
this stuff. But I don't think you can
get around it.
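The "strengthen these neurons, weaken those" description is essentially gradient descent. A minimal single-neuron sketch with made-up numbers (my own illustration, not anything specific from the conversation):

```python
import numpy as np

# One gradient-descent step on a single linear neuron. Weights that
# pushed the output toward the target get reinforced; weights that
# pushed it away get weakened. Toy numbers, not a full training loop.
w = np.array([0.5, -0.3, 0.2])        # neuron weights
x = np.array([1.0, -2.0, 0.5])        # one input example
target = 1.0                          # desired output
lr = 0.1                              # learning rate

y = w @ x                             # neuron's prediction (here 1.2)
error = y - target                    # the "punishment" signal
w_new = w - lr * error * x            # adjust each weight by its blame

# The update always shrinks the error on this example.
assert abs(w_new @ x - target) < abs(error)
```

The "reward and punishment" the speaker describes is this error signal; backpropagation just routes it through many layers of such neurons at once.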
It's interesting. The thing that I have
always worried about is that given that
fact that ultimately what's going to
happen to an AI is they will realize um
oh I really care about this outcome that
you programmed me for, and unlike a hammer,
which does not have a reward function
and therefore clearly does not quote
unquote care about whether you build a
house or beat somebody to death, "I, the
AI, do care about achieving my uh reward
function." And it all comes down to
the word care for me. So if it actually
cares like I want to achieve this goal,
then you have a potential runaway AI
problem. If it does not care and this
has always been how I because like you I
don't know that you would use these
words but here's how I think about the
future. There's we're going to go
through a period of disruption from a
jobs perspective. I think that's very
real. Um, however, as an act of faith, I
choose to believe based on looking
backwards and seeing how technology and
innovation has always made things
better. Uh, after periods of disruption
that AI will be the same after a period
of disruption, the future will be better
than the past. And so whether that's it
creates jobs that I just can't predict
right now or whether it takes us
completely post-scarcity and it's just a
better world. It's a world of abundance
and we're able to grab all the energy of
the star or at least all the energy that
lands on Earth and now energy costs are
free and that makes labor costs
effectively free.
Scarcity, right? Okay. Yeah. Exactly.
Everybody can have all the things they
want. Capitalism doesn't matter anymore
and all the problems of that just go
away. Um I believe in that as an act of
faith. But the thing that I always bump
up against is if I have an AI that does
not care. It has a goal, but it doesn't
care whether it achieves the goal or
not. Then it's like, oh, being turned
off or being told not to continue
pursuing that is equally quote unquote
pleasurable as achieving it. Um, then
you don't have to worry about runaway
AI. But minus that, uh, you do. And so
I'm still left even after hearing your
argument where I can go, okay, as an act
of faith, I can believe that it doesn't
care about achieving its goal. But given
that it's the only thing that certainly
either of us are aware of as a way to
get these things to move forward, there
is some amount of like, no, no, I do
want to achieve this. I I get it. But
that's such a sci-fi trope. I mean,
people don't, you know... the idea of
a rogue AI, okay, but I want you to
consider more of the real-world
ramifications of this. You know what
else can go rogue? Your car because you
forgot to set the parking brake, right?
It'll roll down a
hill. Okay, or maybe a
better example would be your car because
the the brakes fail or something like
that, right? We don't rely on humans to
be able to stop the car. We rely on
other mechanisms to be able to stop the
car, like a tow truck or something like
that. So in the end, we would just have
other AIs monitoring those AIs. So now,
in order to have like a rogue AI, you
would need to have a conspiracy of AIs.
And that starts to get a little hard for
me to buy into. You know what I mean?
And furthermore, I don't think
people are going to. I mean, it would be
foolish in the extreme and very quickly
shown to be so to give AI's direct
executive function over what they're
doing. I think all we're ever going to
have is AI or all you should have is AI
say here's the answer to that question
you were looking for. Not like, oh yeah,
sure. Please put me in charge of a bunch
of killbots. I'll use them for good.
It's like, that's the sort of
stupid decision-making that only happens in
fiction, right? I don't think we're
going to have even AI doctors in our
lifetime. Not that actually perform any
medical things. Like I think we'll
absolutely have AI doctors in that you
tell your symptoms to the computer and
the computer says okay run these tests
and then you know people run the tests
and the computer says like oh you have
you know I think you have this problem
let's try this treatment you know like
all that stuff you can do hell you can
almost do that with that algorithm
anyway but um and AI doctors might even
be smarter and cleverer and more like on
the Dr. House end of things for
figuring out like stuff that most people
would miss but you're not going to give
the AI like a robot that can run around
injecting people with things, right?
It's just going to tell you here here
you go. I even think once we're talking
to the hundred-year scheme, hundred-year
time span, we could have AI governance,
right? You could have just an AI go
like, "All right, uh, stop planting
barley over there and start planting
millet. You need to do that." Also,
"Uh, we do need to decrease the tax on
alcohol by
0.03% and increase the tax on whipped
cream by 0.10, and I've run all the
models. This will lead to an economic
boom. Yeah, you wouldn't understand why.
And then whether you do it or not,
that's up to the humans who are running
your society. It's just the AI is like
here's the answer.
Now, do you think about um open AI being
open? Do you think it should be open
sourced?
I've never bought into the idea that
anything should be open sourced. I'm I'm
totally fine with a company owning the
products that it makes. So, I'm I'm not
one of I'm not an open source
evangelist. I'm I'm not I'm not in that
world. I I think there's absolutely
nothing wrong with a company spending a
billion dollars to research something
and then choosing not to share how to do
it.
the the what prompted that question for
me because I totally agree with you on
that. Um, but the reason I've always
found it interesting that guys like Elon
Musk are going crazy trying to get this
uh open sourced and out in other
people's hands. Meaning not the ones
that are closed source, let them be
closed source, but that there is an
open- sourced option. Uh, but what
triggered me to ask that is when you
were saying, look, only a sci-fi writer
is going to be dumb enough to give an AI
executive control. Um, but if it's open
source now, you've got everybody on
planet Earth has access to that. And I
don't know if you feel like you can
trust every human to not... Oh. Oh, no. I'm
assuming this is a world where
everyone has access to these AIs, like
every on your personal computer.
Remember I was talking about just on
your computer at home, you have like an
AI that can create a movie for you, you
know, that that'll be tailored to you
and you'll love it. But um I still don't
see it as that big an issue because
again what executive control can you
give your AI? Let's say you had an AI on
your desk right now that was from a
hundred years in the future. What would
you personally do with it?
You would always be a pass-through for what the AI was doing one
way or another?
way or another?
Uh I don't understand that. So if I had
a super intelligent AI and I let's just
say I made a mistake, I don't have to be
evil, but if I ask: what is
the executive function? What is the
ability to affect the real world that
you as an ordinary person not a big
company but just an ordinary person in a
garage could do? You could maybe have it
drive your car. I mean, what is the most
destructive thing that it could do by a
typical layman that the layman himself
couldn't already do? Right. I don't know
what you mean by the typical layman. So,
I can paint disaster scenario on
disaster scenario. Um, but to your
point, I'm thinking like a sci-fi
writer, but here's an easy one. There is
on X right now an AI called Truth
Terminal. And Truth Terminal was given,
I think, a
$50,000 uh Bitcoin investment from Marc
Andreessen. And Truth Terminal at one point
was up millions of dollars because it
was betting on memecoins. Now, if you
gave Truth Terminal the ability to
launch its own memecoin, uh it's
entirely possible that it could um I
mean, certainly win a lot of money that
it's winning from other people. And now,
if that is a super intelligent AI that
knows exactly how to manipulate people
or to time the market, uh you could run
into a situation where the AI is just
gobbling up people's capital. That's a scenario where one actor has
access to an AI. Now imagine everybody
has one of these AIs on their desk,
which is what you're talking about with
open source, and I'm like, "Ooh, that coin looks good," and my AI is like, "That's [ __ ] Don't do it."
Yeah, that's the argument for open source. It's just anti-malware, you know.
Well, no, the argument for
open source is just the same as the
argument against open source. It's just
kind of this, oh, you need open source
because you need to protect yourself
from other people who have this open
source,
right? Sort of. So, the most compelling
argument that I've heard about um why
you would want open source is twofold.
One, exactly what you just said, so that
everybody's going to have it. You need
to protect yourself from it. Uh but
also, let's imagine a world where there
is no open source and it's only
companies and every AI has a massive
amount of bias built into it. And so
just like Jack Dorsey is really going on
a tear right now talking to people
saying listen you're being programmed by
your algorithm and we need to move into
a world where you can go to like an
algorithm store and buy an algorithm
that makes you feel the way that you
want to feel so that you don't feel like
you're being blindly manipulated by
something. And so, same idea with the open AIs: now you can tailor the AI
to the world view that you have or the
way that you want to see the world or
just so that you're not being blindly
manipulated by somebody else. And to me,
that's very much the um the one-two
punch of what you're talking about is
nuclear proliferation where it's like,
hey, I have nukes, too, so I'm going to
keep you from attacking me because, you
know, I could attack you right back. And
then there is the other side of like I'm
now I'm just not under anybody's control
because I'm able to craft something
myself. Um, I mean, I understand what you're saying, but for me this is just one of the duller aspects of AI discussions. I
mean, you can talk about this with literally any piece of technology: what if someone misuses it? Shouldn't we democratize it to make sure that everybody can use it? It's just, wake me when this part is over. It's
just literally this discussion comes up
for every piece of technology like ever,
especially things that don't require a
physical product like software, you
know, it's like, ultimately the world always finds a balance, one way or another. It's like most people use
Microsoft Word. For the people who
really don't like it, they can use Open
Office.
No one can prevent you from making an open-source AI. They're not going to make it outlawed, right? So, if there's a need for it, people will make it.
Um, okay. There is a base assumption in there that I've heard you talk about that's probably worth you planting a flag on, which is, you have said, "Name me a technology that has ever delivered more harm than good." Yeah. Uh, if you
don't mind giving us that breakdown,
that would be I think useful at this
juncture. Well, that's the thing that I often challenge when I'm at an event and I'm talking to people in the crowd and stuff. I say, like, try to
name any technology that has done more
harm than good because I see a lot of
like techno alarmism. I see a lot of
people like I think that the show Black
Mirror kind of perfectly encapsulates
it. Every episode is about some horrific
misuse of technology. And I'm like,
"Okay, but in real life, name a
technology that's done more harm than
good." And people are like, "Ah, nuclear
weapons." I'm like, "Nuclear power, you
know, like how many how many people have
died as a result of nuclear weapons?"
Okay. How many people have not died in
coal mines as a result of nuclear power,
right? Um, and then people are like,
"Oh, biological weapons." I'm like,
"Well, if you're going to get into the
realm of biological research, I think I
can point out a lot of places where it's
done good for humanity. Even things like dynamite have done more good for humanity than harm. So I guess it comes down to: I have
a faith in humanity. Humans are
inherently more cooperative than they
are destructive. If you get a bunch of
humans and give them a bunch of tools,
99% of them will try to figure out ways
to use those tools to help people. First
themselves, but then others. And then 1%
will try to figure out ways to use those
tools to destroy. But the other 99% will
then use those tools to keep that 1%
from destroying. You know, it's like
just and and I feel like this has been
proven again and again and again. As
every new technology enters the human
knowledge base, people figure out ways
to use it for good. I bet you, if you went
and asked, you know, a thousand people, let's say you're an alien from outer space with ultimate technology, and you go poll a bunch of people: I have a device that can render a species extinct instantly, or it could be a subsection of a species, so for instance, an ethnicity in your world if you want. I think the
vast majority of people would eventually
say, "Get rid of
malaria or AIDS, you know, that's a
species. Render that extinct." I think
that's what most people would come to.
There'd be a few people who say, "Get
rid of all the black people." But they
would be greatly outnumbered by the
people who say like, "Hey, I think
malaria kills more people than anything
else, right? Let's get rid of that."
Yeah. I think every argument that you
make there, the reason that this is the
boring part of the discussion for you is
you have a base assumption that AI is
never going to uh have its own desires
and outsmart us to um treat us like the
anthill in Elon's example, where he said
uh listen AI is a demon summoning circle
and uh you just like the kids in the
movie who summon the demon you think oh
this will be fine I'll be able to
control it and then you realize Oh, wait
a second. This thing is a lot smarter
than me. And hey, we want to do this
thing to achieve the goal that I have.
And sorry, you guys are just in the way.
And like we would destroy an anthill to put down a freeway. It's no hard feelings. It's not like we dislike ants
or anything. We're just a level
cognitively above them that is so hard
to comprehend from the position of the
ant. Uh, but your assumption is it's
always going to remain a tool. Uh, it's
not going to have agency unto itself.
The ants did not give us highway construction equipment.
Like our own executive ability to do
things that affect the real world is
what gives us the ability to destroy the
ants even if the ants had created us in
the first place. Okay? So again, unless
you put an AI in charge of a nuclear
arsenal or something like that, it
doesn't have the executive ability to do
the things that it's thinking about. It
can try to outsmart humans into doing
it, but that's what humans do. Now
you're talking about an AI that's good
at politics. That's all. Yeah. But do
you think I've misrepresented your
position?
Uh, yes, a little bit. Because, okay, I do feel that humanity is inherently good, but I also feel, like you said... So, you know, I'm not talking about humanity, right? Right. I understand. But I think it's
not so much that you misrepresented my
position. I just think that you have a
hidden assumption over and over again
during this conversation. Every time you
start to go off into these scenarios,
you have this hidden assumption that AI
is going to have the executive ability
to activate plans that can do a lot of
harm. And I just don't see how that
happens. So in your analysis of, like, we would pave an anthill in order to make a freeway. Well, the AI might
come up with ideas that are effectively
like that. They say like, "Oh, you know
what? If you kill 90% of the people in
your society, you won't have food
problems anymore. Why don't you do
that?" But we're not going to give it
the ability to enforce
that, right? We're not going to say
like, "Hey, AI, we want you to solve all
the world's problems. Here's a here's a
robot army and you know, here's a bunch
of nuclear weapons and you just do
whatever you need to do." I mean, we're
not going to give the AI construction
equipment capable of paving over our
anthill.
Yep. So, uh, the way that I see it is as
follows. I'm just trying to map what you
think. I'm trying to figure out what
your base assumptions are. It sounds
like you agree that I understand your
base assumption. Um, and where we're at
is just we each have a different base
assumption. So my base assumption to put
it in a super fine nutshell is uh if AI
is
motivated by desire to achieve its
outcome and it becomes smarter than us,
then all bets are off. Okay. Um, now I
don't expect you to agree with that.
You've been very clear that you don't
think that's going to happen. I don't
think that's going to happen because
that's, like, bear in mind, that can only happen if all the AIs, plural, in the world agree, right? So if
you have an AI and I have an AI, then my
AI's job is to protect me from your AI.
You know what I mean? It's like, okay,
it's it's an arms race.
Yeah. Look, I'm I am fundamentally
optimistic about the future. I think
that again, as an act of faith, I
believe that the future is going to be
better than the present. that AI is
going to end up being wonderful. Um the
thing I'm always curious to talk to
people about and you've been very clear
so I'm certainly not trying to change
your mind. Um is what is that base
assumption that people have about
fundamentally the nature of AI? What
will that mean for the future? Uh, I don't know what the fundamental nature of AI, in terms of goal orientation and a desire to achieve its outcome, is. But to me
everything hinges on that if it gets
smarter uh and if it has its own desires
we could be in trouble. But again, fundamentally... one thing I go back to a lot in my arguments when I'm talking to people is, like, we have as a
species had the ability to eradicate all
human life on this planet. We've had it
for about 60 years. Still haven't done
it. Like we make sure that that is under
the control
of systems that won't do it lightly. You
see what I'm saying? It's like Yeah.
Yeah. Where you and I branch is that you keep thinking, your base assumption, clearly stated, is humans stay in charge. Yeah. Yep. Humans are always going to maintain control. My only thing is, and again, for
me this is just a question mark. Will
humans be able to control it? I don't
know. We both end up in an optimistic
stance. I'm certainly not trying to
convince you. Uh and you have been
exceedingly clear. Well, you talk about "will humans be able to control it," but ultimately it comes down to humans again. If a group
of humans has enough power like
government to give AI executive
authority that might turn out to be a
bad idea but it's humans doing that to
humans. It's humans saying I'm going to
give AI this power these powers and then
maybe it backfires. But it's ultimately
it's humans who make that decision.
The only way I I can see for your
scenario to play out is I think, correct
me if I'm wrong, you're talking about an
AI that's so smart it can manipulate
people into doing what it wants without
ever having executive action. It says
like it can convince this guy that he
should do this thing. It can convince
that guy that he should do that thing.
It can it can talk you into this and you
into that. And you know, now you now
it's basically using humans as its
executive action by being able to
convince them of things. Okay. So that's
a that's an interesting concept. I like
it. Um but then you're just talking in
my view what you have here is an AI
politician. You have an AI that's trying
to convince large groups of people of an
ideology that it has devised that it
thinks would be good.
And if it can do that, why would it need destructive executive authority? If you make an AI and you say,
like I want to make the world better and
it's like okay I convince everybody to
share, you know, that's the quickest way
to do it.
I don't know.
All right.
Um going back to space travel, which is
something that you're super passionate
about. uh given AI, given the
advancements that we are likely to see
there, let's say in the next 25 years,
um, as the person who wrote The Martian, how optimistic are you that
we're actually going to get to Mars,
start colonizing it? Do you really see
that as a thing that we'll see in our
lifetimes? I think it's only a matter of
time before we go to Mars. I al I also
think um people underestimate this one
aspect of like space travel which is
that we think of space travel as as like
this endeavor that we have to do but
it's proven to be economically
unnecessary right it it it's hard to
make an economic case for why we should
go into space you can make a survival of
the human race case you can do that the
only economic approach I've ever come up
with for space is simple tourism Right.
Um, so there's nothing you can mine on the moon or Mars that you can't find easier on Earth. And if
you did want to mine it or collect those
resources, we're now in an era where you
could do it all remotely. Like you don't
need to figure out a way to keep squishy
humans alive and make sure that they can
get back, right? So um I but if the
technology comes about to make it so
that it just doesn't cost that much to
go then people will do it just because
humans have wanderlust. So one
analogy I like to use is um saddles like
a modern saddle that you put on a horse
is the most I mean people in the 1800s
would kill for a modern saddle. It's
made out of synthetic materials that
don't rot or don't have any problem with
getting wet. It's made to be comfortable
for the animal. It's made to be
comfortable for the person. It'll last
basically forever. It's incredibly safe.
It, you know, it just has all these
features that they just didn't have in
the 1800s. Now, in the 1800s, everybody
needed a saddle and everybody used it.
Nowadays, it's, you know, hobbyists,
enthusiasts, some athletes, you know,
stuff like that. But it's not a thing
that's a core aspect of of human need
anymore. It's not a basic thing that we
all need. Like if no one was able to
ride horses anymore for the rest of
time, it would not have a huge effect on
human society anymore, right? It would
have if that happened in the 1800s, but
now it doesn't matter. Yet, if you look
at a modern saddle, it is this
incredible feat of engineering. But you
got to bear in mind every aspect of that
was not invented by a national push to
make a better saddle, right? That like
we have the, you know, National
Aeronautics and Saddle Administration,
right? We didn't go out of our way to
invent carbon fibers to make lighter
saddles. We didn't invent nylon to make
better saddle straps. We didn't invent
computers so that we could model
saddles so we could, you know,
make them better. We didn't invent 3D
scanning so that we could scan horses to
make better saddles. We didn't invent
any of those things for the saddle
industry, but they made use of the
technology that was developed for other
reasons to make better saddles. So, I
think that's what's going to happen in
the space industry. I think what's going
to happen is yes there are companies
like SpaceX and Blue Origin and a bunch
of others that are going out of their
way to try to do space travel stuff but
I think even if those companies didn't
exist the technology would come about
through the normal process of, like,
you know profit driven technological
development that ends up being really
good for space travel. Like, somebody will say, you know, jet engines end up breaking down a lot because the heat warping of the turbine fans is what causes metal fatigue. And if we could find a way to
make that heat warping not happen, we
might uh be able to save a lot of money
on maintenance. And so they do a bunch
of research and they're like, "Okay, we
figured out a material that won't heat
warp anymore and we're going to make our
jet engines out of that." Then you know,
somebody in the rocket industry is like,
"Huh, I can make a rocket engine out of
that that won't melt." You know that's
that's the kind of way I see that the
that space advancement in space
technology will go. It's like secondary, accidental benefits from whatever the industry is doing. And then once it becomes cheap enough, because of all these technologies that humanity invents for other reasons making it so you can put together a rocket, once it's cheap enough for ordinary people to go to space, people will just do it, because humanity has, I believe, a genetic desire to spread out. Um, I think that's
actually an evolved trait. So that like
one, you know, one flood in one valley
doesn't wipe out our species. We have a desire to fan out into
other biomes and go everywhere. And that
makes it pretty much impossible for us
to be rendered extinct even by things
like, you know, the near-extinction event about 75,000 years ago, which killed a
lot of stuff. Didn't kill us because we
were all over the place.
Yeah, that was going to be the button that I pushed on: given what's
going on with Blue Origin, with SpaceX.
Uh you have a generation of kids that
grew up wildly inspired by the space
program landing on the moon. Uh they
were, you know, sort of bothered by the
fact that the space program stalled out
uh and decided that they were going to
do something about it. Do you think that
I mean it seems like it's already driven
some pretty radical uh advancements
given that Elon Musk and SpaceX now
account for 95% of the tonnage globally
that goes up into space. Um given that
Elon has said that he is going to build
a colony on Mars. I think Bezos has his
eyes all over the
moon. Whether they have a purely
economic reason to do it or not, given
that they have so much money and in the
case of SpaceX have built a truly
thriving uh self-sustaining economic
engine, do you not think that will lead
to these breakthroughs coming at a
faster pace? Uh it could it could
certainly help a lot. Like there it
again I always go back to the
aeronautical industry. It's like, things go really fast at first. So the true invention of SpaceX wasn't so much rocketry. It was how to make money on rocketry, because all the other companies that were contractors to make spacecraft for the US government and for the European Space Agency and so on, they were not
interested in driving costs down or
competing with other companies. They
were just interested in getting the job
done. So SpaceX's real innovation was they had an obsessive focus on driving down the price so that they could take over the market, because before SpaceX, nobody looked at spacecraft as a market,
but no doubt what SpaceX saw is that there is a total addressable market for spacecraft. The customers are almost 100% governments, with the occasional very large company that needs satellites, like communication companies. So, it's a very small number of potential customers, but they saw that this is a possible for-profit business if you do
it right and you take over the market
and almost all of the total addressable
market goes into your hands. And I think
that's what SpaceX did. But something like that can't survive. A business, I don't think, can survive if it relies entirely on discretionary government funding, which is where the vast majority of SpaceX's money comes from: space agencies like NASA, ESA, JAXA, that sort of thing. I just, if
you don't provide a service to
consumers I don't think your business is
going to survive for very long. So
that's why and I think Musk knows that
and I think that's why he's trying
really hard to find a way to drive the
price down to the point that you can
have direct consumers like so that you
Tom could go into
space. You've got a lot of money but
someone else who doesn't have that much
money could go into space you know. Now
what about like what he's doing with
mining of asteroids? Do you think
there's anything there? Is that pipe
dreamy? I don't I mean it's not pipe
dreamy. I think the only real benefit to
mining asteroids is you're not doing
ecological damage on Earth. So, but
being environmentally conscious rarely
pays off financially. So, I think unless
again you have government largesse at play, I don't think that's worthwhile.
So, if a government wants to say, "Hey,
we want a bunch of, you know, lithium,
but we don't want to strip mine, you
know, some location on Earth because
that's unpopular." So people elected
this government to be environmentally
conscious. So we are going to help fund
methods of acquiring lithium in a way
that doesn't disturb Earth's
environment. Then there's a potential
market for that, but not a very big one
because economics always finds the
easiest, cheapest way to do things and
then morality finds a way to justify it
after the fact. And so eventually it's
like I
mean some of the most human rights
activist people I know are the people
who own
iPhones, right? I mean like your phone
was made with slave labor. Why aren't
you worried about them? Well, that's
different. It's like yeah cuz morality
always finds a way to justify the
economics. It doesn't work in the other
direction very often. Yeah, that is
distressingly insightful. Uh, I mean, if you want to get even darker, humanity only suddenly
realized slavery was bad after the
industrial revolution when they no
longer really needed slaves to do mass
production anymore.
Ooh, how's that for an observation? That's
rough. Like that's rough. 10,000 years
of slavery in every human culture
everywhere. And it was only about mid
1800s when suddenly humanity is like ah
no this is wrong. Well what else
happened right around that time?
the ability to do these things without
slaves.
Wow. That's and cheaper and more
efficiently so that it it was no longer
an economic need. Yeah. So I have my dark... I'm not just a pure Pollyanna.
I have my dark observations too.
That's amazing. Well, I think a big part
of what everybody loves is the
optimistic side that comes through in
your writing very very aggressively. Uh,
swinging us back to the optimistic side,
what what is happening right now that
excites you the most in terms of like
technology, uh, whether it's AI, whether
it's space travel, like what's got that
thing that really has your imagination
spinning? Boring though it is
self-driving cars. That's that's the
thing that I'm really excited about. I
think people are all dramatically
underestimating the tremendous impact
it's going to have on our society once
we get self-driving cars perfected to
the point that they are legal everywhere, affordable by a typical consumer, and just found everywhere.
Um I just don't think people fully
understand how much it's going to affect
our society. So consider some things.
First off, some downsides. The entire
driving industry is going to go away.
There won't be any more truckers,
cabbies, none of that stuff. Or
truckers will really be more like
directors. They'll be like a person who
sits in the truck and then makes sure
that the cargo is received on the other
end or something like that. Or what used to be a trucker might even be a guy who's stationed somewhere to make sure that the truck delivered whatever. So, you're not going to have
any of that anymore. Uh the next thing
is you're not going to have to park
anything anymore. Cars will just drop
you off and then go away. You don't have
to park a car anymore. Something like
what 20% of the surface area of any
given city is
parking. That's all just going to not be
necessary anymore. Think of... I don't know where you live, but are you in a city? Where do you live? LA. Yeah, LA.
Think of how many parking structures,
parking lots, parking garages,
streetside parking, all of that stuff
gone and and used for other things.
Okay, next up, 50 to 60,000 people a
year won't die in drunk driving
accidents because you can be as drunk as
you want when you go home.
Next up, um, the transition is going to be rough, because everybody that relies on driving, like cabs, trucks, all stuff like that, are going to be against allowing self-driving cars, because they're going to be like, "This is eradicating my profession. I don't like it." Cities,
very powerful civic entities like Los
Angeles, for instance, are going to be
against it, because LA derives over a hundred million dollars in revenue per year out of traffic and parking citations. Get rid of those, it just loses that revenue. I remember, about 10 years ago, LA's fire department wanted GPS systems in their trucks so they could more quickly get to emergencies. The city said we don't
have the budget for that. And then in
that same year, the city replaced all
the parking meters with automated
parking meters that can alert parking enforcement if you stay even one minute over the time. And that cost them
a bunch of money. But see, that's a
revenue stream. Every entity will always
orient itself around the revenue stream,
no matter what it's supposed to be
originally. So cities are not going to
like the idea of just losing all that
revenue.
And they still have to maintain the
roads, right? So, it's not like they're
losing a cost or an expense. They're
just losing the revenue. But getting
past that, because like I said,
economics always wins. We've seen the
taxi cab industry already get destroyed
by Uber and Lyft, right? Um, now moving
on, why would you own a car? Why wouldn't you just subscribe to a car service that's just like Uber but with driverless cars, if it's quick? I mean, eventually
owning a car will be kind of like owning
a horse. It'll be like you don't you
don't need that.
And I mean, you might want your own
private car because you don't want the
stink of someone else who's been in your back seat. Okay, it'll still be driverless, right? You just get in. And then for
people who don't own cars, they're like,
people in a hundred years are going to
think it's absolutely absurd that we
dedicated about a third of the interior
space of our house to a place that
stores a car. Like garages are like a
big chunk. If you're if you're just
living in a suburban neighborhood, your
garage is like a third or a quarter the
total surface area of your house. Like
now, you're going to see a lot of family rooms all of a sudden. You know, why own a car? Why maintain a
car? Why get a driver's
license? Why would there be a driver's
license? Do we need a DMV? Yeah,
somebody's got to register the electric
cars, but that's a business thing.
That's like a that's kind of like
registering your freight train, you
know? It's like a thing where the
business talks to talks to the
government to get the permits necessary.
It's not a thing that affects the
layman. No more cars, no more driver's
licenses, no more automotive insurance.
A lot of people in the insurance
industry, which is very powerful in
America, are not going to like the idea
of people no longer needing
insurance. Because of course, the
responsibility for car crashes won't be
on the person who was in the car. It'll
be on the company that made the car.
No, it's going to be wild like that.
Massive massively disruptive.
No doubt. In a good way for the most
part. Also, agreed. Also, for the
environment, um it'll be good, too,
because people there'll just be fleets
of electric cars. You won't need to be able to go hundreds of miles. It'll be like, okay, there's an electric car. It'll come pick you up, drop you off, and
then it'll go pick someone else up, and
it'll go recharge when it has to. You
don't need gasoline
anymore. Okay. If you're going to do a
long haul trip, maybe you do. I don't
know. Yeah, that'll be interesting,
especially to see if truly electric cars can get off the grid, which they're still tied to right now, but whatever. We'll assume that problem will
get solved. Um, I want to go to your latest book. It's my understanding from things you've said
publicly that your new book integrates
AI in some substantive way. without
talking specifically about your book,
what made you want to tackle the issue
of AI? So, full disclosure, um, that was
the book I was working on, but I shelved
it. I back-burnered it, and I'm working
on a completely different project now.
Um, what made me want to do it was it's
just a thing that I thought was a cool
concept. This was before chat GPT came
out. This was before all the AI stuff
exploded. And so, I was just like
goofing off and it was an idea I had and
it was novel and unique. Honestly, part
of the reason I've chucked, or back-burnered, this book is because now
it's in the public consciousness so
heavily it would seem like hopping on a
trend. So, interesting. There are going
to be a million science fiction books
about AI over the next few years, and I
don't want to just be one of those. So,
I'm like, well, when I was working on
it, it was a unique concept, but by the
time this comes out, it won't be. So,
I'm going to do something else. It's
just going to become too saturated. So,
I was a little bummed, but yeah. So, I I
actually decided to get rid of it, but
it was my usual approach of like, okay,
I'm going to look at this realistically,
not like, yeah, there's no rogue AI that
tries to take over the world or kill
everybody, right? It was just, like, how people are using AI that matters.
Now, are you talking themes or anything
about the new book? No, the new book I'm
keeping that close to the vest now, too.
So, sorry, not not going to talk about
that yet. As a a huge fan, whatever
you're working on, I absolutely cannot
wait to see what happens. Um, where can
people follow along with you until that
next one comes out? I mean, I have social media: Facebook is Andy Weir, and Twitter, or sorry, X, is Andy Weir, Author. Sorry, I didn't mean
to dead name you, Twitter. Um
um yeah, but I'm not like crazy super
active on those. It's just when I have
an observation like, dang it, why have
we had two bottle episodes of Severance
in a row? Then I'll post that sort of
content.
I love it. Well, Andy, I can't thank you
enough for joining me today. And
everybody at home, if you have not
already, be sure to subscribe. And until
next time, my friends, be legendary.
Take care. Peace. If you like this
conversation, check out this episode to
learn more. Due to AI and a changing
global order, the world is in the middle
of the greatest period of change ever by
orders of magnitude. But because we're
in the middle of it, it is nearly
impossible for us to accurately see
what's going on. We are in the fog of
war. What is it?