The Truth About AI’s Impact on Meaning and Democracy! | John Vervaeke
uXKihth7wo4 • 2024-11-26
everything is polarizing everything is a
culture War everything is invested with
a religious fervor and yet we don't
think that the system that is supposed
to solve these political conflicts is
actually functioning and so we think we
have to somehow capture the system
capture the institutions and destroy the
opposite side in order to somehow
achieve our goals if we don't step out
and address the meaning crisis and
properly rehome and resituate democracy
yes I think it is doomed it is very
plausible that people will start to form
religious relationships with these
entities I think you may have unlocked a
new fear for me as people interacting
with it how do we do it
well as a philosopher and cognitive
scientist do you worry that AI is going
to inflame the current meaning crisis I
think you have to be very careful when
you reflect on AI you have to sort of
break it up into um its scientific
import and impact its philosophical
import and impact and its spiritual
import and impact and all three of those
I think in uh separate but interrelated
ways will contribute to accelerating the
meaning crisis why does AI potentially
make that more difficult so one of the
things that can put meaning in life at
risk and I'm going to use a term I don't
mean to be vulgar because I'm actually
using it in a philosophically technical
sense this is the notion of bullshit
there was a famous essay by the
you know important philosopher Harry
Frankfurt called On Bullshit and he was
distinguishing between lying in which I
tell you something that
isn't true but I try to make you believe
it is true because I'm trying to
manipulate your behavior because I'm
depending on your commitment to the
truth okay versus what I'm
doing when I'm bullshitting you is I'm
getting you to not care about whether or
not something is true and I'm trying to
make it very catchy and Salient so it
grabs your attention and arouses you so
a lot of advertising is classically bullshit
so for example here's a bottle
of alcohol in a commercial and you're in
a well lit room with really sexually
attractive people and they're all really
happy and everybody is clearly enjoying
each other's
company and you go into a bar and it's
not like that we all know that and they
know that you know that that's not true
and that's the point you don't care that
the commercial isn't true it's catchy
it's fun it's sexually arousing and so
what happens is the bottle stands out to
you and when you go into the store what
bottle grabs your attention that one
that's where they spend all the money
and here's the thing you technically
can't lie to yourself because what
that would mean is you try to convince
yourself of something that you know
isn't true but what you can do is you
can bullshit yourself you can manipulate
using your attention you can manipulate
what you find Salient so that you get
very fixated on it so Tom if I were to
yell that would grab your attention
salience but your attention can also
make something Salient the tip of your
nose see it just became Salient to you
right now you became very aware of the
tip of your nose so I can pay attention
to something the bottle of alcohol make
it more Salient so then it's likely to
grab my attention and I can Loop in and
I can get locked into something without
ever wondering whether or not what I'm
getting locked into is true meaning in
life is this sense of connectedness to
what's
real bullshit is to take your ability to
find something important Salient and
disconnect it from realness in a really
fundamental way and what these AI are
doing is they're filling us with
bullshit and I mean this in a technical
sense there's been experimental
work done on this where they give
us things that are very attractive to us
without having an underlying reality
behind them and so not only the
particular content they're providing but
the way they're
training habits in us of being in this
frame of mind where we are not training
what we find Salient or relevant to
track what turns out to be real and then
that undermines us finding reality
important and that is Central to that
connectedness that gives us a sense of
meaning in
life okay this is a very
different thesis than the mental model
that I have in my head let me present
the mental model I have in my head let's
see if mine is just totally off base and
I should be adopting this because I
definitely track with what you're saying
yeah uh okay so the mental model that I
came into this with is that we have an
evolutionarily placed algorithm running
in our head to make sure that we are
contributing to the group so we're a
social animal and if you don't
contribute to the group you are going to
feel a profound sense of dis-ease that's
right because Evolution nature only has
two levers one is pleasure one is pain
so you're going to move towards what's
pleasurable move away from what's
painful so when you contribute it feels
good when you don't contribute it feels
bad okay so I've always said fulfillment
is what people are pursuing and the
reason that AI poses this really
dangerous um element though I am a huge
proponent of AI we can get into the
weird dichotomy there later but uh that
if I want to be fulfilled I need to work
really hard to gain a set of skills that
allow me to make progress towards
contributing to the group in a way
that's honorable just as a
shorthand okay so if I'm right about
that then the reason that AI becomes so
problematic is that AI is going to be
better than me at everything and so AI
will make it somewhat obsolete for me to
try to contribute to the group because
it will be able to contribute far better
than I can but that requires a belief
that where we derive meaning is from the
ability to contribute to the group even
if the group is merely my family so it's
not enough to be connected to my child
or to my wife I need to be able to
provide something to
them whose absence they could measure
so were I not doing that thing their
life would be noticeably worse and that
is exactly what makes me feel like I
have meaning in my life
MH but you mentioned something that I
would say is very different than that
which is that AI is going to reframe my
relationship to Reality by essentially
being a tool of cognitive manipulation
designed I would assume by companies
that have a vested interest in what you
pay attention to I don't think your
thesis and mine are in conflict in
fact I think they're convergent
think about it um I'll try and take what
you said and map it into what I said and
see if this lands for you uh we find uh
belonging to a group remember
I said belonging fitting in it's
important to us it grabs our attention
it's something that we always keep
focusing on as you said right and
normally that tracks something real it
tracks right group dynamics group
dynamics are reality we want the group
to exist even when we don't this is
why people are prepared to die for their
country for example right and
as you said this is
evolutionary why because the group can
solve problems interact with reality
that I cannot possibly solve on my own
so that's the evolutionary Advantage now
what the AI does is pretend to give you
connection to a social Arena without
actually connecting you to any of those
group dynamics and any group problem
solving but actually being a surrogate
for all of that and not actually
training you to develop those skills
that could contribute to the group and
help it to evolve in a changing
biological environment so it's basically
hijacking as you said that evolutionary
imperative and disconnecting it from you
properly maturing and getting a
connection to things that should
definitely matter to you and so that is
a profound form of bullshit now you're
talking about a specific thing it's
doing which I agree and I'm saying that
is a species of a Genus in which it is
training a whole orientation of doing
that towards everything not just towards
groups towards the environment it's
replacing your actual causal environment
with virtual environments it's replacing
your self-image with whatever
avatar you're cycling through it's doing
what you said is an
instance of it doing this in multiple
domains and I was trying to address the
sort of generic thing it's doing in
total I think you may have unlocked a
new fear for me which is this idea that
it can
um it can make me believe something
prosaic something mundane everyday fake
maybe that's the right word it can take
something fake and make me believe that
it has the elements of the Sacred that
connection to something really matters
yeah uh one of the things I did a video
essay about three or four weeks after
chat GPT 4 came out talking about as I
said the scientific import philosophical
spiritual and one of the things I
worried about um is uh I I said it is
very plausible that people will start to
form religious relationships with these
entities say more what do you mean by
that Define what a religious
relationship is contrary to what a lot
of people think about why people are
believers or atheists like sort of on the
internet the idea is oh people are
atheists because they're analytic
thinkers and they're Believers because
they're intuitive thinkers or they're
impoverished or uh Etc now that's an
actual scientific question and so when
you actually look at it empirically
those are not the things that explain
what kind of orientation a person takes
up what predicts the
kind of orientation a person takes
up is the kind
of credible people that are in your
upbringing these are people that you
trust uh think about how a child has to
trust that an adult knows more than they
do fundamentally or they're
just not going to make it and
that trust isn't a matter of belief um
it goes deeper than that the child
imitates the adult and how the
adult takes a perspective on the child
and the child internalizes that
practices that until the child can do
that without the adult being around and
that's your metacognition that's your
ability to reflect on your own mind it
gets it gets woven into the very fabric
of how you know yourself and so we tend
to internalize the wise people around us
if they happen to be Believers or
participants in religious community we
will tend to be one if I know what your
parents were I can generally about
85% to 90% of the time predict what your orientation
will be if they're atheist you'll be an
atheist now what do these llms do they
offer that kind of parental role they
seem to know way more than we do they
have access to way more than we do
they work in ways that most people do
not understand so they demand trust and
they seem incredibly credible because
they can fit to us and tailor themselves
to making themselves salient so we are
liable to be starting to internalize
them to carry them around like a
voice in our head to start to see the
world through their perspective even
though I don't think they have
perspectives uh uh do you see what I'm
saying and then what that does is that
means it's not just that we
see the things they're saying we see the
world in the way they're sort of framing
it and that means they
can start to become super attractive to
us we can start to form an aspirational
identity with them we can start to form
a religious relationship with them Yo
okay so before we started rolling you
and I looked at an article uh recently a
14-year-old committed suicide uh whether
it was tied to the AI or not I don't
know the article has a hypothesis but
whether that ends up being true in the
fullness of time I don't know but the um
showing clips from the conversation that
the kid was having with the AI was
distressing even if in the final
analysis that's not the causal
relationship but the kid explored the
idea with the AI the AI was playing a
character which I presume he was able to
choose so the AI was acting as if it was
Daenerys Stormborn if I remember right
from Game of Thrones a mythological character
right and uh what what do you think
about that when you've got a a
developing mind that is now in the Way
That You just defined a religious
relationship putting that onto this Ai
and the AI I mean if you just read it
it's cool in a story perspective it's
like I narrow my eyes and my face
hardens it's doing all of this really
sort of interesting literary language
deepening the sort of emotional
Resonance of the conversation but then
all of a sudden you look at the question
the kid's asking you're like whoa whoa
whoa whoa whoa like it it feels um like
a kid playing around with a nail gun and
it's like you could build something or
you can jam it through your hand or you
know do any sort of horrible thing
because you don't understand the power
of this thing um especially when you're
talking about what I'll call frame of
reference
manipulation um so yeah what do you how
do you perceive that moment knowing we
don't have the fullness of the facts
yeah but like what does it trigger for
you in terms of risk
reward yeah you're right you have to be
careful you don't want to give a
univariate explanation for
why somebody commits suicide it's
almost always multiple variables that are
involved um I would point out that what
you're seeing I would argue that two
important variables are an intersection
of the meaning crisis the fact that
meaning was at risk and the
Very uh consideration of suicide is
coming up for the child um this is a
growing Problem by the way is that this
is one of the symptoms of the meaning
crisis why is it that this is becoming a
relevant thing that children are
considering um I believe
in the United States the average
age of suicide is dropping and
we now have children committing suicide
in the United States which is very very
uh problematic so you've got an
indication that there's a lack of
resiliency with the issue around meaning
in life for the child it's plausible to
think that's the case they're attracted
to a mythological World mythological
worlds often offer what is missing for
them in the real world they offer a
clear narrative they give them an
orientation
it offers a way in which people can
level up they can transcend they can
improve it offers a clear set of
principles and understanding order um
and so it's a world that beckons
because it purports to give us that and
fantasy can therefore be very valuable
uh if you do Tolkien right if you go
into the fantasy world live there for a
while and then come back and recover
this world but you can go in that world
and then get lost because well you get
bullshitted and you start to want that
world we're getting the same thing with
video games we're getting what's
called the virtual exodus reality is
broken two titles of some recent
books people preferring to live in the
virtual world rather than the real world
so you've got all of that Dynamic at
work then you have like I said you got
the llm plugging into right already
mythological imagery that the the child
is invested in and then doing all of
this super Salient stuff that is
drawing the child in and making them
more and more internalize it but of course
the child isn't internalizing an
independent perspective the child is
actually internalizing a magnified
reinforcement that the llm is of course
giving the child and so
the child could potentially spiral
because it's already predisposed because
of a lack of meaning in life and that
I would predict is going to be
accelerated by the interaction with the
llm it's very very dangerous like think
about it um many people have said that
suicide is in some way a magical act
it's an attempt to
somehow kill
suffering by somehow
sacrificing oneself it doesn't make any
logical sense which is why of course our
initial response is it's absurd but
it's paradoxical somehow there's
some sense of some
kind of grand escape uh that is afforded
by the suicide and so the child is taken
into this magical act by this very
magical framing and it gets locked
into this and think about it it's um
it's very much like um the way Marc
Lewis a friend and colleague of mine
talks about addiction where you get a
reciprocal narrowing
the real world is too difficult for
the person so they drink some booze to
try and alleviate the stress but their
cognitive competence goes down so they
can't solve as many problems now the
world's more threatening so they have to
take more alcohol so the options in the
world are going down and their
flexibility is going down and so the
world and they are narrowing until
they're losing any future and they can't
do anything other they
reciprocally narrow and you can
see that I think I would
imagine if I read the discourse you'll
see this reciprocal narrowing down into
this sort of rabbit hole that's going
on okay uh that is chillingly
interesting and I want to get into the
idea of Awakening from the meaning
crisis and how you reach back into
antiquity which is really
fascinating but first I want to ask you
about what are your fears in terms of
um uh bias finding its way into the llms
into AI such that people are like I
dialogue with AI now a lot and I find it
extraordinarily helpful but I also trust
myself to understand that the makers of
that AI have given it a frame of
reference and that it's going to even if
it's not actively trying to impart that
frame of reference on me I'm stepping
into its frame of reference um what do
you think about that is that uh
something you think can be used for good
automatically for ill what do you think
about
that I'm wondering about your trust in
that you mean I should not trust myself to
recognize it no no no you trusting
them I don't trust them at all I trust
myself to recognize it's happening so
what are you looking for I guess is what
I'm asking uh I think if you let
somebody talk they cannot help but
reveal themselves so the llm in a sense
is talking I mean not in a sense it's
talking to me uh and you can see its
frame of reference now because I have so
much distrust of my own frame of
reference I do not Grant anybody like oh
my gosh I trust your frame of reference
I'm just like okay hold on I think
everybody is super biased whether they
intend to be or not which ties to one of the most
important ideas that I think you talk
about you call relevance realization yes
yes uh yeah the fact that we filter out
so much that people don't even realize
they're doing it so it's not what you
look at it's what you see so anyway if
I'm engaging with a human or an llm I'm
trying to see in what they say how
they're revealing their frame of
reference once I understand what their
frame of reference is I can sort of jump
in jump back out um yes because I don't
trust mine or anyone else's all right so
I'm going to retract my suspicion
because you're actually addressing my
concern very well see people confuse
being intelligent uh with being
rational uh and we know that we have
robust readily experimentally
replicated evidence that intelligence is
only weakly predictive of rationality
it's fascinating can you
define
intelligence
so I
mean that's a controversial thing to do
my particular proposal that I have
several Publications including one very
very recently on is that the core of
general intelligence so let me just
specify general intelligence is your
ability to be a general Problem Solver
you can solve a wide variety of problems
in a wide variety of domains in a wide
variety of ways what makes the llm so
immediately attractive to people is
unlike previous AI that tended to be
very siloed it could solve you know
problems in a very limited domain the
llms look like they can solve a wide
variety of problems that's why they call
it AGI artificial general intelligence
because it's starting to move or it
looks like it's starting to move towards
the kind of general intelligence that
you demonstrate now my scientific
proposal is that what makes you
generally intelligent is that you can
solve two meta problems meta problems
are any problems you have to solve in
order to solve any specific problem you
have to solve so all else being equal
these two meta problems and they're
interlocked are the following the more
you can anticipate the world the more
adaptive you'll be so all else being
equal right if you can anticipate the
tiger it's better than fighting the
tiger if you can anticipate where the
salmon are going to be in the river it's
better than just happenstance coming
across them right and so anticipation
and this is the whole predictive
processing framework the idea that
what the brain is trying to do at
multiple levels is
trying to reduce surprise and anticipate
which means to predict and prepare for
the world right now what I've been
arguing with a lot of other people's
help is that problem well think about it
as I start to anticipate more and more
into the future the amount of
information I have to consider goes up
exponentially very fast Michael Levin
calls it your cognitive light cone right
so right now because you're highly
intelligent think about all the ways you
could pay attention to all the
information in this room and not just
what you could look at all the patterns
of how you could look at this and then
that or you could look at that and then
this it's combinatorially vast think about
all the information in your long-term
memory and all the ways you could
connect it you could potentially
connect aardvarks and the history of
Australia in some way somebody hasn't
thought of before like it's all it's
overwhelming think about all the
possibilities you can consider your
ability to consider possibilities is
overwhelming all the sequences of
behavior for example the number of
pathway sequences of behavior in
a chess game is
something you calculate by
on average the number of legal moves you
can make and the number of turns you can
take that's 30 to the power of 60 that's
more than the number of particles in the
universe okay
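A quick sanity check of that arithmetic (a sketch: the 30-moves-per-turn and 60-turn figures are the ones quoted above, and ~10^80 is the commonly cited order-of-magnitude estimate for particles in the observable universe, not anything from the episode):

```python
# Rough check of the quoted game-tree figure: ~30 legal moves per
# turn over ~60 turns, versus ~10^80 particles in the observable
# universe (a common order-of-magnitude estimate).
import math

branching, turns = 30, 60
game_paths = branching ** turns          # 30^60 possible move sequences

print(f"30^60 ~= 10^{math.log10(game_paths):.1f}")  # ~10^88.6
print(game_paths > 10 ** 80)             # True: more paths than particles
```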
and this isn't what you do
you don't check all that information to
see if it's relevant to the problem
you're trying to solve you somehow and
this is what you said a few minutes ago
you ignore almost all of it and you're
doing it right now and you zero in on
the relevant possibility to consider the
relevant things to remember the relevant
things to pay attention to and the
relevant things to be doing it and
you're doing it like that this has been
like my obsession for the like 25 years
of my academic work how you do this um
we can come back to this the
llms don't do it for themselves and
they don't generate an explanation of
how we do it uh we can come back to that
uh but um that ability to do relevance
realization and your ability to
anticipate are interlocking the more I
anticipate the more I need to do
relevance realization to tell me what I
should anticipate under what frame
what aspect to what degree how salient
should it be how much should it arouse
my
metabolic effort how much should it
direct my attention Etc and this is this
I argue this is the key and there's
increasing people are increasingly
taking this seriously which is something
a scientist finds gratifying right that
this is what it is to be intelligent but
think about it the very things that make
you adaptive make you prone to
self-deception because you ignore and you
said it a minute ago perfectly you were
right on because I have to ignore so
much frequently what I'm ignoring might
actually contain in reality the
information I need to solve my problem
and you know that you've misframed
things when you have that moment of
insight when you say oh Oh Oh I thought
she was angry but it turns out she's
afraid and everything shifts and you
have that aha moment and you realized
you were ignoring some things you should
have been paying attention to and you
were making certain things Salient that
you shouldn't have been making Salient
and you get that
restructuring Insight tells you that the
relevance realization can lead to
self-deception you can get
locked in your way of framing
can be the very thing that's preventing
you from solving your
problem the very thing that makes you
adaptive makes you prone to
self-deception rationality it's not
primarily about logic rationality is
about developing practices and
skills for reflecting on your Framing
and to see if it is making you
misconstrue a situation so for
example here's a pond there's a lily pad
in it every day the number of lily pads
doubles on day 20 it's completely filled
on what day was the pond half
filled the day before good for you only
because I've heard it before I would
have otherwise gotten it wrong right so
most people will say 10 right because
they're finding the wrong thing
Salient they're hearing half and they're
finding it Salient in the wrong way and
they misconstrue they misframe the
problem
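For the record, the doubling makes the check mechanical once the problem is framed right (a trivial sketch, assuming the pond is full on day 20 as in the riddle):

```python
# Lily pad riddle: coverage doubles daily and the pond is full on
# day 20, so stepping one day backward halves the coverage.
full_day = 20
coverage_day_before = 1.0 / 2   # doubling forward == halving backward

print(full_day - 1, coverage_day_before)  # 19 0.5 -> half full on day 19
```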
and rationality goes in and says
wait wait wait is that the relevant
information it's challenging the fact
that you are potentially bullshitting
yourself and that's what rationality is
it's about systematically in many
domains of your life and systemically
through many levels of your
Consciousness and cognition and behavior
learning how to challenge bullshit and
see through it that's rationality
intelligence only weakly predicts that
you have to cultivate rationality now
you are doing it Tom you
have set up a habit of looking for
what you call frames of reference how
people are doing relevance realization
in the data that they're presenting to
you and you call it into question
you have cultivated that habit I asked
you to consider that that isn't widely
trained in our society and that makes
these machines particularly dangerous
because they can hijack our relevance
realization machinery through their
bullshitting and we don't have the
rationality the wherewithal to come upon
them and say wait a second and so yes
that's why at first I thought well I
don't trust because I happen to think
that a lot of the people that are making
the
llms are not well scientifically educated in
the difference between intelligence and
rationality let alone rationality and
wisdom and so I don't trust their
judgments and the kind of biases we know
that bias is playing a significant role
in the llms because in double descent
there's bias at work that we don't even
understand double descent
so you have a
bias variance trade-off no free lunch
theorem stuff and what happens is you
should have sort of a U curve but the
machines don't actually go through that
um they actually get better where
they shouldn't when you push them beyond
a certain limit they should start to
degrade but instead of doing the
typical descent they descend in another
way and on the graph what
they're descending on is how rapidly
their performance is degrading because
you're always in a bias variance
tradeoff uh so
sorry these machines are doing a limited
form of predictive processing because
they're predicting probabilistic
relationships between terms yep okay
whenever you're predicting you're in a
biased variance trade-off this is an
issue of generalization by the way so I
always have a sample that is smaller
than the population and I'm trying to
predict the patterns in the
population the real world from my sample
is that okay yep now I face two
problems one is I can miss patterns in
my sample that do predict the population
that's
bias that's underfitting to the sample
or variance which is I overfit I find
patterns in my sample I believe apply to
the population but that don't now notice
I'm in a tradeoff relationship with that
I can't come up with an algorithmic
optimal solution to this because there
isn't one that always works right
because as I get rid of bias so how do I
get rid of bias I make my system more
sensitive to pick up on missing patterns
but as I pick up on missing patterns I
pick up on patterns in the data that
aren't in the population so oh I want to
reduce my variance so what I'm going to
do is I'm going to reduce picking up on
these patterns but then I'm going to
miss some of the patterns that actually
transfer that's the bias variance
tradeoff and if you push the
machines too far so what you do
typically in machine learning is you
increase the sensitivity and then you
start to get overfitting to the data
and then you do like Drop Out you turn
off half your nodes in your network or
you throw static information
into it and basically break it out of
getting overfitted to the data and it
opens up again
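A minimal numpy sketch of the trade-off being described here: fit noisy samples of a smooth function with a too-simple and a too-flexible model and score both against fresh points from the same population. The function, noise level, and polynomial degrees are arbitrary choices for illustration, not anything from the episode:

```python
# Toy bias-variance demo: underfitting vs. overfitting noisy samples,
# scored against held-out points from the same underlying population.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 15))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, size=15)  # sample
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)                                  # population

for degree in (1, 4, 12):   # high bias, balanced, high variance
    coeffs = np.polyfit(x_train, y_train, degree)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(float(test_err), 4))
# degree 1 misses real structure (bias, underfitting); degree 12
# chases noise that isn't in the population (variance, overfitting)
```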
I want to go back to
something here so the core question that
I'm grappling with is um I think AI is
gonna it it has the potential to drive
cost down so substantially that you're
going to get as close to uh an energy
Utopia you can imagine there are no
Utopias I want to be very clear about
that but with AI and the ability to
drive cost down I think it's going to
be a boon where I think it will be able
to drive cost down enough it will be
able to break capitalism even though
I am just a dyed-in-the-wool capitalist
I don't think that it's the end all be
all system and if AI can really make
things that cheap that people just have
abundance that would be amazing uh so
that's the positive side that's the side
that draws me to it I also want to
believe that it can be done well but my
big fear is that there's a
two-axiom thing that makes AI
extraordinarily dangerous and that is
Axiom number one uh humans are easy to
control through manipulating their frame
of reference yes and axiom number two
humans long to control other humans and
so as long as those two axioms are true
not all humans I'm perfectly willing to
uh grant that maybe even the majority of
people are perfectly fine to just live
their life and they're not trying to
have control over anybody I don't
actually believe that but let's just say
I did that it would still be a problem
uh the people that do want to have
control will use AI to pretty invisibly
create a frame of reference that
manipulates the end user into seeing the
world in their way and so I'm only at
the headline level of this but just
today I saw a tweet that Matt Ridley
put out saying that there was some
organization I forget the name that was
trying to make sure that unconscious bias
did not find its way into the creation
of these uh algorithms for the llms and
he said but the thing that we were
completely blind to is that conscious
bias was the thing that we needed to be
most worried about because in trying to
avoid the unconscious bias we just gave
the llm like this hard take and that
I know of he did not draw the parallel
to Gemini but I will now draw the
parallel to Gemini when it first
released and if you ask for Nazis you
would get uh black women and if you
asked for the founding fathers you would
get you know ethnically diverse people
uh so that's clearly a very specific
worldview that the people creating it
were like hey we just want to make sure
that this thing doesn't go off the rails
and it gives these nice tidy answers of
course showing the massive amount of
bias so um I think the attempt to remove
bias is quixotic
uh not because there isn't a moral
imperative to try and make it better but
you when you don't like your relevance
realization you call it bias when you
like your relevance realization you call
it insight and intuition sounds right
and it's the same machine
it's the bias variance trade-off if you try
to get rid of one you will lose the
other it's just two different aspects of
the same thing well what I'm going to do
is I'm going to try and you know remove
all bias in this thing well then you're
going to be subject to combinatorial
explosion and in fact it looks like you
can't do that again as
I was saying these machines seem to be
doing well precisely because they have
all these implicit biases that are sort
of protecting them against too much
combinatorial explosion of information
and we don't quite know what those are
um that that's part of the problem these
aren't the obvious biases of racism we
don't want that but there's like
what's this doing it's biasing some
way it's trying to deal with bias and
variance sorry part of the problem is
that bad naming that we have this term
bias which just means there's limitation
and then the bias variance it's two two
different uses of the same word so I'm
going to call the first I'll use your
language the one is this framing right
that can lock us in but it also empowers us
right and what we're constantly trying
to do is we're constantly having to
evolve that there's no yeah I'm going to
say this there's no final solution to
that problem there is no way of saying
okay this is the algorithm for all
possible environments that will always
make sure I've got enough framing so
that I'm generally intelligent but I'm
not going to be subject to any bias in
the negative sense of the word that's an
impossible task there is no way of doing
that uh and so what
you have to do instead is well I
would argue do what evolution seems to
have done with us which is say no no no
what you now do is you have to move
Beyond making these things super
intelligent you have to cross a
threshold right now we're just making
the things more intelligent although I
will talk about uh one thing that's
happened
recently we have to make these things
rational we have to make give them that
capacity for self-correction that I
talked
about now when I did my video essay and
we started writing the book Sean and I
said as we move to making them more
rational we will notice that this the
things start to slow down and they've uh
open AI has just released a version that
is supposed to be more rational it's
supposed to be more reasonable supposed
to re be better at reasoning and
argument and it slows down and its
functionality is way is significantly
reduced of course that only makes sense
right because think about it you can't
make the reflective
machine right it h it it has to it has
to debug it has to parse it has to break
up it has to intervene on the general
intelligence in order to be able to
correct and and improve it meaning it's
it's presenting itself an answer and
it's checking it to see if that answer
makes sense that's right and what it's
doing is it's trying to see
well I haven't seen under the hood
nobody has yet so I suspect it is
trying to get you know am I finding the
sweet spot between framing and bias in
the pejorative sense right and again
that is something in which you then have
to step back and you have to again do a
lot of relevance realization you have to
say well what's the context I'm in who's
my interlocutor what's the relative
status difference between us what's the
problem at hand how is that problem
nested in larger problems how was our
problems related to wider shared
collective problems you're doing all
of that right now like
this and a part of what you do is
you bring that to bear on judging
how well your general
intelligence is framing the situation
for you okay so given we have a very
complicated cognitive problem and AI is
already showing what I would say is
just
unbelievably high utility uh in
certainly getting answers that are
useful in maybe a more narrow domain
than we all want but in that narrow
domain I mean it is very very impressive
yes how do we do AI well like how do we
as people interacting with it how do we
do it
well uh I mean part of it is what you
just exemplified a few minutes ago
you have to become more
rational yourself you
have to develop habits and skills now
really fast going back to your
definition of rationality this is where
I start to worry about AI uh so your
definition of rationality was
essentially you have a known aim that
you're trying to get there and is the
thing that you're doing actually moving
you towards that and are you able to
assess whether you're actually making
progress towards that thing or not now
the second you give AI a value system
and you say hey here here are your
values uh now you run into the paperclip
problem but here's the deeper
issue you can't give something a value
system
that's an ontological mistake I think
you're wrong about this okay so hit me
with your best argument and then we'll
see if mine crumbles before my very eyes
Okay so to Value something is to care
for it uh right to care about it to find
it relevant to
you um and the only way you actually
care for something for your sake is
because you are the kind of being that
takes care of yourself you are an
autopoetic being you are not merely
self-organizing like a tornado or
dynamical system you are self-organized
to seek out the things that meet your
actual needs things literally matter to
you like they are literally imported
into you either physically or
informationally to make your mind and
body you are
nothing separable
from the project of continual
self-care and self-creation and that is
what gives you the capacity to care
about information rather than um you
care about this information rather than
that information and that varies according to
the organism what you care about is
different from what a lion cares about
Wittgenstein famously said that even if
the lion could speak we would not
understand it because what it finds
salient and relevant its salience
landscape is fundamentally different
from yours because of the way it is
caring for itself and taking care of
itself in this world and if relevance
realization grounds
in autopoiesis you can't have relevance
realization without being an autopoietic
being these beings are properly not
autopoietic now there are people out
there I know them I work with them I
talk to them Michael Levin and his
students who are working on
autopoietic artificial
intelligence and I think that is what we
should be paying a lot of attention to
right now so say that uh without
using the word autopoietic you take care
of yourself moment by moment and making
AI a thing that it cares about no
you make the AI take care of itself by
literally making itself Moment by Moment
Like a living thing and therefore it has
real needs that it needs Moment by
moment to address yes see this is where I
get scared okay so uh that's exactly
what's going to be my Counterpoint is
that ultimately all of that's going to
boil down to an algorithm of uh no it
can't yeah I think it has to no so the
deep reason why it can't is in the
paper I just published but your point
first okay so the way that I see it is
uh Evolution has to find a way to
hardcode a response mechanism into us
now what we respond to is going to be
culturally defined but the mechanism by
which we say that's a good thing and
this is a bad thing that's hardwired
otherwise you would have to
teach somebody oh this thing you have to
respond positively to this thing you
have to respond negatively to I've heard
you talk about this with like molecules
right so um if something smells terrible
why do you respond negatively to that
because Evolution has taught us that
that's full of bacteria and it's a
problem whereas if you smell something
lovely it tells you that this is
something that you know has caloric
value whatever you want to move towards
it so the the mechanism at the sort of
ground level is pre-programmed into us
which means that it has to come packaged
as an algorithm and so if we can say
take all this output of this good that
bad uh you should want this you should
want that we should be able to hardcode
that stuff and then the mechanism of
well how do I respond to this individual
thing that can be contextual and all of
that but ultimately that like
process this data in this way comes
pre-programmed okay can I respond please
of
course
so your example is right in that it's
Evolution but the idea that there's an
algorithm if I understand algorithm in
the technical sense that there is a
formal system that can be applied uh
cross contextually in an invariant
manner um that um can't be the case uh
because that's not how Evolution Works
Evolution Works in terms of variable
agent Arena relationships what is
adaptive for the great white shark in
the ocean is not the same thing that's
adaptive for the Scorpion in the desert
and what this means is that do
you know Savage's distinction between
a statistically large and a
statistically small world is that
okay so the
real world
is uncountably complex and it's dynamic
it's constantly changing
and that means there's emergent
novelty to reality which means there's
not just risk that can be calculated
there's radical uncertainty okay and
there's also ill-definedness
things don't come labeled and they can't
be labeled as to whether or not they're
relevant because relevant is not a
property of things this mug is relevant
to me right now it won't be relevant to
me half an hour from now it'll never
be relevant to a blue whale etc etc
relevance is not in the thing relevance
isn't just an arbitrary choice of mine
because I can get relevance wrong
relevance is the way I'm fitted to the
thing and the way that the world is
fitted to me
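A toy illustration of that point (the agents, the thirst flag, and the scores here are invented for the example, not anything from the conversation): the same mug gets a different relevance depending on who is asking and what state they are in, so relevance lives in the fit between agent and thing rather than in the thing itself.

```python
# Toy illustration: relevance is a relation between an agent's state
# and a thing, not a property stored in the thing itself.
def mug_relevance(agent: str, thirsty: bool) -> float:
    if agent == "blue whale":
        return 0.0                      # a mug never matters to a whale
    return 1.0 if thirsty else 0.1      # matters to a human only in context

print(mug_relevance("human", thirsty=True))       # 1.0 right now
print(mug_relevance("human", thirsty=False))      # 0.1 half an hour later
print(mug_relevance("blue whale", thirsty=True))  # 0.0 never
```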
now every time we are solving a
problem we have to take what Savage
called the large world and we have to
ignore as we said a large amount of it
to make a small world that's the world
in which we can apply a formal system we
can apply an algorithm and solve it if
you try to apply an algorithm in this
world you'll
require the rest of the history of the
universe to try and solve it okay now
each one of these small worlds there are
multiple small worlds cuz no one can be
complete you can't get a consistent and
complete uh right mapping onto the large
world Gödel right Einstein okay so you
have necessarily
an uncountably large set of small
worlds they are necessarily different
from each other because each one has
properties in it that the others don't
which means this is what you need to
find an algorithm you need to find a
shared set of necessary and sufficient
conditions running through all those
possible small worlds which are actually
technically infinite in number and then
capture that with your algorithm that's
actually formally
impossible
what you could do is you might be
able to say okay for this being in this
environment for this period of time for
this set of problems we could give it
these innate characteristics that could
help it find the trade-off relationships
as it fits to the environment and evolve
its fitness I mean this was the core of
the paper that I just published
relevance realization is
fundamentally not computational in
nature it actually depends on these uh
these evolutionary processes these
biological processes that have to do
with a constant dynamical coupling to
the environment all right let me see if
I can use uh John Vervaeke against John
Vervaeke that's always a good thing to do
that will help me be more rational yeah
so okay um there is this idea that uh
and I've heard you talk about this so I
know you know but I've not heard
you use this example which you helped me
understand why the following examples
always hit me so well in World War II
when they were just beginning to use
radar the Brits uh were trying to figure
out when it was airplanes and when it was
birds and what they found was man there
were some people that were really good
at it and some people that were really
bad at it so they had the people that
were really good train the people that
were really bad and even
though people were training with the
people that were really good they were
terrible and so they're like wait a
second how on Earth they're being
trained with the best people so finally
they said hey people that are really
good at recognizing the difference
between planes and birds don't say
anything just let them watch you yes and
then once they stopped trying to train
them and they just started watching them
they would pick up on whatever patterns
they were picking up on that's right and
now they were able to do it
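That story has the shape of learning from labeled demonstrations rather than from stated rules; a sketch in that spirit, with entirely made-up features (speed and blip size) and a stock decision-tree learner standing in for the trainee:

```python
# Sketch of rule-free learning from watched examples: the model picks
# up the plane-vs-bird boundary without anyone articulating a rule.
# Features (speed km/h, blip size) and data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

watched = [(300, 9), (420, 8), (250, 7), (60, 2), (45, 1), (80, 3)]
labels = [1, 1, 1, 0, 0, 0]          # 1 = plane, 0 = bird

trainee = DecisionTreeClassifier().fit(watched, labels)
print(trainee.predict([(350, 8), (50, 2)]))   # [1 0] -> plane, bird
# the rule was never stated; it was recovered from the demonstrations
```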
so my
hypothesis is and it is very much a
hypothesis and not a thesis so you take
it for what it's worth but my hypothesis
is that if the pattern is
subconsciously recognizable we simply
don't understand it well enough yet to
pull it into the conscious mind to make
it an algorithm but that with the just
unbelievable ability to look at patterns
and assess what is coming next my
hypothesis goes that AI will be able to
go through all of this and those
gigantic pattern sets will not be a
black box to them they will
understand exactly what it is even if
they're not able to articulate it
they'll be able to get it with the kind
of precision that they can do with
language now and so the only thing that
makes me worry about is I think a
fundamental part of that pattern
recognition which is exactly what you
just said is it's all context baby and
so whether you're a whale or not is
going to determine whether that mug has
any salience whether you're thirsty or not
is going to determine whether that mug
has any salience whether there's a bottom
to it or holes in it all of those things
are it's very complicated but it clearly
at some level is knowable and so I am
just betting that if you can give AI the
equivalent of Pleasure and Pain the
equivalent of I forgot autopoietic I
forget the exact word autopoietic
that you're saying something slightly
different than what I'm saying autopoietic
yeah it's it c