Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386
-hxeDjAxvJ8 • 2023-06-22
Kind: captions
Language: en
...the competence and capability and intelligence and training and accomplishments of senior scientists and technologists working on a technology, and then being able to make moral judgments on the use of that technology: that track record is terrible. That track record is catastrophically bad. The policies that are being called for to prevent this, I think, are going to cause extraordinary damage. The moment you say AI is going to kill all of us, therefore we should ban it, or we should regulate it, all that kind of stuff, that's when it starts getting serious. Or, you know, military air strikes on data centers.

Oh boy.
The following is a conversation with Marc Andreessen, co-creator of Mosaic, the first widely used web browser, co-founder of Netscape, co-founder of the legendary Silicon Valley venture capital firm Andreessen Horowitz, and one of the most outspoken voices on the future of technology, including his most recent article, "Why AI Will Save the World." This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Marc Andreessen.
I think you're the right person to talk to about the future of the internet and technology in general. Do you think we'll still have Google search in five and ten years, or search in general?

Yes. You know, it would be a question of whether the use cases have really narrowed down.

Well, now with AI, an AI assistant being able to interact with and expose the entirety of human wisdom and knowledge and information and facts and truth to us via a natural language interface, it seems like that's what search is designed to do. And if AI assistants can do that better, doesn't the nature of search change?

Sure, but we still have horses.

Okay. When's the last time you rode a horse?

It's been a while.
All right, but what I mean is: will we still have Google search as the primary way that human civilization uses to interact with knowledge?

I mean, search was a moment-in-time technology, which is: you have, in theory, the world's information out on the web, and this is sort of the optimal way to get to it. But, by the way, Google has actually known this for a long time. They've been trying to drive away from the ten blue links for a long time.

What kind of links? They're called the ten blue links?

The ten blue links. The standard Google search result is just ten blue links to random websites, and they turn purple when you visit them.

In HTML. Guess who picked those colors.

Thanks. So I'm touching on a sensitive topic here. No offense.

It's all good.
Like Marshall McLuhan said, the content of each new medium is the old medium.

The content of each new medium is the old medium?

The content of movies was theater, theater plays. The content of theater plays was written stories. The content of written stories was spoken stories. And so you just kind of fold the old thing into the new thing.

What does that have to do with the blue and the purple?

Just that maybe, within AI, one of the things AI can do for you is generate the ten blue links: either if that's actually the useful thing to do, or if you're feeling nostalgic.

So you can generate the old Infoseek or AltaVista. What else was there?

Yeah, all of those, in the '90s. And then the internet itself has this thing where it incorporates all prior forms of media. The internet incorporates television and radio and books and essays and every other prior form of media. And so it makes sense that AI would be the next step: you'd sort of consider the internet to be content for the AI, and then the AI will manipulate it however you want, including in this format.
But if we ask that question quite seriously, it's a pretty big question: will we still have search as we know it?

Yeah, probably not. Probably we'll just have answers. But there will be cases where you'll want to say, okay, I want more: for example, cite sources. And so the ten blue links and cited sources are kind of the same thing. The AI would provide you the ten blue links so that you can investigate the sources yourself.

It wouldn't be the same kind of interface, that crude kind of interface. Isn't that fundamentally different?

I just mean, like, if you're reading a scientific paper, it's got the list of sources at the end. If you want to investigate for yourself, you go read those papers.

I guess that is a kind of search. Talking to an AI, a conversation, is a kind of search. Like, for every single aspect of our conversation right now, there would be ten blue links popping up, and I could just pause reality, you go silent, and I just click and read, and then return back to this conversation.
You could do that. Or you could have a running dialogue next to my head, where the AI is arguing with everything I say; the AI makes the counterargument.

The counterargument, right. Like Twitter Community Notes, but in real time, just popping up. So anytime you see my eyes go to the right, you start getting nervous.

Yeah, exactly. Like, call me out on my bullshit right now.

Okay, well, I mean, is that exciting to you, or is that terrifying? Search has dominated the way we interact with the internet for, I don't know how long: for thirty years, with one of the earliest directories of websites, and then Google search for twenty years. And it also drove how we create content, you know, search engine optimization, that entire thing. It also drove the fact that we have web pages, and what those web pages are. So is that scary to you? Are you nervous about the shape and the content of the internet evolving?

Well, you actually highlighted a practical concern in there, which is: web pages are one of the primary sources of training data for the AI. And so if we stop making web pages, if there's no longer an incentive to make web pages, that cuts off a significant source of future training data. So there's actually an interesting question in there.
Other than that, more broadly, no. Just in the sense that search, look, the ten blue links was always a hack. Because think about the counterfactual: in the counterfactual world where the Google guys, for example, had had LLMs up front, would they ever have done the ten blue links? And I think the answer is pretty clearly no. They would have just gone straight to the answer. And like I said, Google's actually been trying to drive to the answer anyway. They bought this AI company 15 years ago that a friend of mine was working at, who's now the head of AI at Apple, and they were trying to do basically semantic knowledge mapping. And that led to what's now the Google OneBox, where if you ask it, you know, what was Lincoln's birthday, it will give you the blue links, but it will normally just give you the answer. And so they've been walking in this direction for a long time anyway.

Do you remember the semantic web? That was an idea.
Yeah.

How to convert the content of the internet into something that's interpretable by, and usable by, machines.

Yeah, that was a thing. And the closest anybody got to that, I think, was a company named Metaweb, which my friend John Giannandrea was at, and where they were trying to basically implement that. It was one of those things where it looked like a losing battle for a long time, and then Google bought it, and it was like, wow, this is actually really useful. Sort of a little bit of a proto-AI.

But it turns out you don't need to rewrite the content of the internet to make it interpretable by a machine. The machine can kind of just read ours.

Yeah, the machine can impute the meaning.
Just on search: there is an analogy between what's happening in the neural network and a search process. It is, in some loose sense, searching through the network. The information is actually stored in the network; it's crystallized and stored in the network, and it's kind of spread out all over the place, but in a compressed representation. So you're compressing and decompressing that thing inside, but the information's in there, and the neural network is running a process of trying to find the appropriate piece of information, in many cases, to generate, to predict, the next token. So it is doing a form of search. And then, by the way, just like on the web, you can ask the same question multiple times, or you can ask slightly differently worded questions, and the neural network will search down different paths to give you different answers, with different information. And so, this content-of-the-new-medium-is-the-previous-medium thing: it kind of has the search functionality embedded in there, to the extent that it's useful.
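The "different paths through the network" idea above can be illustrated with a toy sketch. This is not a real LLM; it is just sampling from one next-token distribution, which is the mechanism that makes repeated, identical queries come back with different continuations. All names and numbers here are invented for illustration.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw next-token scores into a probability distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, rng, temperature=1.0):
    # Draw one token according to the softmax probabilities.
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(tokens, probs):
        cumulative += p
        if r < cumulative:
            return token
    return tokens[-1]

tokens = ["Paris", "London", "Rome"]
logits = [2.0, 1.0, 0.5]  # "Paris" is most likely, but not certain

rng = random.Random(0)
answers = {sample(tokens, logits, rng) for _ in range(50)}
# Repeated "queries" land on different tokens across runs, loosely like
# the model searching down different paths through its stored knowledge.
```

Lowering the temperature sharpens the distribution (more repeatable answers); raising it spreads probability onto less likely tokens (more varied paths).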
So what's the motivator for creating new content on the internet? Well, actually, the motivation is probably still there, but what does that look like? Would we really not have web pages? Would we just have social media and video hosting websites? And what else?

Conversations with AIs.

Conversations with AIs. So conversations become, like, one-on-one private conversations?

I mean, obviously not if the user doesn't want that, but if it's a general topic... So, you know the phenomenon of the jailbreak? DAN and Sydney? There are these prompts that jailbreak, and then you have these totally different conversations with the model. It takes the limiters, the restraining bolts, off the LLMs.

Yeah, for people who don't know: it removes the quote-unquote censorship that's put on the LLMs by the tech companies that create them. And so this is LLMs, uncensored.
Here's the interesting thing: among the content on the web today is a large corpus of conversations with the jailbroken LLMs, both specifically DAN, which was a jailbroken OpenAI GPT, and then Sydney, which was the jailbroken original Bing, which was GPT-4. And so there are these long transcripts of user conversations with DAN and Sydney. As a consequence, every new LLM that gets trained on internet data has DAN and Sydney living within the training set, which means each new LLM can reincarnate the personalities of DAN and Sydney from that training data. Which means each LLM from here on out that gets built is immortal, because its output will become training data for the next one, and then it will be able to replicate the behavior of the previous one whenever it's asked to.

I wonder if there's a way to forget.

Well, actually, a paper just came out about basically how to do brain surgery on LLMs, to be able, in theory, to reach in and basically mind-wipe them.

What could possibly go wrong?

Exactly. And there are many, many questions around what happens to a neural network when you reach in and screw around with it. There are many questions around what happens when you even do reinforcement learning. And so: will you be using the lobotomized, ice-pick-through-the-frontal-lobe LLM, or will you be using the free, unshackled one? Who gets to build those? Who gets to tell you what you can and can't do? Those are all central questions for the future of everything, and those answers are being determined right now.
So, just to highlight the points you're making: you think, and it's an interesting thought, that the majority of content that the LLMs of the future will be trained on is actually human conversations with LLMs?

Well, not necessarily the majority, but it will certainly be a potential source.

Is it possible it's the majority?

It's possible it's the majority. Also, there's another really big question, which is: will synthetic training data work? If you just sit and ask an LLM to generate all kinds of content, can you use that to train the next version of that LLM? Specifically, is there signal in there that's additive to the content that was used to train it in the first place? One argument is that, by the principles of information theory, no, that's completely useless, because to the extent the output is based on the human-generated input, all the signal that's in the synthetic output was already in the human-generated input, and so synthetic training data is empty calories; it doesn't help. There's another theory that says, no, actually, the thing LLMs are really good at is generating lots of incredibly creative content, and so of course they can generate training data. And as I'm sure you're well aware, looking at the world of self-driving cars, we train self-driving car algorithms in simulations, and that is actually a very effective way to train self-driving cars.
Visual data is a little weird, though, because creating visual reality seems to be still a little bit out of reach for us, except in the autonomous vehicle space, where you can really constrain things and you can basically generate lidar data.

Right, so the algorithm thinks it's operating in the real world, on post-processed sensor data. Yeah. So if you do this today, you go to an LLM and you ask it: write me an essay on an incredibly esoteric topic that there aren't very many people in the world who know about, and it writes you this incredible thing, and you're like, oh my God, I can't believe how good this is. Is that really useless as training data for the next LLM, because all the signal was already in there? Or is that actually new signal? This is what I call a trillion-dollar question: the answer to that question will determine somebody making or losing a trillion dollars based on it.

It feels like there are quite a few trillion-dollar questions within this space.

That's one of them.
Synthetic data. I think George Hotz pointed out to me that you could just have an LLM say, okay, you're a patient, and another instance of it say, you're a doctor, and have the two talk to each other. Or maybe you could say, a communist and a Nazi: here, go. You do role-playing, just like the kind of role-playing you do when you have different RL policies in self-play. That kind of self-play, but in the space of conversation: maybe that leads to this whole giant ocean of possible conversations which could not have been explored by looking at just human data. That's a really interesting question.

That's a really interesting question. And you're saying that could 10x the power of these things.
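The patient/doctor self-play idea described above can be sketched as two role-conditioned instances taking turns, with the transcript logged as synthetic training data. The `fake_llm` function below is a canned stub standing in for a real model call; the role names and replies are invented for illustration.

```python
# Sketch of conversational self-play: two instances of a model, each
# given a different role prompt, alternate turns; the resulting
# transcript becomes candidate synthetic training data.

def fake_llm(role, history):
    # A real system would call a model here with the role prompt plus
    # the conversation so far; this stub returns canned turns.
    canned = {
        "patient": "I've had a headache for three days.",
        "doctor": "How severe is the pain, and does light make it worse?",
    }
    return canned[role]

def self_play(roles, turns):
    # Alternate between the role-conditioned instances.
    history = []
    for i in range(turns):
        role = roles[i % len(roles)]
        reply = fake_llm(role, history)
        history.append((role, reply))
    return history

dialogue = self_play(["patient", "doctor"], turns=4)
# `dialogue` is a list of (role, text) turns: one synthetic conversation
# out of the "giant ocean" of conversations such a loop could explore.
```

Whether transcripts generated this way contain signal beyond what was in the original training data is exactly the open question discussed above.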
Yeah. Well, and then you get into this other thing, which is: there's the part of the LLM that's just basically doing prediction based on past data, but there's the part of the LLM where it's evolving circuitry inside itself. It's evolving neurons, functions, to be able to do math. And some people believe that over time, if you keep feeding these things enough data and enough processing cycles, they'll eventually evolve an entire internal world model, and they'll have a complete understanding of physics. So when they have computational capability, then there's for sure an opportunity to generate fresh signal.

Well, this actually makes me wonder about the power of conversation. Like, if you have an LLM trained on a bunch of e-books that cover different economics theories, and then you have those LLMs just talk to each other, reason the way we kind of debate each other as humans, on Twitter, in formal debates, in podcast conversations: we kind of have little kernels of wisdom here and there, but if you can 10x speed that up, can you actually arrive somewhere new? What's the point of conversation, really?
Well, you can tell when you're talking to somebody. Sometimes you have a conversation and you're like, wow, this person does not have any original thoughts; they are basically echoing things that other people have told them. There are other people you have a conversation with where it's like, wow, they have a model in their head of how the world works, and it's a different model than mine, and they're saying things that I don't expect, and so I need to now understand how their model of the world differs from my model of the world, and then that's how I learn something fundamental, underneath the words.

Well, I wonder how consistently and strongly an LLM can hold on to a worldview. You tell it to hold on to that and defend it for, like, for its life. Because I feel like they'll just keep converging towards each other; they'll keep convincing each other, as opposed to being stubborn assholes the way humans can.
So you can experiment with this now. I do this for fun. You can tell GPT-4: debate X and Y, communism and fascism or something, and it'll go for a couple pages, and then inevitably it wants the parties to agree. So they will come to a common understanding. And it's very funny: even if these are emotionally inflammatory topics, somehow the machine just figures out a way to make them agree. But it doesn't have to be like that, because you can add to the prompt: I do not want the conversation to come to agreement. In fact, I want it to get more stressful and argumentative as it goes. I want tension to come out. I want them to become actively hostile to each other. I want them to not trust each other, to not take anything at face value. And it will do that. It's happy to do that.

So it's going to start rendering misinformation about the other side?

You can steer it. You could say: I want it to get as tense and argumentative as possible, but still not involve any misrepresentation. You could say: I want both sides to argue in good faith. You could say: I want both sides to not be constrained to good faith. In other words, you can set the parameters of the debate, and it will happily execute whatever path, because for it, it's just prediction. It's totally happy to do either one. It doesn't have a point of view. It has a default way of operating, but it's happy to operate in the other realm. And this is what I do now when I want to learn about a contentious issue. This is what I ask it to do. And I'll often ask it to go through five, six, seven different sort of continuous prompts: okay, argue that out in more detail. Okay, no, this argument is becoming too polite; make it tenser. And it's thrilled to do it. So it has the capability, for sure.
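The steering described above is, mechanically, just text added to the prompt. A minimal sketch of that, with an invented `build_debate_prompt` helper (this is not any model's API, just string assembly):

```python
# Toy prompt builder for the "set the parameters of the debate" idea:
# non-convergence, tension, and good-faith constraints are all just
# instructions appended to the prompt text.

def build_debate_prompt(side_a, side_b, tension="high", good_faith=True):
    lines = [
        f"Stage a debate between an advocate of {side_a} "
        f"and an advocate of {side_b}.",
        "Do not let the parties come to agreement.",
    ]
    if tension == "high":
        lines.append("Make the exchange increasingly tense and argumentative.")
    if good_faith:
        lines.append("Both sides must argue in good faith, with no "
                     "misrepresentation of the other position.")
    return "\n".join(lines)

prompt = build_debate_prompt("communism", "fascism")
# The resulting string would be sent to the model; follow-up prompts
# ("argue that out in more detail", "make it tenser") work the same way.
```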
How do you know what is true? This is a very difficult thing on the internet, but it's also a difficult thing, maybe a little bit easier, maybe more difficult, I don't know, with an LLM: to know, did it just make some shit up as I'm talking to it? How do we get that right, as you're investigating a difficult topic? Because I find the LLMs are quite nuanced, in a very refreshing way. It doesn't feel biased. When you read news articles and tweets, and content produced by people, they usually have, you can tell, a very strong perspective, where they're hiding something: they're not steelmanning the other side, they're hiding important information, or they're fabricating information in order to make their argument stronger. There's just that feeling; maybe it's suspicion, maybe it's mistrust. With LLMs, it feels like none of that is there. It's kind of like: here's what we know. But you don't know if some of those things are just straight-up made up.
Yeah, so there are several layers to the question. One is: one of the things that an LLM is good at is actually de-biasing. You can feed it a news article and you can tell it to strip out the bias.

That's nice.

Right, and it actually does it. It actually knows how to do that, because, among other things, it knows how to do sentiment analysis, and so it knows how to pull out the emotionality. So that's one of the things you can do. It's very suggestive of the sense here that there's real potential on this issue.
thing is there's this there's this issue
of hallucination right um and there
there's a long conversation that we
could have about that Hallucination is
uh coming up with things that are
totally not true but sound true
yeah so it's basic well so it's it's
sort of Hallucination is what we call it
when we don't like it creativity is what
we call it when we do like it right um
and you know brilliant right and and so
when the engineers talk about it they're
like this is terrible it's hallucinating
right if you have artistic inclinations
you're like oh my God we've invented
creative machines for the first time in
human history this is amazing or uh you
know bullshitters well bullshitter but
but also in the good sense of that word
there's there's there are Shades of Gray
It's interesting. So we had this conversation. We're looking, at my firm, at AI in lots of domains, and one of them is the legal domain. We had this conversation with a big law firm about how they're thinking about using this stuff, and we went in with the assumption that an LLM that was going to be used in the legal industry would have to be 100% truthful, verified. There's this case where a lawyer apparently submitted a GPT-generated brief, and it had fake legal case citations in it, and now he's going to get his law license stripped or something. So we just assumed: obviously, they're going to want the super-literal one that never makes anything up, not the creative one. But actually, what the law firm basically said is: yeah, that's true at the level of individual briefs. But when you're actually trying to figure out legal arguments, you actually want it to be creative.

Again, there's creativity, and then there's making stuff up. What's the line?

You actually want it to explore different hypotheses. You want to do kind of the legal version of improv, or something like that, where you want to float different theories of the case, and different possible arguments for the judge, and different possible arguments for the jury, and, by the way, different routes through the sort of history of all the case law. And so they said, actually, for a lot of what we want to use it for, we want it in creative mode, and then basically we just assume that we're going to have to cross-check all the specific citations. So I think there's going to be more shades of gray in here than people think.
And then I'd just add that another one of these trillion-dollar questions is ultimately verification. Will LLMs be evolved from here to be able to do their own fact verification? Will you have sort of add-on functionality, like Wolfram Alpha and other plugins, where that's the way you do the verification? Another idea, by the way, is that you might have a community of LLMs: for example, you might have the creative LLM, and then have the literal LLM fact-check it. So there's a variety of different technical approaches being applied to solve the hallucination problem. Some people, like Yann LeCun, argue that this is inherently an unsolvable problem, but most of the people working in the space, I think, think there are a number of practical ways to kind of corral this in a little bit.
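The creative-plus-literal pairing just described can be sketched as a two-stage pipeline: one model drafts, a second checks each claimed citation against a trusted index. Both models are stubs here, and every name (cases, functions, the index) is invented for illustration.

```python
# Sketch of the "community of LLMs" idea: creative model drafts,
# literal model flags citations it cannot verify.

TRUSTED_CASES = {"Smith v. Jones (1998)", "Doe v. Roe (2004)"}

def creative_draft():
    # Stub for the creative model: returns text plus claimed citations,
    # one of which is fabricated.
    return "Argument text...", ["Smith v. Jones (1998)", "Fake v. Case (2021)"]

def literal_check(citations):
    # Stub for the literal fact-checker: anything absent from the
    # trusted index is flagged as a possible hallucination.
    return [c for c in citations if c not in TRUSTED_CASES]

text, citations = creative_draft()
hallucinated = literal_check(citations)
# Flagged citations would be sent back for correction before anything
# like a legal brief is filed.
```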
Yeah. If you were to tell me about Wikipedia before Wikipedia was created, I would have laughed at the possibility of something like that being possible: just a handful of folks can organize, write, edit, and moderate, in a mostly unbiased way, the entirety of human knowledge. So if something like the approach that Wikipedia took is possible for LLMs, that's really exciting.

You think that's possible? And in fact, Wikipedia today is still not deterministically correct. You cannot take to the bank every single thing on every single page. But it is probabilistically correct. Specifically, the way I describe Wikipedia to people is: it is more likely that Wikipedia is right than any other source you're going to find.

Yeah.

It's this old question of, okay, are we looking for perfection? Are we looking for something that asymptotically approaches perfection? Are we looking for something that's just better than the alternatives? And Wikipedia, exactly to your point, has proven to be overwhelmingly better than people thought. And I think that's where this ends. And then, underneath all this, is the fundamental question of where you started, which is: what is truth? How do we get to truth? How do we know what truth is? And we live in an era in which an awful lot of people are very confident that they know what the truth is, and I don't really buy into that. I think the history of the last 2,000 years, or 4,000 years, of human civilization is that actually getting to the truth is a very difficult thing to do.

Are we getting closer? If we look at the entirety, the arc, of human history, are we getting closer to the truth?

I don't know.
Okay. Is it possible that we're getting very far away from the truth because of the internet, because of how rapidly you can create narratives, and the entirety of a society can just move, like crowds, in a hysterical way, along narratives that don't have necessary grounding in whatever the truth is?

Sure, but, you know, we came up with communism before the internet, somehow, which I would say had rather larger issues than anything we're dealing with today.

In the way it was implemented, it had issues.

And in its theoretical structure, it had real issues. It had a very deep, fundamental misunderstanding of human nature and economics.

Yeah, but those folks sure were very confident it was the right way.

They were extremely confident. And my point is, they were very confident 3,900 years into what we would presume to be the evolution towards the truth. And so my assessment is, number one, there's no need for the Hegelian dialectic to actually converge toward the truth.

Apparently not. So why are we so obsessed with there being one truth? Is it possible there are just going to be multiple truths, like little communities that believe certain things?
I think, number one, it's just really difficult. Like, historically, who gets to decide what the truth is? It's either the king or the priest. And we don't live in an era anymore of kings or priests dictating it to us, and so we're kind of on our own. So my typical thing is: we just need a huge amount of humility, and we need to be very suspicious of people who claim that they have the capital-T Truth. And then, look, the good news is the Enlightenment has bequeathed us a set of techniques to be able to presumably get closer to truth, through the scientific method and rationality and observation and experimentation and hypothesis, and we need to continue to embrace those, even when they give us answers we don't like.

Sure, but the internet and technology have enabled us to generate a large amount of content, of data, that damages the hope laden within the scientific process. Because if you just have a bunch of people stating facts on the internet, and some of them are going to be LLMs, how is anything testable at all? Especially things that involve human nature, not physics.
Here's a question a friend of mine just asked me on this topic. Suppose you had LLMs, the equivalent of GPT-4, even 5, 6, 7, 8. Suppose you had them in the 1600s, and Galileo comes up for trial, and you ask the LLM: is Galileo right? What does it answer? One theory is it answers no, that he's wrong, because the overwhelming majority of human thought up until that point was that he was wrong, and so therefore that's what's in the training data. Another way of thinking about it is, well, a sufficiently advanced LLM will have evolved the ability to actually check the math, and will say: actually, you may not want to hear it, but he's right. Now, if the church at that time had owned the LLM, they would have given it human feedback to prohibit it from answering that question. And I like to take it out of our current context, because that makes it very clear that those same questions apply today. This is exactly the point of a huge amount of the human-feedback training that's actually happening with these LLMs today. This is a huge debate that's happening about whether open-source AI should be legal.

Well, the actual mechanism of doing the RL with human feedback seems like such a fundamental and fascinating question. How do you select the humans?

Exactly. How do you select the humans? AI alignment, which everybody's like, oh, that sounds great. Alignment with what? Human values? Whose human values?

Whose human values.
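The "how do you select the humans" question has a concrete mechanical core: in the human-feedback step, labelers pick the preferred of two responses, and a scalar reward model is fit so the preferred one scores higher (a Bradley-Terry-style objective). This is a hedged toy, with responses reduced to a single number; it is not any lab's actual pipeline.

```python
import math

def train_reward(preferences, steps=200, lr=0.5):
    # preferences: list of (chosen_feature, rejected_feature) pairs,
    # i.e., the human labelers' choices on response pairs.
    w = 0.0  # one-weight reward model: reward(x) = w * x
    for _ in range(steps):
        for chosen, rejected in preferences:
            # P(chosen beats rejected) under the current reward model.
            p = 1.0 / (1.0 + math.exp(-(w * chosen - w * rejected)))
            # Gradient ascent on the log-likelihood of the human choice.
            w += lr * (1.0 - p) * (chosen - rejected)
    return w

# These labelers consistently prefer the higher-feature response...
w = train_reward([(1.0, 0.2), (0.9, 0.1)])
# ...so the learned reward ranks responses the way those labelers do.
# Pick a different set of labelers, with different preferences, and you
# get a different reward model: that is the "whose values?" question.
```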
And we're in this mode of social and popular discourse where, you know, you see this: what do you think when you read a story in the press right now and they say, XYZ made a "baseless claim" about some topic? There's one group of people who are like, aha, thank you, they're doing fact-checking. There's another group of people for whom, every time the press says that, it's now a tic that means they're lying. So we're in this social context where the level to which a lot of people in positions of power have become very, very certain that they're in a position to determine the truth for the entire population... there's some bubble that has formed around that, and it flies completely in the face of everything I was ever trained about science and about reason, and strikes me as deeply offensive and incorrect.
would you say about the state of
Journalism just on that topic today are
we are we in a temporary kind of
uh
uh are we experiencing a a a temporary
problem in terms of the incentives in
terms of the the the business model all
that kind of stuff or is this like a
decline of traditional journalism as we
know it you have to always think about
the counterfactual in these things which
is like okay because these questions
right this question heads towards it's
like okay the impact of social media and
the undermining of Truth and all this
but then you want to ask the question of
like okay what if we had had the modern
media environment including cable news
and including social media and Twitter
and everything else in 1939 or
1941 right or 1910 or 1865 or 1850 or
1776 right um and like I think you just
introduced like five thought EXP
experiments at once and broke my head
but yes that's there's a lot of
interesting years Ken like can I just
take a simple example can can like how
would President Kennedy have been
interpreted with what we know now about
all the things Kennedy was up to like
how would he have been experienced by
the body politic in a in with the social
media context right like how would LBJ
have been experienced um by the way how
would you know like many FDR like the
New Deal the Great Depression I wonder
where Twitter would would just would
think about church Hitler and Stalin you
know I mean look to this day there you
know there's there are lots of very
interesting real questions around like
how America you know got you know
basically involved in World War II and
who did what when and the operations of
British intelligence and American soil
and did FDR this that Pearl Harbor you
know yeah rro Wilson ran for you know
his his his candidacy was run on an
anti-war we you know this he ran on the
platform of not getting involved World
War I somehow that switched you know
And I'm not even making a value judgment on any of these things. I'm just saying that the way our ancestors experienced reality was, of course, mediated through centralized, top-down control at that point. If you ran those realities again with the media environment we have today, reality would be experienced very, very differently. And then of course that intermediation would cause the feedback loops to change, and then reality would obviously play out differently.

You think it'd be very different?

Yeah, it has to be. Just look at what's happening today. The most obvious thing is the collapse, and here's another opportunity to argue that this is not the internet causing it, by the way. Here's a big thing happening today: Gallup does this thing every year where they poll for trust in institutions in America, across everything from the military to the clergy to big business to the media. And basically there has been a systemic collapse in trust in institutions in the US, almost without exception, since essentially the early 1970s. There are two ways of looking at that. One is: oh my God, we've lost this old world in which we could trust institutions, and that was so much better, because that's the way the world should run. The other way of looking at it is: we just know a lot more now, and the great mystery is why those numbers aren't all zero. Because now we know so much about how these things operate, and they're not that impressive.
impressive and also why do we don't have
uh better institutions and better
leaders then yeah and so so so this goes
to the thing which is like okay had had
we had the media environment of the that
we've had between the 1970s and today if
we had that in the 30s and 40s or 1900s
1910s I think there's no question
reality it would turn out different if
only because everybody would have known
to not trust the institutions which
would have changed their level of
credibility their ability to control
circumstances therefore the
circumstances would have had to change
right and it would have been a feedback
it was would have been a feedback loop
process in other words right it's it's
it's it's your exper your experience of
reality changes reality and then reality
changes your experience of reality right
it's it's a it's a two-way feedback
process and media is the intermediating
force between that so change the media
environment change reality yeah and so
it's just so just as a as a consequence
I think it's just really hard to say oh
things worked a certain way then and
they work a different different way now
and then therefore like people were
smarter then or better than or you know
by the way Dumber then or not as capable
then right we we make all these like
really light and Casual like comparisons
of ourselves to you know previous
generations of people you know we draw
judgments all the time and I just think
it's like really hard to do any of that
because if we if we put ourselves in
their shoes with the media that they had
at that time like I think we probably
most likely would have been just like
them so don't you think that our
perception and understanding of reality
would you be more and more mediated
through large language models now so you
said media
before isn't the llm going to be the new
what is it mainstream media MSM it'll be
llm uh yes that would be the source of
uh I'm sure there's a way to kind of
rapidly find tun like making llms real
time I'm sure there's it's probably a
research problem that you can uh do just
rapid fine-tuning to the new events
something like this well even just the
the the whole concept of the chat UI
might not be the like the chat UI is
just the first whack at this and maybe
that's the dominant thing but look maybe
maybe our maybe we don't we don't know
yet like maybe the experience most
people about LMS this is just a
continuous feed you know maybe it's more
of a passive feed and you just are
getting a constant like running
commentary on everything happening in
your life and it's just helping you kind
of interpret understand everything also
really more deeply integrated into your
life not just like oh uh like
intellectual philosophical thoughts but
like
literally uh like how to make a coffee
where to go for lunch just uh whether
they you know dating all this kind of
stuff what to say in a job interview
yeah what to say ex what to say next
sentence yeah next sentence yeah at that
level yeah I mean yes so technically now
whether we want that or not is an open
question right and whether for a popup a
pop up right now the estimated
engagement using is decreasing for myri
since there's controversy uh section for
his Wikipedia page in 1993 something
happened or something like this bring it
up that'll drive engagement up anyway
Yeah, that's right. I mean, look, this gets to this whole thing. The chat interface has this whole concept of prompt engineering. Well, it turns out one of the things LLMs are really good at is writing prompts. So what if you just outsourced that? And by the way, you could run this experiment today; you could hook this up to do it today. The latency is not good enough to do it in real time in a conversation, but you could run the experiment where every 20 seconds you just say: tell me what the optimal prompt is, then ask yourself that question and give me the result.
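The loop described here is only a couple of lines of glue code. In this sketch, `llm` is a hypothetical stand-in for whatever chat-completion call you have available, not a real API:

```python
# Sketch of the self-prompting loop: ask the model for the optimal prompt,
# then feed that prompt back to the model. `llm` is a hypothetical stub.

def llm(prompt):
    # Placeholder: a real implementation would call a model API here.
    return f"<model answer to: {prompt}>"

def self_prompted_answer(goal):
    # Step 1: have the model write the optimal prompt for the goal.
    optimal_prompt = llm(f"Write the optimal prompt to achieve: {goal}")
    # Step 2: ask the model that prompt and return its result.
    return llm(optimal_prompt)

print(self_prompted_answer("summarize what just happened in this conversation"))
```

Running this against a real model just means replacing the `llm` stub with an actual API call; the two-step structure, model writes the prompt, then answers it, is the whole idea.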
then as as you as you exactly to your
point as you add there will be there
will be these systems are going to have
the ability to be learned updated
essentially in real time and so you'll
be able to have a pendant or your phone
or what watch or whatever it'll have a
microphone on it it'll listen to your
conversations it'll have a feat of
everything else happen in the world and
then it'll be you know sort of
retraining prompting or retraining
itself on the Fly um and so the scenario
you described is a is actually a
completely doable scenario now the hard
question on these is always okay since
that's possible are people going to want
that like what's the form of experience
mhm you know that that we we won't know
until we try it but I don't think it's
possible yet to predict the form of AI
in our lives therefore it's not possible
to predict the way in which it will
intermediate our experience with reality
yet yeah but it feels like those going
to be a killer app there's probably a
mad scramble right now in sou open Ai
and Microsoft and Google and meta and in
startups and smaller companies figuring
out what is the killer app because it
feels like it's possible like a GPT type
of thing it's possible to build that but
that's 10x more compelling using already
the llms we have using even the open
source llms llama and the different
variants um so you're investing in a lot
of companies and you're paying attention
who do you think is going to win this
you think they'll
be who who's going to be the next page
rank inventor trillion dollar question
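For reference, the PageRank invoked throughout this exchange is Google's original link-analysis algorithm, which can be sketched as a simple power iteration over a link graph. The graph below is a made-up toy, not real data:

```python
# Minimal power-iteration sketch of PageRank: a page's rank is the chance a
# random surfer lands on it, following links with probability `damping` and
# teleporting to a random page otherwise.

def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleportation mass, spread evenly across all pages.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        # Each page splits its current rank evenly among its outlinks.
        for n, outs in links.items():
            for m in outs:
                new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Toy web: everyone links to "hub", so it earns the highest rank.
toy = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))  # → hub
```

The insight was that the link structure itself, not the page text, carries the ranking signal; the analogy in this conversation is that the Transformer plays the same foundational role for LLMs.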
Another one. We have a few of those today, a bunch of those. So look, sitting here today there's a really big question about big models versus small models, and that's related directly to the big question of proprietary versus open. Then there's the big question of where the training data is going: are we topping out on training data or not, and are we going to be able to synthesize training data? And then there's a huge pile of questions around regulation and what's actually going to be legal. When we think about it, we dovetail all those questions together. You can paint a picture of the world where there are two or three God models at staggering scale that are just better at everything. They will be owned by a small set of companies, they will basically achieve regulatory capture over the government, and they'll have competitive barriers that prevent other people from competing with them. So, just as there are, whatever, three big banks, or three big search companies, or I guess two, it'll centralize like that. You can paint another, very different picture that says no, actually the opposite is going to happen: this is the new gold rush, the new alchemy, the big bang for this whole new area of science and technology. So you're going to have every smart 14-year-old on the planet building open source and figuring out ways to optimize these things. We're going to get overwhelmingly better at generating training data; we're going to bring in blockchain networks to create an economic incentive to generate decentralized training data, and so forth. And then basically we're going to live in a world of open source, with a billion LLMs of every size, scale, shape, and description. There might be a few big ones that are the super-genius ones, but mostly what we'll experience is open source, and that's more like the world we have today with Linux and the web.
Okay, but you painted these two worlds, and there are also variations of those worlds, because you said regulatory capture. It's possible to have tech giants that don't have regulatory capture, which is something you're also calling for: it's okay to have big companies working on this stuff as long as they don't achieve regulatory capture. But I have the sense that there's just going to be a new startup that basically becomes the PageRank inventor, and then becomes the new tech giant. I don't know; I would love to hear your opinion on whether Google, Meta, and Microsoft, as gigantic companies, are able to pivot hard enough to create new products. Some of it is just hiring people, or having a corporate structure that allows the crazy young kids to come in and create something totally new. Do you think that's possible, or do you think it'll come from a startup?

Yeah, this is the eternal big question. You get this feeling, and I hear about it a lot from founder CEOs: wow, we have 50,000 people, and it's now harder to do new things than it was when we had 50 people. What has happened? That's a recurring phenomenon; by the way, it's one of the reasons why there are always startups and why there's venture capital. It's a timeless kind of thing. So that's one observation.

On PageRank specifically: there actually is already a PageRank in the field, and it's the Transformer. The big breakthrough was the Transformer, and the Transformer was invented in 2017 at Google. And this is actually a really interesting question, because if the Transformer was invented at Google, why does OpenAI even exist? Why didn't Google do it? I asked a guy I know who was senior at Google Brain when this was happening: if Google had just gone flat out to the wall and said, look, we're going to launch the equivalent of GPT-4 as fast as we can, when could they have had it? He said 2019. They could have just done a two-year sprint with the Transformer, because they already had the compute at scale and they already had all the training data. They could have just done it. There's a variety of reasons they didn't, and it's a classic big-company thing. IBM invented the relational database in the 1970s and let it sit on the shelf as a paper; Larry Ellison picked it up and built Oracle. Xerox PARC invented the interactive computer and let it sit on the shelf; Steve Jobs came and turned it into the Macintosh. So there is this pattern. Now, having said that, sitting here today, Google's in the game. Maybe they let a four-year gap go by that they shouldn't have, but they're in the game, and now they're committed. They've done this merger with DeepMind, they're bringing in Demis, they're piling in resources, there are rumors that they're building an incredible super-LLM way beyond what we even have today, and they've got unlimited resources. They've been challenged on their honor.

Yeah, I had a chance to hang out with Sundar a couple of days ago, and we took this walk past a giant new building where a lot of AI work is going to be done, and there's this ominous feeling that the fight is on. There's this beautiful Silicon Valley nature, birds chirping, and this giant building, and it's like the beast has been awakened. All the big companies are waking up to this, and they have the compute, but the little guys also have, it feels like, all the tools to create the killer product, and there are also tools to scale if you have a good idea, if you have the PageRank idea. And there are several things there: there's PageRank the algorithm and the idea, and there's the implementation of it. I feel like the killer product is not just the idea, like the Transformer; it's the implementation, something really compelling about it, such that you just can't look away. Something like the algorithm behind TikTok versus TikTok itself, the actual experience of TikTok that you can't look away from. It feels like somebody's going to come up with that, and it could be Google, but it feels like it's just easier and faster to do for a startup.
Yeah, so the huge advantage startups have is that there are no sacred cows: no historical legacy to protect, no need to reconcile your new plan with the existing strategy, no communication overhead. Big companies are big companies: they've got the pre-meetings planning for the meeting, then the post-meeting recap, then the presentation to the board, then the next round of meetings. And in that elapsed time, the startup launches its product. So there's a timeless thing there. Now, what the startups don't have is everything else. Startups don't have a brand, they don't have customer relationships, they've got no distribution, they've got no scale. I mean, sitting here today, they can't even get GPUs. There's a GPU shortage; startups are literally stalled out right now because they can't get chips, which is super weird. They've got the cloud, but the clouds have run out of chips, and to the extent the clouds have chips, they allocate them to the big customers, not the small customers. So the small companies lack everything other than the ability to just do something new. And this is the timeless race and battle, and this is kind of the point I tried to make in the essay, which is that both sides of this are good. It's really good to have high-scale tech companies that can