Sundar Pichai: CEO of Google and Alphabet | Lex Fridman Podcast #471
9V6tWC4CdFQ • 2025-06-05
There was a 5-year waiting
list and we got a rotary telephone, but
it dramatically changed our lives. You
know, people would come to our house to
make calls to their loved ones. You
know, I I would have to go all the way
to the hospital to get blood test
records and it would take 2 hours to go
and they would say, "Sorry, it's not
ready. Come back the next day." 2 hours
to come back. And that became a
five-minute thing. So as a kid, like, I
mean, this light bulb went off in my head, you
know this power of technology to kind of
change people's lives. We had no running
water you know it was a massive drought.
So they would get water in these trucks
maybe eight buckets per household. So me
and my brother sometimes my mom we would
wait in line get that and bring it back
home.
many years later like we had running
water and we had a water heater and you
could get hot water to take a shower. I
mean like so you know for me everything
was discrete like that. Uh and so I've
always had this thing you know firsthand
feeling of like how technology can
dramatically change like your life and
like the opportunity it brings. I think
if p(doom) is actually high at some point
all of humanity is like aligned in
making sure that's not the case right
and so we'll actually make more progress
against it I think so the irony is so
there is a
self-modulating aspect there like I
think if humanity collectively puts
their mind to solving a problem whatever
it is I think we can get there so
because of that I think I'm optimistic
on the p(doom) scenarios
But that doesn't mean I think the
underlying risk is actually pretty
high. But I'm uh you know I have a lot
of faith in humanity kind of rising up
to the to meet that moment. Take me
through that experience when there's all
these articles
saying you're the wrong guy to lead
Google through this. Google is lost.
It's done. It's over.
The following is a conversation with
Sundar Pichai, the CEO of Google and
Alphabet. This is the Lex Fridman
podcast. Your life story is inspiring to
a lot of people. It's inspiring to me.
You grew up in India, whole family
living in a humble two-room apartment,
very little, almost no access to
technology. And from those humble
beginnings, you rose to lead a $2
trillion technology company. So if you
could travel back in time and tell that,
let's say, 12-year-old Sundar that
you're now leading one of the largest
companies in human history, what do you
think that young kid would say? I would
have probably laughed it off. Um, you
know, uh, probably too far-fetched to
imagine or believe at that time. You
would have to explain the internet first
for sure. I mean computers to me at that
time. You know I was 12 in
1984. So probably uh you know by then I
started reading about
them. I had seen one. What was that
place like? Take me to your childhood.
You know I grew up in Chennai. Uh it's
in south of India. It's a beautiful
bustling city. Lots of people, lots of
energy, you know, simple life.
Definitely like fond memories of playing
cricket outside the home. We just used
to play on the streets. All the
neighborhood kids would come out and we
would play till it got dark and we
couldn't play anymore barefoot. Um,
traffic would come, we would just stop
the game, everything would drive through
and you would just continue playing,
right? Just to kind of get the visual in
your head. You know, pre-computers,
there's a lot of free time. Now that
I think about it, now you have to go and
seek that quiet solitude or something.
Newspapers, books, is how I gained access
to information at the time, if you
will. Uh my grandfather was a big
influence. He worked in the post office.
He was so good with language. His
English, you know, his handwriting till
today is the most beautiful handwriting
I've ever seen. He would write so
clearly. He was so articulate.
And so he kind of got me introduced to
books. He loved politics. So we could
talk about anything and you know that
was there in my family throughout. So uh
lots of books, trashy books, good books,
everything from Ayn Rand to books on
philosophy to stupid crime novels. So
books was a big part of my life. But
that was kind of the soul of it. It's not
surprising I ended up at Google because
Google's mission kind of always
resonated deeply with me this access to
knowledge I was hungry for it but
definitely have you know fond memories
of my childhood access to knowledge was
there so that's the wealth we had
uh you know every aspect of technology I
had to wait for a while I've obviously
spoken before about how long it took for
us to get a phone about 5 years but it's
not the only thing a telephone. There
was a 5-year waiting list. Uh and we got
a rotary uh telephone. Mhm. But it
dramatically changed our lives. You
know, people would come to our house to
make calls to their loved
ones. You know, I I would have to go all
the way to the hospital to get blood
test records and it would take 2 hours
to go and they would say, "Sorry, it's
not ready. Come back the next day." 2
hours to come back. And that became a
five-minute thing. So as a kid, like, I mean,
this light bulb went off in my head, you know,
this power of technology to kind of
change people's lives. We had no running
water you know it was a massive drought.
So they would get water in these trucks
maybe eight buckets per household. So me
and my brother sometimes my mom we would
wait in line get that and bring it back
home.
many years later like we had running
water and we had a water heater and you
could get hot water to take a shower. I
mean like so you know for me everything
was discrete like that. Uh and so I've
always had this thing you know firsthand
feeling of like how technology can
dramatically change like your life and
like the opportunity it brings.
So, you know, that was kind of a
subliminal takeaway for me throughout
growing up. And, you know, I I kind of
actually observed it and felt it, you
know. So, we had to convince my dad for
a long time to get a VCR. Do you know
what a VCR is? Yeah.
I'm trying to date you now. But, you
know, because before that, you only had
like kind of one TV channel. Mhm. Right.
That's it. Um, and so, you know, you can
watch movies or something like that, but
this was by the time I was in 12th
grade, we got a VCR. You know, it was a
uh like a Panasonic, which we had to go
to some like shop, which had kind of
smuggled it in, I guess, and that's
where we bought a VCR, but then being
able to
record, like, a World Cup football game,
or, like, put in video
tapes and watch movies, like all that. So
like you know I had these discrete
memories growing up and so you know
always left me with the feeling of like
how getting access to technology drives
that step change in your life. I don't
think you'll ever be able to equal the
first time you get hot water to have
that convenience of going and opening a
tap and have hot water come out. Yeah.
It's interesting. We take for granted
the progress we've made. If you look at
human history, just those plots that
look at GDP across 2,000 years and you
see that exponential growth to where
most of the progress happened since the
industrial revolution and we just take
for granted. We forget how how far we've
gone. So our ability to understand how
great we have it and also how quickly
technology can improve is quite poor.
Oh, I mean it's it's extraordinary. You
know, I go back to India now. the power
of mobile. You know, it's mind-blowing
to see the progress through the arc of
time. It's phenomenal. What advice would
you give to young folks listening to
this all over the world who look up to
you and uh find your story inspiring who
want to be maybe the next Pichai, who
want to start create companies uh build
something that has a lot of impact in
the world. Look, it's you have a lot of
luck along the way, but you obviously
have to make smart choices. you're
thinking about what you want to do. Your
brain is telling you something. But when
you do things, I think it's important to
kind of get that listen to your heart
and see whether you actually enjoy doing
it, right? That that feeling of if you
love what you do, it's so much easier
and you're going to see the best version
of yourself. It's easier said than done.
I think it's tough to find things uh you
love doing. Um but I think kind of
listening to your heart a bit more than
your mind in terms of figuring out what
you want to do I think I think is one of
the best things I would uh tell
people. The second thing is I mean
trying to work with people who you feel
at various points in my life I worked
with people who I felt were better than
me right kind of like you know you
almost are sitting in a room talking to
someone and they're like wow like you
know you know and you want that feeling
a few times trying to get yourself in a
position where you're working with
people who you feel are kind of like
stretching your abilities is what helps
you grow I think uh so putting yourself
in uncomfortable situations and I think
often you'll surprise yourself. So I
think being open-minded enough to kind
of put yourself in those positions is
maybe uh maybe another thing I would
say. What lessons can we learn maybe
from an outsider perspective for me
looking at your story and gotten to know
you a bit. You're humble, you're kind.
Usually when I think of somebody who has
had a journey like yours and climbs to
the very top of leadership, they're
in a cutthroat world, they're usually
going to be a bit of an So what
wisdom are we supposed to draw from the
fact that, uh, your general approach is
one of balance, of humility, of kindness,
listening to everybody? What's what's
what's your secret? I do get angry. I do
get frustrated. I I have the same
emotions all of us do right in the
context of work and everything. Uh but a
few things right I I I think you know
I over time I figured out the best way
to get the most out of people. uh you
know you kind of find mission-oriented
people who are on the shared journey who
have this inner drive to excellence to
do the best and and you know you kind of
motivate people and and and you can you
can achieve a lot that way right and so
it it often tends to work out that way
but have there been times, like, you know,
I lose it? Yeah, but, you know, maybe
less often than others, uh, and maybe over
the years
less and less so because you know I find
it's not needed to achieve what you need
to do. So losing your has not been
productive. Yeah. Less often than not I
think people respond to that. Yeah. They
may do stuff to react to that like but
you you actually want them to do the
right thing and and and so you know
maybe there's a bit of, like, sports, you
know, I'm a sports fan. In
football, uh, in soccer, that is,
football, you know, people often
talk about, like, man management, right,
which coaches do, right? I think there is an
element of that in our lives how do you
get the best out of the people you work
with you know at times you're working
with people who who are so committed to
achieving if they've done something
wrong they feel it more than you do
right so you treat them differently than
you know occasionally there are people
who you need to clearly let them know
like that wasn't okay or whatever it is
but I've often found that not to be the
case and sometimes the right words at
the right time spoken firmly can
reverberate through time also sometimes
the unspoken words you know people can
sometimes see that like you know you're
unhappy without you saying it and so
sometimes the silence can uh deliver
that message even more sometimes less is
more um who's the greatest uh soccer
player of all time Messi or Ronaldo or
Pelé or Maradona? I'm going to make you,
you know, answer this question. Is this going
to be a political answer? No, I I will tell
the truthful answer because, uh, it
is you know it's been interesting
because my son is a big Cristiano
Ronaldo fan And uh so we've had to watch
El Clásicos together, you know, with
that dynamic in there. I so admire CR7.
I mean, I've never seen an athlete more
committed to that kind of excellence.
And so he's one of the all-time greats,
but you know, for me, Messi is it. Yeah.
Yeah, when I see Lionel Messi, you just
are in awe that humans are able to
achieve that level of greatness and
genius and artistry. When we talk, we'll
talk about AI, maybe robotics and this
kind of stuff, that level of genius, I'm
not sure can possibly be matched by AI for
a long time. It's just an example of
greatness. And you have that kind of
greatness in other disciplines, but in
sport, you get to visually see it unlike
anything else. and just the the timing,
the
movement, this is genius. I had the
chance to see him a couple weeks ago. He
played in, uh, San Jose, so, um, against the
Quakes. So I went to see the
game. I was a fan, had good seats.
Knew where he would play in the second
half hopefully. And uh even at his age,
just watching him when he gets the ball,
that movement, you know, you're right,
that special quality, it's tough to
describe, but you feel it when you see
it. Yeah, he still got
it. Uh, if we rank all the technological
innovations throughout human history,
let's go back
uh maybe the history of human
civilizations 12,000 years
ago and you rank them by
how much of a productivity
multiplier they've been. So uh we can go
to electricity or the labor
mechanization of the industrial
revolution or we can go back to the
first agricultural revolution 12,000
years ago in that long list of
inventions. Do you think AI when history
is written a thousand years from now do
you think it has a chance to be the
number one productivity multiplier? It's
a great question. Look, many years ago,
I think it might have been 2017 or 2018.
Um, you know, I I said at the time like,
you know, AI is the most profound
technology humanity will ever work on.
It'll be more profound than fire or
electricity. So, I have to back myself.
I, you know, I still think uh that's the
case. You know, when you asked this
question, I was thinking, well, do we
have a recency bias, right? You know,
like in sports, it's very tempting to
call the current person you're seeing
the greatest Yes. player, right? and and
so is there a recency
bias and you know I do think uh from
first principles I would argue AI will
be bigger than all of those I didn't
live through those moments you know two
years ago I had to go through a surgery
and then I processed that there was a
point in time people didn't have
anesthesia when they went through these
procedures at that moment I was like
that has got to be the greatest
invention humanity has ever ever done
right so look We we don't know what it
is to have uh lived through those
times but you know and many of what
you're talking about were kind of this
general things which pretty much
affected everything you know electricity
or internet etc.
But I don't think we have ever dealt
with a technology both which is
progressing so fast, becoming so
capable. It's not clear what the ceiling
is, and, the unique thing is, it's recursively
self-improving, right? It's capable of
that. And so the fact that it's
the first technology that will kind of
dramatically accelerate creation itself,
like creating things, building new things,
can improve and achieve things on
its own, right, I think, like, puts it in a
different league, right? And so I think the
impact it'll end up having uh will far
surpass everything we've seen before
uh obviously with that comes a lot
uh important things to think and wrestle
with, but I definitely think that'll end
up being the case, especially if it gets
to the point of where we can achieve
superhuman performance on the AI
research itself. So, it's a technology
that, that's an open question, but it
may be able to achieve a level where
the technology itself can make itself
better than it could yesterday. It's
like the Move 37 of AI research or
whatever it is, right? Like, you know,
and when, yeah, you're right, when
it can do
novel self-directed research obviously
for a long time we'll we'll have
hopefully always humans in the loop and
all that stuff and these are complex
questions to talk about but yes I think
the underlying technology you know I've
said this: like, if you watched
AlphaGo start from
scratch, be clueless, and, like, become
better through the course of a day, you
know, it really hits you when
you see that happen. Even, like, the Veo 3
models, if you sample the models when
they were like 30% done and 60% done and
looked at what they were
generating and you kind of see how it
all comes
together it's kind of like I would say
it's kind of inspiring a little bit
unsettling right as a as a human so all
of that is true I think well the
interesting thing of the industrial
revolution electricity like you
mentioned. You can go back to the again
the agricultural the first agricultural
revolution. There's um what's called the
Neolithic package of the first
agricultural revolution that it wasn't
just that the nomads settled down and
started planting food. But all this
other kinds of
technology was born from that and it's
included in this package. It wasn't one
piece of technology. It's there's these
ripple effects, second and third order
effects that happen. Everything from
something seemingly silly but profound, like
pottery that can store liquids and food
uh to something we kind of take for
granted but social hierarchies
uh and political hierarchy. So like
early government was formed cuz it turns
out if humans stop moving and have some
surplus food, they get bored and they start coming
up with interesting systems. Then trade
emerges which turns out to be a really
profound thing and like I said
government there I mean there's just uh
second and third order effects from that
including that package is incredible and
probably extremely difficult if if you
ask one of the people in the nomadic
tribes to predict that it would be
impossible. It's difficult to predict.
But all that said, what do you think are
some of the early things we might see in
the quote unquote AI
package? I mean, most of it probably we
don't know today, but like you know the
one thing which we can tangibly start
seeing now
is you know obviously with the coding
progress you got a sense of it. It's
going to be so easy to imagine like
thoughts in your head translating that
into things that exist. That'll be part
of the package, right? Like it's going
to empower almost all of
humanity to kind of express
themselves. Maybe in the past you could
have expressed with
words but like you could kind of build
things into existence, right? You know,
maybe not fully today. We are at the
early stages of vibe coding. You know,
I've been amazed at what people have put
out online with Veo 3, but it takes a bit
of work, right? You have to stitch
together a set of prompts,
but all this is going to get better. The
thing I always think about, this is the
worst it'll ever be, right? Like at any
given moment in time. Yeah, it's
interesting you went there as kind of a
first thought. So the exponential
increase of access to creativity,
software
creation, are you creating a program, a
piece of content to be shared with
others, games down the line, all of that
like just becomes infinitely more
possible. Well, I think the big thing is
that uh it makes it accessible. It
unlocks the cognitive capabilities of
the entire 8 billion. No, I agree. Look,
think about 40 years
ago, maybe in the US there were five
people who could do what you were doing.
Mhm. Like go do a interview, you know,
and you know, but today think about with
YouTube and other products etc.
Like how many more people are doing
it? So I think this is what technology
does, right? Like when the internet
created blogs, you know, you heard from
so many more people. So I
think but but with AI I think that
number won't be in the few hundreds of
thousands it'll be tens of millions of
people maybe even a billion
people like putting out things into the
world in a deeper way and I think it'll
change the landscape of creativity and
makes a lot of people nervous like for
example uh whatever Fox MSNBC CNN are
really nervous about this part like you
mean this dude in a who could just do
this, and use YouTube, and
thousands of others, tens of thousands,
millions of other creators can do the
same kind of thing. That makes him
nervous. And now you get a podcast from
NotebookLM that's about five to 10 times
better than any podcast I've ever done.
True. I'm I'm joking at this time, but
maybe not. And that changes. you have to
evolve because I on the podcasting
front, I'm a fan of
podcasts much more than I am a fan of
being a host or whatever. If there's
great podcasts that are both AIs, I'll
just stop doing this podcast. I'll
listen to that podcast. But you have to
evolve and you have to change and that
makes people really nervous, I think.
But it's also really exciting future.
The only thing I may say is I do think
like, in a world in which there are two
AIs, I think people value and, uh,
choose, just like in chess. You and I
would never watch Stockfish 10 or
whatever and AlphaGo play against each
other; it would be boring for us to watch,
but Magnus Carlsen and Gukesh, that game
would be much more fascinating to watch.
So it's tough to say like one way to say
is you'll have a lot more content and so
you will be listening to AI generated
content because sometimes it's efficient
etc. But the premium experiences you
value might be a version of like the
human essence wherever it comes through
going back to what we talked earlier
about watching Messi dribble the ball. I
don't know one day I'm sure a machine
will dribble much better than Messi but
I don't know whether it would evoke that
same emotion in us. So I think that'll
be fascinating to see. I think the
element of
podcasting or
audiobooks that is about
information gathering that part might be
removed or that might be more
efficiently and in a compelling way done
by AI but then it'll be just nice to
hear humans struggle with the
information contend with the information
try to internalize it combine it with
the complexity of our own emotions and
consciousness and all that kind of stuff
but if you actually want to find out
about a piece of history, you go to
Gemini. If you want to see Lex struggle
with that history, or
other humans, you look at that.
But that the point is it's going to
change the nature
uh continue to change the nature of how
we discover information, how we consume
the information, how we create that
information. The same way that YouTube
changed everything completely, changed
news, it changed things, and that's something our
society is struggling with. Yeah,
YouTube look YouTube enabled I mean you
know this better than anyone else. It's
enabled so many creators. There is no
doubt in me that like we will enable
more filmmakers than there have ever
been right. You're going to empower a
lot more people. Um so I think there's
an expansionary aspect of this which is
underestimated. I think I think it'll
unleash human creativity in a way uh
that hasn't been seen before. It's tough
to internalize. The only way is if you
if you brought someone from the 50s or
40s and just put them in front of
YouTube, you know, I think it would blow
their mind away. Similarly, I think we
would get blown away by what's possible
in a 10 to 20 year time frame. Uh do you
think there's a future? How many years
out is it that let's say let's put a
marker on it? 50% of
content, compelling, good content, 50% of
good content, is generated by Veo 4, 5, 6? You
know, I think depends on what it is for.
Um, like you know, maybe if you look at
movies today with
CGI, there are great filmmakers like you
still look at like who the directors are
and who use it. There are filmmakers who
don't use it at all. You value that.
There are people who use it incredibly.
You know, think about somebody like a
James Cameron like what he would do with
these tools in his hands. But I think
there'll be a lot more content created
like just like writers today use Google
Docs and not think about the fact that
they're using a tool like that, like people
will be using the future versions of
these things like it won't be a big deal
at all to them.
I've gotten a chance to get to know
Darren Aronofsky well. He's been really
leaning in and trying to figure it out.
it's fun to watch a genius who came up
before any of this was even remotely
possible. He created Pi, one of my
favorite movies, and from there just
continued to create a really interesting
variety of movies. And now he's trying
to see how can AI be used to create
compelling films. You have people like
that. You have people I've gotten to
just know uh edgier folks uh that are AI
first, like the Dor Brothers. Both
Aronofsky and the Dor Brothers create at
the edge of the Overton window of
society. You know, they push whether
it's uh sexuality or or violence. It's
edgy like artists are, but it's still
classy. It doesn't cross that line. Uh
whatever that line is, you know. Um
Hunter S. Thompson has this
line, that the, uh, the only way to find
out where the edge, where the line, is is
by crossing it. Uh and I think for
artists that's true. That's kind of
their purpose. Sometimes comedians and
artists just cross that line. I wonder
if you can comment on the weird place
that puts
Google because Google's line is probably
different than some of these artists.
What's your, how do you think about,
specifically, Veo, um, and Flow, about
how to allow artists to do crazy
but also, like, the responsibility of,
like, um, it not being too crazy.
It's a great question. Look, you
mentioned Darren. Uh, you know, he's a
clear visionary, right? Part of the
reason we started working with him
early on Veo is he's one of those people
who was able to kind of see that future
get inspired by it and kind of showing
the way for how creative people can
express themselves with it. Look, I
think allowing artistic
free expression is one of the most
important values in a society, right? I
think you know artists have always been
the ones to push push boundaries expand
the frontiers of thought uh and so look
I think I think that's going to be an
important value we have so I think we
will provide tools and put it in the
hands of artists for them to use and put
out their
work those APIs I mean I almost think of
that as infrastructure just like when
you provide electricity to people or
something you want them to use it and
like you're not thinking about the use
cases on top of it. So it's a
paintbrush. Yeah. And and so I think
that's how obviously there have to be
some things and you know society needs
to decide at a a fundamental level
what's okay, what's not. Uh we'll be
responsible with it. Um but I do think
you know when it comes to artistic free
expression I think that's one of those
values we should work hard to defend. Uh
I wonder if you can comment on
um maybe earlier versions of Gemini were
a little bit careful on the kind of
things you would be willing to answer. I
just want to comment on I was really
surprised and uh pleasantly surprised
and enjoy the fact that Gemini 2.5 Pro is
a lot less careful in a good sense.
Don't ask me why, but I've been doing a
lot of research on Genghis Khan and
the Aztecs. Uh, so there's a lot of
violence there in that history. It's a
very violent history. I've also been
doing a lot of research on World War I
and World War II. And earlier versions
of Gemini were very
um basically had this kind of sense of, are you
sure you want to learn about this? And
now it's actually very factual,
objective, uh, talks about very
difficult parts of human history and
does so with nuance and depth. It's it's
been really nice. But there's a line
there that I guess Google has to kind of
walk. I wonder if it's and it's also an
engineering challenge how to how to do
that at scale across all the weird
queries that people ask. What um can you
just speak to that challenge? How do you
allow Gemini to, again, forgive me,
pardon my French, say crazy but
not too crazy? I think one of
the good insights here has been as the
models are getting more capable, the
models are really good at this stuff,
right? And so I think in some ways maybe
a year ago the models weren't fully
there. So they would also do stupid
things more
often and so you know you're trying to
handle those edge cases but then you
make a mistake in how you handle those
edge cases and it compounds. But I think
with 2.5 what we particularly found is
once the models cross a certain level of
intelligence and
sophistication you know they are they
are able to reason through these nuanced
issues pretty well. And I think users
really want that right like you know you
want as much access to the raw model as
possible right but I think it's a great
area to think about like you know over
time you know we should allow more and
more closer access to it maybe obviously
let people use custom prompts if they wanted
to and like you know and you know
experiment with it etc. uh I I think
that's an important direction but look
the first principles we want to think
about it is you know from a scientific
standpoint like making sure the models
and I'm saying scientific in the sense
of like how you would approach math or
physics or something like that from
first principles having the models
reason about the world be nuanced etc uh
you know from the ground up is the right
way to build these things right not like
some subset of humans is kind of hard
coding things on top of it. Uh so I
think it's the direction we've been
taking and I think you'll see us
continue to push in that direction.
Yeah. I actually, uh, I took extensive
notes and I gave
them to Gemini and said, can you ask a
novel question that's not in these notes,
and it wrote one. Gemini continues to really
surprise me. It's
been really beautiful. It's an incredible
model. Uh the the question it's it it
generated was you meaning Sundar told
the world Gemini is churning out 480
trillion tokens a month. Uh, what's the
most life-changing five-word sentence
hiding in that haystack? That's a
Gemini question, but it made me it gave
me a sense. I don't think you can answer
that, but it woke me
up to like all of these tokens are
providing little aha moments for people
across the globe. So that's like
learning, that those tokens are people
being curious. They ask a question and
they find something out and it truly
could be life-changing. Oh, it is. I
look I know I had the same feeling about
search many many years ago. You you know
you you
definitely, you know, this tokens per
month has, like, grown 50 times in the last
12 months. Is that accurate, by the way?
Yeah, it is. You know it is it is
accurate. I'm glad it got it right. Um,
but you know that number was 9.7
trillion tokens per month 12 months ago,
right? It's gone up to 480. You
know, it's a 50x increase. So there's no
limit to human curiosity. Uh, and
think it's one of those moments.
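Those token figures are easy to sanity-check. Here is a quick back-of-the-envelope sketch: the 9.7 trillion and 480 trillion tokens-per-month numbers are the ones quoted in the conversation; the growth factor and per-second rate are just derived arithmetic, assuming a 30-day month.

```python
# Sanity check of the Gemini token-volume figures quoted in the conversation:
# ~9.7 trillion tokens/month a year ago, ~480 trillion tokens/month now.
TRILLION = 1e12

then_per_month = 9.7 * TRILLION
now_per_month = 480 * TRILLION

# Growth factor: 480 / 9.7 is roughly 49.5, i.e. the "50x" Sundar cites.
growth = now_per_month / then_per_month

# Spread over a 30-day month, that is roughly 185 million tokens every second.
SECONDS_PER_MONTH = 30 * 24 * 3600
tokens_per_second = now_per_month / SECONDS_PER_MONTH

print(f"growth: {growth:.1f}x")              # growth: 49.5x
print(f"tokens/second: {tokens_per_second:,.0f}")
```

So the "50x" and the 480-trillion figure are mutually consistent to within rounding.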
Uh, maybe I don't think it is there
today, but maybe one day there's a
five-word
phrase which says what the actual
universe is or something like that and
something very meaningful. But I don't
think we are quite there yet. Do you
think the scaling laws are holding
strong on
um there's a lot of ways to describe the
scaling laws for AI, but on the
pre-training on the post-training
fronts? So the flip side of that, do you
anticipate AI progress will hit a wall?
Is there a wall? You know, it's a
cherished micro kitchen conversation.
Once in a while I have it, you know,
like when Demis is visiting, or, you
know, Koray, Jeff, Noam, Sergey,
a bunch of our people like we sit and
uh, you know, you know, talk about this,
right? And um, look, I we see a lot of
headroom ahead, right? I think uh, we've
been able to optimize and improve on all
fronts, right? uh pre-training,
post-raining, test time, compute, tool
use, right? Over time, making these more
agentic. So, getting these models to be
more general world models in that
direction like V3 uh you know, the
physics understanding is dramatically
better than what V1 or something like
that was. So, you kind of see on all
those dimensions. I feel progress is very obvious to see, and I feel like there is significant headroom. More importantly, I'm fortunate to work with some of the best researchers on the planet, and they think there is more headroom to be had here. So I think we have an exciting trajectory ahead. It's tougher to say; each year I sit and say, okay, we're going to throw 10x more compute at it over the course of the next year, and will we see progress? Sitting here today, I feel like the year ahead we'll have a lot of progress. And do you feel any limitations,
like the bottlenecks: compute-limited, data-limited, idea-limited? Do you feel any of those limitations, or is it full steam ahead on all fronts? I
think it's compute-limited in this sense: part of the reason you've seen us do Nano, Flash, and Pro models but not an Ultra model is that for each generation, we feel like we've been able to get the Pro model to, I don't know, 80-90% of Ultra's capability, but Ultra would be a lot slower and a lot more expensive to serve.
But what we've been able to do is go to the next generation and make the next generation's Pro as good as the previous generation's Ultra, but be able to serve it in a way that it's fast and you can use it, and so on. So I do think scaling laws are working, but at any given time, the models we all use the most are maybe a few months behind the maximum capability we could deliver, because that won't be the fastest or easiest to use, etc. Also,
that's in terms of intelligence. It becomes harder and harder to measure "performance," in quotes, because you could argue Gemini Flash is much more impactful than Pro just because of the latency. It's super intelligent already. Sometimes latency is maybe more important than intelligence, especially when the intelligence is just a little bit less; Flash is still an incredibly smart model. Yeah. And so you have to now start measuring impact, and then it feels like benchmarks are less and less capable of capturing the intelligence of models, the effectiveness, the real-world usefulness of models. Uh, another kitchen question. So
lots of folks are talking about timelines for AGI or ASI, artificial superintelligence. AGI, loosely defined, is basically human expert level at a lot of the main fields of pursuit for humans, and ASI is what AGI becomes, presumably quickly, by being able to self-improve: far superior in intelligence to humans across all these disciplines. When do you think we'll have AGI? Is 2030 a possibility? There's one other term
we should throw in there. I don't know who used it first; maybe Karpathy did: AJI. Have you heard AJI? Artificial jagged intelligence. It sometimes feels that way, right? There's progress, and you see what they can do, and then you can trivially find they make numerical errors, or counting the R's in "strawberry," or something which seems to trip up most models, whatever it is, right?
So maybe we should throw that term in there. I feel like we are in the AJI phase: dramatic progress, some things don't work well, but overall you're seeing lots of progress. But if your question is, will it happen by 2030? Look, we constantly move the line of what it means to be AGI. There are moments today, like sitting in a Waymo on a San Francisco street with all the crowds and the people, the car sometimes kind of impatient, trying to work its way through, where I see glimpses of it. Or using Astra in Gemini Live, asking questions about the world: what's this skinny building doing in my neighborhood? It's a streetlight, not a building. You see glimpses. That's why I use the word AJI, because then you also see stuff where, obviously, we're far from AGI. So you have both experiences simultaneously happening to you. I'll answer your question, but I'll
also throw out this: I almost feel the term doesn't matter. What I know is that by 2030 there'll be such dramatic progress that we'll be dealing with its consequences, both the positive externalities and the negative externalities that come with it, in a big way. That I strongly feel, whatever we may be arguing about the term, or maybe Gemini can answer what that moment in time is in 2030. But I think the progress will be dramatic; that I believe in. Will the AI think it has reached AGI by 2030? I would say we will just fall short of that timeline, so I think it'll take a bit longer. It's amazing: in the early days of DeepMind, in 2010, they talked about a 20-year time frame to achieve AGI, which is kind of fascinating to see.
But for me, the whole thing: seeing what Google Brain did in 2012, and when we acquired DeepMind in 2014. Right close to where we're sitting, in 2012, Jeff Dean showed the image of when the neural networks could recognize a picture of a cat and identify it, the early versions of Brain. And so we all talked about a couple of decades. I don't think we'll quite get there by 2030, so my sense is it's slightly after that, but I would stress it doesn't matter what that definition is,
because you will have mind-blowing progress on many dimensions. Maybe AI can create videos; we have to figure out as a society, we need some system by which we all agree that this is AI-generated and we have to disclose it in a certain way, because how do you distinguish reality otherwise? Yeah,
there are so many interesting things you said. First of all, just looking back at this recent, now seemingly distant, history with Google Brain: that was before TensorFlow was made public and open-sourced. So the tooling matters too, combined with GitHub's ability to share code. Then you have the ideas of attention, transformers, and now diffusion, and then there might be a new idea that seems simple in retrospect but will change everything, and that could be the post-training, the inference-time innovations. And I think shadcn tweeted that Google is just one great UI away from completely winning the AI race, meaning UI is a huge part of it: how that intelligence
is presented. I think Logan Kilpatrick likes to talk about this. Right now it's an LLM, but when is it going to become a system, where you're talking about shipping systems versus shipping a particular model? Yeah, that matters too: how the system manifests itself and how it presents itself to the world really, really matters. Oh
hugely so. There are simple UI innovations which have changed the world, and I absolutely think we will see a lot more progress in the next couple of years. Is AI itself on a self-improving track for UI? Today we are constraining the models; they can't quite express themselves in terms of UI to people. If you think about it, we've kind of boxed them in that way. But given these models can code, they should be able to write the best interfaces to express their ideas over time, right? That is an
incredible idea. So the APIs are already open; you could create a really nice agentic system that continuously improves the way you talk to an AI. Yeah. A lot of that is the interface, and then of course the incredible multimodal aspect of the interface that Google's been pushing. These models are natively multimodal; they can easily take content from any format and put it in any format. They can write a good user interface. They'll probably understand your preferences better over time. So all this is the evolution ahead, right? And so
that goes back to where we started the conversation. I think there'll be dramatic evolutions in the years ahead. Maybe one more kitchen question: this even more ridiculous concept of p(doom). The philosophically minded folks in the AI community think about the probability that AGI, and then ASI, might destroy all of human civilization. I would say my p(doom) is about 10%. Do you ever think about this kind of long-term threat of ASI, and what would your p(doom) be? Look, for sure. I've been very excited about AI, but I've always felt this is a technology where we have to actively think about the risks and work very, very hard to harness it in a way that it all works out well. On the p(doom) question, look,
it won't surprise you to say that's probably another microkitchen conversation that pops up once in a while, right? Given how powerful the technology is, maybe stepping back: when you're running a large organization, if you can align the incentives of the organization, you can achieve pretty much anything. If you can get people all marching towards a goal in a very focused, mission-driven way, you can pretty much achieve anything. But it's very tough to organize all of humanity that way. But I
think if p(doom) is actually high, at some point all of humanity will be aligned in making sure that's not the case, and so we'll actually make more progress against it. So the irony is, there is a self-modulating aspect there. I think if humanity collectively puts its mind to solving a problem, whatever it is, we can get there. Because of that, I have an optimistic take on the p(doom) scenarios. That doesn't mean the underlying risk isn't actually pretty high, but I have a lot of faith in humanity rising up to meet that moment. That's
really, really well put. I mean, as the threat becomes more concrete and real, humans do really come together and get it together. Well, the other thing I think people don't often talk about is the probability of doom without AI. There are all these other ways that humans can destroy themselves. And it's very possible, at least I believe so, that AI will help us become smarter, kinder to each other, more efficient. It'll help more parts of the world flourish and be less resource-constrained, which is often the source of military conflict and tensions. So we also have to weigh p(doom) with AI against p(doom) without AI, because it's very possible that AI will be the thing that saves human civilization from all the other threats. I agree with you. I
think it's insightful. Look, I've felt that to make progress on some of the toughest problems, it would be good to have AI as a pair helping you, right? So that resonates with me for sure. Yeah. Quick pause, bathroom break. You know, let's do that. What I saw today with Beam, if NotebookLM was compelling, this was compelling in the same kind of way; it blew my mind. It was incredible. Oh, I didn't think it was possible. My mind was like: can you imagine the US president and the Chinese president being able to do something like Beam, with the live translation working well? So they're both sitting and talking, and they make a bit more progress. Uh
talking, make progress a bit more. Uh
yeah, just uh for people listening, we
took a quick bath break and now we're
talking about the demo I did. We'll
probably post it somewhere somehow,
maybe here. the I got a chance to
experience beam and it
was it's hard to it's hard to describe
in words how real it felt with just what
is it six cameras. It's incredible. It's
incredible. It's it's one of the
toughest products of you can't quite
describe it to people even when we show
it in slides etc like you don't know
what it is you have to kind of
experience it on the world leaders front
on politics and geopolitics, there's something really special. Again, from studying World War II: how much could have been saved if Chamberlain had met Stalin in person. I sometimes struggle explaining to people, articulating why I believe meeting in person is powerful for world leaders. It just seems naive to say, but there is something to being in person, and with Beam I felt that same thing, and I'm unable to explain it. All I kept doing is what a child does: "you look real," you know. I don't know if that makes meetings more productive or so on, but it certainly makes them more... the same reason you want to show up to work versus remote sometimes: that human connection.
I don't know what that is. It's hard to put into words. There's something beautiful about great teams collaborating on a thing that's not captured by the productivity of that team or by whatever's on paper. Some of the most beautiful moments you experience in life are at work, pursuing a difficult thing together for many months.
There's nothing like it. You're in the trenches, and yeah, you do form bonds that way for sure. And to be able to do that somewhat remotely with that same personal touch, I don't know, that's a deeply fulfilling thing. I personally hate meetings, because a significant percent of meetings, when done poorly, don't serve a clear purpose. But that's a meeting problem, not a communication problem. If you can improve the communication for the meetings that are useful, that's just incredible. So yeah, I was blown away by the great engineering behind it, and then we get to see what impact it has.
That's really interesting. Just incredible engineering, really impressive. Oh, it is. And obviously we'll work hard over the years to make it more and more accessible. But even on a personal front, outside of work meetings: a grandmother who's far away from her grandchild being able to have that kind of an interaction, right? All that I think will end up being very meaningful. Nothing substitutes being in person, but it's not always possible. You could be a soldier deployed, trying to talk to your loved ones. So that's what inspires us. When you and
I hung out last year and took a walk, I remember, I don't think we talked about this, but I remember seeing dozens of articles written by analysts and experts saying that Sundar Pichai should step down, because the perception was that Google was definitively losing the AI race, had lost its magic touch in the rapidly evolving technological landscape. And now, a year later, it's crazy: you showed this plot of all the things that were shipped over the past year, which is incredible, and Gemini Pro is winning across many benchmarks and products as we sit here today. So take me through that experience, from when there were all these articles saying you're the wrong guy to lead Google through this, Google's lost, it's done, it's over, to today, where Google is winning again. What were some low points during that time? Look, I
mean, lots to unpack. Obviously, the main bet I made as CEO was to really make sure the company was approaching everything in an AI-first way, really setting ourselves up to develop AGI responsibly and making sure we're putting out products which embody that, things that are very, very useful for people. So look, even through moments like that last year, I had a good sense of what we were building internally. I'd already made many important decisions: bringing together teams of the caliber of Brain and DeepMind and setting up Google DeepMind; the decision to invest in TPUs ten years ago. So we knew we were scaling up and building big models. Anytime you're in a situation like that, a few aspects:
I'm good at tuning out noise, separating signal from noise. Do you scuba dive? Have you? No. It's amazing. I'm not good at it, but I've done it a few times. Sometimes you jump in the ocean and it's so choppy, but you go down one foot under and it's the calmest thing in the entire universe, right? So there's a version of that. Running Google, you may as well be coaching Barcelona or Real Madrid: you have a bad season. So there are aspects to that, but
look, I'm good at tuning out the noise, but I do watch out for signals; it's important to separate the signal from the noise. There are good people sometimes making good points outside, so you want to listen to it, you want to take that feedback in. But internally, you're making a set of consequential decisions. As leaders, you're making a lot of decisions. Many of them feel inconsequential; over time you learn that most of the decisions you're making on a day-to-day basis don't matter much, you have to make them just to keep things moving, but you have to make a few consequential decisions right. And we had set up the right teams, the right leaders; we had world-class researchers; we were training Gemini internally. There were factors which, for example, outside people may not have appreciated. I mean, TPUs are amazing, but we had to ramp up