Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17
bIrEM2FbOLU • 2019-04-03
The following is a conversation with Greg Brockman. He's the co-founder and CTO of OpenAI, a world-class research organization developing ideas in AI with the goal of eventually creating a safe and friendly artificial general intelligence, one that benefits and empowers humanity. OpenAI is not only a source of publications, algorithms, tools, and datasets; their mission is to be a catalyst for an important public discourse about our future with both narrow and general intelligence systems. This conversation is part of the Artificial Intelligence Podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Greg Brockman.

Lex Fridman: In high school, and right after, you wrote a draft of a chemistry textbook that covers everything from the basic structure of the atom to quantum mechanics. So it's clear you have an intuition and a passion for both the physical world, with chemistry and now robotics, and the digital world, with AI, deep learning, reinforcement learning, and so on. Do you see the physical world and the digital world as different, and what do you think is the gap?

Greg Brockman: A lot of it
actually boils down to iteration speed. I think a lot of what really motivates me is building things. Think about mathematics, for example, where you think really hard about a problem, you understand it, and you write it down in this very obscure form that we call a proof. But then it's in humanity's library, right? It's there forever. This is some truth that we've discovered. Maybe only five people in your field will ever read it, but somehow you've moved humanity forward. So I actually used to think that I was going to be a mathematician, and then I started writing this chemistry textbook. One of my friends told me, "You'll never publish it because you don't have a PhD." So instead I decided to build a website and try to promote my ideas that way. And then I discovered programming, and realized that in programming, you think hard about a problem, you understand it, and you write it down in a very obscure form that we call a program. But then, once again, it's in humanity's library, anyone can get the benefit from it, and the scalability is massive. So I think the thing that really appeals to me about the digital world is that you can have this insane leverage: a single individual with an idea is able to affect the entire planet. And that's something I think is really hard to do if you're moving around physical atoms.

Lex Fridman: But you said
mathematics. So if we look at our mind, do you ultimately see it as just math, just information processing? Or is there some other magic, as you've seen through biology and chemistry and so on?

Greg Brockman: I think it's really interesting to think about humans as just information processing systems, and it seems like that's actually a pretty good way of describing a lot of how the world works, or a lot of what we're capable of. If you just look at technological innovations over time, in some ways the most transformative innovation that we've had has been the computer, and in some ways the internet. What is the internet? The internet is not about these physical cables. It's about the fact that I am suddenly able to instantly communicate with any other human on the planet, and able to retrieve any piece of knowledge that, in some ways, the human race has ever had. Those are insane transformations.

Lex Fridman: Do you see our
society as a whole, the collective, as another extension of the intelligence of the human being? So if you look at the human being as an information processing system, and you mentioned the internet and networking, do you see us all together, as a civilization, as a kind of intelligence system?

Greg Brockman: Yeah, I think this is actually a really interesting perspective to take and to think about. You sort of have this collective intelligence of all of society. The economy itself is this superhuman machine that is optimizing something, and in some ways a company has a will of its own, right? You have all these individuals who are all pursuing their own individual goals and thinking really hard about the right things to do, but somehow the company does something that is this emergent thing, and it's a really useful abstraction. And so I think that in some ways we think of ourselves as the most intelligent things on the planet, and the most powerful, but there are things that are bigger than us, these systems that we all contribute to. And so I think it's interesting to think about, if you've read Isaac Asimov's Foundation,
there's this concept of psychohistory in there, which is effectively the idea that if you have trillions or quadrillions of beings, then maybe you could actually predict what that huge macro-being will do, almost independent of what the individuals want. I actually have a second angle on this that I think is interesting, which is thinking about technological determinism. One thing that I actually think a lot about with OpenAI is that we're kind of coming onto this insanely transformational technology of general intelligence that will happen at some point, and there's a question of how you can take actions that will actually steer it to go better rather than worse. And I think one question you need to ask is: as a scientist, as an inventor, as a creator, what impact can you have in general? You look at things like the telephone, invented by two people on the same day. What does that mean? What does that mean about the shape of innovation? I think what's going on is that everyone's building on the shoulders of the same giants, and so you can't really hope to create something no one else ever would. If Einstein wasn't born, someone else would have come up with relativity. He changed the timeline a bit, right? Maybe it would have taken another twenty years. But it wouldn't be the case that humanity would never discover these fundamental truths.

Lex Fridman: So there's some kind of invisible momentum that some people, like Einstein, or OpenAI, are plugging into, that anybody else can also plug into, and ultimately that wave takes us in a certain direction. Is that right?

Greg Brockman: That's right, that's right.
This kind of seems to play out in a bunch of different ways: there's some exponential that is being ridden, and the exponential itself, which one it is, changes. Think about Moore's law: an entire industry set its clock to it for fifty years. How can that be? How is that possible? And yet somehow it happened. So I think you can't hope to ever invent something that no one else will. Maybe you can change the timeline a little bit. But if you really want to make a difference, I think the thing you really have to do, the only real degree of freedom you have, is to set the initial conditions under which a technology is born. Think about the internet: there were lots of other competitors trying to build similar things, and the internet won. The initial conditions were that it was created by this group that really valued anyone being able to plug in, this very academic mindset of being open and connected. And I think that the internet, for the next forty years, really played out that way. Maybe today things are starting to shift in a different direction, but I think those initial conditions were really important in determining the next forty years' worth of progress.

Lex Fridman: That's really
beautifully put. So another example of that: I recently looked at the formation of Wikipedia, and I wonder what the internet would be like if Wikipedia had ads. There's an interesting argument about why they chose not to put advertisements on Wikipedia. I think Wikipedia is one of the greatest resources we have on the internet. It's extremely surprising how well it works and how well it was able to aggregate all this good information. And essentially the creator of Wikipedia, I don't know, there's probably some debate there, set the initial conditions, and now it carries itself forward. That's really interesting. So the way you're thinking about AGI, or artificial intelligence, is you're focused on setting the initial conditions for the progress.

Greg Brockman: That's right.

Lex Fridman: That's powerful. Okay, so look into the future. If you create an AGI system, like one that can ace the Turing test, natural language, what do you think would be the interactions you would have with it? What do you think are the questions you would ask? Like, what would be the first question you would ask it, her, him?

Greg Brockman: I think that at that point, if you've
really built a powerful system that is capable of shaping the future of humanity, the first question that you really should ask is: how do we make sure that this plays out well? And so that's actually the first question that I would ask a powerful AGI system.

Lex Fridman: So you wouldn't ask your colleague? You wouldn't ask, like, Ilya? You would ask the AGI system?

Greg Brockman: Oh, we've already had the
conversation with Ilya, right, and everyone here. And so you want as many perspectives and as much wisdom as you can for answering this question. So I don't think you necessarily defer to whatever your powerful system tells you, but you use it as one input to try to figure out what to do. But I guess fundamentally what it really comes down to is, if you've built something really powerful: think about, for example, shortly after the creation of nuclear weapons, the most important question in the world was, what's the world order going to be like? How do we set ourselves up in a place where we're going to be able to survive as a species? With AGI, I think the question is slightly different. There is a question of how we make sure that we don't get the negative effects, but there's also the positive side, right? You imagine: what will AGI be like? What will it be capable of? And I think that one of the core reasons an AGI can be powerful and transformative is actually due to technological development. If you have something that's as capable as a human and that's much more scalable, then you absolutely want that thing to go read the whole scientific literature and think about how to create cures for all the diseases, right? You want it to think about how to go and build technologies to help us create material abundance, and to figure out societal problems that we have trouble with, like how we're supposed to clean up the environment. And maybe you want this to go and invent a bunch of little robots that will go out and be biodegradable and turn ocean debris into harmless molecules. I think that positive side is something people miss sometimes when thinking about what an AGI will be like. And so I think that if you have a system that's capable of all of that, you absolutely want its advice about: how do I make sure that we're using your capabilities in a positive way for humanity?

Lex Fridman: So what do you think about that
psychology that looks at all the different possible trajectories of an AGI system, many of which, perhaps the majority of which, are positive, and nevertheless focuses on the negative trajectories? I mean, you get to interact with folks, you get to think about this, maybe within yourself as well. You look at Sam Harris and so on. It seems to be, sorry to put it this way, almost more fun to think about the negative possibilities. Whatever that is, it's deep in our psychology. What do you think about that, and how do we deal with it? Because we want AI to help us.

Greg Brockman: So I think there's
kind of two problems entailed in that question. The first is more the question of how you can even picture what a world with a new technology will be like. Imagine we're in 1950 and I'm trying to describe Uber to someone.

Lex Fridman: Apps and the internet. Yeah, that's going to be extremely complicated, but it's imaginable.

Greg Brockman: It's imaginable, right. But now imagine being in 1950 and predicting Uber. You need to describe the internet, you need to describe GPS, you need to describe the fact that everyone's going to have this phone in their pocket. And so I think the first truth is that it is hard to picture how a transformative technology will play out in the world. We've seen that before with technologies that are far less transformative than AGI will be. So one piece is that it's just hard to imagine, and to really put yourself in a world where you can predict what that positive vision would look like. And the second thing is that I think it is always easier to support the negative side than the positive side. It's always easier to destroy than create. Less in a physical sense and more in an intellectual sense, right? Because with creating something, you need to get a bunch of things right, and to destroy, you just need to get one thing wrong. And so I think what that means is that a lot of people's thinking dead-ends as soon as they see the negative story. But that being said, I actually have some hope. I think the positive vision is something that we can talk about, and that simply stating this fact, that yeah, there are positives, there are negatives, and everyone likes to dwell on the negative, people tend to respond well to that message and say: huh, you're right, there's a part of this that we're not talking about, not thinking about. And that's actually been a key part of how we think about AGI at OpenAI. You can kind of look at it as: OpenAI talks about the fact that there are risks, and yet they're trying to build this system. How do you square those two facts?

Lex Fridman: So do you share the intuition that some people have, I mean from Sam Harris to even Elon Musk himself, that it's tricky, as you develop AGI, to keep it from slipping into the existential threats, into the negative? What's your intuition about how hard it is to keep AI development on the positive track? What's your intuition there?

Greg Brockman: To
answer the question, you can really look at how we structure OpenAI. We really have three main arms. We have capabilities, which is actually doing the technical work and pushing forward what these systems can do. There's safety, which is working on technical mechanisms to ensure that the systems we build are aligned with human values. And then there's policy, which is making sure that we have governance mechanisms, answering that question of: well, whose values? And I think the technical safety one is the one that people talk about the most. Think about all of the dystopian AI movies: a lot of that is about not having good technical safety in place. What we've been finding is that a lot of people look at the technical safety problem and think it's just intractable. This question of what humans want: how am I supposed to write that down? Can I even write down what I want? No way. And then they stop there. But the thing is, we've already built systems that are able to learn things that humans can't specify. Even the rules for how to recognize whether there's a cat or a dog in an image turn out to be intractable to write down, and yet we're able to learn them. And what we're seeing with systems we build at OpenAI, and they're still at an early proof-of-concept stage, is that you are able to learn human preferences, you're able to learn what humans want, from data. And so that's kind of the core focus for our technical safety team, and I think we've actually had some pretty encouraging updates in terms of what we've been able to make work.
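The idea of learning what humans want from data can be sketched in miniature. The following is a hypothetical toy, not OpenAI's code: a linear reward model fit to pairwise comparisons, in the spirit of learning from human preferences, where the "human" is simulated by a hidden weight vector.

```python
# Toy sketch: fit a reward model r(x) = w.x from pairwise preferences,
# using a Bradley-Terry model trained by gradient ascent. The hidden
# "human values" vector true_w is an assumption of this illustration.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])          # hidden preferences (toy)

# Simulated comparison data: pairs (a, b) where the human prefers a.
pairs = []
for _ in range(500):
    a, b = rng.normal(size=3), rng.normal(size=3)
    if true_w @ a < true_w @ b:
        a, b = b, a                           # ensure a is the preferred one
    pairs.append((a, b))

w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = np.zeros(3)
    for a, b in pairs:
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))  # P(a preferred | w)
        grad += (1.0 - p) * (a - b)           # log-likelihood gradient
    w += lr * grad / len(pairs)

# The learned reward should rank outcomes the way the hidden values do.
agree = np.mean([(w @ a) > (w @ b) for a, b in pairs])
print(f"agreement with simulated human choices: {agree:.2f}")
```

Real systems replace the linear model with a neural network and the simulated chooser with actual human raters, but the shape of the problem, inferring a reward signal you could never write down by hand from comparisons you can easily collect, is the same.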
Lex Fridman: So you have an intuition and a hope that from data, looking at the value alignment problem from data, we can build systems that align with the collective better angels of our nature, aligned with the ethics and the morals of human beings?

Greg Brockman: To even say this in a different way: think about how we align humans, right? A human baby can grow up to be an evil person or a great person, and a lot of that is from learning from data. You have feedback as a child is growing up; they get to see positive examples. And so, given that the only example we have of a general intelligence that is able to learn from data is aligned with human values and learns values that way, I think we shouldn't be surprised if the same sorts of techniques end up being how we solve value alignment for AGIs.

Lex Fridman: So let's go
even higher. I don't know if you've read the book Sapiens, but there's an idea that, as a collective, we human beings kind of develop together the ideas that we hold. There's no objective truth, in that context; we just all agree to certain ideas and hold them as a collective. If you have a sense that there is, in the world, good and evil, do you have a sense that, to a first approximation, there are some things that are good, and that you could teach systems to behave, to be good?

Greg Brockman: So I think this actually blends into our third team, which is the policy team, and this is the aspect I think people talk about way less than they should. Because imagine that we build super-powerful systems, and we've managed to figure out all the mechanisms for these things to do whatever the operator wants. The most important question becomes: who's the operator, what do they want, and how is that going to affect everyone else? And this question of what is good, what are those values: you don't even have to go to those very grand existential places to start to realize how hard this problem is. Just look at different countries and cultures across the world: there's a very different conception of how the world works and what kinds of ways society wants to operate. And so I think that the really core question is actually very concrete, and it's not a question that we have ready answers to: how do you have a world where all the different countries we have, the United States, China, Russia, and the hundreds of other countries out there, are able to continue not just to operate in the way that they see fit, but where the world that emerges, in which you have these very powerful systems operating alongside humans, ends up being something that empowers humans more? That makes human existence a more meaningful thing, where people are happier and wealthier and able to live more fulfilling lives. It's not an obvious thing how to design that world once you have that very powerful system.

Lex Fridman: So if we take a little step back:
we're having a fascinating conversation, and OpenAI is in many ways a tech leader in the world, and yet we're thinking about these big existential questions, which is fascinating and really important. I think you're a leader in that space, and it's a really important space: just thinking about how AI affects society in a big-picture view. So, Oscar Wilde said we're all in the gutter, but some of us are looking at the stars, and I think OpenAI has a charter that looks to the stars, I would say: to create intelligence, to create general intelligence, make it beneficial, safe, and collaborative. Can you tell me how that came about, how a mission like that, and the path to creating a mission like that, came to be? How OpenAI was founded?

Greg Brockman: Yeah, so I
think that in some ways it really boils down to taking a look at the landscape. If you think about the history of AI, basically for the past sixty or seventy years, people have thought about this goal of what could happen if you could automate human intellectual labor. Imagine you could build a computer system that could do that: what becomes possible? We have a lot of sci-fi that tells stories of various dystopias, and increasingly you have movies like Her that tell you a little bit about maybe a more utopian vision. You think about the impacts that we've seen from being able to have bicycles for our minds in computers, and I think that the impact of computers and the internet has just far outstripped what anyone really could have predicted. And so I think it's very clear that if you can build AGI, it will be the most transformative technology that humans will ever create. And so what it boils down to then is the question: well, is there a path? Is there hope? Is there a way to build such a system? And I think that for sixty or seventy years, people got excited, and then ended up not being able to deliver on the hopes that people pinned on them. And then, after two winters of AI development, people almost stopped daring to dream. Really, talking about AGI or thinking about AGI became almost this taboo in the community. But I actually think that people took the wrong lesson from AI history. If you look back, 1959 is when the perceptron was released, and this is basically one of the earliest neural networks. It was released to what was perceived as massive overhype. In the New York Times in 1959, you have this article saying that the perceptron will one day recognize people, call out their names, instantly translate speech between languages. And people at the time looked at this and said: your system can't do any of that, and basically spent ten years trying to discredit the whole perceptron direction, and succeeded. All the funding dried up, and people went in other directions.
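For a sense of what that early model actually was, here is a minimal perceptron in the classic Rosenblatt style. This is an illustrative reconstruction, not historical code.

```python
# A minimal perceptron of the kind hyped in 1959: a single linear unit
# trained by nudging the weights on each misclassified example.
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Classic perceptron rule; y entries are +1 or -1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:      # misclassified (or on boundary)
                w += yi * xi                # push weights toward the example
                b += yi
    return w, b

# A linearly separable toy problem (logical AND). The perceptron solves
# this, but, as Minsky and Papert later showed, nothing like XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
print(preds)                                 # matches y on all four points
```

The limitation on display here, one linear boundary and nothing more, is exactly what the decade of criticism seized on.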
In the '80s there was a resurgence, and I'd always heard that the resurgence was due to the invention of backpropagation and these algorithms that got people excited. But actually, the causality was due to people building larger computers. You can find articles from the '80s saying that the democratization of computing power suddenly meant that you could run these larger neural networks, and then people started to do all these amazing things. The backpropagation algorithm was invented, but the neural nets people were running were these tiny little twenty-neuron neural nets. What are you supposed to learn with twenty neurons? And so of course they weren't able to get great results. It really wasn't until 2012 that this approach, which is almost the most simple, natural approach that people had come up with in the '50s, in some ways even in the '40s, before there were computers, with the McCulloch-Pitts neuron, suddenly became the best way of solving problems. And I think there are three core properties that deep learning has that are very worth paying attention to. The first is generality. We have a very small number of deep learning tools: SGD, deep neural nets, maybe some RL, and they solve this huge variety of problems. Speech recognition, machine translation, game playing: all these problems, one small set of tools. So there's the generality. There's a second piece, which is the competence. If you want to solve any of those problems, throw out forty years' worth of computer vision research, replace it with a deep neural net, and it's going to work better. And there's a third piece, which is the scalability. The one thing that has been shown time and time again is that if you have a larger neural network and you throw more compute and more data at it, it will work better. Those three properties together feel like essential parts of building a general intelligence.
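The "small set of tools" point can be illustrated in a few lines. Below is a toy sketch, not anything from OpenAI: the same generic recipe, a neural network trained by gradient descent, learns XOR, the very function the single-layer perceptron provably cannot represent.

```python
# Toy illustration of the generic deep learning recipe: one hidden layer
# of tanh units, sigmoid output, trained by plain full-batch gradient
# descent on cross-entropy loss. All sizes and rates are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])       # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    d_out = out - y                          # grad of loss w.r.t. logits
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # backprop through tanh
    W2 -= lr * h.T @ d_out / 4; b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / 4;  b1 -= lr * d_h.mean(0)

print(np.round(out.ravel()))                 # learned XOR outputs
```

Swap in a different dataset and loss and the training loop is unchanged; that interchangeability is the generality being described.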
Now, it doesn't just mean that if we scale up what we have, we will have an AGI. There are clearly missing pieces, there are missing ideas, we need to have answers for reasoning. But I think the core here is that, for the first time, it feels like we have a paradigm that gives us hope that general intelligence can be achievable. And as soon as you believe that, everything else comes into focus. If you imagine that you may be able to, and the timeline I think remains uncertain, but certainly within our lifetimes, and possibly within a much shorter period of time than people would expect, really build the most transformative technology that will ever exist, you stop thinking about yourself so much, and you start thinking about: how do you have a world where this goes well? You need to think about the practicalities of how you build an organization, get together a bunch of people and resources, and make sure that people feel motivated and ready to do it. But I think that then you start thinking about: well, what if we succeed? And how do we make sure that when we succeed, the world is actually a place that we want ourselves to exist in, almost in the Rawlsian veil-of-ignorance sense of the word. So that's kind of the broader landscape. OpenAI was really formed in 2015 with that high-level picture: that AGI might be possible sooner than people think, and that we need to try to do our best to make sure it's going to go well. And then we spent the next couple of years really trying to figure out what that means and how we do it. You know, typically with a company, you start out very small, you and a co-founder, and you build a product, you get some users, you get product-market fit, then at some point you raise some money, you hire people, you scale, and then down the road the big companies realize you exist and try to kill you. And for OpenAI, it was basically everything in exactly the opposite
order.

Lex Fridman: Let me just pause for a second. You said a lot of things, and let me just admire the jarring aspect of what OpenAI stands for, which is daring to dream. I mean, you said it; it's pretty powerful. You caught me off guard, because I think that's very true. The step of just daring to dream about the possibilities of creating intelligence, in a positive, in a safe way, but just even creating intelligence, is a much-needed, refreshing catalyst for the AI community. So that's the starting point. Okay, so the formation of OpenAI...

Greg Brockman: I'd just say that when we were starting OpenAI, kind of the first question that we had was: is it too late to start a lab with a bunch of the best people possible? That was an actual question. That was the core question of this dinner in July 2015, and that was really what we spent the whole time talking about. Because if you think about kind of where AI was, it had transitioned from being an academic pursuit to an industrial pursuit. And so a lot of the best people were in these big research labs, and we wanted to start our own one. No matter how many resources we could accumulate, it would pale in comparison to the big tech companies, and we knew that. And there was a question of: are we actually going to be able to get this thing off the ground? You need critical mass. You can't just do you and a co-founder building a product, right? You really need to have a group of five to ten people. And we kind of concluded it wasn't obviously impossible, so it seemed worth trying.

Lex Fridman: Well, you're also dreamers, so who knows, right?

Greg Brockman: That's right.

Lex Fridman: Okay, so speaking of competing
with the big players, let's talk about some of the tricky things as you think through this process of growing, of seeing how you can develop these systems at scale that compete. So you recently formed OpenAI LP, a new capped-profit company that now carries the name OpenAI. So OpenAI is now this official company, and the original nonprofit still exists and carries the OpenAI Nonprofit name. Can you explain what this company is, what the purpose of its creation is, and how you arrived at the decision to create it?

Greg Brockman: OpenAI, the whole entity, and OpenAI LP as a vehicle, is trying to accomplish the mission of ensuring that artificial general intelligence benefits everyone. And the main way that we're trying to do that is by actually trying to build general intelligence ourselves and make sure the benefits are distributed to the world. That's the primary way. We're also fine if someone else does this, right? It doesn't have to be us. If someone else is going to build an AGI and make sure that the benefits don't get locked up in one company, or with one set of people, we're actually fine with that. And so those ideas are baked into our charter, which is kind of the foundational document that describes our values and how we operate. But it's also really baked into the structure of OpenAI LP. The way that we've set up OpenAI LP is that, in the case where we succeed, if we actually build what we're trying to build, then investors are able to get a return, but that return is capped. And if you think of AGI in terms of the value that it could really create, you're talking about the most transformative technology ever created: it's going to create orders of magnitude more value than any existing company, and all of that value will be owned by the world, legally titled to the nonprofit, to fulfill the mission. And so that's the structure.

Lex Fridman: So
the mission is a powerful one and it's a
it's one that I think most people would
agree with it's how we would hope a I
progresses and so how do you tie
yourself to that mission how do you make
sure you do not deviate from that
mission that you know other incentives
that are profit driven wouldn't don't
interfere with the mission so this was
actually a really core question for us
for the past couple years because you
know I'd say that like the way that our
history went was that for the first year
we were getting off the ground right we
had this high level picture but we
didn't know
exactly how we wanted to accomplish it
and really two years ago it's when we
first started realizing in order to
build a GI we're just going to need to
raise way more money than we can as a
nonprofit I mean you're talking many
billions of dollars and so the first
question is how are you supposed to do
that and stay true to this mission and
we looked at every legal structure out
there and concluded none of them were
quite right for what we wanted to do and
I guess it shouldn't be too surprising
if you're going to do something like
crazy unprecedented technology that
you're gonna have to come up with some
crazy unprecedent structure to do it in
and a lot of a lot of our conversation
was with people at opening I write the
people who really join because they
believe so much in this mission and
thinking about how do we actually raise
the resources to do it and also stay
true to to what we stand for and the
place you got to start is to really
align on what is it that we stand for
right what are those values what's
really important to us and so I'd say
that we spent about a year really
compiling the OpenAI charter and if
you look at the first line item in there
it says that look we expect we're gonna
have to marshal huge amounts of
resources but we're going to make sure
that we minimize conflicts of interest
with the mission and that kind of
aligning on all of those pieces was the
most important step towards figuring out
how do we structure a company that can
actually raise the resources to do what
we need to do I imagine the
decision to create OpenAI LP was a really
difficult one and there was a lot of
discussions as you mentioned for a year
and there were different ideas perhaps
detractors within OpenAI sort of
different paths that you could have
taken what were those concerns what were
the different paths considered what was
that process of making that decision
like yep so if you look actually
at the OpenAI charter there's
almost two paths embedded within it
there is we are primarily trying to
build AGI ourselves but we're also ok if
someone else does it and this is a weird
thing for a company it's really
interesting actually yeah there there is
an element of competition that you do
want to be the one that does it but at
the same time you're okay if somebody
else does and you know we'll talk about
that a little bit that trade-off is
really interesting and I think
this was the core tension as we were
designing OpenAI LP and really the
OpenAI strategy is how do you make
sure that both you have a shot at being
a primary actor which really requires
building an organization raising massive
resources and really having the will to
go and execute on some really really
hard vision all right you need to really
sign up for a long period to go and take
on a lot of pain and a lot of risk and
to do that normally you just import the
startup mindset right and that you think
about okay how do we out-execute
everyone you have this very competitive
angle but you also have the second angle
of saying that well the true mission
isn't for OpenAI to build AGI the
true mission is for AGI to go well for
Humanity and so how do you take all of
those first actions and make sure you
don't close the door on outcomes that
would actually be positive and fulfill
the mission and so I think it's a very
delicate balance right I think that
going 100% one direction or the other is
clearly not the correct answer and so I
think that even in terms of just how we
talk about OpenAI and think about it
there's just one thing that's
always in the back of my mind is to make
sure that we're not just saying
OpenAI's goal is to build AGI right that
it's actually much broader than that
right that first of all I you know it's
not just AGI it's safe AGI that's very
important but secondly our goal isn't to
be the ones to build it our goal is to
make sure it goes well for the world and
so I think that figuring out how do you
balance all of those and to get people
to really come to the table and compile
a single document that
encompasses all of that wasn't trivial
so part of the challenge here is your
mission is I would say beautiful
empowering and a beacon of hope for
people in the research community and
just people thinking about AI so your
decisions are scrutinized more than I
think a regular profit-driven company's would be do
you feel the burden of this in the
creation of the Charter and just in the
way you operate yes so why do you lean
into the burden by creating such a
charter why not keep it quiet I mean
it just boils down to the to the mission
right
I'm here and everyone else is here
because we think this is the most
important mission right dare to dream
all right so do you think you can
be good for the world or create an AGI
system that's good when you're a
for-profit company from my perspective I
don't understand why profit interferes
with positive impact on society I don't
understand why Google that makes most of
its money from ads you can't also do
good for the world or other companies
Facebook anything I don't
understand why those have to interfere
you know profit isn't the thing
in my view that affects the impact of a
company what affects the impact of the
company is the Charter is the culture is
the you know the people inside and
profit is the thing that just fuels
those people so what are your views
there yeah so I think that's a really
good question and there's
some you know real like
long-standing debates in human society
that are wrapped up in it the way that I
think about it is just think about what
what are the most impactful nonprofits
in the world what are the most impactful
for profits in the world right is much
easier to lists the for profits that's
right and I think that there's
some real truth here that the system
we've set up the system for kind of
how you know today's world is organized
is one that really allows for huge
impact and you know kind of
part of that is that for
profits are self-sustaining
and able to kind of you know build on
their own momentum and I think that's a
really powerful thing it's something
that when it turns out that we haven't
set the guardrails correctly causes
problems right think about logging
companies that go and deforest you know
the rainforest that's really
bad we don't want that and it's actually
really interesting to me the kind of
this this question of how do you get
positive benefits out of a for-profit
company it's actually very similar to
how do you get positive benefits out of
an AGI right that you have this like
very powerful system it's more powerful
than any human and it's kind of
autonomous in some ways you know
superhuman in a lot of axes and somehow you
have to set the guardrails to get good
to happen but when you do the benefits
are massive and so I think that when
I think about nonprofit vs.
for-profit I think it's just not enough
happens in nonprofits they're very pure
but it's just kind of you know it's just
hard to do things there in for-profits
in some ways like too much happens but
if it's shaped in the right way it
can actually be very positive and so
with OpenAI LP we're picking a road in
between now the thing I think is really
important to recognize is that the way
that we think about OpenAI LP is that
in the world where AGI actually happens
right in a world where we are successful
we build the most transformative
technology ever the amount of value
we're going to create will be
astronomical and so then in that case
the cap that we have
will be a small fraction of the value we
create and the amount of value that goes
back to investors and employees looks
pretty similar to what would happen in a
pretty successful startup and that's
really the case that we're optimizing
for right that we're thinking about in
the success case making sure that the
value we create doesn't get locked up
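The capped-return mechanism Greg describes can be sketched in a few lines. The OpenAI LP announcement described a 100x return cap for first-round investors, with value above the cap flowing to the nonprofit; the function name and dollar figures below are purely illustrative, a minimal sketch rather than the actual legal terms:

```python
def capped_return(investment, gross_multiple, cap_multiple=100):
    """Split a hypothetical investment outcome under a capped-profit structure.

    Returns (investor_payout, excess_to_nonprofit): the investor keeps at most
    cap_multiple times the original investment, and everything above the cap
    flows to the nonprofit.
    """
    gross = investment * gross_multiple
    payout = min(gross, investment * cap_multiple)
    return payout, gross - payout

# Modest outcome: the cap never binds, so it looks like normal startup equity.
capped_return(10_000_000, 20)      # -> (200_000_000, 0): all value to the investor

# Transformative outcome: almost all value flows past the cap to the nonprofit.
capped_return(10_000_000, 10_000)  # -> (1_000_000_000, 99_000_000_000)
```

In a modest outcome the cap never binds and the structure behaves like ordinary startup equity; in a transformative outcome nearly all of the value flows past the cap to the nonprofit, which is the success case described above.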
and I expect that in other you know
for-profit companies it's possible
to do something like that I think it's
not obvious how to do it right and I
think that as a for-profit company you
have a fiduciary duty to your
shareholders and that there are certain
decisions you just cannot make in our
structure we've set it up so that we
have a fiduciary duty to the Charter
that we always get to make the decision
that is right for the Charter
even if it comes at the expense of
our own stakeholders and so I think
that when I think about what's really
important it's not really about
nonprofit vs. for-profit it's really a
question of if you build AGI and you
kind of you know humanity's now in this
new age who benefits whose lives are
better and I think that what's really
important is to have an answer that is
everyone yeah which is one of the core
aspects of the Charter so one concern
people have not just with OpenAI
but with Google Facebook Amazon anybody
really that's creating impact at
that scale is how do we avoid as your
Charter says enabling the use of AI
or AGI to unduly concentrate power why
would a company like OpenAI not keep
all the power of an AGI system to itself
the Charter the Charter so you know how
does the Charter actualize itself in day
to day so I think that first to zoom out
right there the way that we structure
the company is so that the power
for sort of you know dictating the
actions that OpenAI takes
ultimately rests with the board right
the board of the nonprofit and the
board is set up in certain ways with
certain restrictions that you can read
about in the OpenAI LP blog post but
effectively the board is the
governing body for OpenAI LP and the
board has a duty to fulfill the mission
of the nonprofit and so that's kind of
how we thread all these
things together now there's a question
of so day to day how do the
individuals who in some ways are the
most empowered ones you know the board
sort of gets to call the shots at the
high level but the people who are
actually executing are the employees the
people here on a day-to-day
basis who have you know the keys
to the technical kingdom and there I
think that the answer looks a lot like
well how does any company's values get
actualized right I think that a lot of
that comes down to that you need people
who are here because they really believe
in that mission and they believe in the
Charter and that they are willing to
take actions that maybe are worse for
them but are better for the Charter and
that's something that's really baked
into the culture and honestly I think
it's I you know I think that that's one
of the things that we really have to
work to preserve as time goes on and
that's a really important part of how we
think about hiring people and bringing
people into OpenAI so there's people
here who could speak
up and say like hold on a second this is
totally against what we stand for
cultural eyes yeah yeah for sure I mean
I think that's like a pretty important
part of how we operate and how we have
even again with designing the Charter
and designing OpenAI LP in the first
place that there has been a lot of
conversation with employees here and a
lot of times where employees said wait a
second this
seems like it's going in the wrong
direction and let's talk about it and so
I think and you know here's
actually one thing I think is very
unique about us as a small company is
that if you're at a massive tech giant
that's a little bit hard for someone
who's a line employee to go and talk to
the CEO and say I think that we're doing
this wrong and you know you look at
companies like Google that have had some
collective action from employees to you
know make ethical change around things
like Project Maven and so maybe there are
mechanisms at other companies that
work but here it's super easy for anyone to
pull me aside to pull Sam aside to
pull Ilya aside and people do it all the
time one of the interesting things in
the Charter is this idea that it'd be
great if you could try to describe or
untangle switching from competition to
collaboration in late-stage AGI
development it was really interesting
this dance between competition and
collaboration how do you think about
that yeah assuming you can actually do
the technical side of AGI development I
think there's going to be two key
problems with figuring out how do you
actually deploy it and make it go well the
first one of these is the run-up to
building the first AGI you look at how
self-driving cars are being developed
and it's a competitive race and the
thing that always happens in a
competitive race is that you have huge
amounts of pressure to get rid of safety
and so that's one thing we're very
concerned about right is that you have
multiple teams figuring out we can
actually get there but you know if we
took the slower path that is more
guaranteed to be safe we will lose and
so we're going to take the fast path and
so the more that we can both ourselves
be in a position where we don't generate
that competitive race where we say if
the race is being run and you know
someone else is further ahead than we
are we're not gonna try to leapfrog
we're gonna actually work with them
right we will help them succeed as long
as what they're trying to do is to
fulfill our mission then we're good we
don't have to build AGI ourselves and I
think that's a really important
commitment from us but it can't just be
unilateral right I think that's really
important that other players who are
serious about building AGI make similar
commitments right I think that that you
know again to the extent that everyone
believes that AGI should be something to
benefit everyone then it actually really
shouldn't matter which company builds it
and we should all be concerned about the
case where we just race so hard to get
there
that something goes wrong so what role
do you think government our favorite
entity has in setting policy and rules
about this domain from research to the
development to early stage to late stage
AGI development so I think that
first of all it's really important that
government's in there right in some way
shape or form you know at the end of the
day we're talking about building
technology that will shape how the world
operates and that there needs to be
government as part of that answer and so
that's why we've done a
number of different congressional
testimonies we interact with a number of
different lawmakers and the you know
right now a lot of our message to them
is that it's not the time for regulation
it is the time for measurement right
that our main policy recommendation is
that people and you know the government
does this all the time with bodies like
NIST spend time trying to figure out
just where the technology is how fast
it's moving and can really become
literate and up to speed with respect to
what to expect
so I think that today the answer really
is about measurement and I
think there will be a time and place
where that will change and I think it's
a little bit hard to predict exactly
what that trajectory should
look like so there will be a point
where it's regulation federal in the United
States the government steps in and
helps be the I don't want to say the
adult in the room to make sure that
there are strict rules maybe
conservative rules that nobody can cross
well I think there's this kind of maybe
two angles to it so today with narrow
AI applications that I think there are
already existing bodies that are
responsible and should be responsible
for regulation you think about for
example with self-driving cars that you
want the you know the National Highway
Traffic Safety Administration to be very good at that
makes sense right that basically what
we're saying is that we're going to have
these technological systems that are
going to be performing applications
that humans already do great we already
have ways of thinking about standards
and safety for those so I think actually
empowering those regulators today is
also pretty important
and then I think for AGI you know
that there's going to be a point where
we'll have better answers and I think
that maybe a similar approach of first
measurement and you know start thinking
about what the rules should be I think
it's really important that we don't
prematurely squash you know progress I
think it's very easy to kind of smother
the budding field and I think that's
something to really avoid but I don't
think the right way of doing it is
to say let's just try to blaze ahead and
not involve all these other stakeholders
so you've recently released a paper on
GPT-2 language modeling but did not
release the full model because you have
concerns about the possible negative
effects of the availability of such
a model outside of just that the
decision is super interesting because of
the discussion at a societal level
the discourse it creates so it's
fascinating in that aspect but if you
think about the specifics here first
what are some negative effects that you
envisioned and of course what are some
of the positive effects yeah so again I
think to zoom out like the way that we
thought about GPT-2 is that with
language modeling we are clearly on a
trajectory right now where we scale up
our models and we get qualitatively
better performance right GPT-2 itself
was actually just a scale-up of a model
that we'd released the previous June
right and we just ran it at you know
much larger scale and we got these
results we're suddenly starting to write
coherent prose which was not something
we'd seen previously and what are we
doing now well we're gonna scale up
GPT-2 by 10x by 100x by 1000x and
we don't know what we're going to get
and so it's very clear that the model
that we released last June
you know I think it's kind of
a good academic toy it's
not something that we think is something
that can really have negative
applications or you know to the extent
that it can the positive of people being
able to play with it you know far
outweighs the possible harms but you fast
forward to not GPT-2 but GPT-20
and you think about what that's gonna be
like and I think that the capabilities
are going to be substantive and so
there needs to be a point in between the
two where you say this is something
where we are drawing the line and that
we need to start thinking about the
safety aspects and I think for GPT-2
we could have gone either way and in
fact when we had conversations
internally we had a bunch of pros
and cons and it wasn't clear which
one outweighed the other and I
think that when we announced that hey we
decided not to release this model then
there was a bunch of conversation where
various people said it's so obvious that
you should have just released it there
other people said it's so obvious you
should not have