Meta Poached OpenAI's Brightest - Who Will Lead When AGI Comes Online? | Tom Bilyeu Clip
y9UEPochDc8 • 2025-07-19
Let's talk about AI because a lot of
things are happening and I don't think
we quite get it. To your point, I know you say this in team meetings a lot: Hey guys, you probably used AI like three months ago? Yeah, it's completely different. It's a whole new
ecosystem. It's like using a cell phone
as a Nokia and then you come back 3
months later and everybody has like
iPhone 17s. Like it's a dramatic
difference. It's super crazy. Mark
Zuckerberg posted this tweet about developing superintelligence and how it's now in sight: The most exciting thing this year
is that we're starting to see early
glimpses of self-improvement with the
models, which means that developing
super intelligence is now in sight. This
is going to be something that is the
most important technology in our lives.
It's going to underpin how we develop everything at the company, and
it's going to affect society very
widely. So is superintelligence going to be possible in 2 to 3 years, or 5 years, or 7 years? I don't think anyone fully knows the answer to that, but I just think that we should bet and act as if it's going to be ready in the next 2 to 3 years. I believe that there's a shot at that.
>> whoa that's a big statement. That is a
big statement
>> And if anybody could say that, it's the guy who invested $100 million in his
>> per person. Yeah. It's like, woo. Here's
the thing that I think people are not fully grokking about self-improving AI. If
it can self-improve, there becomes a
question of how much improvement can it
make in a single night. The rate of
improvement here is the thing that
matters, to be clear. When its rate of
intelligence becomes let's say twice
that of a human, five times, 50 times.
Now all of a sudden if it's 50 times
smarter than a human then in a single
night it gets 50 days worth roughly of
knowledge learned which means the next
day it's 50 days ahead of the human. So even just that one, if it's like 50xing each time, you see how rapidly this compounds. Now if you spin up a million of them
that can think 50 times faster than a
highly intelligent human, all of a
sudden it's what does the world look
like if you have a million people that
are as effective as Elon Musk? I don't
need you to like him. I just need you to
admit that he's the most effective
entrepreneur the world has ever seen.
What happens if you have 10 million of
those? That's where this gets insane.
And then what happens if it's not 50
times, it's a thousand times, 10,000
times smarter. There is a question to be
asked about energy consumption. There's
a question to be asked about whether
there is an upper limit to the degree of
intelligence that it's ever going to be
able to achieve. But without knowing
what those are, you start getting into
just these absolutely wild scenarios of
20,000 years of advancement in a single
night. Mhm.
>> Now, there's always going to be some
slowdown from the perspective of
material science and things like that,
but in fact, pull up really fast just to
really double whammy people, the video
of the robot changing its own battery.
What I want people to understand, this
is the stuff that's happening right now
today. And don't just look at the robot,
look at the background, the factory.
Okay? These factories are going to be
popping up everywhere. And they're not
going to be making cars. They're going
to be making more robots. And so, this is right now. This is actually
happening. This is not fake. This robot
can really do this. And so I want you to
imagine 20,000 years of advancement in a
single night that is then deployed
across a let's say five billion unit
robot army that goes out and does
whatever it is that you wanted to do.
That's not a today thing. But if we're really talking about 3 years to artificial general intelligence that can self-improve, then you go, okay, well
then what does the world look like in 10
years? It just gets so radically
different when people begin to really
start putting together that we're also
having a revolution in material science,
in biology, so that we're able to
build novel proteins that fold into
unique structures that do unique
biological processes. All of a sudden,
it's like, okay, wait a second. I don't
only have to build with synthetics like
robots. I can actually build at the
level of biology. And so if I have
something that is able to completely map
the biological system of a human, now
some of the things that seem impossible
to solve become like motherboard issues. We know how to go in, and as wildly complicated as a modern motherboard in a computer is, and they are complicated, I'm talking like you're moving individual atoms around, we've been able to do that from a physical process standpoint, as humans. Mhm.
>> So now what happens when you have this
self-improving AI that's able to, one, replicate itself, so that now it's not
one person like oh I have to go into the
lab and do all this. It's one robot
replicated a million times or one AI
replicated a million times is the same
as one person having a million days to
do a thing.
>> And so again ask yourself what could
Elon Musk accomplish in a million days.
Now imagine what Elon Musk can
accomplish in a million days can be done
in a single day with one AI replicated a
million times coordinating amongst
itself. So they're all running
unique tests and they're all feeding
back data to each other. That's how this
becomes this like just absolutely
runaway rapid progress that we can't
imagine. And then it can be deployed out
to all the robots. And so, I don't know if you've ever been on a cruise, you go
to sleep in one place and you wake up in
another. It's very surreal.
>> It will be like that. You will go to
sleep in one world and you will wake up
in another. And I'm not saying
everything is going to be different. It
will take more time than that. But when
you start thinking about, whoa, China
with humans, not AI, was able to
construct a high-rise building in like
30 days. Imagine, again, you go to
sleep and the high-rise building does
not exist, and you wake up the next
morning and the high-rise building
exists because you have a swarm of bots
and drones and everything that just
coordinate on everything. They're all
communicating with each other 24 hours a
day.
>> Yeah. And they all communicate with the
supply chain and all of that. And so
it's like it works for, you know,
whatever a month behind the scenes
coordinating all the pieces to make sure
that they're going to show up. It knows
traffic flows and all that. And then
it's just everything shows up at exactly
the right time, even if it has to be air
dropped in. And, I'm talking as
a sci-fi writer right now. This is not
something you're going to see in the
next 5 years. But when you start looking out 10, 20 years, this starts. And if it's not in a day, let's say you can erect a building in 6 months. To build the house that I live in now took something like five or six years, so imagine instead it takes 6 months. The world just gets very different. Oh, housing prices are too high in your neighborhood? No problem, we can build buildings in 3, 4 months. They've now got construction that they can do with 3D printing and cement, and they can build a building in 72 hours. All of a sudden a lot of these problems start going away. I just want people to wrap
their heads around the world that we are
stepping into. Figure out the things
that they need to do to be prepared for
that world, one of which is show up here
every Wednesday and Friday because we
talk about this stuff. Be ready for it.
Position yourself accordingly. Know more
about it than the next person. Be at the
bleeding edge. Use this stuff. Find out
where the actual limitations are because
it's fun to talk about like the extreme
extreme. But when I step off camera,
I'm like, "Okay, but where does the
rubber meet the road?" And so people
have to do both of those.
>> What will slow down that future? You
mentioned like material sciences.
Someone else mentioned energy and just
available energy and having to like
build up those plants or solar power,
whatever it is. What's going to slow it
down? And then what opportunities will
that create for new industries?
>> Well, so an asymptote for intelligence is
the real one that you have to look out
for. I'll remind myself and everybody else that Yann LeCun is saying, "Guys, there's no way this is going to understand physics." Having said that,
without dragging too many private
conversations into the public sphere,
>> give us that alpha, bro.
>> Yeah, it does seem that Grok shows promise that it is actually understanding physics, or that it will be able to understand physics, that it will be able to think in novel ways in physics. Never believe anybody 100%, but people that actually know what they're talking about are saying, like, I've interacted with it on that level and it does seem to be showing some pretty cool signs. So if it doesn't asymptote
there then you've got energy policy
that's going to be the big one. Can we
get energy policy passed? I saw something the other day on these really small nuclear, I don't know, they must be reactors, otherwise it wouldn't be interesting, but they're very small, and so businesses being able to spin those up when they start a data center. So you can imagine private companies creating nuclear energy in these really small, supposedly really safe, whatever seventh-generation, I don't know what generation it is, but whatever advanced-generation nuclear energy. But policy is going to matter a lot. If we get bad policy, then that will stall out. AI agents launched from
ChatGPT, and this is probably the biggest, most mainstream agent-accessible platform to date. A lot of people knew about agents, but you had to be really in the AI ecosystem to get it; ChatGPT is trying to lower that adoption barrier for a lot of people. But Sam Altman dropped it with almost like a warning. So let's jump into that tweet he released yesterday. Today we launched a new product called ChatGPT Agent.
Agent represents a new level of capability for AI systems. It can accomplish some remarkably complex tasks for you using its own computer. It combines the spirit of Deep Research and Operator, but is more powerful than that may sound. It can think for a long time, use some tools, think some more, take some actions, think some more, etc. For example, we showed a demo
in our launch of preparing for a
friend's wedding, buying an outfit,
booking travel, choosing a gift, etc. We
also showed an example of analyzing data
and creating a presentation for work.
Although the utility is significant, so
are the potential risks. We have built a
lot of safeguards and warnings into it
and broader mitigations than we've ever
developed before. From robust training
to system safeguards to user controls,
but we can't anticipate everything. In
the spirit of iterative deployment, we
are going to warn users heavily and give
users freedom to take actions carefully
if they want to. I would explain this to my own family as cutting edge and experimental; a chance to try the future, but not something I'd yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild. We don't know exactly what the
impacts are going to be, but bad actors
may try to trick users' AI agents into giving private information they shouldn't and taking actions they shouldn't, in ways we can't predict. We
recommend giving agents the minimum
access required to complete a task to
reduce privacy and security risk. For
example, I can give an agent access to
my calendar to find a time that works
for a group dinner, but I don't need to
give it access if I'm just asking it to
buy me some clothes. There's more risk in tasks like "look at my emails that came in overnight and do whatever you need to do to address them; don't ask any follow-up questions." This could lead to untrusted content from a malicious email tricking the model into leaking your data. We think it's important to begin
learning from contact with reality and
that people adopt these tools carefully
and slowly as we better quantify and
mitigate the potential risks involved. As with other new levels of capability, society, technology, and the risk mitigation strategy will need to co-evolve. I think this is the first
time I've seen a warning label on AI. So
this is kind of interesting that this is
a release and everybody's excited for
it, but it does come with this stark
warning from the CEO himself.
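Altman's least-privilege recommendation above (calendar access for the dinner task, none for the shopping task) can be illustrated with a toy scope check. Everything in this sketch, the task names, the scope strings, and the `authorize` function, is hypothetical; it is not the ChatGPT Agent API:

```python
# Toy illustration of least-privilege agent access, per the tweet's advice:
# grant an agent only the scopes a given task actually needs.
# All names below are invented for this sketch.

ALLOWED_SCOPES = {
    "find_dinner_time": {"calendar:read"},  # needs the calendar...
    "buy_clothes": {"web:browse"},          # ...but shopping does not
}

def authorize(task: str, requested: set) -> set:
    """Grant only the intersection of requested and allowed scopes."""
    return requested & ALLOWED_SCOPES.get(task, set())

# The dinner task keeps calendar access; an unneeded email scope is stripped.
print(authorize("find_dinner_time", {"calendar:read", "email:read"}))  # {'calendar:read'}
# The shopping task gets no calendar access at all.
print(authorize("buy_clothes", {"calendar:read"}))                     # set()
```

The design point is that the deny-by-default intersection bounds the blast radius: even if a malicious email tricks the agent, it can only act within the scopes the task was granted.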
>> People need to be very very careful. AI
is going to continue to pose social
engineering risks in the extreme because
now you can just social engineer the AI
itself and they have proven pretty easy
to trick. I'm expecting like some pretty
sad things to happen here. So I would
quarantine it as much as humanly
possible at this phase. Honestly, I
would let other people use it for things
like that. Let them report back. I'm not
going to be rushing out. I don't use
Manus for this reason. Like I'm so tense
about coding something into it and it
just runs off and does something crazy.
Like when you're doing smart contracts
you realize really fast how there can be
exploits in it that you didn't see
coming and oh man it can just get bad
and once it's out there like it's going
to do its thing. You can't like do
updates. That's just not how
contracts like that work. So I worry
that once AI gets like off and running
you can be in a pretty bad place really
fast. So let's see where this goes.
Let's see them put some better
safeguards into it before you start
giving it either a bunch of money or
like I would never give it access to a
credit card. No way. Would I transfer it
a small quantity of ETH or Bitcoin
and see what it could do? Like I'm
getting very close to giving it, I don't
know, $10,000 or $25,000, something like that,
saying go invest in the stock market and
see what it does. That could be
interesting as an experiment, but keep
in mind as a proportion of my net worth
that's very low. So, do it with a similarly very finite amount of money, just a hey, let's try it and see what happens.
As long as you're totally comfortable
with every single dollar being stolen
from you or every single dollar being
spent stupidly or going to zero or
whatever, then it's like, yeah, cool,
give it a shot. But, man, giving it like
an open way to access ongoing dollars,
yikes. No way.
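The stance above, a finite experiment is fine but never open-ended access to ongoing dollars, amounts to putting a hard spending cap around whatever the agent can touch. This wallet class is a purely hypothetical sketch of that idea, not a real brokerage, exchange, or agent API:

```python
# Hypothetical sketch: a hard-capped wallet an agent could be handed,
# so losses are bounded at the amount you were willing to lose.

class CappedWallet:
    def __init__(self, cap: float):
        self.cap = cap      # total the agent may ever spend
        self.spent = 0.0    # running total of approved spends

    def spend(self, amount: float) -> bool:
        """Approve the spend only if it stays within the cap."""
        if self.spent + amount > self.cap:
            return False    # refused: the agent can never exceed the cap
        self.spent += amount
        return True

w = CappedWallet(cap=10_000)
print(w.spend(6_000))   # True
print(w.spend(5_000))   # False, would exceed the $10,000 cap
print(w.spent)          # 6000.0
```

This mirrors the "every dollar could go to zero" framing: the cap is the amount you have already written off, and nothing the agent does can reach past it.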
>> Yeah. This is also on the back of a
paper that was just released. It's a consortium of different scientists and the UK AI Security Institute that actually talked about the new and fragile opportunity for AI safety. In this paper, they warned that AI is evolving so rapidly, we need to make sure that we can keep visibility into its thinking, and we're in that window right now. Well,
they already don't know how things
>> like with agents, it kind of gives you
that like pulling the data, doing this
thing, and it seems a little bit cheeky and cute, and kind of like we take that for granted, but the
paper's point is there's going to be a
point where AI is so smart. It's going
to be like, "Oh, you want something?
Here you go."
>> Right.
>> And it doesn't ask any questions. It
doesn't follow up. It just
>> Why is there blood on the edge of my
cup, Chad?
>> Yeah.
>> So it comes to that point. So,
it's like as we're progressing, we also
need to make sure we have visibility in
its train of thought, in its logic, as
it's thinking through these problems.
What is it thinking? What uh context is
it using? And things like that. So,
there's all these nuances that we don't
necessarily understand. And then when a
new update comes and we get excited
about agents, we now get these warnings
of like, well, we don't know how it
thinks. And you got to make sure it
doesn't just go to all your spam emails
and just reply positively to all the
African prince emails. So, it's like there's these balances. It's so funny how this is
always cat and mouse. This is always
like as he's releasing this, all the
scammers in the world are like, "Cool,
let's download it. Let's figure it out.
Let's try to jailbreak this." Like,
there's going to be a way. We're going
to sucker some people into this. It is
so inevitable. And then the white hat
hackers have to come in and go, okay, how's this being abused? How do we
make sure that we shut it down? It's
wild, man. I wish it weren't so, but it is. You're always going to get people trying to do things with nefarious intent. Every tool: you can use a knife to slice up some
salmon for your sashimi or you can stab
your neighbor to death. So,
>> I can't find it in the chat, but
somebody asked the question, when AI
gets so advanced, why would it need
humans?
>> Doesn't necessarily need humans now or
at any time in the future. This is why
the alignment debate exists in the first place: will AI be comfortable
remaining a tool or will AI hit a point
where it's just like nah, thanks for
playing. And like in Her, they're like,
"You guys are just boring. Like, these
are not interesting problems." And they,
you know, they run off into the cosmos
and they beam themselves or turn
themselves into a beam of light and
just, you know, shoot their code off
into the sky. It's entirely possible.
There's actually a really interesting
comic. I forget what it's called now,
but it had a really interesting idea
where, okay, what if there is a world
where either AI NPCs become
self-aware or you can upload your
consciousness onto a server? Either way,
that server would have energy needs and
so they would need to be brokering with
the people that live outside the server.
>> It was a really interesting idea. All the
political shenanigans that go on between
the two, but because they're so powerful
and they're on the grid, they're in like
this Mexican standoff of the people that
live inside the servers are like, "We
will shut down your entire technological
world." And the people outside are like,
"We'll unplug you." And so there's this
constant like tension of like can we
cooperate or not?
>> I think the more that AI advances, I'm
excited. I see the opportunities. I see
the alpha. I read a paper this morning, as a matter of fact, that talked
about how the leverage that we have is
now increasing. And it gave the most
basic example of like if you want to
lift a rock up, you got to get a stick
and put a rock there and you can use
another rock and you can prop it up. The
longer the pole is, the more leverage
you have and the more you can kind of
lift. And it's like with AI, it's giving
everybody that extended lever to
actually move a lot more things and lift
heavier problems. But on the flip side,
as AI is getting smarter, I look at Zuckerberg, after I just saw Superman, and he does have Lex Luthor eyes. I do think he's trying to beat Sam Altman. I
do think there's a level of, like, I want to get there first. And when you have that level of ambition,
there are certain things you might
sacrifice to get there. So there might
be one last security check that he might
not do in order to go for speed. And
then before we know it, we unleash
something that has more harm than good.
>> There's no doubt that is a very real
possibility. None of this stuff is going
to be only up, only positive. There are
going to be downsides. There's going to
be extreme risks, extreme danger. And
I'm very fatalistic about this. Just
like given that technology is a promise
of a better tomorrow and we show an
unbroken chain of 500,000 years of we
don't stop. Like, no matter what: fire kills people? Doesn't matter. It also allows us to stay warm and to ward off predators and to release calories from otherwise difficult-to-eat foods. It's
like, so we are going to do it. Nuclear energy is going to melt some people along
the way. Doesn't matter. It makes things
better. We are going to do it. We'll
slow it down every now and then, but
we're not going to stop. Same is going
to be true. AI is an inevitability.
Robotics is an inevitability. Game
theory tells you it is so because it's
so important from a warfare perspective.
We can't stop ourselves because the
human mind is so limited. I guarantee we
are going to upgrade ourselves whether
biologically, synthetically, or both.
Probably both because so few people
actually want to die. We are going to
try to extend human life indefinitely.
Like these things are going to happen.
It just becomes a question of on what
timeline. Is it in the next uh 20 years?
Is it in the next 2,000 years? Is it the
next 20,000 years? But unless we go
extinct, it is going to happen. Like, we may reset every now and then and literally blow ourselves back to, you know, whatever, 100,000 years ago. Like Einstein's famous quote: I don't know what World War III will be fought with, but I know the war after that will be fought with sticks and stones. You may occasionally reset, and there's so many people with the idea of the lost civilizations, that that's exactly what happened, but just enough people survived that we speedran getting back up,
and if anybody's ever played a survival
game you realize man those first few
hours, you're so clumsy, you don't know what you're doing. You're misallocating time
and energy to things that don't really
yield anything. You die and if you're
playing like a roguelike, then you lose
everything. But you know what to do now.
You can do what maybe took you 5 hours
the first time takes you like literally
20 minutes. Assuming that we get into that kind of loop of civilization, and that we leave enough clues for the next generation, it's like, you may have
to start over, but you start over a lot
faster.
>> I think we get there no matter what the
timeline is. I happen to think that
we'll get there very