Sam Altman vs Elon Musk The Truth About the AGI War No One Is Telling You
JSUJ-gMXMDM • 2025-12-06
You're watching two billionaires race
towards something that could change
humanity forever. And you might think
they're on the same team. But here's
what most people miss. They're building
completely different visions of our AI
future. And one of them might actually
be right. I've spent weeks digging
through their statements, funding
rounds, and public battles. And what I
found is honestly shocking. These aren't
just competing companies. This is a
clash of philosophies that could
determine whether AGI saves us or breaks
us. Welcome back to bitbiased.ai, where we do the research so you don't have to. Join our community of AI enthusiasts: click the newsletter link in the description for weekly analysis delivered straight to your inbox. So, in this video, we're diving deep into the race between Sam Altman's OpenAI and Elon Musk's xAI.
We'll break down their completely
different approaches to building AGI,
who's spending what, and most
importantly, whose strategy might
actually keep us safe.
By the end, you'll understand why this
race matters way more than just another
tech rivalry.
First up, let's talk about Sam Altman and what makes OpenAI's approach so different from what came before. Sam Altman and OpenAI: the cautious visionary. Sam Altman has been steering OpenAI since 2015, and if you've used ChatGPT, you've already experienced his vision firsthand. But here's what makes his approach interesting.
OpenAI's entire mission revolves around one core idea: ensuring that AGI, the kind of AI that can outperform humans at most economically valuable work, actually benefits all of humanity. That's not just marketing speak. It's literally written into their charter. Under Altman's leadership, OpenAI gave us ChatGPT in November 2022, and the world hasn't been the same since. They followed up with GPT-4, DALL-E for image generation, Codex for programming, and more. But wait until
you see how Altman thinks about the
timeline here. He recently said
something that should grab your
attention.
OpenAI now feels confident they know how
to build AGI as we've traditionally
understood it. He's predicting that the
first AI agents might join the workforce
as early as 2025, dramatically boosting
business output. Now, before you start
worrying about robots taking over,
Altman's view is actually more nuanced
than the Hollywood version.
He told Time magazine something
fascinating.
AGI will probably get developed sooner
than most people think, and it will
matter much less.
What he means is that society will adapt
gradually as AI tools speed up growth.
It's not going to be some overnight
transformation where everything changes
at once.
Instead, he sees it as a long continuation toward a fully superintelligent future.
But here's where it gets interesting.
Altman has also written that superhuman
AI is probably the greatest threat to
the continued existence of humanity.
So, he takes the risk seriously,
really seriously.
OpenAI's approach is what they call
iterative development. They gradually
release powerful models, learn from real
world use, and fund extensive alignment
research.
It's like they're building the plane
while flying it, but they're checking
every bolt along the way. And this
philosophy shapes everything they do.
Altman told Bloomberg something that
perfectly captures his mindset. The only
way to address AI risks is to ship
product and learn from experience.
In other words, you can't just theorize
about safety in a lab. You need to put
advanced AI tools into the world with
safety oversight, of course, and
understand how they actually behave in
practice.
This approach has required massive
resources.
Microsoft has poured over $13 billion
into OpenAI, making them the company's
largest backer by far. OpenAI operates as what they call a capped-profit company under a nonprofit board. It's a hybrid structure designed to raise enough capital to compete in the AI race while still prioritizing safety over pure profit. And make no mistake, they need that money. Altman has openly said OpenAI won't turn a profit until around 2029, when they're projecting roughly $100 billion in
revenue. But there's something else
about Altman that sets him apart. When
someone asked if the public should vote
on super intelligence decisions, he
replied, "Yes, I really do. I hope we
can start a lot more public debate very
soon about how to approach this." That's
not something you hear often from
Silicon Valley CEOs racing to build
world-changing technology.
It suggests he genuinely wants society
involved in these decisions, not just a
handful of tech leaders deciding
humanity's future behind closed doors.
Elon Musk and xAI: the aggressive disruptor.
Now, let's talk about Elon Musk because
his story with AI is way more
complicated than most people realize.
Here's something that might surprise
you. Musk actually co-founded OpenAI
back in 2015, right alongside Sam Altman,
but in 2018, he walked away. The
official reason was a conflict of
interest with Tesla's own AI work, but
there were also disagreements about
OpenAI's direction that would become
very public later on. Fast forward to July 2023, and Musk launches xAI with an ambitious mission: to understand the true nature of the universe and create AI that he believes is safe and truthful.
He assembled a dream team of engineers
from OpenAI, DeepMind, Tesla, and
Twitter. And then did what Musk does
best.
He went absolutely massive on resources.
We're talking tens of thousands of
NVIDIA GPUs and building something
called the Colossus Supercomputer
Cluster in Tennessee. Now, pay attention
to Musk's timeline here because it's
wild. In an April 2024 interview, he predicted that if AGI means an AI smarter than the smartest human, it might be achieved next year or within two years, meaning by 2025 or 2026. He even suggested that his Grok 5 model, due by the end of this year, could be true AGI.
That's an incredibly aggressive
prediction, especially compared to most
AI researchers who think we're decades
away. But here's what makes Musk's
approach different. He's not just
talking about timelines. He's backing
them with unprecedented resources.
Reports suggest xAI has raised funding rounds valuing it near $200 billion. In mid-2025, they raised $5 billion in debt plus $5 billion in equity. SpaceX committed another $2 billion. And then, in a move that shocked everyone, Musk merged xAI with Twitter, which he'd renamed X, in a $33 billion deal. The goal: combine social data with AI development.
Now, this next part is where things get
really interesting, and it reveals a
fascinating contradiction in Musk's
thinking. For years, he's been warning
that unchecked AI is our biggest
existential threat.
He's called it summoning the demon and
regularly advocates for regulation,
international oversight, and even
physical off switches in robots.
He signed open letters calling for
pauses on developing AI models beyond
GPT-4 levels. Yet at the same time, he's
racing to build some of the most
powerful AI systems on the planet. His explanation: he's building what he calls TruthGPT, a maximum truth-seeking AI that will naturally align with human values because, in his words, humanity is just much more interesting than not-humanity. His theory is that a maximally curious AI would naturally stay pro-humanity because we're fascinating.
Critics call this approach naive, and
you can see why there's tension here.
Musk wants to slow AI development
globally, but he's simultaneously
sprinting ahead with xAI.
Some analysts worry this could actually
intensify the AI arms race rather than
make it safer. But Musk argues that
someone needs to build a counterbalance
to what he sees as biased or politically
correct AI from companies like OpenAI.
And speaking of OpenAI, Musk hasn't been quiet about his feelings toward his former company. He even sued OpenAI, accusing it of breaking its nonprofit, open mission by partnering exclusively with Microsoft. In April 2024, he tweeted his confusion that the nonprofit he helped back became what he called a $30 billion for-profit operation. His point: OpenAI's mission fundamentally changed when it raised commercial funding, whereas he positions xAI as more independent.
Musk is also integrating xAI deeply into his other ventures. Tesla has already started using Grok chatbots in customer service, and he recently proposed a shareholder vote for Tesla to invest directly in xAI. He even noted, "If it was up to me, Tesla would have invested in xAI long ago." This integration across his companies, Tesla and SpaceX, gives xAI unique advantages in compute power, talent, and investment that few other AI companies can match. The clash: comparing their approaches. So, now that you understand
both players, let's break down how their
approaches actually differ because this
is where things get fascinating.
These aren't just two companies building
similar products with different names.
They're pursuing fundamentally different
philosophies about how to reach AGI
safely.
Goals and vision. Altman's OpenAI is all about broad benefit and safety. Their
charter explicitly focuses on safely
developing AGI for humanity's future.
Altman sees AGI as a point on a
continuum, not some magic moment where
everything suddenly changes.
He envisions AI agents that boost
productivity while society continues to
adapt and evolve.
It's gradual transformation, not instant
revolution. Musk, on the other hand,
frames xAI as a guardian against AI dystopia. He talks about building AI that's truth-seeking and aligned by
default, specifically to prevent other
companies from creating biased or
dangerous systems. But Musk also openly
acknowledges the technology's disruptive
power. He said AI could make normal work
obsolete and fundamentally reshape
society.
Both men predict AGI-like systems soon. Altman points to 2025 for the first
workforce integrated agents. Musk says
2025 to 2026 for systems smarter than
humans.
But their expectations about what
happens next are completely different.
Development philosophy. This next part
reveals why their rivalry matters so
much. Altman emphasizes iterative
development with constant safety checks.
OpenAI continuously refines models,
collaborates with safety researchers,
and cautiously deploys new versions.
They have an internal safety board and
other oversight groups.
Altman has even co-led open letters and
summits about AI risk, trying to build
industry consensus around responsible
development.
Musk cares about safety, too, but his
method is almost opposite.
He advocates for strict external
regulation, government oversight, and
physical off switches,
but then he builds aggressively
in-house.
His motto seems to be: build the best AI quickly, but guide it with truth-seeking
principles. Critics worry this approach
could accelerate the very arms race Musk
claims to fear. Here's the key
difference. Altman leans on broad
collaboration with Microsoft, academic
safety groups, and even competitors.
He's trying to build consensus and
shared safety standards. Musk leans on
aggressive in-house development and uses
his massive social media platform to
shape the narrative. He's betting that
he can build faster and better than
anyone else and that his maximally
curious approach will inherently be
safer. The funding battle. Now, let's
talk money because the scale here is
absolutely staggering.
OpenAI started as a nonprofit but later
created a capped profit arm to raise the
capital needed to compete.
Microsoft's $13 billion investment gave
OpenAI access to cloud computing and
research resources that would otherwise
be impossible to afford. Current reports
suggest OpenAI is planning share sales
that could value the company at around
$500 billion.
Yes, you heard that right: half a trillion dollars. But Altman admits they're burning cash fast. OpenAI discloses losses of billions per year, with profitability not expected until 2029.
They're essentially betting they can
reach AGI before running out of runway
and that once they do, the value created
will dwarf the investment. Musk's
funding model is different, but equally
massive. He's using his own empire and partners to bankroll xAI. SpaceX committed $2 billion to a $5 billion fundraise just to keep pace with rivals. Reports say xAI raised $5 billion in debt plus $5 billion in equity in 2025, and some rounds have valued xAI near $200 billion.
The Twitter acquisition, for $33 billion, gives xAI access to vast amounts of social data. Analysts predict xAI will spend around $18 billion on data centers in the near future. So both companies
are burning tens of billions, but the
models are different. Altman's approach
is partner and share the costs with
Microsoft and indirectly with users who
pay for services. Musk's model is
supercharge with heavy stakes from his
own ventures and allied investors.
Both are incredibly risky and both
require outcomes that justify these
astronomical investments.
Openness and transparency.
Here's where things get ironic.
OpenAI's name suggests openness, but in practice, they've closed most of their technology. GPT-4's inner workings aren't public.
Altman says this is necessary for safety
and to recoup development costs, but it
frustrates transparency advocates, and
well, it especially frustrates Musk.
Musk has loudly complained that OpenAI went closed-source and abandoned its original nonprofit ethos. He originally funded OpenAI on the promise of open collaboration. When they shifted to a capped-profit model and partnered exclusively with
Microsoft, he felt betrayed. In a
Twitter outburst, he questioned how
OpenAI could transform into a $30
billion for-profit entity while still
claiming to serve humanity's interests.
In contrast, Musk claims xAI will be more transparent. Early Grok versions have been open-sourced, and he suggested xAI won't be as beholden to corporate partners. Whether this remains true as xAI scales
up is an open question. Companies often
start open and gradually close as
competitive pressures mount. Ethics and
safety. Both leaders publicly
acknowledge AGI's risks, but their
approaches to managing those risks
differ dramatically.
Altman has said misaligned AGI could cause grievous harm, and has invested heavily in alignment research at OpenAI.
He joined other CEOs in advocating for
pauses on AI training to assess safety.
He's often described AI as potentially
the most impactful technology in human
history while also acknowledging the
perils when it's unmanaged.
Musk goes even further in his warnings.
He regularly calls for regulation,
pauses, and global coordination on AI
safety.
He's warned that AI could eliminate or
enslave humanity if left unchecked. His
calls for oversight and even an
international governing body are
stronger than almost anyone else's in
the industry. But here's the tension.
Musk wants to slow everyone else down
while he races ahead.
Critics point out this contradiction.
How can you advocate for pausing AI
development while simultaneously
building some of the most aggressive AI
projects on the planet?
Musk's response is that he's building a
supposedly benevolent AI that will serve
as a counterweight to irresponsible
development elsewhere.
Whether that argument holds up is
something we'll only know in hindsight.
Public image and messaging.
The way these two leaders communicate
couldn't be more different, and it
shapes how the public perceives this
race. Altman is measured and careful. He
gives detailed interviews explaining
OpenAI's road map and discussing risks
in depth.
In his late 2024 essay, he even
expressed gratitude to friends who
helped him during his brief
controversial ouster and return as CEO.
His style emphasizes learning, trust,
and collaborative problem solving. Musk,
by contrast, is theatrical and
outspoken.
He'll discuss AI on Fox News or on X,
often mixing technical insights with
culture war commentary.
He frames projects like TruthGPT as a response to politically correct AI, playing into broader cultural debates.
His style generates headlines, whether it's predicting a 10 to 20% chance AI goes catastrophically wrong or claiming certain AI models are too biased.
This difference in communication style
matters because it shapes public
discourse.
Altman positions OpenAI as the responsible, safety-conscious player that happens to be winning. Musk positions himself as the truth-telling outsider fighting against establishment bias.
Both narratives resonate with different
audiences and both influence how
regulators and the public think about AI
development.
Key takeaways: what this means for all of us. So what
should you take away from all this?
Let me break down the most important
points that will shape how this race
unfolds. Similar timelines, different
confidence.
Both Altman and Musk expect human-level AGI within the next few years. Altman's OpenAI is releasing increasingly powerful models, GPT-4 in 2023 with GPT-5 rumored, and he speculated that 2025 could see major AI integration into the workforce. Musk echoes this timeline, telling Reuters in 2024 that we might see AI smarter than the smartest human by 2025 or 2026. Whether these
predictions are accurate or overly
optimistic is debatable, but the fact
that both leaders are making them should
tell you something about how fast this
technology is advancing. Competing
ethical frameworks. Altman leads a company whose charter is explicitly altruistic: AGI should benefit all of humanity.
He stresses benefit sharing, iterative
safety research, and working with
experts across institutions. His
approach is collaborative and
process-driven. Musk also cares deeply
about humanity's future, but he frames
it through the lens of risk avoidance
and independent oversight.
He wants external regulation, but
doesn't trust others to build safely.
So, he's building his own system with a
truth-seeking mandate, betting that
intellectual curiosity will naturally
align AI with human values.
Altman invests in alignment research within OpenAI. Musk funds external
bodies and now leads his own
organization with a completely different
safety philosophy.
Rival business models. OpenAI is tightly
integrated with Microsoft and actively
selling AI products: ChatGPT, Azure integration, enterprise solutions. It's
partly a software company already
generating revenue and building toward
profitability by 2029. Their strategy is
to iterate on GPT models and AI agents,
refining safety and practice while
serving millions of users. xAI started as more research-focused and hardware-hungry, leveraging AI across Musk's Tesla and SpaceX ventures. Musk's strategy is to build a new AI brain in Grok, make it open and powerful, and distribute it widely through X, Tesla vehicles, and other platforms. It's a more integrated cross-platform approach that could give xAI unique advantages if the execution
works.
The unprecedented funding race. The amount of money flowing into AI
right now is genuinely unprecedented in
tech history.
OpenAI has collected well over $100
billion through partnerships and stock
sales. xAI has tapped sovereign wealth funds from places like Saudi Arabia and Qatar, along with corporate funds, to raise tens of billions. Companies in this space are now valued in the hundreds of billions, and they're projecting energy needs measured in petawatts. That's power consumption
comparable to entire countries. This
isn't sustainable forever. One of these
approaches will either succeed
spectacularly or fail spectacularly. And
the fallout will reshape the entire tech
industry.
The stakes couldn't be higher.
Transparency on risks.
Here's something important. Neither
leader downplays the dangers. Altman has
used phrases like the greatest threat to
humanity when discussing misaligned AGI.
Musk has been warning about existential
AI risk for years. Both acknowledged
that getting this wrong could be
catastrophic. The difference is in their
responses.
Altman focuses on controlling the rollout through internal safety processes
and gradual deployment. He wants more
public debate and says society should
have input on super intelligence
decisions. Musk pushes for external
regulation and global coordination, but
also builds aggressively himself.
When asked about oversight, Altman
suggests starting public debates soon.
Musk has sometimes clashed with regulators on other issues, like autonomous vehicles, but on AI, he's actually welcoming more government involvement.
The bigger picture. Who's right? So
here's the question that matters most.
Whose approach is more likely to lead us
to safe AGI? And honestly, we don't know
yet. That's the uncomfortable truth.
Altman's iterative, collaborative approach
has the advantage of learning from real
world deployment. By putting AI tools in users' hands and studying how they're actually used, OpenAI can identify problems and refine solutions in practice. The downside: this means the world becomes the testing ground, and mistakes could have serious consequences before they're fixed.
Musk's aggressive, truth-seeking
approach has the advantage of speed and
integration across multiple industries.
If he's right that a maximally curious AI naturally aligns with human values, xAI could leapfrog competitors in both capability and safety. The downside: it's essentially a bet on one philosophical principle, truth-seeking, to solve alignment. If that
principle isn't sufficient, building
powerful systems quickly could make
problems worse, not better. What's clear
is that both leaders are fully committed
to this race. They're each spending tens
of billions of dollars, assembling the
best talent in the world and pushing the
boundaries of what's technically
possible.
In the next few years, we'll see whose
approach wins in advancing AGI, and more
importantly, whether either can truly
manage the risks they're both so vocal
about.
This isn't just a Silicon Valley
rivalry. This is two different visions
for humanity's future with artificial
intelligence. And the outcome will
affect everyone on the planet. The
decisions being made right now in
boardrooms and research labs will shape
the world your children and
grandchildren inherit. The race to AGI
between Sam Altman and Elon Musk
represents more than just competing
companies. It's a fundamental clash of
philosophies about how to build powerful
technology responsibly. Altman's
cautious, iterative, collaborative
approach versus Musk's aggressive,
truth-seeking, integrated approach. Both
have merit. Both have risks. What makes
this moment so critical is that neither
leader is naive about the stakes. They
both understand that misaligned super
intelligent AI could pose existential
risks to humanity.
They both want to solve the alignment
problem. They just disagree profoundly
on the best path forward. As this race
intensifies over the next few years, pay
attention to how these different
philosophies play out in practice. Watch
how OpenAI's partnership with Microsoft
evolves. See whether xAI's integration
across Musk's companies gives them
unexpected advantages.
Monitor how regulators respond to both
approaches. And most importantly, stay
informed about the safety research and
alignment progress both organizations
are making. Because ultimately, this
isn't about Sam Altman versus Elon Musk.
It's about humanity figuring out how to
create something more intelligent than
ourselves without losing control of our
future.
And right now, these two leaders are at
the forefront of that challenge. If this
breakdown helped you understand what's
really happening in the race to AGI, let
me know in the comments which approach
you think is more likely to succeed. And
if you want to stay updated as this
story continues to unfold, make sure
you're subscribed because this is just
the beginning. Thanks for watching and
I'll see you in the next one.