OpenAI’s First Hardware Device Revealed: Sam Altman & Jony Ive Are Changing Everything
70un8daysQs • 2025-12-08
You're probably tired of your phone
constantly bombarding you with
notifications, feeling like you're glued
to a screen just to talk to AI. And
here's the thing. I've been following
Sam Altman and Jony Ive's secretive
hardware project for months now, digging
through every public statement, every
leak, every supplier report.
What I found is honestly mind-blowing.
They're building something that could
completely change how we interact with
AI. And no, it's not another smartphone.
Welcome back to bitbiased.ai,
where we do the research so you don't
have to.
Join our community of AI enthusiasts.
Click the newsletter link in the
description for weekly analysis
delivered straight to your inbox. So, in
this video, I'm going to break down
everything we know about OpenAI's
mysterious AI device, from its
screenless design to why it could
actually replace your phone for certain
tasks. You'll understand exactly what
Altman and Ive are building, when you
can expect to see it, and why this might
be the most important AI hardware launch
in the next 2 years.
First up, let's talk about who's
actually behind this project because
this partnership is kind of a big deal.
The power duo behind this device.
When you combine the CEO of OpenAI with
Apple's legendary design genius, you
know something interesting is happening.
Sam Altman has been leading OpenAI since
2019, the company that gave us ChatGPT
and basically kickstarted the entire AI
boom we're living through right now.
Before that, he ran Y Combinator, one of
Silicon Valley's most prestigious
startup accelerators.
The guy knows how to spot and build
transformative technology. Then there's
Jony Ive. And honestly, if you've ever
used an iPhone, iPod, iMac, or Apple
Watch, you've held his work in your
hands.
For three decades at Apple, Ive shaped
the design language of some of the most
iconic products ever made. He left Apple
in 2019 to start his own design firm
called LoveFrom. And here's where it
gets interesting. He wasn't just
enjoying retirement. In May 2025, OpenAI
made a massive move. They acquired Ive's
startup, io Products, for roughly $6.5
billion.
Yes, with a B.
This wasn't just buying a company. It
was bringing Jony Ive's entire design
team, including about 50 former Apple
engineers and designers, directly into
OpenAI.
Think about that for a second. The same
team that designed the iPhone is now
working inside OpenAI, reimagining what
computing looks like when AI can see,
think, and understand the world around
us. What makes this partnership so
powerful is the complementary strengths.
Altman brings OpenAI's cutting-edge AI
research and the most advanced language
models on the planet.
Ive brings decades of industrial design
expertise and an obsession with
simplicity that's almost legendary in
tech circles. Together,
they're asking a fundamental question.
Now that AI can truly understand us,
what should the interface look like?
What this device actually is and why
it's so different.
All right, here's where it gets really
interesting.
Forget everything you know about
smartphones, tablets, or laptops because
this device breaks all the rules.
Based on multiple public interviews and
industry reports, what Altman and Ive
are building is completely screenless.
Let that sink in for a moment. No
display at all. The device is roughly
pocket-sized, comparable to an iPhone in
dimensions, though some rumors suggest
it could be even smaller, closer to the
old iPod Shuffle. But here's the kicker.
Instead of a screen, this thing is
packed with cameras and microphones that
give it what's called ambient
intelligence.
It's constantly aware of your
environment, your location, your
routines, and your surroundings.
Think of it as an AI assistant that
actually understands context.
Now, you might be wondering, if there's
no screen, how do I get information from
it? Great question. The device
communicates through audio, either via
built-in speakers or Bluetooth
connection. And there's even speculation
it might be able to project images onto
nearby surfaces. Imagine it projecting
information onto your hand or your desk
when you need to see something. Wild,
right? But wait until you hear what's
powering all of this. Under the hood,
this device is deeply integrated with
OpenAI's most advanced AI models. We're
talking ChatGPT-level intelligence and
beyond, but designed to work proactively
on your behalf.
According to reports, it's built to know
everything you've ever thought about,
read, or said, and act on it. That
sounds intense, but the vision is
actually the opposite of overwhelming.
Here's the philosophy. Instead of your
phone constantly pinging you with alerts
and notifications, demanding your
attention every few minutes, this device
quietly handles tasks in the background.
It filters out the noise and only
interrupts you when something truly
matters.
Forbes India describes it as having
spatial awareness and contextual
awareness of your patterns. It learns
when to speak up and when to stay
silent. The intended use case is
fascinating. Altman and Ive describe it
as a calm companion. And I love that
phrase. You might simply speak a command
into the air and the device books
appointments, answers questions, or
summarizes information without you ever
looking at a screen. PC Gamer put it
perfectly. If you trust your AI to do
things for you, you only need to briefly
ask it to do something. You won't need
to see it in action. Think about how
radical that is. Most of our tech today
is designed to capture our attention.
This device is designed to stay out of
your way while making your life easier.
It's a complete inversion of the current
smartphone paradigm. The timeline: how
fast this is moving.
What's remarkable about this project is
how quickly it's progressed from concept
to reality.
Let me walk you through the key
milestones because this thing is moving
fast. Back in 2023, around 2 years ago
now, Sam Altman and Jony Ive started
having conversations. According to
OpenAI's announcement, they were
exploring tentative ideas and
explorations about new forms of
computing. This was right when ChatGPT
was exploding in popularity and they saw
an opportunity to rethink how we
interact with AI. By mid-2024, Ive
wasn't just talking anymore. He
co-founded io Products with design
partners Scott Cannon, Evans Hankey, and
Tang Tan. These are all heavy hitters
from the Apple design world. They
assembled a team of hardware and
software engineers, many of them former
Apple staff, and started building actual
prototypes.
This wasn't a research project. This was
a real company building real products.
Then came May 2025 and that's when
OpenAI dropped the bombshell.
They announced the acquisition of io
Products for approximately $6.4 to $6.5
billion. The entire io team merged into
OpenAI, though LoveFrom, Ive's creative
agency, remained independent as a design
partner.
Sam Altman publicly stated this gave
OpenAI access to the best hardware and
software engineers, best technologists,
experts in product development and
manufacturing. That's not hyperbole. He
literally brought Apple's A-team into
OpenAI. Now, here's what gets me
excited.
By late 2025, in a public interview,
Altman and Ive revealed that they have
working prototypes,
not concept sketches, not 3D renders,
actual functioning devices that you can
hold and use.
Altman called them jaw-droppingly good.
He even mentioned they've iterated until
the device feels playful enough. And
here's a funny detail. It passed Ive's
famous "lick it or take a bite" test.
That's an actual test Ive uses. When a
product design is so refined and
appealing, it should make you want to
lick it or bite it. I'm not making this
up. So, when can you actually buy one?
All sources point to roughly 2 years or
less from now. Jony Ive himself said
consumers could see the device in less
than 2 years from late 2025.
The Verge and MacRumors both reported the
same timeline. If we're being
conservative, we're looking at a 2027
launch, possibly even late 2026. That
might sound far away, but in hardware
development terms, especially for
something this ambitious, that's
incredibly fast.
To recap the timeline, they went from
initial conversations in 2023 to
founding a company in 2024 to a
multi-billion dollar acquisition in mid
2025 to working prototypes by late 2025
with an expected launch in 2026 or 2027.
That's lightning speed in the hardware
world. The players: who's making this
happen.
This isn't a side project in someone's
garage. The scale of companies and teams
involved here is massive and
understanding who's building this gives
you insight into just how serious this
is. At the center of everything is
OpenAI.
As the parent organization, they're
leading the entire effort. Their
research and engineering teams are
integrating their most advanced AI
models with this new hardware.
Sam Altman is the executive sponsor, and
OpenAI has publicly acknowledged the
project in their blog posts and
interviews.
This is an official OpenAI initiative
with full company backing.
Then you have Jony Ive's design empire.
LoveFrom, his independent design firm,
provides creative leadership and
oversight.
But remember, Ive's startup io Products
brought together the actual team of
builders: hardware engineers, software
developers, and designers.
When OpenAI acquired io Products, they
got access to more than 50 former Apple
engineers and managers. These are the
people who actually built the iPhone,
iPad, and Apple Watch.
Now, they're sitting inside OpenAI's
hardware division under Ive's design
direction. But here's where it gets even
more interesting, the manufacturing
side.
Building beautiful prototypes is one
thing. Manufacturing millions of units
is an entirely different challenge.
OpenAI clearly learned from Apple's
playbook here. According to Reuters,
they've signed deals with Luxshare,
which is one of the major assemblers of
Apple devices. Luxshare builds iPhones,
AirPods, and other Apple products at
massive scale. They know how to do
precision manufacturing for consumer
electronics.
OpenAI has also tapped Goertek, a supplier
that makes components for AirPods and
HomePods, specifically for parts like
speakers.
These partnerships are crucial. They
give OpenAI access to the same world-class
supply chain that Apple uses.
It's like they're leveraging Apple's
entire manufacturing ecosystem without
actually being Apple. There's one more
interesting connection worth mentioning.
The Emerson Collective, founded by
Laurene Powell Jobs, Steve Jobs's widow,
hosted
the demo event where much of this
information was discussed.
Now, they're not developers or
manufacturers, but it shows the Silicon
Valley connections and prestige
surrounding this project. What's notably
absent is any connection to Altman's
other ventures. There's no public link
between this device and Worldcoin or any
of his other projects. This appears to
be purely an OpenAI initiative, which
means it has the full weight of one of
the world's most valuable AI companies
behind it. In summary, you've got
OpenAI's AI expertise, Apple's former
design dream team, and Apple's actual
manufacturing partners all working
together to build this thing. That's a
combination that rarely exists in the
tech world.
Why this device needs to exist right
now. You might be thinking, "We already
have phones with AI assistants. Why do
we need another gadget?"
That's fair, but Altman and Ive have a
compelling answer rooted in real
problems that I bet you've experienced
yourself.
Think about your relationship with your
smartphone right now. Be honest. How
many times today has it interrupted you
with a notification you didn't need?
How many apps are constantly fighting
for your attention with badges, alerts,
sounds, and vibrations?
Altman describes using modern devices as
being like walking through Times Square,
dealing with all the little indignities
along the way.
That constant sensory assault isn't
making our lives peaceful and calm. It's
the opposite.
This new device is being positioned as a
direct remedy to smartphone overload.
It's a calm companion that only
interrupts you when absolutely
necessary.
No endless scrolling, no notification
bombardment, no distraction warfare.
Just an AI assistant that quietly does
work in the background and respects your
attention. But there's a bigger reason
this device makes sense now, and it has
everything to do with timing. We're at
this unique moment in tech history where
the AI brains have finally caught up to
our imagination.
Large language models like ChatGPT can
understand context, have conversations,
and perform complex tasks.
The capability exists. What's missing is
the right physical interface to make
that capability feel natural and
intuitive. Right now, most people
interact with AI by typing into a chat
box on their phone or computer. That
works, but it's not elegant. It's not
ambient. It still requires you to stop
what you're doing, pull out a device,
navigate to an app, and type. Axios
makes a great point. An elegantly simple
gadget could be the way that everyday
people begin to interface fluently with
AI. In other words, we have incredible
AI technology, and this device aims to
be the intuitive body that makes it
accessible to everyone. There's also a
gap in current offerings that's worth
noting.
Major tech companies have tried to
create new AI interfaces. Amazon's Alexa
devices, Google Nest, smartwatches, even
AR glasses. But none have truly
redefined how we interact with AI on a
daily basis. Even well-funded startups
like the Humane AI Pin have struggled
and largely flopped. OpenAI's project is
betting that a unique design-driven
approach led by the person who designed
the iPhone can succeed where others have
failed. Reuters explicitly notes that a
device like this could eat into the
markets of Apple and other consumer
electronics makers by challenging
smartphone dominance. That's not just
wishful thinking. It's a real
possibility if they get the execution
right. The timing also matters from an
AI development perspective.
Altman has talked about how we need to
shape how AI enters daily life before it
grows out of control or becomes
something we can't manage.
Starting this project when GPT-style AI
became widespread means they can launch
the product as these models mature even
further. So the market need is clear.
People want a simpler, smarter interface
to AI that reduces clutter and respects
their time. And the technology has
finally reached a point where building
such a device is actually possible.
That's why this is happening now. The
design philosophy: simplicity meets joy.
If you want to understand what makes
this device different, you need to
understand the design philosophy driving
every decision. And honestly, it's
refreshing in a world where tech gadgets
often feel like they're competing to add
more features rather than remove them.
Jony Ive has repeatedly emphasized what
he calls naive simplicity. He loves
designs that teeter on appearing almost
naive in their simplicity. The goal
isn't to impress people with complexity
or to show off how many features you can
cram into a small space.
The goal is to create something that
looks and feels like a simple tool,
almost obvious in hindsight.
When someone sees it, they should think,
"Of course, that's what it should be."
Altman is completely aligned on this
vision. He wants people to look at the
device and respond with, "That's it."
Because of its simplicity. But here's
what's interesting. This isn't sterile
minimalism.
There's a strong emphasis on whimsy and
delight woven throughout the design.
Ive has said explicitly that the device
should make people smile and feel joy.
It shouldn't try to impress through
complexity. It should charm through
playfulness.
Altman admitted he was initially
skeptical about adding whimsy to a
serious AI product, but now he says he's
so happy to have some whimsy back in
technology.
In practice, this means curved, friendly
shapes and interactions that feel
playful rather than robotic. Remember
that lick or bite test I mentioned
earlier? That's not just a joke. It's a
genuine philosophy.
When a design is refined enough, it
should be so appealing that you have an
almost instinctive positive reaction to
it. Your brain should register it as
something attractive and desirable at a
visceral level. Now, let's talk about
what they removed because that's just as
important as what they included. The
team has been ruthless about stripping
away anything unnecessary.
Altman noted that the degree to which
Jony has chipped away at every little
thing that this doesn't need to do is
remarkable.
This is why there's no screen. Screens
are distracting.
This is why there's no visible camera
array or cluster of buttons. Fewer elements
mean fewer distractions and a calmer
experience.
The hardware and software are being
co-designed in lockstep, which is rare
even in big tech companies.
OpenAI's models run in the cloud while
the hardware provides seamless input and
output.
The device is specifically intended to
work closely with ChatGPT's artificial
intelligence models, according to
Reuters. Forbes India describes it as an
AI companion built around ambient
intelligence. Here's where the design
gets really clever.
The device's contextual awareness (those
cameras and microphones combined with
AI) allows it to filter information
intelligently.
It only speaks or notifies you when the
AI determines something is actually
important. This isn't just a technical
feature. It's a design decision that
fundamentally changes the user
experience.
PC Gamer explored this concept of trust
in their article, and I think they
nailed it. The entire vibe depends on
trusting your AI to handle tasks without
constant supervision.
If you can genuinely trust it to work on
long-term tasks in the background, the
experience becomes like sitting in a
cabin by a lake rather than the bustling
chaos of city life. That's Altman's own
metaphor, by the way. There's
essentially no traditional operating
system UI as we know it. No touch screen
to navigate, no keyboard to type on,
possibly no buttons at all.
Interaction might be purely voice-based
or even gesture-based with the AI doing
all the heavy computational lifting. As
PC Gamer speculates, you won't need to
see it in action. It will just do it.
That's a radical departure from how we
currently think about using technology.
In summary, the design philosophy
marries Apple-level simplicity and joy
with cutting-edge AI capabilities.
Altman put it perfectly when he said
they want the device to be so simple and
beautiful and playful that users
immediately feel it's just a tool that
empowers them. Behind that effortless
simplicity though is incredibly
sophisticated AI running 24/7 to create
that seamless experience.
What we know from public statements and
reports. Let's talk about what's
actually been confirmed versus what's
speculation.
The team has been remarkably secretive.
No product name has been released. No
official photos have leaked, but we do
have several key sources of information.
The first official word came in May 2025
through an OpenAI blog post. Sam Altman
and Jony Ive announced the io Products
acquisition and hinted at their work
together. They talked about building
products that inspire, empower, and
enable. And Altman praised Ive's design
sensibility as extraordinary.
However, they deliberately kept product
details vague at that point. It was more
of an announcement that something big is
coming without revealing what it
actually was. The floodgates opened in
November 2025 during what's been called
the Emerson Collective Demo Day.
This was a live streamed interview with
Laurene Powell Jobs, where Altman and Ive
spoke at length about the device, though
still somewhat cryptically.
The media coverage from that single
event dominates everything we know
today. The Verge reported they've
settled on a design and the device could
arrive in less than 2 years.
Axios highlighted key quotes like
Altman's hope that people will say
that's it when they see it, emphasizing
the device would be unveiled within 2
years. Multiple outlets published
articles the same day, all based on this
interview. From that event and
subsequent reporting, we got
confirmation on several major points.
The prototypes exist and are, in
Altman's words, jaw-droppingly good.
The design is playful and passed the
famous lick test.
It's aimed to be pocket-sized and
completely screenless.
These details appeared across
TechCrunch, MacRumors, Business Insider,
PC Gamer, and other reputable tech
publications. Supplier reports added
another layer of confirmation.
In September 2025, Reuters, citing The
Information, reported on manufacturing
partnerships.
OpenAI had signed with Luxshare, an
Apple device assembler, to actually
build the hardware at scale.
They'd also approached Goertek for
components like speakers. These reports
confirmed the project's massive scale
and that OpenAI was serious about mass
production using Apple's proven supply
chain. We've also seen consistent
specification hints emerge across
multiple sources.
Reuters noted the device is expected to
be aware of context and designed to work
closely with ChatGPT's artificial
intelligence models.
Various publications have called it an
AI phone without a screen, a third core
device after phones and computers, or
compared it to an iPod-like gadget with
cameras and microphones for
environmental awareness. None of these
descriptions are official marketing
materials, but they paint a remarkably
consistent picture when you look at them
together. The story hasn't changed
across dozens of articles and reports.
That suggests the information is
reliable, even if it's not coming
directly from OpenAI's PR department.
What we don't have yet are the crucial
details that usually define a product
launch.
There's no official name. We're all just
calling it OpenAI's device or the
Altman-Ive gadget. There are no leaked
product
photos or renders. We don't know the
price point, though given the
manufacturing partnerships and design
pedigree, it's likely to be premium
priced.
We don't know exact technical
specifications like battery life,
processing power, or connectivity
options. The team is clearly keeping
those details locked down intentionally.
Altman has hinted that seeing the device
will be an aha moment, suggesting the
reveal is part of the experience.
We'll have to wait for a future official
unveiling to know exactly what it looks
like and all the specifics of how it
works. But here's what we can say with
confidence. This isn't vaporware.
Working prototypes exist. Manufacturing
partners are secured. A concrete
timeline has been given by the creators
themselves. And the vision has been
articulated clearly enough that we
understand the what and why, even if
we're missing some of the how. Let me
bring this all together because there's
a lot to unpack here. Sam Altman and
Jony Ive are building a new kind of
computing device that challenges
everything we assume about how we should
interact with AI. It's a small
screenless ambient AI companion designed
to feel calm and intuitive rather than
demanding and distracting. The device
leverages OpenAI's incredibly powerful
AI software (think ChatGPT and beyond)
combined with Jony Ive's legendary
minimalist design philosophy.
That combination alone is fascinating.
But what makes it potentially
revolutionary is the underlying idea
that the best interface for advanced AI
might be almost invisible, operating in
the background of your life rather than
constantly demanding your attention.
Development has been remarkably rapid.
From initial concept discussions in 2023
to working prototypes in 2025, they've
compressed what usually takes 5 to 7
years in hardware development into just
2 to 3 years. The acquisition of IO
products for $6.5 billion wasn't just a
strategic move. It was open AAI bringing
Apple's design a team inhouse. The
consensus timeline points to a release
in approximately 2 years, somewhere
around 2027 or possibly late 2026. That
might seem far away, but in the context
of bringing a completely new hardware
category to market, it's actually
aggressive.
Until that launch happens, many details
remain speculation and educated guesses
based on public statements.
But what Altman and Ive have said
publicly paints a clear picture. They
want people to respond with, "That's it.
It's so simple," when they see the
device. They want to create something
that embodies both technical
sophistication and genuine joy in use.
In the coming years, this device could
fundamentally reshape how everyday
people interact with AI, moving us away
from screens and towards more ambient
contextual computing.
Whether it succeeds will depend on
execution, pricing, and whether
consumers are ready to trust an AI
assistant to operate largely outside
their direct supervision. But if anyone
can pull this off, it's the combination
of the company that built ChatGPT and
the designer who defined modern consumer
electronics at Apple. The stage is set,
the team is assembled, and the
prototypes are working. Now, we wait to
see if they can deliver on the promise.
Everything here is based on my research,
from Sam Altman and Jony Ive's public
interviews and statements to OpenAI's
official announcements and extensive
coverage by outlets like TechCrunch, The
Verge, Axios, Reuters, MacRumors, and
Business Insider. That's where we stand
on this mysterious AI device. It's one
of the most ambitious hardware projects
in tech right now, and I'll definitely
be watching closely as more details
emerge. What do you think? Would you
trust a screenless AI assistant to
manage parts of your life? Let me know
in the comments below. And if you found
this breakdown helpful, hit that
subscribe button because we're going to
be following this project closely as it
develops. Thanks for watching and I'll
see you in the next one.