Can Technology Detect Deepfakes Better Than Humans Can? | NOVA | PBS
-wmZsL_rY_I • 2024-09-26
Some AI-generated deepfakes are getting so good it can be near impossible to tell what's real and what's not. What if I were to tell you that I'm not even a human being? Would you believe me? Many deepfakes, like these face swaps, are obvious and silly, but others are more harmful, like the pornographic image of Taylor Swift that recently went viral. Over 90% of deepfakes are non-consensual sexual images of women, and it's not just celebrities being targeted, but also regular teenage girls. We're already seeing deepfake images being created about girls in high school, by their peers in high school. There's no way for these girls to protect themselves, right?
Their images exist online. We're in a social media world; there are images of them that exist, period. And it takes very, very little to create a sophisticated deepfake. Creating convincing deepfakes can now be done quickly and easily, by anyone and about anyone. People really can't tell. They look at them and they think they can, and I think we have to be really careful about trying to give people the idea that they can spot this. The tells of yesterday's deepfakes, like an extra finger on a hand, a strange blink, or a glitch, aren't really there anymore, because the tools just keep improving.

So can technology fare any better than humans at spotting and detecting deepfakes? Can the same technology that created them be used to detect them? To an extent; the people who know the technology best are the ones who are building it. Deepfakes are made using a type of artificial intelligence called deep learning. These AI programs are trained on huge amounts of data, including photos or videos; then they use complex algorithms to generate a new photo or video. We're going to talk about three categories of tools being created to prevent or identify deepfakes: one embeds markers to indicate whether content is real or fake, another spots deepfakes out in the wild, and the last tries to prevent them from being made in the first place.

First, let's talk about watermarks.
These are like an invisible stamp that's embedded in an AI-generated photo or video as it's being created. You see companies like Google and Meta doing this: they are applying an imperceptible watermark, an invisible watermark, that comes in at the point of creation, or may even be in the training data of the tool; it's sort of baked into the way the tool generates an image. The watermark would be undetectable to the human eye, but could be read by computers, which would flag the content as AI-generated. The challenge is that watermarks are currently optional to add, or relatively easy to remove.
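The general idea of an imperceptible watermark can be sketched with a toy least-significant-bit scheme. This is only an illustration of the concept, not how Google's or Meta's production watermarks actually work; real systems embed signals designed to survive cropping and compression:

```python
# Toy invisible watermark: hide a known bit pattern in the least
# significant bit of each pixel value. Changing the LSB shifts a
# 0-255 intensity by at most 1, invisible to the eye but readable
# by a program that knows where to look.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical "AI-generated" tag

def embed(pixels):
    """Return a copy of pixels with WATERMARK written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return out

def detect(pixels):
    """True if the leading pixels carry the watermark pattern."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

image = [200, 31, 54, 90, 120, 7, 66, 43, 250, 18]  # fake 10-pixel "image"
tagged = embed(image)

print(detect(tagged))   # True
print(detect(image))    # False: these LSBs happen not to match
print(max(abs(a - b) for a, b in zip(image, tagged)))  # 1
```

The fragility mentioned above shows up immediately in this sketch: any re-encoding or compression that rewrites low-order bits destroys the mark.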
People are also trying to do things that involve showing you how the media evolves over time, because the problem is that a watermark is pretty binary: yes it's AI, or no it's not, and with some AI tools you can change just a part of an image or a video. So an alternative to watermarks is something called metadata provenance. The creators describe it as a nutrition label for a piece of media: it embeds information about how the media was created, how it was edited, and how it was distributed right into the media's metadata. If the media was altered, say in Photoshop or using AI, and then uploaded online, those changes would also be recorded. The problem is that, at the moment, those tools are not yet available across the whole ecosystem. Some tools put those signals in, and then there are plenty of other places where we're making media that deliberately strip out metadata. So a metadata-based solution doesn't work yet across the system. We have these imperfect, not yet fully implemented ways to essentially signal that something was made or edited with AI.
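The "nutrition label" idea (the real-world effort here is the C2PA / Content Credentials standard) can be sketched as a tamper-evident edit history: each entry records an action plus a hash of the media at that point, so a verifier can check whether the history matches the bytes it accompanies. A minimal sketch with made-up field names, not the actual C2PA manifest format:

```python
import hashlib

def fingerprint(media_bytes):
    """Content hash of the media at one point in its history."""
    return hashlib.sha256(media_bytes).hexdigest()

def record(history, action, media_bytes):
    """Append a provenance entry: what was done, and a hash of the result."""
    history.append({"action": action, "hash": fingerprint(media_bytes)})
    return history

def verify(history, media_bytes):
    """Does the latest entry still match the media we were handed?"""
    return bool(history) and history[-1]["hash"] == fingerprint(media_bytes)

photo = b"raw camera pixels"
history = record([], "captured: camera", photo)

photo = photo + b" + AI sky replacement"  # an AI edit happens
history = record(history, "edited: generative fill", photo)

print(verify(history, photo))                # True: history matches the bytes
print(verify(history, photo + b"tampered"))  # False: bytes no longer match
```

The ecosystem gap described above also appears here: the label travels alongside the media, so any pipeline that strips the history leaves the bytes intact but unverifiable.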
Another approach focuses on detecting deepfakes after they've been created. For example, Intel is working on a tool that detects one thing that real-life humans have: blood. When our hearts pump, our blood changes color slightly, from bright red to dark red, as it's enriched and depleted of oxygen. Intel's tool looks for signals of that blood flow in the pixels of a video, uses an algorithm to map it across the face, and then uses deep learning to identify whether the person is real or AI-generated.
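The underlying principle is remote photoplethysmography: skin color oscillates faintly at the pulse rate, the same signal a smartwatch sensor reads. A toy version of just the signal-extraction step, using a simulated sequence of average face-region redness values rather than real video frames (Intel's actual system maps many such signals across facial regions and feeds them to a deep-learning classifier):

```python
import math

FPS = 30  # assumed video frame rate

def pulse_strength(signal, bpm=70):
    """Correlate the per-frame color signal with a sinusoid at an assumed
    heart rate: real skin shows a periodic component at pulse frequency,
    while synthetic faces usually don't."""
    mean = sum(signal) / len(signal)
    detrended = [s - mean for s in signal]
    freq = bpm / 60.0  # beats per second
    c = sum(d * math.cos(2 * math.pi * freq * i / FPS) for i, d in enumerate(detrended))
    s = sum(d * math.sin(2 * math.pi * freq * i / FPS) for i, d in enumerate(detrended))
    return math.hypot(c, s) / len(signal)

# Simulated "average redness of the face region" per frame, 10 seconds of video:
real_face = [100 + 0.5 * math.sin(2 * math.pi * (70 / 60) * i / FPS) for i in range(300)]
fake_face = [100.0] * 300  # no pulse signal to find

print(pulse_strength(real_face) > pulse_strength(fake_face))  # True
```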
The blood-flow tracking technology is similar to that used in devices like smartwatches to track heart rate. Intel reports a 96% accuracy rate for spotting fake videos, but the system has not been independently analyzed.

All of these detection tools hold both promise and pitfalls, which is why it's best not to rely on just one method for spotting fakes. Detection is probably best done with what they talk about as an ensemble approach. Basically, the way to think about that is: to do good detection, you're going to have a bunch of different techniques that look for different kinds of AI-generated manipulations.
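The ensemble idea amounts to score fusion: run several independent detectors and combine their outputs, so no single blind spot decides the verdict. A minimal sketch in which the detectors are stand-in stubs (a real system would run actual models checking blood flow, blending artifacts, compression traces, watermarks, and so on):

```python
# Hypothetical detector scores in [0, 1], where 1 means "looks fake".

def blood_flow_check(video):  return 0.2  # stub: a plausible pulse was found
def artifact_check(video):    return 0.9  # stub: blending artifacts spotted
def watermark_check(video):   return 0.8  # stub: watermark-like signal present

DETECTORS = [blood_flow_check, artifact_check, watermark_check]

def ensemble_verdict(video, threshold=0.5):
    """Average the detectors' scores; flag as fake above the threshold,
    so one fooled detector can't clear (or condemn) a video alone."""
    score = sum(d(video) for d in DETECTORS) / len(DETECTORS)
    return score, score > threshold

score, is_fake = ensemble_verdict("clip.mp4")
print(round(score, 2), is_fake)  # 0.63 True
```

Averaging is the simplest fusion rule; real ensembles often weight detectors by measured reliability or train a meta-classifier on their outputs.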
But catching deepfakes isn't the only problem; there's also preventing yourself from becoming the victim of one. So let's move on to preventative tools, which stop deepfakes from being created in the first place. One example is shields, which add an invisible protective layer to an image that makes it hard for AI models to recognize and manipulate it. A team at the University of Chicago developed a tool called Nightshade for artists to protect their work from being scraped to train AI models without permission.
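Shields of this kind rely on adversarial perturbations: tiny per-pixel changes, invisible to a person, chosen to push an image across a model's decision boundary. Nightshade's actual method targets real feature extractors and is far more sophisticated; the trick can be shown against a deliberately simple stand-in "model":

```python
# Toy "AI model": labels an image by its average brightness.
# A real shield perturbs pixels to fool a deep network's features;
# here we fool this one-number feature the same way.

def toy_model(pixels):
    return "hat" if sum(pixels) / len(pixels) < 128 else "cake"

def shield(pixels, nudge=2):
    """Add a tiny uniform perturbation (2/255 per pixel, imperceptible)
    chosen to push the model's feature across its decision boundary."""
    return [min(255, p + nudge) for p in pixels]

image = [127] * 100        # a "hat": average brightness 127
protected = shield(image)  # average brightness becomes 129

print(toy_model(image))      # hat
print(toy_model(protected))  # cake: the model now misreads the image
print(max(abs(a - b) for a, b in zip(image, protected)))  # 2
```

This toy only works because the boundary is known in advance; Nightshade instead optimizes its perturbations against the feature representations that real image generators learn from.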
Nightshade adds an invisible poison to the pixels of an image that causes the AI model to misinterpret what that image is, or to just behave in strange ways. So where we see a picture of a hat, the AI sees a picture of a cake. But shields can only be added to new images as they are uploaded to the internet, not to ones that already exist online.

The challenge is that all of these tools are responding to the current weaknesses of AI models, which keep getting better and better. Anybody who works in the field of adversarial AI, or any sort of security, knows it's a game of cat and mouse. We create ways to identify malicious actors, and bad actors just get better across the board. Technical fixes will only go so far. Experts say that regulation across the entire system of how AI is developed, detected, and deployed is the only way to solve some of these problems. It's key that there's a legal responsibility across that AI pipeline, and that it's done with our human and civil rights at the center of it; and that's really the responsibility of governments.
In the meantime, if the tools to detect deepfakes aren't reliable or standardized, and the bad actors are always a step ahead, what should the average person do to sift through what's true and what's not? I think it's unreasonable to expect the average person to be able to spot these images, audio, and video. It is reasonable to say: pause before you share a video that is too good to be true. See if there's an alternative source. See if someone's written a story explaining that it was made with AI, from a credible journalist or a credible community source you trust. Sam recommends using the SIFT method: S is stop, don't have a reaction; I is investigate sources; F is find other coverage; and T is trace claims. I think one thing is building that critical thinking, that muscle memory, to say: I saw this video, I saw this thing; let me go search it online and see if it's real.