Transcript
m1ypL2Bygjg • How To Build A Bright Future With AI: A Conversation with Dr. Rumman Chowdhury | NOVA | PBS
/home/itcorpmy/itcorp.my.id/harry/yt_channel/out/novapbs/.shards/text-0001.zst#text/0988_m1ypL2Bygjg.txt
Kind: captions
Language: en
hello hello welcome and thank you for
joining us today I'm Caitlin Saks a
senior producer for NOVA and today we
are in for a big treat a live
conversation with one of the experts
we featured in a recent film Secrets in
Your Data so that you our audience can
delve further into the content and even
ask your own questions of Dr. Rumman
Chowdhury who we have here today with us
welcome Dr. Chowdhury how are you doing
today I'm doing great thank you so much
for having me thanks for coming so we
are going to dive deep on AI and
specifically we want to talk about
what you think a positive future
with AI could look like and how we
will get there so uh a little bit of
background Dr. Chowdhury is the CEO and
co-founder of Humane Intelligence a
nonprofit which works to inform the way
AI models are built through algorithmic
auditing and evaluation now if that
sounds confusing we'll get to what that
means uh her work revolves around
researching how AI can be used
responsibly and she was also featured in
NOVA's Secrets in Your Data where she
gave insight into the seemingly
innocent ways internet users can get
roped into giving away their data online
through cookies social media and of
course AI is a whole new layer on top of
that so before we get started I have
enough questions here uh probably to go
for hours with Dr. Chowdhury we only have
about 40 minutes but um I do hope to get
to some of your questions so if you have
questions put them in the chat uh and we
will try to get to them so Dr. Chowdhury
let's start kind of with the basics uh
it feels like something has happened in
the last few years with AI it was kind
of being developed for years and years
and years and then all of a sudden it
couple years ago it just sort of started
to feel like it is among us now it's
here what changed what has
happened well it's great that you put it
that way because this technology
actually has been around for quite some
time and actually to be frank it has
actually been used on people for a very
long time uh we just didn't have direct
access to it so what happened in
November of 2022 was when ChatGPT
launched it was not just a revolution in
the technology actually the big
Revolution was on the accessibility of
the technology so before then for people
like myself who'd been working in the
field of AI explaining to regular people
how AI is used in their lives seemed
very abstract I could say something about
recommendation systems or hiring
algorithms but it didn't really hold
water it didn't mean anything now with
ChatGPT or with any of the large
language models you interact with an AI
model like you're texting your friend
and it's a few things one it's the the
very text-like design which is something
that most people are very familiar with
and second is the real-time
interactivity with what everybody knows
is an algorithm so the big revolution in
fall 2022 was the accessibility and the
direct access to the
technology and it feels like since
then there's kind of two camps that
people are generally falling into one is
this is really exciting this is cool AI
is going to help us in so many ways and
then the other extreme of that is it's
dangerous it could destroy Humanity um
it's going to you know make climate
change worse versus it's going to make
climate change better and so it feels
like a little bit of
polarization there where on that
spectrum and I assume it's actually a
spectrum between wearing rose-tinted
glasses versus dripping crimson blood
where do you sit there what's your
perspective on what AI is going to
mean for humanity yeah I mean I'm
definitely not wearing rose-tinted
glasses or dripping crimson blood I
would probably be somewhere right in the
middle and uh so by background I'm a
social scientist and I think of all of
these things as tools that are wielded
by people so fundamentally the language
of AI is very interesting to me it's
quite unique we talk about this
technology as if it is making its own
unique decisions when actually it's not
true human beings design the technology
to go do things right so where I
sit with this is really it's a function
of our ability to get ourselves in order
our ability to ensure that the
technology is built the right way so you
know it's up to us to make the right
decisions
so let's take it away from the
extremes then what are the most
realistic positive ways
you foresee AI impacting society
what are some of the good things
that this technology might allow us to
realize yeah so I love that question
because I think as we think through as I
said earlier it's up to us to decide
where it goes so what we have to have is
an affirmative Vision we have to have an
idea of where we want to to go with the
technology and there are a few areas where
we are seeing really positive and
helpful advances the first and probably
the most obvious to most people is how
medicine and medical research is being
improved so we have AI models that
can help geneticists a lot
of whom are actually data scientists
create very specific approaches to
addressing diseases that we've not been
able to crack like ALS or cancer in fact
there's conversations about you know we
may be able to have vaccines and you
know cure certain cancers within our
lifetimes this is amazing because
frankly a lot of these questions about
um medical research are good questions
that AI can help answer it's not this
magical technology it's technology
that's able to crunch a lot of data
without getting tired right do it
incredibly mathematically and
agnostically and that's actually the
help that geneticists needed in order to
do what they do really well so I think
we're going to see advances with for example
CRISPR technology so medicine is one the
second which I actually like to think about
interestingly one that people
don't think about is weather prediction
and this sort of relates to climate
change so we're all now experiencing the
weather being more extreme um so I live
part-time in Houston Texas what used
to be called 100-year floods and 100-year
hurricanes now happen every 5 to 10
years and the ability to predict well in
advance when a dangerous weather
situation is going to happen is quite
valuable for saving human lives so now
we can know weeks in advance that there
is a you know there's some sort of
adverse weather condition forming and
that this is the general path it might
go and so you can be prepared that's
incredibly powerful and actually it's
already saving human lives today so
those are the two examples I like to
point to as to how AI is being used
really positively and very
helpfully and then on the other end of
the
spectrum I mean a lot of people go to
the Terminator scenario is that
what is keeping
you up at night or are there other
dangers more realistic
ones that are more concerning to
you yeah so I think we are quite some
time away from AI making decisions I I
know a lot of these fears are sort of
shaped by movies um and it's interesting
that a lot of these scenarios
people talk about are literally
plots of movies um but you know the
things I'm more worried about are the
harms we're actually already seeing manifest
today so one is you know the
centralization of uh power and wealth
into the hands of very very few people
um who now have influence that's greater
than any given government uh that is
something to be concerned about that's
something we should always be concerned
about it's not something unique to
artificial intelligence I think the
other thing is how these AI tools are
being used in a way that is
directly impactful on our lives and we
don't have the ability to decide whether
or not this tool is used on us how it is
used on us uh and also our ability to
even give feedback on whether or not
it's performing accurately or correctly
um so that's a lot of the work that I'm
passionate about how can we get
regular people's perspectives in
how the data is collected uh how these
AI models are used um and you know these
are not new narratives the
narrative we've had about Silicon
Valley for years is that it's sort of
this very small group of people who are
making these decisions that have a very
massive impact and the average person no
matter where you are in the world in the
United States in Senegal in the UK in
France you don't have a say in how these
models are being used and finally it's
that these models are often being used
to make major decisions in a way that
you will never see so algorithms being
used to determine your riskiness for
being given a loan or whether or not
you'd be a good fit for a job um there
are also ways in which we already
know that many
algorithms and AI models can
radicalize individuals I think the new
face of generative AI
is supercharging Bad actors that are out
there so I've done some research on
generative AI and gender-based violence
online demonstrating how easy it is to
create an automated hate campaign
with very minimal ability to code
so I think there are some very serious
concerns but you'll probably see the
common thread in this is how people are
going to use the technology in malicious
ways rather than how the technology will
somehow do malicious things
magically so you've just
said that basically we're already
starting to see some of the harms that
it can do um to a certain extent it kind
of feels a bit like the cat is out of
the bag you know it's not
just OpenAI now with ChatGPT
there's a whole bunch of companies a
whole bunch of
models how do you go about even starting
to try to rein in AI and to
prevent those
harms yeah I mean I think one thing
that I'm very heartened by is that we've
seen a massive Global regulatory
response so I wrote an op-ed for Wired in
April of 2023 about how AI needs Global
oversight and the reason I think about
it that way is that the
negative externalities of this
technology are borderless so somebody can
be subject to a radicalization campaign
from somebody sitting in a totally
different country so passing laws in
let's say the United States about you
know not having hate speech or whatever
does not mean anything if the person or
the entity creating the hate speech sits
in Canada or wherever right so when I
wrote that
article a little over a year ago we had
no global entities really working on it
now we have many we have the UN we have the
OECD we have different global safety
institutes we have a lot of existing
bodies and new bodies forming to think
through these problems globally so
that's one we're
thinking through what a regulatory
response looks like I think the second is
you know part of it is just going to
actually be education like regular
people need to get smart about how AI
Works what manipulation looks like um
and we've sort of been through this
before right we've been through this
interestingly with radio and broadcast
television right where people got smart
at discerning good and bad information I
think the interesting thing about
generative AI is that it's able to
create fake media in ways that we are
not familiar with so most people
prior to let's say 2022
we're all
familiar with Photoshop but we would
generally consider a video or a photo to
be factual well now we're going to have
to negotiate what it means to watch a
video and think oh that might not be
real which is very new to us so we're
making sort of these new connections so
I think there's responsibilities for
everybody here we have governments we
have companies there's also civil
society organizations which can
and are doing a lot of things
to help and then there's sort of the
regular person who needs to be smarter
about about the new generative AI world
that we're living
in uh so you're you are a responsible AI
fellow at the Berkman Klein Center for
Internet and Society at Harvard
University tell me about what
responsible AI means to you I'm guessing
it's it's a bit more than like just not
using it to cheat on your homework or
something like that I also have
some thoughts on that as well I don't
know if it's cheating it's maybe more
a reexamination of education that needs to
happen but we can put a pin in that one
so responsible AI you know to me
it's just it's actually quite a simple
definition we're building artificial
intelligence that helps everybody not
just a few people not just people with
very specific problems if truly this
technology is supposed to be world
changing which is a direct quote
from what all these companies say well then
it needs to serve everybody in the world
and responsible AI examines all the ways in
which this technology can do that um you
know first by asking is this necessary
is this useful is this helpful where are
the gaps and how do we fill those
gaps so let's take that pin out of the
education thing because I think that's
something that's on a lot of folks'
minds I personally I have a young child
I don't know if you have children but
one thing I think a lot about is oh my
gosh that fourth grade essay is going to
be really easy to write um what are your
thoughts on how this is going to change
not just education but how humans
think I really love your question
so there's sort of two parts to it and
these are really commonly asked I do a
lot of talks at universities actually
all over the world and I talk to
Educators I talk to students and
interestingly it's the same common fear
so it's not that students are like super
enthusiastic like yeah I never have to
write an essay again a lot of college
students are really worried about what
they should major in what their life
path should be and they're very
concerned about what an AI Workforce is
going to look like given that we don't
know um so first is when we think
through educational systems right like
we actually have to think through what
is the purpose of education right the
purpose of education is not just to like
make kids suffer through you know
reading Shakespeare and then write a
thing about it but actually it's to
teach them how to gather uh and
synthesize information to create new and
interesting thoughts right to the extent
that like you know an eighth grader can
do something like that right but and
it's interesting because it's teaching
them how to think right when
I was in grad school I taught
students for quite some time at a college
level I think the most important thing
you learn in college is how to think not
what to think and I think
some of these concerns are sort of built
around well how then are we going to
teach people what to think if the AI is
going to just tell them what the answer
is well then that's the wrong approach
to what education is for so now we live
in a world in which we have this very
sophisticated tool which by the way it's
not the first time we've been here the
internet too is a very sophisticated
tool that enables people to do really
Advanced research and learn things they
couldn't learn before we've
navigated this before so the way I like
to think of AI first of all I
would not suggest any educator ban AI in
the classroom because it's just not
setting your students up for Success
they will enter a world in which
generative AI is going to be used in the
workforce it's going to be used in
various situations educators need to
create students who are able to have
good discernment and critical thinking
they need to learn how to use these
tools to make their natural abilities
better and there's nothing wrong
with doing that there's literally nothing
wrong with using a technology wisely so
the job of an educator is to teach these
students how to use it wisely so this
might mean you know reforming your
standard curriculum and just as
an example some forward-thinking
professor friends of mine who have
already incorporated ChatGPT they'll
create assignments like use
ChatGPT or any
large language model to help you create
10 different hypotheses for this paper
um you know you then have to sort of
figure out what works and what doesn't
and give three arguments as to why this
is good and bad and then select one the
second one could be uh using generative
AI write a general outline um including
sources for your paper um you know and
again like you have to tweak it right
and the final thing would be write your
own paper but you'll notice how like
every step of the way you're teaching
people how to use this tool in a way
that's leveraging their own skills and
abilities versus treating it like an
adversary like an enemy and
again like I I think it's really
important because they will be entering
a Workforce where they need to
critically analyze so the thing that I
think a lot about as it relates to just
society in general and AI is critical
thinking right and this
sort of relates to anything right
how does a student know if the
output of an AI model is good or bad how
are you going to know if this robocall
is real or fake how will you know if
that's an email actually from your
banker or an email that's a phishing scam
it's all actually the same thing
we're all on this journey to get
better at discerning good and bad so you
make a really great point that how we
educate is kind of a direct line to
the workforce of the future we are
educating for it seems like we're
in a bit of a challenging moment because
we're still figuring that out what does
that Workforce of the future look like
and I think one of the biggest anxieties
about AI is that it will start taking
our jobs so so what are your thoughts on
that will NOVA be produced by AI
in 10 years and if so then
what will I
do I should hope that NOVA is not
produced by AI in 10 years because it
would not be anywhere near as creative
or interesting or as thoughtful right
because one thing AI cannot do is come
up with own unique and original and
interesting ideas um and particularly
ideas that understand The Human
Condition like AI is just not capable of
doing that so while you could probably
generate or use AI to create
more NOVA-like things will it be as
educational and interesting and
inspiring no it won't be I think that's
really the thing we have to think of so
now we are in a world in which
generating content is extremely cheap
and easy so we're going to unfortunately
go through this painful period uh and
we're already talking about it right
they call it AI slop where there is just
a ton of just garbage out there and
we're wading through this slop and no
matter what social media platform you're
on some are worse some are better but
the AI slop is everywhere so
now we're going to get past
this initial excitement of oh anybody
can make a video about anything everyone
should be an influencer everyone should
have videos made by AI and hopefully
my hope and my faith in humanity
is that we will start asking ourselves
why and we will start to go for Quality
rather than quantity when I say quality
I mean quality of the idea quality of
the perspective right and we're already
seeing this on some social media
platforms like TikTok where you know
being genuine and authentic is more
important than being very polished I
find that very fascinating you look at
the evolution of social media you had
Instagram and we have terms like
Instagram face right where everything
had to be perfect well now we can use AI
to create that perfect image so now what
we value are human flaws I find that
beautiful I find that really beautiful
that we just look no matter what happens
we look for authentic connection so I
don't think that's happening um and
especially to answer your question about
future of work there have been two
papers I think are quite meaningful uh
because they basically both uh you know
come to the conclusion that we're not
going to have mass joblessness so the
first is there's this paper
that came out when GPT-4 came out in
April I want to say of 2023 and it's
called GPTs are GPTs it was
OpenAI researchers working with
economists to see how large language models
would impact different sectors
what they estimated was that 80% of
jobs would have about 10% of their tasks automated away
so you could see it as the impact of
like email right it made communication
faster and easier now it's going to make
ideation faster and easier that's pretty
much for 80% of jobs and I think they
said about 19% of jobs would have about
80% of their tasks automated away and those are
industries that really need to sort of
think about what they're working on
journalism was one of them um but again
my faith is that like we're already
seeing you know creative fields so I
think they had if I'm remembering
correctly paralegal work journalism
photography a lot of the creative fields
but again this is assuming that this is
just based on content production not
necessarily people's taste and
discernment of what content they're
looking for the second paper that's very
fascinating is by this very
well-respected economist Daron Acemoglu it
was a paper published by NBER which is a
very respected economics research bureau and
it later was the basis of some work that
came out of Goldman Sachs and he was uh
estimating the impact on total factor
productivity so what is the
macroeconomic impact of AI models and he
found it to be sub 1% over the next 10
years so his paper is certainly worth a deep
dive it is dry it's an economist's paper
the Goldman Sachs report is interesting
because it's more of like interview
based maybe for the average person who
doesn't want to read a dry econ Journal
paper um you know a little bit more like
direct and compelling but basically
saying that there will be some impact on
the market but does it mean we all have to
explore universal basic income does it
mean we're going to have a two-hour work
day or whatever and AI is going to do
the rest probably
not and it sounds
like forgive me if I'm not
distilling it quite right but it sounds
like what you're saying is essentially
one of the ways that individuals can
start thinking of making
themselves irreplaceable by AI is
leaning more into what makes them
human um absolutely
absolutely the answer sounds obvious
when you say it but there's something
kind of liberating about that
we've spent so much of the
history of humanity in our heads and
there are other aspects some of it's
still in our heads creativity but there
are other aspects that we might
be freed up to lean into
um I hope so it's interesting because
these same conversations happened with
the first Industrial Revolution there
were so many conversations about you
know people will only have to work four
hours a week and the rest of the time
could be spent towards Leisure Etc and
frankly what we have found with every
technological advancement is actually we
don't work less we work more so I think
the bigger question to ask ourselves is
when is enough enough when is enough
productivity enough like so great now I
can work faster does that mean I need to
continue to produce more and more and
more or you know can we draw
boundaries for ourselves and say this is
actually a sufficient amount of
production right that this is
enough like I've contributed I don't
need to contribute at my maximum
breaking point and it's interesting
because we are seeing some of these
conversations happening today especially
around Gen Z and how Gen Z acts
differently in the workplace from
Millennials and previous generations so
like again as a social scientist I
always see these things as
sociotechnical right these are
technological things that exist in the
world and what matters is this interplay
between people in Ai and not just the AI
in a bubble or people in a
bubble so I want to step back a second
and talk a little bit more about your
work specifically so you're also the
founder of Parity Consulting which
provides short-term expert-led
responsible AI consulting and
auditing what does that mean how do
you evaluate AI for harmful behaviors
sure it can be a lot of math
it's a lot of math plus a lot
of understanding how people work so I'm
a quantitative social scientist my
interest is understanding patterns of
human behavior using data that's what
I've basically always done even though I
came into Silicon Valley uh after
working in other fields but that has
always been what's really
compelling and interesting to me so
what's fascinating about some laws that
have passed for example the Digital
Services Act in the EU the Digital
Services Act basically says that
companies that fall under its purview
which are sort of the biggest companies
in the world so it's all the
companies that directly impact
consumers so most social media companies
Etc need to demonstrate that their
algorithms are not harmful uh under
certain conditions so for example it's
not adversely impacting the course of
democracy it's not adversely uh you know
influencing children and young people
it's not causing it's not violating
fundamental human rights now that is
super interesting to me because that's
really hard to do like I worked at
Twitter I led the algorithmic
ethics team at Twitter that is really
hard and it's hard
not because it's
mathematically hard it's hard because
it's conceptually hard to say what does
it mean to have impacted the course of
democracy of course people are on social
media conversing about democracy like
right now there's elections happening
all over the world of course we're on
social media talking about it so how do
we discern how much social media is
impacting it and whether it is an
adverse impact or negative impact these
are actually more like almost
philosophical questions but the job of
someone like myself an algorithmic
auditor assessors to take that out of
philosophy and make it something
measurable and grounded so this could
mean uh understanding like who is
impacted by the algorithms collecting
data about you know people's behaviors
and how they've changed over time
or maybe even thinking of ways in
which people who are not engaged online
are different from people who are
engaged online there are a lot of really
clever and creative ways that you can do
this work but it actually remains
not a very well-defined field we
don't have standards and best practices
for algorithmic auditing one of my
concerns and one of the reasons for my
nonprofit is that we're writing more and
more laws like the Digital Services act
but we don't have the workforce that
understands how to do this
labor so are there any
examples to make it a little bit
more concrete like how you might test a
specific algorithm like what
the test is how that works what
sort of work goes into it yeah so
um during my time at Twitter one of the
papers that my team published um that
actually made quite a bit of a splash
was looking at algorithmic amplification
of political content on our algorithmic
feed versus reverse chronological so for
folks who are unfamiliar with
Twitter we had two different feeds
one is curated by an algorithm so
that's the homegrown AI
models that we had made another one is
reverse chronological meaning that you
saw things in the reverse timeline order
of when people posted it so what's
interesting is that it creates a nice
experiment so we can look
at the kinds of political content that
are pulled up by our algorithm
versus what people would naturally see
without that algorithm um so in the in
the test we found actually that in seven
out of eight countries there was a
center-right lean or amplification by
the algorithm so that's really
interesting right it's a very
interesting finding but here's where
it gets hard and to be clear
this was Twitter circa
what year 2020
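The comparison she describes, an algorithmically curated feed against a reverse-chronological one, is essentially an amplification-ratio measurement. A minimal sketch, with invented numbers that are not from the actual Twitter study:

```python
# Toy sketch of the feed-comparison measurement described above.
# The impression shares below are made up for illustration; the real
# study measured this across millions of paired user timelines.

# impressions[country] = (share of political content in algorithmic feed,
#                         share of political content in reverse-chron feed)
impressions = {
    "CountryA": (0.30, 0.22),
    "CountryB": (0.25, 0.18),
    "CountryC": (0.20, 0.21),
}

def amplification_ratio(algo_share: float, chrono_share: float) -> float:
    """Ratio > 1 means the algorithm surfaces proportionally more of
    this content than users would see chronologically."""
    return algo_share / chrono_share

for country, (algo, chrono) in impressions.items():
    ratio = amplification_ratio(algo, chrono)
    label = "amplified" if ratio > 1 else "not amplified"
    print(f"{country}: ratio={ratio:.2f} ({label})")
```

The reverse-chronological feed acts as the baseline of what users would "naturally" see, which is what makes the setup a clean comparison.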
actually it was a great question because
it sort of feeds into the second half of
you know how to think through this
problem so this was I think our data was
from April to August of 2020 um so
during that time there were many
elections including in the US right so
it's a very fascinating time period to
look at because when we're talking about
amplification of political content it's
influencing people's perspectives as
they're about to go vote right um so
it's really interesting so we find that
in seven out of eight countries there's a
center-right leaning now the question
that we don't know the answer to is why
we know the phenomenon that's happening we
actually don't know
why it's happening because as I
mentioned all these things are
sociotechnical right these algorithms are
built the purpose of the algorithm is to
amplify content that people are
engaging with so it could be that
between April to August of 2020 most
people in the world were engaging with
center-right content in which case what is the
answer what is fair what is correct and
what is appropriate do we then say oh
well you're only allowed to see this
much content it doesn't matter what
everybody else is talking about or do we
show people what they want to see
or is the problem that the algorithm is
biased so one phrase people are
increasingly aware of is algorithmic
bias but algorithmic bias means a very
specific thing it means that although
we have not coded something into the
algorithm it is figuring that thing out
and it is artificially amplifying it
even though it's out of scope so I'll
give you a really good example of how
that happens um in the real world in the
United States uh people's ZIP code and
race are highly correlated with each
other because we just live in a
segregated country uh as a result if
somebody let's say creates an algorithm
where they're using zip code and
geography for whatever reason it is uh
inadvertently picking up race so the
output of the model may end up being
biased based on race and the bias is not
because they put the variable in but
because these two variables are really
correlated with each other right so when
we think about social media algorithms
in the situation we had well if
it were algorithmic bias that would mean
specifically that even though
there is no variable in our Twitter
algorithms that said what is
the political leaning of this post
somehow it was artificially picking it
up to figure that out my team would
actually have to deep dive and do quite
a bit of work or the hypothesis could be
that the algorithm is working just fine
the issue is that it's amplifying what
people are talking about and if that is
the issue which was my
hypothesis unfortunately we did not get
to test that before our team got fired
when Elon Musk took over if that
is the case that's a
deeper question about
power and authority right who gets to
decide what is and isn't correct for an
algorithm's performance who gets to
decide what is and isn't fair who gets to
decide what is the correct amount of
information someone should be seeing so
all of these questions and all this
actually exists in generative AI as well
these are all questions of content
moderation and companies and the CEOs
who run these companies they get to
decide they get to decide who gets to
see what when and how and it's really
obvious to us on social media because
you know people will get platform
someone will get banned they'll get
removed they they'll talk about Shadow
Banning but by the way the same thing
happens with generative AI models these
models are taking data and they're
synthesizing it for you and they're
telling you what to think so now it's
another layer between you and a search
engine whereas in the search engine it's
curating sources generative AI is
another layer saying you know what you
don't have to read the sources I'm going
to digest it for you and I'm going to
tell you what to think so I just want
people to understand that the questions
that we have asked of social media
companies are actually the same
questions we should be asking of
generative AI
companies.

And so, I hear you that you feel it is really the companies, or whoever holds the power, that is in a lot of ways responsible for these algorithms. One of the big... and I have to ask, because it's the stuff of science fiction... one of the big concerns with AI is that as it becomes smarter and smarter, it could reach something similar to sentience, and I'm curious for your thoughts on that. You know, if it thinks like a human, then at what point does it become... what is sentience? At what point might it become that, or could it never become that?

Yeah, I mean, I think fundamentally the structure of these models doesn't think like a human. The simplest way I explain artificial intelligence is that it uses data to make a prediction. It can make a very sophisticated prediction, but actually, when you interact with, let's say, a language model, it is making a prediction. It is making a prediction of the bag of words, literally, like the group of words that best fits together to give the response you are looking for. So it's actually a predictive model in that sense, which is why, when we talk about how AI hallucinates, how it makes up things that are wrong: that's not a bug, it's a feature. It is part of the design, because it does not understand context, it doesn't understand why you're asking, it doesn't understand your intent. It just sees this laundry list of words and calculates the probability that other words will fit in to give you the response you're looking for.
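The next-word prediction she describes can be sketched with a toy model. The three-sentence corpus below is made up, and real language models use vastly more data and context than word-pair counts, but the basic mechanic is the same: score which word is most likely to come next, with no notion of meaning or intent.

```python
from collections import Counter, defaultdict

# A made-up toy corpus; real models train on trillions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most likely to come next after `word`."""
    return follows[word].most_common(1)[0][0]

# The model has no idea what a cat is; it only knows that "cat" is the
# word that most often followed "the" in its training data.
print(predict("the"))  # -> cat
print(predict("sat"))  # -> on
```

A large language model does this kind of scoring over much longer contexts with learned weights instead of raw counts, which is why a fluent but false answer, a "hallucination," is a natural output of the mechanism rather than a malfunction.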
That's not how the human brain works. And I'll also add that even if we think about what level of sophistication today's AI models are at, and how much data and input they need to get there, the average child requires significantly less data to reach a higher level of cognition than today's very rudimentary AI models. And there's already indication that we have, you know, sort of reached the max of the data that exists in the world. So I think there's a couple of things. One is that we are hitting very real ceilings in the capabilities of the AI models that are built, because they are very inefficient in how they use data. The models that we have today, which aren't particularly good (they're okay, and they're nowhere near human cognition levels), have already nearly exhausted all of the data that exists in the world. Isn't that crazy, when the average child can do so much more with so much less? So, like, the basic structure, the architecture, is off. And second, I think:
again, everything for me starts to become not a philosophical question but, like, a thinking question: what do we mean by "sentience"? It's very interesting to look at how the companies are defining it. OpenAI defines artificial general intelligence as the automation of all tasks of economic value, which is very fascinating. Is that really... is that what the human experience is? Is the human experience being an economically productive unit? And when we are not economically productive, when you are a baby, or when you are ill, or when you're pregnant and, you know, not working, or when you're elderly, are you not a human anymore? Are you not worthy of existing? Because by that definition you are no longer intelligent, because you are not economically productive. I have chills every time I say that, because the first time someone told me this (it was Dr. Shannon Vallor, who just published an amazing new book called The AI Mirror), I had to sit with it for like ten minutes. What a reductive way of thinking about what intelligence is. It makes me sad if we are going to correlate human sentience, our purpose of being, our consciousness, with only economic productivity. Literature, out the door. Music, out the door, unless you're making billions of dollars like Taylor Swift. You can't do anything for yourself, you can't do anything out of love, you can't just write a poem or bake a bad cake. None of those are economically productive; therefore they have no value.

Interesting. Um, we don't
have too much time left, so we're going to take one question from our YouTube chat, and then I have one final question for you. Someone in chat asks: how do we filter the personal bias that developers may add to AI systems, to avoid creating a biased system?

That's a great question. I think, interestingly, there are two parts to the answer. One is, frankly, to have more diverse teams. The way you filter personal bias is to not have everybody on the team coming from the same background, whether it is economic, linguistic, geographic, gender, race, you name it. So we just need more diverse teams and broader perspectives. The second is some of the work that I do, which is giving more people access to evaluate AI models. What I work on with Humane Intelligence is red teaming. As I mentioned at the very beginning, the big revolution was in accessibility: anybody was able to interact with an AI model. What I've done is create ways for everyday people to evaluate AI models using that same functionality. We have done these exercises with architects, with scientists; we're doing them with students, et cetera, a wide range of people, in order to get feedback to help improve how AI models are built.

Great. So, final question. This is a
little bit of a fun one. A number of you in the YouTube chat will know that Nova has been designing an escape room, the first escape room we've ever designed, and it has an AI theme. So if you're in the chat: we have another live stream coming up at one, where we're going to be doing the final design session on that. But I want to run the scenario past you, Dr. Chowdhury, to get your take; let us know how well we're doing. So: it's a few days before New Year's Day in the year 3000, and the world is run by a benevolent AI that takes care of everything. There's no more disease. Humans spend their time making music, playing games, watching science documentaries, doing yoga, stuff like that. But a few days before Y3K, things start to go wrong. It seems to be a bug, and things are falling apart. And so we have a group of engineers that need to come back in, reclaim all the lost knowledge, and retrain the AI to save the world. What do you think? What will the year 3000 really look like with
AI?

I... that would make a great movie! I think that's an amazing scenario. What I would love, if the answer is that we're going to retrain an AI, is that we actually make sure we have diverse perspectives and a wide range of people who think about very, very different things. So you need, like, a botanist, and a, you know, author, and a musician, and all these kinds of people in the room, to make sure that you're actually making a benevolent AI. And what an interesting concept, right, this idea of a benevolent AI. Benevolent in what aspect? I think of it as, like, a Maslow's hierarchy of needs thing: what does "benevolent" even mean? Making sure we are all fed and clothed, because that's, like, the bottom tier? Are we intellectually engaged, because that's actually the tier we want to be at? So what a fascinating scenario. I love
it!

Thank you. Well, thank you so much for your time. How can folks keep up with your work going forward? Where can they find you?

Yeah, so our website is humane-intelligence.org, and I'm also on LinkedIn; you can just find me by my name.

All right, great. And you can also stream Secrets in Your Data, which Dr. Chowdhury is featured in, on the PBS app and website. If you're interested in figuring out what that whole escape room thing is about, stay tuned for another live stream from Nova at 1 PM, where you can help us finish designing the escape room and we'll show you some designs. More info there. Thank you again for joining us, Dr. Chowdhury. Thank you for the insightful conversation. A good one!