Ideas & Insights
AI and its Pathologies
Special | 27m 47s
Ms. Madhumita Murgia discusses her studies of AI and its effects on the world.
Ms. Madhumita Murgia's book is a sobering introduction to the dystopian nature of an AI-driven world. Code-Dependent: Living in the Shadow of AI subverts received wisdom about AI and reminds us that, though enormously beneficial, this technology, unmoored from empathy and human values, could destroy our humanity and lead us into a grim and grotesque world.
Ideas & Insights is a local public television program presented by WGTE
Badrinath: Hello, everyone.
Welcome to Ideas and Insights, a show devoted to exploring novel perspectives on contemporary issues.
I am Badrinath Rao, your host.
Love it or loathe it, we live in an epoch profoundly shaped by AI, machine learning, ChatGPT, and other novel technologies.
Dazzled by their bountiful applications, we think they offer solutions to all our existential challenges.
Though partly true, such assertions gloss over the hidden costs of AI, particularly for those who labor in its shadows.
My guest today is Ms. Madhumita Murgia, a science journalist and AI editor at the Financial Times in London.
An expert in artificial intelligence and its social implications, Ms. Murgia is the author of Code-Dependent: Living in the Shadow of AI, published this year by Picador Books.
She traveled to countries such as India, Kenya, Bulgaria, Argentina, and the Netherlands to report on the impact of AI on marginalized communities.
Riveting and thought-provoking, Code-Dependent subverts received wisdom about AI and reminds us that, though enormously beneficial, this technology, unmoored from empathy and human values, could destroy our humanity and lead us into a grim and grotesque world.
Its author, Ms. Madhumita Murgia, joins me to discuss her ideas further.
Welcome to Ideas and Insights, Ms. Murgia.
Thanks for joining us today.
Murgia: Thank you for having me.
Ms. Murgia, your book focuses on the lesser-known aspects of AI, machine learning, ChatGPT, and other such novel technologies.
You focus on the impact of these technologies on less privileged people in the developing world.
However, you say at the beginning of your book that AI is nothing more than complex statistical software.
What do you mean by this?
And what are the implications of the use of AI in different areas of life?
Madhumita Murgia: Hi there.
Yes, that's really key to understanding so much of the downstream effects of this technology: to really look inside the box, inside the machine, and have an understanding of how it works.
And so when I call it a statistical technology, what that means is it's predicting probabilities.
So this isn't a calculator or a knowledge machine, what computer scientists call a deterministic technology, that looks up specific answers.
Instead, it's been trained on huge amounts of data, and it's making predictions, statistical predictions of what the most likely answer might be or the most likely decision or next word in a sentence.
And so if you understand that it works that way, then it's clear that it doesn't always give you the right answers, but only what it thinks are the most likely or probabilistically accurate answers.
And that's why we have errors that show up in these systems, and hallucinations, as they are known, which are essentially fabrications, made-up facts and numbers.
Because we always have to remember that it's a probabilistic, predictive technology, not a calculator.
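[Editor's note: a minimal Python sketch of the distinction Ms. Murgia draws here, between a deterministic lookup that always returns the same stored answer and a probabilistic model that samples the most likely continuation. The vocabulary, probabilities, and names below are invented for illustration and do not come from the program.]

import random

# A deterministic "knowledge machine": the same query always returns
# the same stored answer.
facts = {"capital of France": "Paris"}

def lookup(query):
    return facts[query]

# A probabilistic predictor: given a context, it samples the next word
# from a learned distribution, so it returns only the likeliest
# continuation, not a guaranteed fact. The probabilities are toy values.
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03},
}

def predict_next_word(context):
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

print(lookup("capital of France"))                    # always "Paris"
print(predict_next_word("The capital of France is"))  # usually "Paris", occasionally not

[Run repeatedly, the second call sometimes prints "Lyon" or "Nice"; that occasional confident-looking wrong answer is the mechanism behind the hallucinations described above.]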
Badrinath: Early on in the book, you introduce your readers to the huge amount of back-end work that goes into the creation of AI technologies.
And you also point out that this work is performed by low paid workers in developing countries, and their work is shrouded in secrecy.
In particular, you talk about two types of work which are tedious and traumatic: data labeling and content moderation.
Tell us more about these types of work and what they do to the people who perform them every single day.
Madhumita Murgia: Yeah.
So this is so important when we look at the supply chain, or the pipeline, of AI technologies, because when we hear about these in the press and read about them, we only see one end of the supply chain: the big tech companies producing the software.
Right, the end, and sort of what comes out of it.
But really, in order to get to that place where we have systems like ChatGPT, these models have to be taught what they're looking at.
You know, if you have a Tesla that has a self-driving component, or if you've ridden in a Waymo car, these AI systems aren't self-learning systems like, say, a baby or the average human being.
They have to be shown data in order to know what they're looking at.
And for them to be trained, you need humans to do the labeling and the sort of manual teaching and to tell these systems what they're looking at or what they're reading or what they're hearing.
And so I was really fascinated in peeling back the curtain on this army of low wage workers.
Essentially, it's like the call centers of the AI age, mostly located in, as you said, low-wage economies.
I traveled to Nairobi and Bulgaria in particular, to speak with workers who came from the local slums, the informal settlements, for example, Kibera in Nairobi, which is the largest slum in Africa.
And in Bulgaria, the workforce was staffed mainly by refugees from the Middle East who had come over from Syria, Iraq, and elsewhere.
And, you know, there are millions of workers around the world who are essentially looking at videos of cars driving down a street and drawing boxes to show these systems what they're looking at, or looking at snippets of text from ChatGPT and labeling which ones are toxic or illegal or unacceptable, to teach these systems what's right and what isn't.
With content moderators, like you mentioned, it's a far more traumatic job, because they're not just labeling text or images that are sort of neutral, like cars driving down a street or geospatial images, but are looking at the worst content on social media, the dregs of humanity: images of beheadings, war, torture, bestiality, even child pornography.
Their role is to be the human filters and, at the same time, to teach algorithms what is unacceptable for us to have on the internet.
But as you can imagine, watching hours of this content and having to label and categorize it is deeply traumatic for the people involved, who, again, are being paid minimum living wage in the countries they live in and have very little support in the aftermath of these roles.
So I spoke to these workers to try and understand the impact on their lives, both positive and negative.
And really, the big question for me there is: how should these workers be fairly compensated?
Are they really seeing the upsides of this digital revolution, as we claim they are?
And what rights should they have? What agency should they have as part of this process and this industry?
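[Editor's note: an illustrative Python sketch of the kinds of records data-labeling workers produce, as described above. The field names and values are invented for demonstration and are not from the program or any specific platform.]

# A bounding-box annotation for one frame of driving footage: a worker
# draws a box around each object and names what it is.
frame_annotation = {
    "frame": "street_clip_0042.jpg",
    "objects": [
        {"label": "car",        "box": [112, 80, 310, 240]},   # x1, y1, x2, y2 in pixels
        {"label": "pedestrian", "box": [400, 150, 460, 330]},
    ],
}

# A content-moderation label for one snippet of text: a worker marks
# whether it is acceptable and, if not, why.
text_annotation = {
    "text": "an example snippet shown to the worker",
    "acceptable": False,
    "categories": ["violence"],
}

# Millions of such human-made records become the training data that
# teaches a model what it is "looking at".
print(len(frame_annotation["objects"]), "objects labeled in this frame")

[Each record takes a human seconds or minutes to produce; the trauma Ms. Murgia describes comes from what moderators must view in order to fill in labels like these, hour after hour.]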
Badrinath: Let's now turn to another disturbing aspect of AI that you discuss in your book.
I'm referring to the emergence of deepfakes, an AI technology that produces hyper-realistic images and videos of people that are fake.
And this technology is used for creating pornographic videos of unsuspecting women and teenagers.
You have described at length in your book how innocent women have been horrified to discover that their images are used in these videos and their lives have been upended.
How big of a problem is this?
And what do you think we can do to ensure that AI technology is not abused in this manner?
Madhumita Murgia: Yeah.
So, you know, I jump between lots of different applications of AI, because it is used in such diverse settings and in such different ways.
And this is a great example of a really unexpected way in which AI is impacting human lives: through the generation of deepfakes, as we call them.
People will be much more familiar with them now, because we have these generative AI systems like ChatGPT, Gemini, and Midjourney, for example, where you can create images just by writing descriptions, create videos, even create audio files and voices that are really similar to people you know.
So it's become much easier; the barrier to creating content like this has gone way down, even from your phone or your laptop at home.
So when I went about finding women who had been affected by this, I wanted to quantify the problem, and the latest numbers we have show that a lot of the impact of this is on women.
So when you look at deepfakes online, 98% of them are pornographic.
We talk about political deepfakes or other sorts of uses, but primarily, overwhelmingly, they are pornographic.
And of that, almost the same amount, 98 or 99%, is of women.
So we know that this is very skewed in whom it affects.
And it's a huge problem.
But it's very hard to quantify now because it's so easy to produce.
Anybody can do it in their home today, and it's hard to even know what's out there unless it's uploaded.
So it's a difficult problem to quantify, but it's clear that most of what's being produced by these AI image generators is pornographic and is of women, ordinary women, not necessarily celebrities, who have no idea it's happening.
And the real question for me was, well, there were two questions.
One was the human impact of this, which is why I spent time particularly focused on two women, to really get into their stories.
One of them is quite a well-known poet in the UK who writes books and is a lecturer, and another who is a student in Sydney.
And to really get into their psyche and understand what it feels like to have something like this happen to you, where, even though you know it's not real, it feels real, because of how visceral it is to see videos of yourself doing something as shocking and horrific as having intercourse with a stranger, which they both had to deal with.
So the first question was really to look at the qualitative impact, beyond the numbers, on women.
And the second was to figure out what you can do about it.
And I think the question most women who read this have had for me is: if this happens to me, how do I fix it?
And I think that's the core problem, and this actually translates to issues beyond deepfakes as well with AI: there are no regulations in place for people to fix this.
Badrinath: We will get into the question of regulating AI momentarily.
But let me ask you a related question.
Just as problematic as the menace of deepfakes, we also have an AI-enabled technology that is creating havoc across the world.
And I'm referring to facial recognition technology.
You've spoken at length about this and explained how it is used for surveillance across the globe, in authoritarian and democratic regimes alike, and how it has curtailed the freedom of people to express themselves, to voice their opinions, and so on.
In fact, you talk at length about how China has used facial recognition technology to monitor the activities of the Uyghurs in Xinjiang province.
The question, once again, is: what can we do about this?
And how widespread is this issue?
Madhumita Murgia: Yes.
So I think, facial recognition is very interesting because it's used by those who are meant to protect us.
Right.
Law enforcement.
And national security.
And it's supposed to be a way for citizens to feel safe.
But, you know, for me, the primary issue, as I explore it, is that it remains a flawed technology.
We still know that the technology makes errors when it's trying to identify women and people of color at a much higher rate than when it's identifying male and Caucasian faces.
And as I say, not only is it flawed, there's a basic question of our privacy and our ability to live our lives in public spaces that I think we haven't interrogated here.
Because a lot of the places I focus on, like the King's Cross neighborhood of London, for example, had facial recognition cameras in public spaces without the knowledge of pedestrians and passersby.
You know, we're now at a point where you are no longer a private citizen who can go about your daily life expecting to be anonymous.
And I think, beyond just being identified, there's a much deeper question there about what kind of society we want to live in, if nothing we do in public is anonymous any longer and everything can be traced back to me as Madhumita Murgia.
And of course, this can be abused.
I think the biggest fear, particularly if you're a journalist in a certain part of the world, a political dissident, an activist, a protester, is that today you might be on the right side of the technology.
But tomorrow, when the rules change, when you have a new political regime, or you somehow belong to a community that is deemed to need further surveillance, you're suddenly on the wrong side of it.
And just having access to this sort of data opens the doors for governments to do this at scale.
So I think it really fundamentally changes our expectations of privacy in public, all over the world.
Badrinath: Let's now turn to an issue that you discussed at length in your book.
And that is the use of AI technologies in different fields.
It is obviously used in a significant way in healthcare.
It is used in the judiciary, in law enforcement, and in policing.
And you have discussed how policing by algorithms, the use of predictive algorithms to go after communities, particularly marginalized immigrant communities, and keep an eye on them, ostensibly for crime control, has degenerated into a tool of surveillance and, literally, harassment.
Policing by algorithms is supposedly meant for checking acts of terror, deviant and criminal behavior, and so on.
So are you suggesting that in the police force, predictive algorithms should not be used?
If so, what alternatives do you have in mind?
Madhumita Murgia: Yeah.
So, you can see how so many of these stories connect to each other, right?
Because we've talked about facial recognition and surveillance, which seem to be non-targeted and broad, but they connect very quickly with these sorts of AI systems that try to predict who is going to commit a crime.
And often that sort of knowledge comes from these data feeds, from facial recognition.
When you're watching areas, communities, and people, you start to draw out patterns of who you think might be a miscreant.
So, as you mentioned, I specifically focus on a police algorithm in Amsterdam that was trying to predict which children, minors, 14 to 18 years old, would go on to commit crimes.
Right.
And to prevent this from happening.
And so I think, you know, the question you asked is, should we have these systems?
And I think this comes up with so many different AI applications where the intentions are good ones.
Right?
So you have an intention to make a place safer. In the case of the children's crime algorithm, it wasn't to put children in jail.
It was so that the city could intervene and help families, ostensibly to support them, to prevent these children from going down a path of no return that would damage their family in the future.
So I obviously don't have an objection to trying to bring in the welfare state to support families that require it, to give them the help that they need.
The intentions are good ones.
But I think the question is, how is it implemented?
That's where you see failures, right?
And in the case of policing, if you're creating an algorithm with a skewed data set that is largely singling out children who come from minority immigrant communities, then what that looked like, in terms of impact, was not that these people felt supported and helped, but that they felt targeted.
They felt surveilled.
They felt like they were being blamed for how they were parenting their children, and largely excluded from the social fabric of the community, which is the last thing the city was setting out to achieve. These often Moroccan and North African communities felt othered by the police and the state, and it felt like something punitive rather than something that was supposed to be supportive.
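[Editor's note: a toy Python sketch, not from the program or the book, of the mechanism Ms. Murgia describes: a model trained on a skewed data set reproduces the skew. All numbers are invented. Here both groups are assumed to behave identically, but group B's neighborhoods were historically policed more heavily, so more of its children were recorded as flagged.]

historical_records = {
    # group: (children in the data, children flagged by past policing)
    "A": (1000, 50),  # lightly policed: 5% flagged
    "B": (200, 40),   # heavily policed: 20% flagged
}

# A naive "predictive" model simply learns the historical flag rate per
# group and scores new children accordingly.
learned_risk = {g: flagged / n for g, (n, flagged) in historical_records.items()}

for group, risk in sorted(learned_risk.items()):
    print(f"Group {group}: predicted risk {risk:.0%}")
# Group A: predicted risk 5%
# Group B: predicted risk 20%

[The model has learned the policing pattern, not the children's behavior: group B is scored as four times riskier regardless of what its children actually do, which is how intended "support" turns into targeting.]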
Badrinath: Well, aside from policing, you have, throughout your book, discussed at length the negative implications of the use of AI and related technologies. You have identified how it is a form of data colonialism, how it expropriates the labor of people in developing countries and leaves them traumatized, how it robs people of their agency; you call it a new form of plunder and the source of great inequality in the world, and so on.
All of which brings us to the most significant question, namely, since AI is here in our lives, and since it has clearly hurt a huge number of people and continues to do so, how do we regulate it?
Now, you talk about the Rome Call, which is an initiative for the ethical development of AI, the Bletchley Declaration, and so on.
And you also raise ten important questions in the epilogue of your book, where you wonder whether our engagement with AI will be able to reckon with the issues that you, having studied it very deeply, think it raises.
The question, Ms. Murgia, is: what are the chances that we will be able to arrive at a global consensus for regulating AI?
And what are your thoughts on ensuring that this comes about?
Madhumita Murgia: Yeah, this is the billion, or trillion, dollar question.
Right.
And it's one that we have all the experts in the world putting their heads together to figure out.
And I don't pretend to have all of the answers. But I think the first thing to say is: you're right.
You know, my book brings out a lot of the darkness that surrounds these technologies and the ways in which they fail.
But I would say that, rather than seeing it as a polemic against the technology, or seeing me as a pessimist saying this should be banned or shouldn't exist, I really see this as a realistic, on-the-ground illustration of what's happening in the world to ordinary people.
So the aim is to shift the focus from the utopian pictures created by the technologists inventing it, for us to really consider how it fails in practice: what are the issues that really are embedded in the technology, and what does it feel like to be on the wrong end of it?
And I hope that, armed with that realistic understanding of what's happening around the world and within our own communities, we feel more empowered to participate.
Because otherwise, we're all living in a world where we're either shut out of the technology, because we don't understand it and it feels complicated and tech companies hold the power there, or we feel like there's nothing we can do.
And I want to change that by showing how people can have a voice in this.
So that was my goal.
You asked about regulation, and I think, yes.
So I went to Rome and spent time with religious leaders who are meeting with technologists and governments, and I also went to Bletchley Park, where the UK government convened people around this same question.
And it's very easy to say these parties have no power, and it's true: it's the tech companies that are building these things, the people with the money and the data and the know-how, who are rolling it out.
So this is very much captured by industry.
Right.
But I think it's important for other voices and communities to have views and to plant their flag here.
So the religious community got involved, and it was really interesting because I met the Pope, and he signed this declaration along with leaders from Judaism and Islam.
And people believe it's the first time that has ever happened on any topic, in such a politically fragmented world.
So I think it's really important for us to have these other players who come in and show that we can unite as humanity to come up with the ethical, moral principles that we value and want to preserve as these technologies increasingly become part of our lives.
Where are the red lines?
Otherwise, we're sort of walking into a situation where it's already being used in the military, in wars, you know?
Is that what we want?
And so I think it's important that you have these other voices from civil society, not just religions, of course.
You know, we need activists.
We need educators, philosophers, journalists, and professors like yourself to remind us what matters here, so that it can be encoded into these global rules.
And although the internet is sort of borderless, and AI similarly cannot be contained within borders, it will have different applications in different cultures and markets, in what it will be used for.
How it manifests in India will be very different from how it manifests in the NHS in the UK, or in the privatized system in the US.
Right.
So I think you do need to have different governments who will come up with their own laws.
But I do believe that we can have a sort of global agreement on the lines we won't cross, like we have had with nuclear disarmament and other technologies in the past, and that we can all agree we want to preserve humanity's dignity rather than giving everything over to automation.
So I think we need both.
Badrinath: You have provided an extraordinarily interesting account of the negative implications of AI, and you have also started a conversation on what it takes to come to terms with its negative externalities and how we might regulate it.
As you pointed out, this is not an easy question, but we have to start somewhere.
And you have done a great deal of service through your book in initiating this conversation.
Ms. Murgia, thank you so much for taking the time to talk to us.
I appreciate your time and your insights.
Madhumita Murgia: Thank you so much.
Badrinath: That's it for today.
Join us next week for a new episode of Ideas and Insights.
We would love to hear from you about this episode.
You can email us at Ideas and Insights at wgte.org.
Remember, you can access Ideas and Insights any time by visiting our website, wgte.org/ideasandinsights.
Thanks for joining us today.
We will see you next week.
Until then, goodbye.