Give Over

Artificial Intelligence is what we make it

AI is a product of racial capitalism, so the question is not, ‘Can we trust AI?’ but, ‘How can we trust those behind AI?’ writes Nabila Cruz de Carvalho.

Cash Macanaya/Unsplash

The advent of artificial intelligence (AI) tools has led to a flurry of headlines about their dangers. ChatGPT, for example, is trained on more information than any one human could ever hold. This harvesting of data gives it its ability to mimic intelligence, which in turn feeds the worry that it will one day make humans obsolete.

After all, how many headlines have we seen that say AI will take our jobs? That AI will render students unable to handwrite? That it will lead to the extinction of humanity?

It feels inevitable and unstoppable. But is it really? And who do we trust to develop and use AI in society?

AI is already embedded in our lives – in search engines, in social media feed recommendations, in the promise of self-driving cars, in voice assistants like Alexa and Siri, and in many of the apps we use every day. We have grown accustomed to accessing information through social media or ordering our Friday night takeaway through food delivery apps, all of which rely on algorithms to keep these systems of supply and demand running.

So we cannot forget that AI, just like other algorithmic technologies, is a product of racial capitalism. And, as Ruth Wilson Gilmore said: “Capitalism requires inequality and racism enshrines it.” It’s no surprise then that we see algorithms being used in ways that perpetuate inequality, including through racist practices like predictive policing and racial profiling.


Can we trust AI?

We are utterly dependent on such systems to take part in everyday capitalist life – to communicate, to make payments, to get jobs, to be paid. In a world permanently online, we have no choice.

But trust is built through our personal and social histories, so it follows that communities who have good reason to distrust social systems like capitalism are unlikely to be able to trust AI. Scaremongering headlines do little to address the real issue: how can we trust the companies and people behind these technologies?

AI is also being put to good use. Research has shown that it can diagnose breast cancer in medical images more accurately than human doctors. Algorithms in healthcare apps can lead to more personalised care that meets our individual needs. AI can also make media and technology more accessible in everyday life. So when AI is touted as a positive change, there is some truth in this. But those making these predictions about AI's societal impact often also stand to profit from it.

AI is a challenge for the present, not the future. We are feeling its effects right now. The labour market has been dramatically changed by the gig economy of delivery apps that use AI to assign jobs to couriers and drivers. The viral rise of AI-generated images from the Lensa app took social media by storm, but it also revealed the inequalities encoded in the technology, which produced sexist images. This is algorithmic bias, and it reflects and exacerbates societal inequalities we already know too well.

The people who create the data that AI learns from, and generates content with, are themselves deeply biased, consciously or not. We have seen this in predictive policing in the US, where the LAPD's now-discontinued data-driven ‘pre-crime’ software relied on data collected in the field by officers – who are, of course, not neutral parties. The fault lies not with the AI itself, but with a society that built it and fed it racist data.

People, not power

AI is not magical, but simply a tool we can use. It isn’t a fantastic utopia, nor a sci-fi style dystopia.

What appears to be missing from the conversation is a focus on the people and companies who design the tools, their biases and their intentions. AI on its own is not going to save or destroy us: the question is not whether we can trust AI, but whether we can trust the people and structures behind it. Most of us will never understand the inner workings of the technology we use every day, but we should have processes in place to ensure that it is trustworthy and working to the benefit of society, instead of deepening our already discriminatory systems.

It’s not enough to regulate it or reform it. We need to imagine a radical new approach where AI can be used to dismantle racial capitalism. As Anita Gurumurthy writes:

“Whether AI will be an autonomous weapon of social injustice or a powerful agent of autonomous societies depends on the stories we choose to weave, the parameters we decide to make worthy.”

AI has been changing society for longer than we realise – but we still have time to make sure those changes are equitable for all.
