AI is a product of racial capitalism, so the question is not, ‘Can we trust AI?’ but, ‘How can we trust those behind AI?’ writes Nabila Cruz de Carvalho.
The advent of artificial intelligence (AI) tools has led to a flurry of headlines about their dangers. ChatGPT, for example, is constantly learning from more information than any one human could hold. This harvesting of data gives it its ability to mimic intelligence, which in turn feeds the worry that it will one day make humans obsolete.
After all, how many headlines have we seen that say AI will take our jobs? That AI will render students unable to handwrite? That it will lead to the extinction of humanity?
It feels inevitable and unstoppable. But is it really? And who do we trust to develop and use AI in society?
AI is already embedded in our lives – in search engines, in social media feed recommendations, in the promise of self-driving cars, in virtual voice assistants like Alexa and Siri, and in many of the apps we use every day. We have grown accustomed to accessing information through social media or ordering our Friday night takeaway through food delivery apps, all of which rely on algorithms to keep these systems of supply and demand running.
So we cannot forget that AI, just like other algorithmic technologies, is a product of racial capitalism. And, as Ruth Wilson Gilmore said: “Capitalism requires inequality and racism enshrines it.” It’s no surprise then that we see algorithms being used in ways that perpetuate inequality, including through racist practices like predictive policing and racial profiling.
We are utterly dependent on such systems to take part in everyday capitalist life – to communicate, to make payments, to get jobs, to be paid. In a world permanently online, we have no choice.
But trust is built through our personal and social histories, so it follows that communities who have good reason to distrust social systems like capitalism are unlikely to be able to trust AI. Scaremongering headlines do little to address the real issue: how can we trust the companies and people behind these technologies?
AI is also being put to good use. Research has shown that it can diagnose breast cancer in images with more accuracy than human doctors. Algorithms in healthcare apps can lead to more personalised care that meets our individual needs. It can improve accessibility to media and technology in everyday life. So when AI is touted as a positive change, there is some truth in this. But often those behind these societal predictions about the impact of AI also stand to profit from it.
AI is a challenge for the present, not the future. We are feeling its effects right now. The labour market has been dramatically changed by the gig economy of delivery apps, which use AI to assign jobs to couriers and drivers. The viral rise of AI-generated images from the Lensa app has taken social media by storm, but it has also revealed encoded inequalities through the sexist images it produces. This is algorithmic bias, and it reflects and exacerbates societal inequalities we already know too well.
Those who created the data that AI learns from and generates content with are deeply biased themselves, consciously or not. We have seen this in predictive policing in the US, where now-discontinued LAPD data-driven 'pre-crime' software relied on data collected in the field by officers – who are, of course, not neutral parties. It's not the AI itself at fault, but a society that built it and fed it racist data.
AI is not magical, but simply a tool we can use. It isn’t a fantastic utopia, nor a sci-fi style dystopia.
What appears to be missing from the conversation is a focus on the people and companies who design the tools, their biases and their intentions. AI is not going to save or destroy us because the question is not whether we can trust AI, but whether we can trust the people and structures behind it. Most of us will never understand the inner workings of the technology we use every day, but we should have processes in place to ensure that it is trustworthy and working to the benefit of society, instead of deepening our already discriminatory systems.
It’s not enough to regulate it or reform it. We need to imagine a radical new approach where AI can be used to dismantle racial capitalism. As Anita Gurumurthy writes:
Whether AI will be an autonomous weapon of social injustice or a powerful agent of autonomous societies depends on the stories we choose to weave, the parameters we decide to make worthy.
AI has been changing society for longer than we realise – but we still have time to make sure those changes are equitable for all.