The problems AI has today go back centuries

In March of 2015, protests broke out at the University of Cape Town in South Africa over the campus statue of British colonialist Cecil Rhodes. Rhodes, a mining magnate who had gifted the land on which the university was built, had committed genocide against Africans and laid the foundations for apartheid. Under the rallying banner of “Rhodes Must Fall,” students demanded that the statue be removed. Their protests sparked a global movement to eradicate the colonial legacies that endure in education.

The events also provoked Shakir Mohamed, a South African AI researcher at DeepMind, to reflect on what colonial legacies might exist in his research as well. In 2018, just as the AI field was beginning to reckon with problems like algorithmic discrimination, Mohamed penned a blog post with his initial thoughts. In it he called on researchers to “decolonise artificial intelligence”—to reorient the field’s work away from Western hubs like Silicon Valley and engage new voices, cultures, and ideas for guiding the technology’s development.

Now in the wake of renewed cries for “Rhodes Must Fall” on Oxford University’s campus, spurred by George Floyd’s murder and the global antiracism movement, Mohamed has released a new paper along with his colleague William Isaac and Oxford PhD candidate Marie-Therese Png. It fleshes out Mohamed’s original ideas with specific examples of how AI challenges are rooted in colonialism, and presents strategies for addressing them by recognizing that history.

How coloniality manifests in AI

Though historical colonialism may be over, its effects still exist today. This is what scholars term “coloniality”: the idea that the modern-day power imbalances between races, countries, rich and poor, and other groups are extensions of the power imbalances between colonizer and colonized.

Take structural racism as an example. Europeans originally invented the concept of races and the differences between them to justify the African slave trade and then the colonization of African countries. In the US, the effects of that ideology can now be traced through the country’s own history of slavery, Jim Crow, and police brutality.

In the same way, the paper’s authors argue, this colonial history explains some of the most troubling characteristics and impacts of AI. They identify five manifestations of coloniality in the field:

Algorithmic discrimination and oppression. The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says.

Ghost work. The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories.

Beta testing. AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 Nigerian and 2017 Kenyan elections before using them in the US and UK. Studies later found that these experiments actively disrupted the Kenyan election process and eroded social cohesion. This kind of testing echoes the British Empire’s historical treatment of its colonies as laboratories for new medicines and technologies.

AI governance. The geopolitical power imbalances that the colonial era left behind also actively shape AI governance. This has played out in the recent rush to form global AI ethics guidelines: developing countries in Africa, Latin America, and Central Asia have been largely left out of the discussions, which has led some to refuse to participate in international data flow agreements. The result: developed countries continue to disproportionately benefit from global norms shaped for their advantage, while developing countries continue to fall further behind.

International social development. Finally, the same geopolitical power imbalances affect the way AI is used to assist developing countries. “AI for good” or “AI for sustainable development” initiatives are often paternalistic. They force developing countries to depend on existing AI systems rather than participate in creating new ones designed for their own context.

The researchers note that these examples are not comprehensive, but they demonstrate how far-reaching colonial legacies are in global AI development. They also tie together what seem like disparate problems under one unifying thesis. “It enables us a new grammar and vocabulary to talk about both why these issues matter and what we are going to do to think about and address these issues over the long run,” Isaac says.

How to build decolonial AI

The benefit of examining harmful impacts of AI through this lens, the researchers argue, is the framework it provides for predicting and mitigating future harm. Png believes that there’s really no such thing as “unintended consequences”—just consequences of the blind spots organizations and research institutions have when they lack diverse representation.

In this vein, the researchers propose three techniques to achieve “decolonial,” or more inclusive and beneficial, AI:

Context-aware technical development. First, AI researchers building a new system should consider where and how it will be used. Their work also shouldn’t end with writing the code but should include testing it, supporting policies that facilitate its proper uses, and organizing action against improper ones.

Reverse tutelage. Second, they should listen to marginalized groups. One example of how to do this is the budding practice of participatory machine learning, which seeks to involve the people most affected by machine-learning systems in their design. This gives subjects a chance to challenge and dictate how machine-learning problems are framed, what data is collected and how, and where the final models are used.

Solidarity. Marginalized groups should also be given the support and resources to initiate their own AI work. Several communities of marginalized AI practitioners already exist, including Deep Learning Indaba, Black in AI, and Queer in AI, and their work should be amplified.

Since publishing their paper, the researchers say, they have seen overwhelming interest and enthusiasm. “It at least signals to me that there is a receptivity to this work,” Isaac says. “It feels like this is a conversation that the community wants to begin to engage with.”



from MIT Technology Review https://ift.tt/33acxJu
