
The problems AI has today go back centuries

In March of 2015, protests broke out at the University of Cape Town in South Africa over the campus statue of British colonialist Cecil Rhodes. Rhodes, a mining magnate who had gifted the land on which the university was built, had committed genocide against Africans and laid the foundations for apartheid. Under the rallying banner of “Rhodes Must Fall,” students demanded that the statue be removed. Their protests sparked a global movement to eradicate the colonial legacies that endure in education.

The events also provoked Shakir Mohamed, a South African AI researcher at DeepMind, to reflect on what colonial legacies might exist in his research as well. In 2018, just as the AI field was beginning to reckon with problems like algorithmic discrimination, Mohamed penned a blog post with his initial thoughts. In it he called on researchers to “decolonise artificial intelligence”—to reorient the field’s work away from Western hubs like Silicon Valley and engage new voices, cultures, and ideas for guiding the technology’s development.

Now in the wake of renewed cries for “Rhodes Must Fall” on Oxford University’s campus, spurred by George Floyd’s murder and the global antiracism movement, Mohamed has released a new paper along with his colleague William Isaac and Oxford PhD candidate Marie-Therese Png. It fleshes out Mohamed’s original ideas with specific examples of how AI challenges are rooted in colonialism, and presents strategies for addressing them by recognizing that history.

How coloniality manifests in AI

Though historical colonialism may be over, its effects still exist today. This is what scholars term “coloniality”: the idea that the modern-day power imbalances between races, countries, rich and poor, and other groups are extensions of the power imbalances between colonizer and colonized.

Take structural racism as an example. Europeans originally invented the concept of race and a hierarchy between races to justify first the African slave trade and then the colonization of African countries. In the US, the effects of that ideology can now be traced through the country’s own history of slavery, Jim Crow, and police brutality.

In the same way, the paper’s authors argue, this colonial history explains some of the most troubling characteristics and impacts of AI. They identify five manifestations of coloniality in the field:

Algorithmic discrimination and oppression. The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate decisions and trained on data from a racially unjust society end up reproducing those racist outcomes (the sketch below this list illustrates the mechanism). But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says.

Ghost work. The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories.

Beta testing. AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 Nigerian and 2017 Kenyan elections before using them in the US and UK. Studies later found that these experiments actively disrupted the Kenyan election process and eroded social cohesion. This kind of testing echoes the British Empire’s historical treatment of its colonies as laboratories for new medicines and technologies.

AI governance. The geopolitical power imbalances that the colonial era left behind also actively shape AI governance. This has played out in the recent rush to form global AI ethics guidelines: developing countries in Africa, Latin America, and Central Asia have been largely left out of the discussions, which has led some to refuse to participate in international data flow agreements. The result: developed countries continue to disproportionately benefit from global norms shaped for their advantage, while developing countries continue to fall further behind.

International social development. Finally, the same geopolitical power imbalances affect the way AI is used to assist developing countries. “AI for good” or “AI for sustainable development” initiatives are often paternalistic. They force developing countries to depend on existing AI systems rather than participate in creating new ones designed for their own context.

The researchers note that these examples are not comprehensive, but they demonstrate how far-reaching colonial legacies are in global AI development. They also tie together what seem like disparate problems under one unifying thesis. “It enables a new grammar and vocabulary to talk about both why these issues matter and what we are going to do to think about and address these issues over the long run,” Isaac says.
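To make the first manifestation concrete, here is a minimal, hypothetical sketch (not from the paper, and with all names and numbers invented): a model trained on historically biased decisions reproduces the disparity even though it never sees group membership directly, because a proxy feature carries the same information.

```python
# Toy illustration: a classifier trained on biased historical outcomes
# re-learns the bias through a proxy feature, without ever seeing the group label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # 0 = majority, 1 = marginalized (hypothetical)
proxy = group + rng.normal(0, 0.3, n)            # e.g. a location feature correlated with group
merit = rng.normal(0, 1, n)                      # "true" qualification, identical across groups
# Historical decisions penalized group 1 regardless of merit.
historical_label = (merit - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([merit, proxy])              # group itself is never used as a feature
model = LogisticRegression().fit(X, historical_label)
pred = model.predict(X)

print("approval rate, majority:    ", pred[group == 0].mean())
print("approval rate, marginalized:", pred[group == 1].mean())
# The gap persists: the proxy lets the model replicate the historical disparity.
```

The point of the sketch is simply that removing the sensitive attribute from the inputs does not remove the bias when the training labels themselves encode an unjust history.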

How to build decolonial AI

The benefit of examining harmful impacts of AI through this lens, the researchers argue, is the framework it provides for predicting and mitigating future harm. Png believes that there’s really no such thing as “unintended consequences”—just consequences of the blind spots organizations and research institutions have when they lack diverse representation.

In this vein, the researchers propose three techniques to achieve “decolonial,” or more inclusive and beneficial, AI:

Context-aware technical development. First, AI researchers building a new system should consider where and how it will be used. Their work also shouldn’t end with writing the code but should include testing it, supporting policies that facilitate its proper uses, and organizing action against improper ones.

Reverse tutelage. Second, they should listen to marginalized groups. One example of how to do this is the budding practice of participatory machine learning, which seeks to involve the people most affected by machine-learning systems in their design. This gives subjects a chance to challenge and dictate how machine-learning problems are framed, what data is collected and how, and where the final models are used.

Solidarity. Marginalized groups should also be given the support and resources to initiate their own AI work. Several communities of marginalized AI practitioners already exist, including Deep Learning Indaba, Black in AI, and Queer in AI, and their work should be amplified.

Since publishing their paper, the researchers say, they have seen overwhelming interest and enthusiasm. “It at least signals to me that there is a receptivity to this work,” Isaac says. “It feels like this is a conversation that the community wants to begin to engage with.”



from MIT Technology Review https://ift.tt/33acxJu
