We’ve put our trust in tech to solve the world’s biggest challenges: to mitigate climate change, to disrupt entrenched systems, to give voice to the people. But what does it matter if the tech we rely on is untrustworthy, unaccountable and full of bias?
It’s this simple contradiction that is now driving a movement: a call to action to redefine and rebuild algorithms so they serve humanity as a whole, not just the privileged few. Founded by Joy Buolamwini, the Algorithmic Justice League is The Index Award 2021 Winner in the Community category.
It all started when Buolamwini, an algorithmic researcher and future founder of the Algorithmic Justice League, discovered that widely used computer-vision software couldn’t consistently detect her face. In the end, she had to put on a white mask for it to recognise her.
“Why am I wearing a white mask to be detected? I have lighter-skinned colleagues who seem to use this just fine. Is it because of the lighting conditions? What’s going on?” Buolamwini shared in The Open Mind podcast. “That’s really when I started exploring facial analysis technology, which is being powered by AI techniques.”
Buolamwini began diving deeper into the history of algorithmic software and running tests on systems from companies like IBM and Microsoft. The systems scored high on overall accuracy, but with devastating disparities: lighter-skinned males had an error rate of around 1%, whereas darker-skinned women faced error rates of up to 47%.
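The gap Buolamwini measured is a textbook case of an aggregate metric hiding subgroup harm. As a minimal sketch (with invented counts chosen only to mirror the roughly 1% vs. 47% figures, not her actual benchmark data), computing per-group error rates alongside overall accuracy makes the disparity plain:

```python
# Hypothetical confusion counts illustrating how one aggregate accuracy
# figure can hide large per-group error-rate gaps. The numbers are
# invented for illustration, not Buolamwini's actual audit data.
groups = {
    # group: (correct classifications, errors)
    "lighter-skinned men":  (990, 10),  # ~1% error rate
    "darker-skinned women": (53, 47),   # ~47% error rate
}

total_correct = sum(correct for correct, _ in groups.values())
total = sum(correct + errors for correct, errors in groups.values())
print(f"overall accuracy: {total_correct / total:.1%}")  # looks impressive

# Disaggregating by group reveals what the headline number conceals.
for name, (correct, errors) in groups.items():
    rate = errors / (correct + errors)
    print(f"{name}: error rate {rate:.1%}")
```

With these made-up counts the overall accuracy is still close to 95%, even though nearly half of all darker-skinned women are misclassified, which is exactly why audits must report per-group results.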
Computer vision is proving to have a “coded gaze”, as Buolamwini calls it, moulded by biased datasets. For decades, AI has been trained on images of a racist and unjust past and taught to ignore faces that don’t fit a human-built norm. The outcome is that people on the margins are being denied job opportunities and loans; even access to their own homes or healthcare is on the line. It can even turn innocent people into suspects.
“It’s not just a question of how accurate these systems are. It’s a question of how they’re being used.”
That’s why AJL is building new and improved datasets and offering companies audits of their AI systems. But that’s not all they do: to create a true movement towards AI justice, they combine art and education, not only to raise awareness and amplify marginalised voices but also to shift societies. Because “it’s not just a question of how accurate these systems are. It’s a question of how they’re being used,” as Buolamwini says.
“In the US, there are no federal regulations for facial recognition technology (...) So you have a space where companies can sell systems to government entities and other types of organisations without any kind of oversight.”
Even though big players like Microsoft, Amazon and IBM have placed self-imposed restrictions on selling AI solutions to police forces, governments still struggle to legislate on the topic. There’s a need for affirmative consent, benchmarks for datasets, ways to contest decisions made by AI, and so on.
But the need for regulation and policy can’t be confined to individual companies or national borders, as, once again, marginalised and low-resource areas will bear the side effects of coded bias.
“What I’m starting to see is almost like a bit of a parallel to the transatlantic slave trade.”
“You're starting to see this with facial recognition systems right now, where you have Chinese companies going to African nations,” Buolamwini explains to The Open Mind. “What I'm starting to see is almost like a bit of a parallel to the transatlantic slave trade, where you have bodies but now digital bodies being sourced and exploited.”
Luckily, AJL as an organisation has gained a following. Joy’s TED Talk on the topic has more than 100,000 views, and the ‘Coded Bias’ documentary, which showcases the implications of AI as well as AJL’s work, can be streamed globally on Netflix. They’re working with partners like Olay to double the number of women in STEM who can help decode biases, and they offer their services to companies that may be sitting on harmful AI. They even run the Safe Face Pledge, meant to prevent lethal use of facial analysis technology.
Because without disruption, unreliable and unregulated AI could become one of the biggest threats to humankind. It can fuel governments’ urge to surveil their people, or amplify already unjust systems. It can turn humans into digital currency and force them to fight for their lives. It can cause thousands of human rights violations in seconds and bring societal progress to a halt.
AJL is the first movement to push this challenge into the spotlight, and its goal is profound change: “It’s about having agency regarding the processes that govern our lives,” as AJL puts it. “As companies, governments, and law enforcement agencies use AI to make decisions about our opportunities and freedoms, we must demand that we are respected as people.”