Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable — nothing more than a fantasy to furnish the plots of science fiction stories. Since then, we have made numerous breakthroughs: AI software is now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, AI-powered machines stepped in to fill that void, and AI was used to advance medical research towards better treatments. This ubiquity may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on far more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited one of our worst: bias and prejudice. AI — fraught with discrimination — is being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian. We would be moving forward in time yet backwards in progress, accelerating mindlessly towards a less equitable society.

Dystopia is where we're headed if we don't reverse course. AI is increasingly being used to make influential decisions in people's lives — decisions that are often biased. The root cause is that AI is trained on past data to make future decisions, and when that data carries bias, the AI inherits it. For instance, AI hiring tools are increasingly used to assess job applicants. Trained on past employee data consisting mostly of men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is unacceptable, and that's to say nothing of the many other ways AI can reinforce inequality. In criminal justice, AI trained on historical criminal data is being used to inform sentencing decisions — a practice critics have dubbed the 'tech to prison pipeline'. Because African Americans are overrepresented in the training data, such systems have been shown to recommend harsher sentences for African Americans.
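The mechanism is easy to demonstrate. Below is a minimal sketch using fabricated, clearly labeled toy data (not real hiring records): a "model" that simply learns historical hire rates per group will faithfully reproduce whatever disparity the history contains, with no reference to qualifications at all.

```python
# Toy illustration of bias inheritance: a model trained on biased
# historical decisions reproduces that bias. All data here is simulated.
import random

random.seed(0)

# Simulated historical hiring records: (group, hired).
# Past decisions favored men regardless of merit.
history = ([("man", random.random() < 0.7) for _ in range(500)]
           + [("woman", random.random() < 0.3) for _ in range(500)])

def train(records):
    """'Learn' the historical hire rate per group --
    a stand-in for what a real statistical model absorbs."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The model now "recommends" candidates at the biased historical rates:
# roughly 0.7 for men and 0.3 for women, qualifications never consulted.
print(round(model["man"], 2), round(model["woman"], 2))
```

Nothing in the code is malicious; the discrimination comes entirely from the data it was handed, which is precisely why auditing training data matters.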

To move towards a future with AI that is not only intelligent but fair, we must enact regulation that outlaws discriminatory uses, and ensure that the developers of AI software are diverse, so that a wider range of perspectives is reflected in the software they create.

Perhaps counterintuitively, a world with fair AI would see social justice advanced even further than a world before any AI. The sole reason AI has become unfair is that it has absorbed the biases humans themselves hold. But with fair AI replacing humans in decision making, we would, by definition, reach a state of zero bias — and thus greater equality.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.


  1. So, D. (2021, September 24). Alibaba News Roundup: Tech takes on the outbreak. Alizila. Retrieved August 31, 2022, from https://www.alizila.com/alibaba-news-roundup-tech-takes-on-the-outbreak/?spm=a2c65.11461447.0.0.4bed70c973Xx2s
  2. PricewaterhouseCoopers. (n.d.). AI to drive GDP gains of $15.7 trillion with productivity, personalisation improvements. PwC. Retrieved August 31, 2022, from https://www.pwc.com/hu/en/pressroom/2017/ai.html
  3. Research shows AI is often biased. Here’s how to make algorithms work for all of us. World Economic Forum. (n.d.). Retrieved August 31, 2022, from https://www.weforum.org/agenda/2021/07/ai-machine-learning-bias-discrimination/
  4. Winick, E. (2022, June 17). Amazon ditched AI recruitment software because it was biased against women. MIT Technology Review. Retrieved August 31, 2022, from https://www.technologyreview.com/2018/10/10/139858/amazon-ditched-ai-recruitment-software-because-it-was-biased-against-women/
  5. Hao, K. (2020, April 2). AI is sending people to jail — and getting it wrong. MIT Technology Review. Retrieved August 31, 2022, from https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/