Bias is intrinsic to AI, but it can be mitigated

Sanas, a US tech startup, has developed software that alters a person’s accent. The demo on the company’s website states plainly that “Sanas is tuned for Indian and Filipino speakers.” When I click the “With Sanas” slider after recording my “Hindi” voice, my speech sounds slightly robotic, but distinctly white American.

Sanas wants call center employees, regardless of where they are from, to sound white and American. The company asserts that this improves customer service and makes call center employees more intelligible to American consumers.

Sanas’ software is obviously biased, though, as it consistently promotes the notion that white American accents are better than foreign accents.

If voice accent seems like a minor instance of AI bias, read on.

According to a 2019 study published by the US National Institute of Standards and Technology (NIST), 189 AI-based face recognition algorithms from 99 developers, including Microsoft, Cognitec, and Megvii, falsely matched Asian and African-American faces 10 to 100 times more often than Caucasian faces. The study, which tested the algorithms against more than 18 million images of almost 8.5 million people, also found that Native Americans suffered the highest error rates.

This is a severe case of AI bias with real-world ramifications: false positives can mean racial profiling, discrimination, exclusion, and even jail time.

AI is now being used in a wide range of serious business applications: customer service, marketing and sales, product development, operations, risk management, fraud detection, cybersecurity, human resources, finance, healthcare, manufacturing, transportation, and energy.

For example, Amazon uses AI to detect fraud, enhance its shipping system, and offer customized product recommendations. Google uses AI to improve its search engine, create new products and services, and automate processes. Microsoft uses AI to improve its cloud computing platform, offer new productivity tools, and protect its users from online threats. Walmart uses AI to improve its supply chain, manage its inventory, and optimize its pricing.

The technical, social, and ethical biases of AI

Yet, as with any transformative technology, AI is not without its flaws. One of its most pressing challenges is bias, which can lead to unfair and discriminatory outcomes.

Because of AI’s exponential growth, the consequences will only get bigger, deeper, and wider.

AI bias goes beyond simple technical glitches. Biases rooted in social and ethical norms can do far more serious harm. AI systems are increasingly used to make crucial decisions about people’s lives, such as who is granted access to healthcare, loans, and jobs. People of color, women, and other marginalized groups can suffer disproportionately from these decisions, with unjust and destructive consequences for their lives.

How to reduce bias in AI

As with any complex problem, AI bias has no simple solution. Organizations can, however, take a few steps to reduce the risks:

  1. Apply debiasing algorithms, which are designed to find and remove bias from AI systems. One family of techniques, for example, reweights training data or model features to lessen bias against particular groups (see the sketch after this list).
  2. Conduct routine bias audits of AI systems, for instance by testing them with a wide range of inputs and looking for patterns in their outputs (also illustrated below).
  3. Train staff to spot and avoid bias in AI systems. This training should cover both the technical and the ethical implications of AI bias.
  4. Involve a diverse range of stakeholders in the design and deployment of AI systems. This makes it easier to ensure that the systems are built and operated fairly and equitably.
  5. Be open about how your organization uses AI and how it is reducing bias. You can do this by publishing documentation on your AI systems and making data publicly accessible.
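
The first two steps lend themselves to a short illustration. Below is a minimal sketch in Python, on entirely invented loan-approval data, of the audit in step 2 (comparing selection rates across groups) and of one concrete debiasing technique for step 1: reweighing (Kamiran and Calders, 2012), which weights training examples so that group membership and outcome become statistically independent before a model is retrained. Every name and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical audit data: one row per applicant, 1 = approved.
group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
y_pred = (rng.random(1000) < np.where(group == "A", 0.60, 0.35)).astype(int)

# Step 2: a routine bias audit. Compare approval (selection) rates across
# groups; the min/max ratio is the classic "disparate impact" check, and
# values below ~0.8 are commonly treated as a red flag.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
print("selection rates:", rates)
print("disparate impact ratio:", min(rates.values()) / max(rates.values()))

# Step 1: reweighing assigns each (group, label) combination a training
# weight so that group and label become statistically independent; a model
# retrained on the weighted data cannot simply reproduce the historical
# correlation between group membership and outcome.
y_true = y_pred  # stand-in historical labels, purely for this sketch
weights = np.zeros(len(y_true))
for g in np.unique(group):
    for label in (0, 1):
        mask = (group == g) & (y_true == label)
        if mask.any():
            expected = (group == g).mean() * (y_true == label).mean()
            weights[mask] = expected / mask.mean()  # >1 boosts rare combos
print("mean weight by group:",
      {g: round(weights[group == g].mean(), 3) for g in np.unique(group)})
```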

Make AI systems more explainable and transparent

To earn public trust in their AI systems, organizations must be transparent about how they use and implement AI. This can be done by publishing documentation on their AI systems, making data available for public scrutiny, and answering questions from the public about their use of AI.
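
One widely adopted form of such documentation is the “model card” (Mitchell et al., 2019): a short, structured summary of what a model is for, what data it was trained on, and how it performs across groups. Below is a minimal sketch; the class shape and every field value are invented for illustration.

```python
from dataclasses import dataclass, field
import json

# A minimal, hypothetical "model card": structured public documentation
# for an AI system, in the spirit of Mitchell et al. (2019), "Model Cards
# for Model Reporting". All values below are invented.

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    out_of_scope_uses: str
    performance_by_group: dict = field(default_factory=dict)

card = ModelCard(
    name="loan-approval-v3",
    intended_use="Rank consumer loan applications for human review.",
    training_data="2018-2022 applications; demographics self-reported.",
    out_of_scope_uses="Fully automated denials; employment screening.",
    performance_by_group={"group A": 0.91, "group B": 0.84},  # e.g. AUC
)

# Publishing the card (for example, as JSON on a public page) is one
# concrete way to open an AI system to outside scrutiny.
print(json.dumps(card.__dict__, indent=2))
```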

In addition to these steps, organizations need to make a commitment to fairness and equity. They need to create a culture where everyone feels comfortable speaking up about bias and where everyone feels valued and respected.

To be fair to Big Tech, some prominent names are working to mitigate AI bias. Google has developed tools and resources to help developers and organizations address bias in their AI systems, including the TensorFlow Fairness Indicators and the Google AI Principles. Microsoft has published extensive research and guidance on fairness in machine learning and originated the open-source Fairlearn toolkit for assessing and improving the fairness of models. Amazon offers SageMaker Clarify, which helps developers detect bias in training data and models and explain model predictions.
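
To give a flavor of what these toolkits do, the sketch below uses Fairlearn’s MetricFrame to break a model’s accuracy and selection rate down by a protected attribute. The predictions and attribute values are invented; in a real audit they would come from a deployed model and its actual users.

```python
# Assumes `pip install fairlearn scikit-learn`. Data is invented.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 500)            # hypothetical ground truth
y_pred = rng.integers(0, 2, 500)            # hypothetical model output
sex = rng.choice(["female", "male"], 500)   # protected attribute

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # each metric broken down by group
print(mf.difference())  # the largest between-group gap for each metric
```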

As AI becomes more widely used, it is important for all organizations to take similar mindful steps to ensure that their AI systems are fair, equitable, transparent, and explainable.

The use of AI can lead to biased decision-making, but the problem is not insurmountable. By mitigating the risks, organizations can help ensure that AI is used for the greater good and that its benefits do not come at the expense of already marginalized groups.

Dev Chandrasekhar
