Tech Trends: Are AI Bots Showing Racial and Gender Bias?

Artificial intelligence (AI) is coming. AI is software that can replicate human thought and decision making, and even learn the way we do. True AI hasn't arrived yet, but you may be surprised to find that, just like human beings, computers can already exhibit decision making that is both racist and sexist. We expect the AIs of the future to be human-like in their ability to think and learn. But will they also stereotype people the same way human beings do?

AI and Bias 

We're not trying to trick you with this topic, so let's cut to the chase. People write the computer algorithms behind AI, so the chances are high that both conscious and unconscious bias are reflected in the lines of code that make up AI.

Published research by MIT scientist Joy Buolamwini suggests that AI systems sold by Amazon, IBM, and Microsoft all have gender and racial bias baked right into them. She uncovered these biases in facial recognition programs: when tasked with guessing the gender of a face, the programs performed better on male faces than on female faces, and on lighter-skinned men than on darker-skinned women. When the systems were asked to classify the genders of Oprah Winfrey, Serena Williams, and Michelle Obama, they failed.
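How do researchers surface a bias like this? In broad strokes, audits of this kind score a classifier's answers separately for each demographic group instead of reporting one overall accuracy number. Here's a minimal Python sketch of that idea (our own illustration with made-up records, not Buolamwini's actual code):

```python
from collections import defaultdict

# Hypothetical test records: (true_gender, skin_type, predicted_gender).
results = [
    ("male",   "lighter", "male"),
    ("female", "darker",  "male"),    # a misclassification
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_gender, skin_type, predicted in results:
    group = (true_gender, skin_type)
    total[group] += 1
    correct[group] += int(predicted == true_gender)

# Report accuracy per subgroup, not just overall.
for group in sorted(total):
    print(group, f"accuracy: {correct[group] / total[group]:.0%}")
```

A single headline accuracy figure can look great while one subgroup quietly gets most of the errors; breaking the score out per group is what made the disparity visible.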

How This Works and Why It Matters 

AI is made up of a variety of computer algorithms that help the device learn and improve. Called machine learning algorithms, these lines of code help your Alexa give you better answers based on your prior interactions with the machine. This technology is finding its way into every digital product, and it's getting better all the time, making the machines that power our everyday lives much smarter.
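The core loop is simple: every interaction becomes another data point, and predictions improve as the history grows. A toy Python example (our own illustration, not Alexa's actual algorithm) makes the idea concrete:

```python
from collections import Counter

history = Counter()  # counts of past requests, "learned" over time

def record_interaction(request: str) -> None:
    # Each interaction becomes one more training example.
    history[request] += 1

def predict_next() -> str:
    # Guess the user's most frequent past request.
    return history.most_common(1)[0][0] if history else "no data yet"

for request in ["weather", "news", "weather", "music", "weather"]:
    record_interaction(request)

print(predict_next())  # -> "weather"
```

Notice that the machine's "knowledge" is nothing but the data it has seen. That's exactly why biased data produces biased machines.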

But here's the problem: when the machines get things wrong, it can have more dire consequences than your navigation system sending you to the wrong place. The police and Homeland Security now use these tools to make decisions, and facial recognition software is used to help spot criminals in a crowd of people. The problem? It produces a high number of false positives, like the day Amazon's facial recognition AI inadvertently matched 28 members of Congress with the mug shots of criminals.
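Why do false positives happen? Face-matching systems typically reduce each face to a list of numbers and declare a match whenever two faces are similar enough. Set the threshold too loose, and strangers start "matching." A simplified sketch (made-up vectors, not Amazon's system):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical face embeddings (real systems use hundreds of dimensions).
probe = [0.9, 0.1, 0.4]                  # face seen in a crowd
mugshots = {
    "person_a": [0.88, 0.12, 0.41],      # genuinely the same person
    "person_b": [0.70, 0.30, 0.50],      # a different person entirely
}

THRESHOLD = 0.95  # too loose: person_b clears it too (a false positive)
for name, vec in mugshots.items():
    score = cosine_similarity(probe, vec)
    if score >= THRESHOLD:
        print(f"MATCH: {name} (similarity {score:.3f})")
```

Run this and both faces "match," even though person_b is a stranger. In a policing context, that kind of error puts an innocent person under suspicion.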

Then there's the 2017 research published in Science showing that computers that teach themselves English through machine learning become prejudiced against African Americans. When these systems trawled the web to cull data to learn English, they also picked up stereotyped biases around gender and careers, associating names of apparent European descent with pleasant terms and African American names with unpleasant terms. That's because people say and write bad things on the Internet about these groups, so the computers scraped that data, added it to the mix, and developed a bias in the process.
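The study measured this with word embeddings: the model places every word in a vector space, and bias shows up when certain names sit closer to "pleasant" words than to "unpleasant" ones. Here's a toy version with made-up 2-D vectors (real embeddings have hundreds of dimensions learned from web text, and the study's actual test is more elaborate):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings; a real model learns these from web text.
vectors = {
    "pleasant":   (0.9, 0.1),
    "unpleasant": (0.1, 0.9),
    "name_a":     (0.8, 0.2),  # the model has placed it near "pleasant"
    "name_b":     (0.2, 0.8),  # the model has placed it near "unpleasant"
}

# Positive score = closer to "pleasant"; negative = closer to "unpleasant".
for name in ("name_a", "name_b"):
    score = (cosine(vectors[name], vectors["pleasant"])
             - cosine(vectors[name], vectors["unpleasant"]))
    print(f"{name}: association score {score:+.2f}")
```

Nobody wrote a prejudiced rule here; the bias lives entirely in where the training data placed the words.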

It seems our computers and the AI of the future are vulnerable to both conscious and unconscious biases from the people who write the computer code and from the data those systems learn from. Making sure our developer teams are inclusive and aware of racial and gender stereotypes will help ensure that the AI of the future plays fair and square.

Looking to Hire IT Employees?

Contact The Custom Group of Companies and we will help with all your IT staffing needs!
