“But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were 1.5 times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech, which together totaled around 155,800 Twitter posts.

Both papers, recently presented at a prestigious annual conference for computational linguistics, show how natural language processing AI — which is often proposed as a tool to objectively identify offensive language — can amplify the same biases that human beings have. They also demonstrate how the test data that feed these algorithms have baked-in bias from the start.”

The algorithms that detect hate speech online are biased against black people

A new study shows that leading AI models are 1.5 times more likely to flag tweets as “offensive” when they are written by African Americans than when they are written by other users.
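
To make the headline figure concrete, the sketch below shows one way a disparity like that could be computed: the rate at which a model flags tweets from one group, divided by the flag rate for another group. The tweets, group labels, and the flag_rate_ratio helper are hypothetical illustrations, not the researchers' actual data, models, or code.

```python
# Illustrative sketch only: toy data and a made-up grouping, not the
# studies' actual tweets, dialect annotations, or hate speech models.
from collections import defaultdict

def flag_rate_ratio(flags, groups, target, reference):
    """How many times more often tweets in `target` are flagged
    than tweets in `reference`.

    flags:  booleans, True = model flagged the tweet as offensive/hateful
    groups: group label for each tweet, aligned with `flags`
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for is_flagged, group in zip(flags, groups):
        total[group] += 1
        flagged[group] += int(is_flagged)
    target_rate = flagged[target] / total[target]
    reference_rate = flagged[reference] / total[reference]
    return target_rate / reference_rate

# Toy numbers: 3 of 10 AAE tweets flagged vs. 2 of 10 other tweets,
# which reproduces the same 1.5x disparity shape the article reports.
flags = [True] * 3 + [False] * 7 + [True] * 2 + [False] * 8
groups = ["AAE"] * 10 + ["other"] * 10
print(f"{flag_rate_ratio(flags, groups, 'AAE', 'other'):.2f}")  # 1.50
```

A ratio of 1.0 would mean both groups are flagged at the same rate; the studies' reported 1.5 and 2.2 figures correspond to ratios well above that baseline.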
