A new study by software nonprofit Mozilla Foundation found that 71 percent of videos study participants deemed objectionable were suggested to them by YouTube’s own recommendation algorithm.
“Research volunteers encountered a range of regrettable videos, reporting everything from COVID fear-mongering to political misinformation to wildly inappropriate ‘children’s’ cartoons,” Mozilla Foundation wrote in a statement.
The largest-ever crowdsourced probe into YouTube’s controversial recommendation algorithm found that the automated software continues to recommend videos that viewers considered “disturbing and hateful,” Mozilla said, including ones that violate YouTube’s own content policies.
The study involved nearly 38,000 YouTube users across 91 countries who volunteered data to Mozilla about the “regrettable experiences” they have had on the world’s most popular video content platform. Overall, participants flagged 3,362 regrettable videos between July 2020 and May 2021, with the most frequent “regret” categories being misinformation, violent or graphic content, hate speech, and spam/scams.
Mozilla said that almost 200 videos that YouTube’s algorithm recommended to volunteers have since been removed from the platform, including several that YouTube determined had violated its own policies.
“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” said Brandi Geurkink, Mozilla’s senior manager of advocacy, in a statement. “Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies.”