YouTube says it’s bringing back human moderators who were “put offline” during the pandemic after the company’s AI filters failed to match their accuracy.
Back in March, YouTube said it would rely more on machine learning systems to flag and remove content that violated its policies on things like hate speech and misinformation.
But YouTube told the Financial Times this week that the greater use of AI moderation had led to a significant increase in video removals and incorrect takedowns.
Around 11 million videos were removed from YouTube between April and June, says the FT, or about double the usual rate. Around 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. Again, the FT says that’s roughly double the usual figure: a sign that the AI systems were over-zealous in their attempts to spot harmful content.
As YouTube’s chief product officer, Neal Mohan, told the FT: “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”
This admission of failure is notable. All major online social platforms, from Twitter to Facebook to YouTube, have been increasingly under pressure to deal with the spread of hateful and misleading content on their sites. And all have said that algorithmic and automated filters can help deal with the immense scale of their platforms.
Time and time again, though, experts in AI and moderation have voiced scepticism about these claims. Judging whether a video about, say, conspiracy theories contains subtle nods toward racist beliefs can be a challenge for a human, they say, and computers lack our ability to understand the exact cultural context and nuance of these claims. Automated systems can spot the most obvious offenders, which is undoubtedly useful, but humans are still needed for the finer judgment calls.
Even with more straightforward decisions, machines can still mess up. Back in May, for example, YouTube admitted that it was automatically deleting comments containing certain phrases critical of the Chinese Communist Party (CCP). The company later blamed an “error with our enforcement systems” for the mistakes.
But as Mohan told the FT, the machine learning systems definitely have their place, even if it is just to remove the most obvious offenders. “Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” he said. “And so that’s the power of machines.”