YouTube’s Deradicalization Argument Is Really a Fight About Transparency

The reason we’re all focusing on the algorithm’s effects is that we don’t know how it really works

Chris Stokel-Walker
FFWD


Image: Unsplash/Norwood Themes and Chris Stokel-Walker

YouTube’s algorithm has once again been thrust into the limelight as 2019 comes to an end, thanks to a new study of the code that makes the platform tick. The study was published on arXiv, an academic repository for what are commonly referred to as “pre-print” papers, which haven’t been peer-reviewed.

The paper, authored by programmer Mark Ledwich and a UC Berkeley researcher, Anna Zaitsev, looks at the role YouTube’s algorithm plays in recommending videos to users. Outside the paper, Ledwich has gone further, saying his data — which shows that YouTube’s algorithm is recommending videos from mainstream publishers — demonstrates, contrary to many media headlines, that YouTube’s algorithm “deradicalizes” users.

His co-author, Zaitsev, distanced herself from Ledwich’s comments when approached by FFWD. “We don’t really claim that it does deradicalize, we rather say that it is a leap to say that it does,” she says. “The last few comments about deradicalization are blown out of proportion on Twitter. Our main point is rather in the direction of the traffic.”
