Members of parliament have condemned YouTube for "promoting radicalisation".
Honestly, that's just absurd.
In a hearing with representatives from YouTube, Facebook and Twitter, the Home Affairs Select Committee has accused them of not "doing their jobs" and underestimating the scale of the problem.
But do they grasp the scale we're talking about?
Four hundred hours of content are uploaded to YouTube every minute. Five hundred million tweets are posted every day. Facebook has 2.7 billion users on its platform, sharing billions of pieces of content every day. Personally, I don’t find it "hard to understand" why these platforms don’t have a 100% success rate.
A lack of precedents
These companies are powering some of the most groundbreaking technology. Their algorithms and machine learning are top-tier. But machine learning can only learn so fast. It needs examples in order to form rules and detect certain types of content, and constant tinkering to make sure no bias sneaks in. We're still at a stage where the human touch is a necessary part of the process, and there are only so many content moderators you can hire at the scale we're talking about.
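To make the point about examples concrete, here is a minimal sketch. It is not how YouTube or Facebook actually build their systems; the scikit-learn pipeline and the hand-labelled snippets below are purely illustrative assumptions. The model can only flag content that resembles the examples it has already been shown:

```python
# Toy illustration: a text classifier only "knows" what its labelled examples teach it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled training data (1 = violates policy, 0 = fine).
texts = [
    "join our movement and attack them",    # 1
    "great tutorial, thanks for sharing",   # 0
    "we must wipe them out",                # 1
    "lovely holiday vlog from new zealand", # 0
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the "rules" are learned only from these examples

# Content phrased differently from anything in the training set may slip
# through: the model has no precedent for it yet.
print(model.predict_proba(["a totally new euphemism for violence"])[0])
```

The point of the sketch is simply that, like case law, the system gets better each time a new kind of violation is identified, labelled and fed back in.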
The irony seems to be lost on the committee that the processes underlying machine learning have clear parallels with the law; just as the law develops, case by case, from the abstract into a coherent set of principles to be repeatedly applied to real-life scenarios, so does machine learning work itself out over time when faced with new challenges and scenarios. A perfect system can't be created overnight.
Don't get me wrong: mistakes of this severity are unacceptable. We should be constantly challenging these platforms to do better. But we should be doing the same for the mainstream media and ourselves. The Christchurch video was spread by several mainstream media organisations, including the Daily Mail, and 70% of its views on Twitter were from videos shared by traditional media outlets.
Where do we draw the lines for the propagation of sensitive content in the context of communicating news? Some content is plain illegal, some content breaks the platforms’ policies, but some content can fall in a grey area that's not always easy to define.
Finger-pointing is not the way forward – collaboration is
Rather than shifting responsibility on to social media, wouldn’t it be more productive to approach the problem constructively? Surely a more collaborative dialogue would be beneficial to all parties.
If the government were to take absolute control, what would be the implications for freedom of speech on social media? It’s a slippery slope, for sure. In the case of the Christchurch video, there were different types of shares: those intentionally promoting the content, those condemning it and those sharing it for awareness as a news item. Would it really be fair to penalise them all in the same way? In any case, banning voices might tackle a problematic symptom, but it wouldn’t address the root cause.
Social media platforms have been self-policing based on their own content policies because there’s no industry-wide regulation established for these types of situations. The crux of the issue is user protection. For that, we need people with an intuitive understanding of the digital space to call the shots: a regulatory body that moves as fast as the digital world changes and understands the strengths and limitations of tech. And the social media giants seem to agree with this, even if it's only for self-serving reasons: Facebook’s public policy director admitted in the hearing that there were areas where they could benefit from more government guidance.
Social media is not the enemy
By the way, framing extremist content as a way for these platforms to profit from advertising is absolute nonsense. No advertiser is going to intentionally pay for ads alongside extremist content. Brand safety is incredibly tight when it comes to YouTube, and advertisers won’t hesitate to pull spend from the entire platform if they have an inkling of their messaging appearing anywhere they don’t want it to. It’s not to YouTube's advantage to keep problematic content on its platform at all.
The truth is that there’s a trade-off to take into account with content filters. As a Twitter executive has previously explained, a filter aggressive enough to catch every piece of extremist content would inevitably catch posts from politicians too. Not everyone would agree that banning politicians would be an acceptable trade-off in this scenario.
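As a purely illustrative sketch of that trade-off (the posts and "risk" scores below are invented, not the output of any real platform's model), a single moderation threshold forces a choice between missing extremist content and sweeping up legitimate speech:

```python
# Toy illustration of the moderation trade-off: one threshold, two kinds of error.
# Scores are hypothetical "risk" outputs from a content classifier (0 = benign, 1 = extremist).
posts = {
    "explicit call to violence":        0.97,
    "news report quoting the attacker": 0.72,
    "politician's hard-line speech":    0.55,
    "ordinary holiday vlog":            0.03,
}

for threshold in (0.9, 0.5):
    removed = [post for post, score in posts.items() if score >= threshold]
    print(f"threshold {threshold}: removed {removed}")

# A strict (high) threshold misses some extremist content; a loose (low) one
# starts sweeping up news reporting and political speech along with it.
```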
Aside from the technical issues at play with content moderation, we’re also looking at deeper underlying societal and political trends represented by more context-dependent, nuanced and inconsistent content. Social media is stuck between a rock and a hard place, facing backlash from all sides, whatever it does.
Google, Facebook and Twitter are undeniably learning from their mistakes and pouring huge amounts of resources into moderation. Progress is definitely being made. Unfortunately, people have always looked for ways to get around the system – whether it’s the law or a website’s policies. It’s when these gaps are found that social media platforms can implement new solutions to make sure no-one can subvert their filters again. Machine learning is a powerful tool, if we give it time to do what it does best: learn.
As Microsoft has put it, everyone has a role to play in this and unity is key. Against all odds, these companies are beginning to understand that they need to collaborate to do better, such as by sharing versions of problematic content across platforms to catch it more quickly. This is entirely unprecedented in the history of these competitors and immensely encouraging to witness. Hopefully, the government will follow suit.
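For a sense of how that cross-platform sharing can work, here is a minimal sketch. It is a deliberate simplification: industry schemes generally share perceptual fingerprints rather than exact file hashes so that re-encoded or lightly edited copies still match, and the shared database below is a hypothetical in-memory stand-in.

```python
# Minimal sketch of cross-platform sharing of known problematic content.
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a fingerprint of an uploaded file's bytes (exact hash, for simplicity)."""
    return hashlib.sha256(content).hexdigest()

# A shared industry list of fingerprints for known problematic content,
# contributed by multiple platforms (hypothetical stand-in).
shared_blocklist = {fingerprint(b"<bytes of a known extremist video>")}

def should_block(upload: bytes) -> bool:
    """Any platform consulting the shared list can stop a re-upload on sight."""
    return fingerprint(upload) in shared_blocklist

print(should_block(b"<bytes of a known extremist video>"))   # True
print(should_block(b"<bytes of an ordinary holiday vlog>"))  # False
```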
Daniel Gilbert is chief executive of Brainlabs