Social media companies have been slow to respond to terrorism and extremism on their platforms, with deadly consequences. Fusilier Lee Rigby was murdered on a London street after his killers spread messages of hate on Facebook, Twitter has inspired the easily influenced to become jihadi fighters, and videos posted to YouTube have fuelled the rise of a new generation of white supremacists.
While Silicon Valley has the technological tools for detecting all flavours of extremism – services have added more moderators, better flagging systems and machine learning – the Silicon Valley top brass only responded when the public and governments applied pressure. Nation states around the world have reacted angrily to the proliferation of extremist propaganda and hate speech on social platforms, resulting in stricter controls on the firms that own them.
“We have to have these companies being responsible,” says David Stupples, a professor at City, University of London’s school of computer science. “But if we then push for more regulation and more penalties for the tech companies then they will try to circumvent them”.
Stupples believes in self-regulation of social media but says technology companies and governments place the “red line” in different positions. In March, MPs at a session questioning technology firm representatives complained they could find hate speech online “within seconds” and accused companies of “commercial prostitution”. At the same session, Facebook defended not deleting a page called ‘Ban Islam’ as it wasn’t “expressly… designed to attack Muslims”.
The rise in neo-Nazism, inspired by websites such as The Daily Stormer, has raised the question of where technology firms draw the line on hate. A Stormer article abusing 32-year-old Heather Heyer, who died after being deliberately hit by a driver during a peaceful protest in Charlottesville, prompted a backlash against the site, with the likes of Cloudflare, GoDaddy and Google all pulling support. But the balancing of freedom of speech, and what is considered offensive, is complex. Online privacy advocates at the Electronic Frontier Foundation argue that top-level removal of content on the internet has little transparency and that due process should be followed, rather than companies simply reacting to headlines.
Through their terms of service, private companies have been forced to act as de facto law enforcement agencies, policing their own networks for illegal content. The link to the Daily Stormer article was shared 65,000 times on Facebook, with the company deciding to remove link posts that didn’t include an additional comment in which users could decry the story’s message. In response to the increased pressure on tech firms, Mark Zuckerberg said his company would “always” take down posts that contain “hate crimes or acts of terrorism”.
Under the guidance of Angela Merkel, Germany is one of the first countries to pass a specific law regulating hate speech online. Although it has not yet been signed into operation by president Frank-Walter Steinmeier, the Netzwerkdurchsetzungsgesetz (NetzDG) legislation requires social media companies to remove illegal content within 24 hours of it being reported to them. Where a post on a social network has been flagged as offensive but doesn’t appear to be directly against the law, a company will have seven days to decide what to do about it. A provider failing to comply with the law could be fined between €5 million (£4.5m) and €50m (£45.8m).
“There is no debate in Germany that these forms of terroristic speech are a crime,” Bernd Holznagel, a professor in telecommunications law at the University of Münster, says. However, the law isn’t just a reaction to terrorism. Its statutes, translated into English here, include defamation of the country’s president, its flag and coat of arms; approval of illegal acts; and defamation of religions.
These areas already exist in German criminal law but the NetzDG makes it easier for the country’s authorities to enforce deletion. If content infringes one of these criminal provisions, or the others included in the law, a takedown notice can be issued to the company hosting it. “The effect of these statutes is naturally a chilling effect and these notice and takedown procedures in general have a chilling effect on freedom of speech,” Holznagel says. The pressure such laws could put companies under, he continues, is cause for concern. “They [companies] need more than 24 hours to balance whether the content is protected strongly by a free speech or if it hurts the rights of other overwhelmingly. And that, in practice, is a very difficult situation because you have to look at the context of the speech”. Nor does the law contain any appeals mechanism. David Kaye, a UN special rapporteur on freedom of expression, sent a detailed letter to German lawmakers stating NetzDG was “vague”, “ambiguous” and could result in companies “over-regulating expression” to avoid fines.
At the time the law was passed Heiko Maas, the federal minister of justice and consumer protection, released a statement saying freedom of expression can include “ugly utterances” but it “ends where criminal law begins”. Maas continued: “With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all”. Companies also have to report the number of complaints they’ve received twice a year and how they dealt with them. “I think it will be prohibitive in terms of free speech, we’re also privatising the work that should be done by the police and the courts,” says Nico Lumma, an investor and chairperson of digital think tank D64. “Now private platforms will have to decide what is right or wrong; this is why we have the judicial system”.
And deciding what is right and wrong is a big problem. When it comes to images of child sexual abuse, social media platforms and law enforcement have successfully worked together to remove content from networks. This shows the technological solutions exist, but they only work when everyone agrees on what should be removed; when it comes to defining hate speech, the “red line” has proven more elusive. Yet the principles of the new German legislation have been copied by other nations around the world, including the UK. The MPs who grilled social media companies decided a similar approach to Germany’s may be needed. They concluded a “stronger law” and “system of fines” should be considered for companies that don’t remove hate speech and extremism from their networks.
“Social media companies that fail to proactively search for and remove illegal material should pay towards costs of the police doing so instead,” the committee of MPs said. Many of the suggestions echo what has been seen in Germany. Working with the French government, Theresa May has said companies that fail to “remove harmful content from their networks” will face a legal liability forcing them to do so. If they don’t comply they will face fines. Separately, she has said “big companies” should be controlled by “international agreements”.
Additionally, the UK has announced it is looking to create an internet ombudsman to deal with social media complaints and enforce laws on social media companies. The Guardian has reported a new internet safety strategy will be published by the Department for Digital, Culture, Media and Sport this year – including the levy on social media companies. Labour MP Yvette Cooper responded by saying social media companies have been too slow to act on “illegal content” on their platforms, adding it was time for government “to get on with this”.
Governments stepping in to regulate what is posted online can be seen in one of two ways: either as a failure by technology firms, or as overstepping by lawmakers. In its most extreme form, nations control what users can and can’t see at a higher technical level than posts on websites. China and North Korea, for instance, have well-known censorship systems, which stop users from accessing entire websites. North Korea has only 28 domain names registered under its .kp country code, and most of them celebrate Kim Jong-un.
More governments have moved to control the freedom of internet users online, even for those who use privacy-enhancing tools. Both China and Russia have introduced bans on virtual private networks to stop people accessing prohibited content. (Within China, Apple removed VPNs from its app store at the behest of the government). In Brazil, judges have ordered WhatsApp to be blocked on multiple occasions. Across the world, the encryption protocols used to keep data safe are consistently under attack from politicians who want evermore access to people’s data.
“Censorship is another word for information control,” says Shehar Bano, a postdoctoral researcher at University College London. Bano’s work on internet censorship has highlighted more than 60 countries with some form of state-sponsored censorship online, including Pakistan, where the whole of YouTube was blocked because of one blasphemous video. Bano says censorship can take place at a level where content is completely blocked, but it can also be more subtle. “The Chinese government has recruited a large number of people who are paid to inject comments into social media posts,” she explains. “So if the discussion is becoming anti-government they put in comments to neutralise the kind of conversation either [by saying] something that is neutral or something that glorifies the government”. The creeping rise of government control over online content – whether outright blocking like North Korea’s or regulation at a lower level – can lead to it being seen as normal. “In China you get a lot of people, in their thirties especially, that just accept the controls,” Stupples says. “The very young people, however, are getting a much better view of what social media is like in other countries”.
Following Germany’s adoption of NetzDG, one oppressive regime followed suit with its own plans to crack down on hate content posted on social networks. A Russian bill put forward by the country’s legislators was a “copy and paste” version of what had already been implemented in Germany, according to Reporters Without Borders. Russia suggested new fines for social media companies that didn’t remove hate speech from their websites, but kept the definition of unlawful content deliberately vague. “Our worst fears have been realised,” Christian Mihr, RSF Germany’s executive director, said in a statement. “The German law on online hate speech is now serving as a model for non-democratic states to limit internet debate.”