Today’s column is from the 11th annual Internet Governance Forum (IGF), this year in Guadalajara, Mexico. IGF is a United Nations-sponsored “multi-stakeholder” conference. Unlike many U.N. events, it is open not just to representatives of member states, but also to “civil society” (non-profits), academia and industry.
Accordingly, there are people here from numerous governments around the world, as well as advocacy groups and technology companies, including Google, Facebook and Microsoft.
I’m here speaking on three panels in my capacity with ConnectSafely.org.
Just about any internet policy topic is fair game at this conference, and there are workshops on inclusion and accessibility, net neutrality, online protection of children, empowerment of women, safety, privacy and security implications of “the internet of things,” as well as discussions about censorship and free expression.
I was asked to moderate a session titled “Free Expression and Extremism: An Internet Governance Challenge.” Another session, “Social Media and Youth Radicalization in the Digital Age,” covered similar ground.
Both sessions explored vexing issues for governments, social media companies and advocacy groups hoping to curtail radicalization and terrorism without limiting free speech. And, as with many complex issues, finding consensus proved impossible.
At both sessions, there wasn’t even agreement that “extremism” or “radicalization” are appropriate terms for the problem. As the moderator of the social media radicalization session put it, “The definitions itself of radicalization are very diverse. They differ from country to country, context to context.”
Indeed, I remember a student group in Berkeley during the ’60s called “Radical Student Union,” which was radical only in the literal sense that it sought to get to the “root” of social problems like war and inequality.
Yet, there is no question that there are radicals and extremists engaged in dangerous acts that, too often, result in the loss of human life. And there is also no question that some of these groups successfully use the internet and social media to recruit, radicalize and, in some cases, activate individuals to carry out atrocious acts.
Numerous strategies can be employed to combat what most would agree is dangerous online radicalization, but the two most widely discussed are censorship and so-called “counter speech.”
Censorship, which includes blocking or removing offensive content, has been shown to be “ineffective in tackling extremism” and “potentially counter-productive,” according to a report from the London-based Quilliam Foundation, which was represented on our panel.
Guy Berger, director of UNESCO’s Division of Freedom of Expression and Media Development, agreed. “There is no clear evidence that censorship actually reduces radicalization,” he said at the panel about youth radicalization.
Counter speech is the posting of positive comments, video or other media to counter negative or harmful postings. It’s a strategy often used not just to counter extremism, but also cyberbullying and online harassment.
While far from a panacea, counter speech has been shown to be at least somewhat effective. “It drowns out the content that is promoting violence, promoting hate or promoting fear,” said Indrajit Banerjee, also of UNESCO.
Indeed, a report from the London-based Institute for Strategic Dialogue concluded, “The use of counter-narrative messaging with measurable impact is replicable and scalable, though not without difficulty.”
Facebook, Twitter and Google have experimented with counter-speech efforts that “could be effective in reaching target audiences and driving conversations among them,” according to The Verge.
One conversation during my workshop focused on the line between, on the one hand, legitimate free speech and expression, including protests and non-violent civil disobedience, and, on the other, what could be considered dangerous radical or extremist speech.
The closest we came to consensus was general agreement that speech which specifically advocates or encourages violence falls on the dangerous side of that line. But someone raised the example of “pizzagate,” the fake news story that alleged a pizza restaurant in Washington, D.C., was the center of an online child pornography ring sponsored by a close associate of Hillary Clinton.
The conspiracy theory reportedly convinced one individual to take matters into his own hands: he showed up at the pizza restaurant with a rifle to “self-investigate” the bogus theory. He fired the weapon, and it was only a matter of luck that no one was hurt or killed as an indirect result of this piece of fake news, which had been widely circulated on Reddit, Twitter, Facebook and other platforms.
It would be a stretch to claim that those who promulgated or shared this fake story intended to incite violence, but that easily could have been a consequence of their online actions.
Michael Flynn, Jr., the son of Trump’s pick for national security adviser, left the presidential transition team after tweeting, “Until #Pizzagate proven to be false, it’ll remain a story. The left seems to forget #PodestaEmails and the many ‘coincidences’ tied to it.” At least in this case, there was a consequence to spreading fake news.
Despite the general lack of consensus, there was one point of agreement among all who spoke at my workshop. Potentially dangerous extremism, radicalization, hate speech and fake news are not limited to any single ethnicity, religion, nationality or ideology.