This is the second of several reports we are sharing to highlight why we do what we do, and what we see as the current challenges facing those working to address issues within the communications ecosystem.
Increased noise
With regards to disinformation, an immediate concern raised by our partners was the sheer proliferation of outlets and users, and of where and how technology is now ingrained in our social and economic communication (between people, between people and things, and between things). Beyond the cybersecurity implications, this raises concerns over the manipulation of content, the isolation of vulnerable people away from real-world points of intervention, and the ability of people to filter and process the data they are confronted with on a day-to-day basis.
The increased online activity generated, in part, by the decrease in offline social engagement has also led to an increase in noise within the communications space - there is simply more. From a security and prevention perspective this has made the monitoring, assessment, investigation, fact-checking (and potential takedown) of aggressive, illegal, and hostile online activity harder - harder to find and harder to identify amongst the volume of content.
Similar to the impact of the "firehose of falsehood" approach noted in Russian propaganda, this volume has put considerable pressure on traditional policing and response mechanisms (fact-checking, takedowns etc.) and has meant there is more information for people to filter, assess and understand - opening up space for exploitation by conspiracy and disinformation (which we will explore further later). This was noted particularly by those in Eastern European countries.
However, this increased quantity of content has had the benefit of reducing the share of "airtime" given to some forms of extremist content. At the same time, other communications challenges, such as the explosion in conspiracy theories, have been seized upon by far-right extremists, who have been able to connect their usual narratives with anti-lockdown and pandemic conspiracy conversations.
Multiplication and migration to new platforms
The biggest change in the threat arena for disinformation, social cohesion and radicalisation is the multiplication of platforms and the migration of communication of concern to new platforms.
In some respects, this was reflected as a policy win within the sphere of removing hate speech and hostile actors from the public spaces online:
“The scale and amplifications are reduced, it’s limited, and limiting the size of groups...this is a plus, limiting the negative effects that come with use of the tech, not the tech itself.”
It was noted that if we are concerned about the radicalising effect of communication on people online, or the spread of hate speech in the public sphere, the migration of this material to more niche channels can be seen as a push to the "back streets and side alleys of the online public space". This makes it much harder for people to stumble across it: they have to go actively looking for it or be signposted to it. This is seen as a level of protection, certainly against early-stage radicalisation.
However, this migration and multiplication creates new dynamics of concern. The move is primarily to smaller, more niche platforms, often owned by the extremist organisations themselves or by those outside of the mainstream. These platforms do not comply with requests to monitor or remove content - illegal or otherwise. They do not take part in database sharing such as the GIFCT hash-sharing database, nor in wider initiatives, such as Tech Against Terrorism, that train start-ups to reduce the exploitation of their sites. Rather, these sites exist and benefit precisely because of attempts to push such conversations out of the mainstream. They have no interest in cooperating with international agreements or, in some cases, the law.
Our tech interviewees considered that the hosting and dissemination of extremist and terrorist content can no longer be seen as just a problem for social media platforms, and they do not think the conversation on the necessary response has broadened as fast as the challenge. Non-cooperative platforms have always been a concern - 4chan and 8chan, for example - and concerns about the dark web are not new. However, it was noted that web hosting now needs to be brought into the conversation, and this poses further complications. It requires a faster shift in the perspective of the conversation: it is no longer about the largest and most public platforms, but about the operation of the digital space as a whole - the whole communications ecosystem.
However, whilst the mainstream platforms are no longer hosts to the radicalising or extremist content itself, they are still used by people who seek to gain supporters. These actors play within the rules, never saying anything that would get them removed from the sites but just enough to attract those who share similar views. Those attracted are then signposted to the private or encrypted sites where more active radicalisation efforts take place or where more extremist content is hosted. This digital communication ecosystem, like real-life crime on the street, is increasingly complex, but unlike the real world it is not actively policed.
“Like all cities around the world there are no-go areas. The internet is becoming the same, ever increasingly a reflection of the real world.”
The challenge into 2021 and beyond is that policy makers and those seeking to address this exploitation now need to look beyond social media. The concern raised was that "social media gets all the attention, but these platforms are not the be-all-end-all", and, as mentioned above, there are other spaces that are part of a shared ecosystem.
Our investigation highlights an urgent need to better map the migration of content to other platforms and digital spaces, and to better understand the nodes of amplification of content (work already underway by some security services). Such nodes are increasingly seen as individual accounts (mass-spreaders or influencers) rather than groups or organisations per se, and are thus harder to censor or proscribe if they remain within the terms of service of the platforms and within the laws around free speech. The multiplicity of platforms, and the use of other spaces to hold social engagement, is understood to be advantageous for those seeking to exploit the communications ecosystem - making them harder to detect and censor. How this advantage may develop needs to be monitored. For example, is it a retreat to the shadows that marginalises such issues, or will more malign activities fester in the dark and infect society more broadly in ways as yet unknown? Moreover, if this multiplication of channels is an advantage to hostile actors, how do society, government and law enforcement agencies reduce that advantage?
One need is for policy makers and law enforcement to be better at anticipating the shifts to new platforms caused by their action on the mainstream ones. The challenge comes from the sheer number of platforms now multiplying. Policy, and the public discourse, focuses too much on the large platforms; it was argued to us that this focus enables the smaller platforms to thrive. Terrorist and extremist groups in particular are finding a home in these new spaces because they are less likely to be taken down there. Responding to this multiplicity is a capacity issue, but also a relationship issue. Unlike the large multinational corporations that run the largest platforms, these small platforms have no commercial incentive to cooperate with national governments - in fact, doing so would be detrimental to their success. However, as our interviews progressed following the 6 January Capitol riots in the USA, it was felt that such platforms, and how they are grappled with by policy leads, is an issue that will rise up the agenda.
Government cannot and, some argue, should not have eyes and ears everywhere. Furthermore, whilst this multiplication of platforms and migration from the mainstream to the shadows may reduce costs for the hostile or criminal actor, it increases costs and complicates the response for those seeking to prevent and protect populations against such activity. How this is addressed raises questions, which we will look at later, both about the business model of how the digital space operates and about the role of the digital space in geopolitical considerations.