Meta has shared a year-end update on its efforts to combat networks engaged in coordinated inauthentic behavior, while it’s also launched a new initiative to help expand its detection processes by opening up more data on these elements to outside research teams.
First off, on coordinated activity – Meta says that it removed four networks in November 2021, originating from Palestine, Poland, Belarus and China, with a cumulative 852 Facebook and Instagram profiles and 99 Facebook Pages removed.
Most of these networks were detected through internal investigations into suspected activity related to local unrest and political conflict. Meta also removed coordinated groups in France and Italy that had engaged in mass harassment of journalists, elected officials and medical professionals over vaccinations, linked back to a known anti-vax group.
A Vietnam-based group was also removed for falsely mass-reporting activists and government critics for policy violations, in an attempt to silence them.
The disclosures provide some additional perspective on the various ways such groups are seeking to utilize Meta’s huge reach for political influence activities, and the evolving strategies being employed to avoid detection and removal.
Alongside these updates, Meta has also outlined a new initiative to expand research into such activity, via a platform that will give outside researchers access to more data about suspect operations.
Using its CrowdTangle content insight platform, Meta is looking to provide more data on coordinated inauthentic behavior to academics and other research organizations, as part of a broader effort to improve its detection systems and identify shifts in approach.
“Over the past year and a half, we’ve been working with the CrowdTangle team at Meta to build a platform for researchers to access data about these malicious networks and compare tactics across threat actors globally and over time. In late 2020, we launched a pilot CIB archive where we’ve since shared ~100 of the recent takedowns with a small group of researchers who study and investigate influence operations. We’ve continued to improve this platform in response to feedback from teams at the Digital Forensic Research Lab at the Atlantic Council, the Stanford Internet Observatory, the Australian Strategic Policy Institute, Graphika and Cardiff University.”
Meta’s looking to make this new resource available to more researchers in 2022, providing additional insight into the evolving tactics of malicious actors, and helping it remove even more of this activity moving forward.
Which needs to be a key focus. The 2016 US election was revelatory in exposing the use of Facebook and Instagram for political influence activity. While that exposure has helped improve enforcement, and made users more skeptical of the content they see in these apps, it also opened the eyes of many political activist groups, which have since sought to employ the same tactics in their own campaigns.
Really, it highlighted the power that Facebook, in particular, can have in influencing opinion and shifting public sentiment, and since then, more and more lobbyists and interest groups have launched their own attempts at moving the needle in various ways.
As such, it’s important for Meta to take action where it can, and expanding access to the broader academic and research community can only help in this respect.