According to an interview with Dr. Erin Saltman, a researcher at the anti-extremism think tank the Institute for Strategic Dialogue (ISD), here’s how it might work:
“Through sentiment mapping, activity across the Web, and participation in different online groups or forums, ISD can build a profile of an individual at risk of joining extremist organizations, and can either target ‘counter-narrative’ advertising through Facebook using videos or information created to dissuade their beliefs, or reach out to individuals on a personal level.”
In other words, by using the same technology that keeps showing you underwear ads on every site you go to, ISD can also identify potential extremists. Not just Islamists, but also “neo-fascists” (who seem to love mixed martial arts, according to Dr. Saltman’s research) and other types of radicals.
In one experiment, ISD used Facebook’s Graph Search to identify a set of profiles deemed “at risk” of radicalization and connected them with former extremists over Facebook Messenger. Sixty percent of the profiles that responded had a “sustained conversation” with the former extremists, but beyond that, we don’t know whether the program had any lasting effects.
We also don’t know how many of those profiles were actually at risk of radicalization. Dr. Saltman notes that “when it comes to automatically identifying and targeting extremist or terrorist content on a broad scale, subtle differences in language, meaning, and ideologies make it difficult to identify what’s bad or good content.” And that’s just for filtering; using targeted advertising is even hairier, and not just because of the technology’s existing privacy concerns.
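Saltman’s point about filtering is easy to demonstrate with a toy sketch. The keyword list and sample texts below are invented for illustration and have nothing to do with ISD’s actual methods; the point is simply that context-blind matching flags writing *about* extremism just as readily as extremist propaganda.

```python
# Toy illustration of the filtering problem: a naive watchlist matcher
# cannot tell propaganda from reporting or research about propaganda.
# (Hypothetical keyword list and texts; not ISD's method.)
FLAGGED_TERMS = {"caliphate", "jihad", "recruitment"}

def naive_flag(text):
    """Flag text if it contains any watchlist term, ignoring all context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

propaganda = "Join the caliphate and answer the call to jihad"
journalism = "My report examines how Daesh recruitment videos sell the caliphate myth"

print(naive_flag(propaganda))  # True
print(naive_flag(journalism))  # True: the journalist gets flagged too
```

Both texts trip the filter, which is exactly the false-positive risk the next paragraph raises.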
It’s not hard to imagine a journalist, a researcher, or even a fiction writer being flagged as a potential radical based on the sites they visit for research. At least in the programs ISD is developing, potential radicals only receive counter-radical advertisements, videos, and messages from ex-radicals.
These are, according to Saltman’s research, the most effective forms of anti-radical communication. If ISD wants government backing, though, the lack of conclusive evidence for the program’s effectiveness might force it to shift toward something more propaganda-based. For now, Facebook will review ISD’s work and suggest additional methods and tools to make it more effective, and Facebook should know what works here.
However, those same tools and targeting methods could be used far more nefariously, particularly if the government isn’t convinced the “soft” approach is the way to go. Imagine the same set of algorithms being used to identify #BlackLivesMatter supporters, “radical” environmentalists, or any other group that threatens the status quo. Nobody likes Daesh, but using targeted-ad data to find its potential recruits opens up a new avenue for our data trails to be exploited against us. That’s ultimately more dangerous than another Daesh attack.
(featured image via Steven Mileham)