(Reclaim The Net)—Lexipol, a private consultancy that provides services to US law enforcement, is recommending that police departments set up a “Misinformation/Disinformation Unit.”
A piece published on the company’s platform, Police1.com, asks its client police departments whether they are “prepared (for) the battle against mis/disinformation.”
Coming from Lexipol, this is no ordinary question: the firm is said to have contracts with more than 8,000 law enforcement agencies, and is consequently considered a key player in what is known as “privatized police policymaking.”
According to Lexipol’s own statements, its reach in March 2020 extended to 8,100 agencies that used the company’s services and manuals (a year earlier, reports said that these agencies were located across 35 US states).
From that position, Lexipol is now making recommendations to its “subscribers” in the law enforcement community to establish a unit that would not only tackle supposed misinformation and disinformation, but also “collaborate with tech companies and civil society organizations to develop early-warning systems and identify harmful content in real time.”
This can be read as brazen defiance of the ongoing efforts, including in the US Congress, to put an end to just such “collaboration” between private and government (here, law enforcement) entities – investigated in one instance as government-Big Tech collusion.
But Lexipol’s write-up plays on fears that it is “disinformation” that might increase public hostility toward police officers and put them at greater risk.
The kind of hostility-breeding disinformation Lexipol has in mind, however, may not be exactly the same as what many police officers would name. The company points to what are at this point “soft targets,” at least to a certain brand of political and media thinking in the US – Russia, China, Iran, and North Korea – as examples of how domestic law enforcement, too, might be harmed by disinformation, and of what to do about it.
With the scaremongering in place, Police1 promotes well-established narratives: online speech needs to be “protected” from the dangers of AI, and this should be done by police employing “proactive strategies.”
What is recommended to these state entities is not really different from what the current US authorities ask of social media, and of media in general: in this case, a unit “charged with identifying false information, fact-checking claims, and creating counter-narratives.”