In May, several French and German social media influencers received a strange proposal.
A London-based PR agency wanted to pay them to promote posts on behalf of a client. A neat three-page document detailed what to say and on which platforms to say it.
But the agency asked influencers to push not beauty products or vacation packages, as is typical, but falsehoods tarring Pfizer-BioNTech’s Covid-19 vaccine. Stranger still, the agency, Fazze, claimed a London address where there is no evidence any such company exists.
Some recipients posted screenshots of the offer. Exposed, Fazze scrubbed its social media accounts. That same week, Brazilian and Indian influencers posted videos echoing Fazze’s script to hundreds of thousands of viewers.
The scheme appears to be part of a secretive industry that security analysts and U.S. officials say is exploding in scale: disinformation for hire.
Private companies, straddling traditional marketing and the shadow world of geopolitical influence operations, sell services once primarily provided by intelligence agencies.
They sow discord, meddle in elections, seed false narratives and push viral conspiracies, mostly on social media. And they offer customers something valuable: deniability.
“Disinfo-for-hire actors employed by governments or government-adjacent actors are growing and becoming more serious,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, calling it “a boom industry.”
Mr. Brookie’s organization tracked one such operation amid a mayoral race in Serra, a small city in Brazil. An ideologically promiscuous Ukrainian firm boosted several competing political parties.
A wave of anti-American posts in Iraq, seemingly organic, was traced to a public relations firm that was separately accused of faking anti-government sentiment in Israel.
Most of them can be traced back to underground companies whose legitimate services resemble those of a marketer or an email spammer.
Job postings and LinkedIn profiles of employees associated with Fazze describe it as a subsidiary of a Moscow-based company called Adnow. Some Fazze web domains are registered as belonging to Adnow, as first reported by the German news outlets Netzpolitik and ARD Kontraste. Third-party reviews portray Adnow as a struggling advertising service provider.
European officials say they are investigating who hired Adnow. Sections of Fazze’s anti-Pfizer talking points resemble promotional material for Russia’s Sputnik V vaccine.
For-hire disinformation, though only sometimes effective, is growing more sophisticated as practitioners iterate and learn. Experts say it is becoming more common in every part of the world, outpacing operations conducted directly by governments.
The result is an accelerating rise of polarizing conspiracies, phony citizen groups and fabricated public sentiment, deteriorating our shared reality beyond even the depths of recent years.
The trend emerged after the Cambridge Analytica scandal of 2018, experts say. Cambridge, a political consulting firm linked to members of Donald J. Trump’s 2016 presidential campaign, was found to have harvested data on millions of Facebook users.
The controversy drew attention to methods common among social media marketers. Cambridge used its data to target hyper-specific audiences with tailored messages. It tested what resonated by tracking likes and shares.
The episode taught a generation of consultants and opportunists that there was a lot of money in social media marketing for political causes, all disguised as organic activity.
Some newcomers eventually reached the same conclusion as Russian operatives had in 2016: disinformation performs especially well on social platforms.
At the same time, the backlash over Russia’s influence-peddling appears to have left governments wary of being caught meddling directly, even as it demonstrated the power of such operations.
“There is, unfortunately, a huge market demand for disinformation,” Mr. Brookie said, “and a lot of places across the ecosystem that are more than willing to fill that demand.”
Commercial firms conducted for-hire disinformation campaigns in at least 48 countries last year, nearly double the year before, according to an Oxford University study. The researchers identified 65 companies offering such services.
Last summer, Facebook removed a network of Bolivian citizen groups and journalistic fact-checking organizations. It said the pages, which had promoted falsehoods supporting the country’s right-wing government, were fake.
Stanford University researchers traced the content to CLS Strategies, a Washington-based communications firm that had registered as a consultant to the Bolivian government. The firm had done similar work in Venezuela and Mexico.
A spokesperson pointed to the company’s statement last year saying its regional head had been placed on leave, but disputed Facebook’s accusation that the work qualified as foreign interference.
New technologies make it possible for nearly anyone to get involved. Programs batch-generate fake accounts with hard-to-trace profile photos. Instant metrics help refine effective messaging. So does access to users’ personal data, which can easily be purchased in bulk.
The campaigns are rarely as sophisticated as those of government hackers or specialist companies like the Kremlin-backed Internet Research Agency.
But they appear inexpensive. In countries that mandate campaign finance transparency, firms report charging tens of thousands of dollars for campaigns that also include traditional consulting services.
The layer of deniability allows governments to sow disinformation more aggressively, at home and abroad, than might otherwise be worth the risk. Some contractors, when caught, have claimed they acted without their client’s knowledge or only to win future business.
The platforms have stepped up their efforts to root out coordinated disinformation. Analysts notably credit Facebook, which publishes detailed reports on the campaigns it disrupts.
Yet some argue that social media companies also play a role in worsening the threat. Engagement-driven algorithms and design elements, along with search results, often prioritize content that is divisive and conspiratorial.
Political norms have shifted, too. A generation of populist leaders, like Rodrigo Duterte of the Philippines, rose in part through social media manipulation. Once in power, many institutionalized those methods as tools of governance and foreign relations.
In India, dozens of government-run Twitter accounts have shared posts from India Vs Disinformation, a website and set of social media feeds that purport to fact-check news reports about India.
India Vs Disinformation is, in reality, the product of a Canadian communications firm called Press Monitor.
Nearly all of the posts seek to discredit or muddy reports unfavorable to the government of Prime Minister Narendra Modi, including on the country’s severe Covid-19 toll. An associated site promotes pro-Modi narratives under the guise of news articles.
A Digital Forensic Research Lab report investigating the network called it an “important case study” in the rise of “disinformation campaigns in democracies.”
A representative of Press Monitor, who would identify himself only as Abhay, called the report completely false.
He specified only that the report had wrongly identified his company as Canada-based. Asked why the company lists a Toronto address, holds a Canadian tax registration and describes itself as “part of Toronto’s thriving tech ecosystem,” or why he was reached at a Toronto phone number, he said he had business in many countries. He did not respond to an email seeking clarification.
A LinkedIn profile for Abhay Aggarwal identifies him as the managing director of Toronto-based Press Monitor and says the company’s services are used by the Indian government.
A set of pro-Beijing operations hints at the field’s capacity for rapid evolution.
Since 2019, Graphika, a digital research firm, has tracked a network it nicknamed “Spamouflage” for its early reliance on spamming social platforms with content echoing Beijing’s line on geopolitical issues. Most of the posts received little or no engagement.
In recent months, however, the network has developed hundreds of accounts with elaborate personas. Each has its own profile and post history that can seem authentic. They appeared to hail from many different countries and walks of life.
Graphika traced the accounts to a Bangladeshi content farm that created them in bulk and likely sold them to a third party.
The network pushes strident criticism of Hong Kong democracy activists and American foreign policy. By coordinating without appearing to, it created the semblance of organic shifts in public opinion, and often won attention.
The posts were amplified by a major media network in Panama, prominent politicians in Pakistan and Chile, Chinese-language YouTube pages, the left-wing British commentator George Galloway and a number of Chinese diplomatic accounts.
A separate pro-Beijing network, uncovered by a Taiwanese investigative outlet called The Reporter, operated hundreds of Chinese-language websites and social media accounts.
Disguised as news sites and citizen groups, they promoted Taiwanese reunification with mainland China and denigrated Hong Kong’s protesters. The report found links between the pages and a Malaysia-based start-up that offered web users Singapore dollars to promote the content.
But governments may find that outsourcing such shadowy work carries its own risks, Mr. Brookie said. For one, the companies are harder to control and might veer into unwanted messages or tactics.
For another, firms built around deception may be just as likely to turn those energies on their clients, inflating budgets and billing for work that never happens.
“At the end of the day, crooks are going to scam online,” he said.