The research is the latest in a growing body of evidence that social platforms are failing to prevent a flood of disinformation on their services ahead of the runoff election Sunday between President Jair Bolsonaro and former president Luiz Inácio Lula da Silva. Brazilian lawmakers last week granted the nation’s elections chief unilateral power to force tech companies to remove misinformation within two hours of the content being posted — one of the most aggressive legal measures against social media giants that any country has taken.
The right-wing Bolsonaro has repeatedly alleged without evidence that voting machines used for a quarter century in Brazil are prone to fraud. The rhetoric of Bolsonaro supporters has often appeared to echo that of supporters of President Donald Trump during the 2020 U.S. election, who questioned the results under the banner of “Stop the Steal.”
Some of the top narratives that circulated in Brazil before the first-round vote on Oct. 2 included specific allegations of fraud, messages attacking the Supreme Electoral Court, and false calls for “inspectors” at the polls, according to Brazilian researchers and the left-leaning human rights group Avaaz. Viral audio and video on Telegram, WhatsApp, Facebook and TikTok alleged that ballot boxes were being pre-filled with votes for Lula.
Misinformation has also been spread by the left. The messages include false allegations that Bolsonaro has confessed to cannibalism and pedophilia.
Major social media companies allowed Stop the Steal content to spread virtually unchecked until the violent consequences of the rhetoric became clear on Jan. 6, 2021. Researchers have found that Facebook groups in particular were major vectors for organizing ahead of the Stop the Steal rally at the Capitol, and that Facebook’s own software algorithms played a significant role in helping such groups gain members.
Since then, companies including Meta and to a lesser extent TikTok have promised to do better, and in particular to clamp down on election-related content that could lead to violence. But the latest evidence shows that the companies are failing to keep their promises — particularly outside of the United States.
“What we are seeing is Meta and Google taking the protection of Brazilian voters less seriously than [that] of their American counterparts,” said Nell Greenberg, deputy director of Avaaz. Ahead of the U.S. midterm elections next month, she noted, the companies have been labeling, downgrading and removing content that incites violence and spreads false information about the election.
“There are still crucial actions they can take to help ensure a safe Election Day, and prevent a potential Brazilian ‘January 6th,’ ” she wrote in an email. “The question is, will they do any of them?”
Meta spokesman Tom Reynolds said the company has updated its search tools in recent weeks in preparation for the election. He said top search results now direct users to information from Brazilian authorities.
“We’ve worked to remove several keyword recommendations that may lead to misinformation and applied labels on election-related posts on both apps,” he said. “Around 30 million people in Brazil have clicked on those electoral labels on Facebook and were directed to the Electoral Justice’s website.”
TikTok spokeswoman Jamie Favazza said the company has invested in protecting the site ahead of Brazil’s elections.
“We take our responsibility to protect the integrity of our platform and elections with utmost seriousness and appreciate feedback from NGOs, academics and other experts,” she said in an email. “We continue to invest in our policy, safety, and security teams to counter election misinformation as we also provide access to authoritative information through our Election Guide.”
The Brazilian research institute NetLab found that both Meta and Google allowed political candidates to run advertisements on their platforms during the first round of voting on Oct. 2, even though such advertising is prohibited by Brazilian law during this period. The group also found evidence of paid advertising encouraging military interventions in the election as voters went to the polls.
A test of Meta and YouTube’s ad systems by the human rights group Global Witness revealed that the companies approved large numbers of misleading ads, including spots that encouraged people not to vote or gave false dates for when ballots could be posted. YouTube said it “reviewed the ads in question and removed those that violated our policies,” although the Global Witness report showed all the ads submitted were approved by the Google-owned site.
To study how platforms pushed people toward misinformation, the SumOfUs researchers created dummy accounts on Facebook, Instagram and TikTok. They then typed the terms “ballot,” “interventions” and “fraud” into the search bars of those services and counted the results.
They found that five out of seven of the groups recommended by Facebook under searches for the term “intervention” were pushing for a military intervention in Brazil’s election, while five out of seven of the groups recommended under the search term “fraud” encouraged people to join groups that questioned the election’s integrity. The groups have names such as “Intervention to Save Brazil” and “Military intervention already.”
Overall, the group found, 60 percent of all content recommended by Facebook and Instagram pushed misinformation about the electoral process.