The new rules for evaluating and removing messages will apply to tweets that try to undermine people’s faith in the election process itself, such as false claims about election rigging or ballot tampering, or false information about vote counts, Twitter said.
The policy goes into effect on September 17, several weeks before the US presidential election on November 3. Many Americans are expected to vote by mail because of the COVID-19 pandemic, which is likely to delay the announcement of the election results.
Social media companies are working to strengthen their policies to prevent disinformation, but it is unclear if their efforts will be enough, writes the Associated Press.
Facebook said last week that it would stop accepting new political ads in the week before the election and remove posts containing misinformation about COVID-19 and voting. It will also add links to official election results to posts in which candidates or their committees prematurely declare victory.
Twitter has a more aggressive policy than Facebook, the AP estimates. It has banned political advertising entirely, and in May it began applying fact-check labels to President Donald Trump’s tweets, which angered the president.
Twitter said its policy of labeling, rather than removing, tweets from world leaders would continue to apply under the new rules.
This means that even if a candidate publishes misleading information about the election result, the post is likely to remain up if Twitter deems it “in the public interest.” However, users will not be able to retweet it.
“We will not allow our services to be abused in connection with civic processes and, above all, with elections,” Twitter wrote.
Twitter’s new rules arrive just weeks before the US election, but because more than 80% of Twitter users are outside the US, they will apply worldwide, the Associated Press notes.