Did social media actually counter election misinformation?

SeattlePI.com

Ahead of the election, Facebook, Twitter and YouTube promised to clamp down on election misinformation, including unsubstantiated charges of fraud and premature declarations of victory by candidates. And they mostly did just that — though not without a few hiccups.

But overall, critics of the social platforms contend, those measures still didn't address the deeper problems exposed by the 2020 U.S. presidential contest.

“We’re seeing exactly what we expected, which is not enough, especially in the case of Facebook,” said Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina.

One big test emerged early Wednesday morning as vote-counting continued in battleground states including Wisconsin, Michigan and Pennsylvania. President Donald Trump made a White House appearance before cheering supporters, declaring he would challenge the poll results. He also posted misleading statements about the election on Facebook and Twitter, following months of signaling his unfounded doubts about expanded mail-in voting and his desire for final election results when polls closed on Nov. 3.

So what did tech companies do about it? For the most part, what they said they would, which primarily meant labeling false or misleading election posts in order to point users to reliable information. In Twitter's case, that sometimes meant obscuring the offending posts, forcing readers to click through warnings to see them. For Facebook and YouTube, it mostly meant attaching authoritative information to election-related posts.

For instance, Google-owned YouTube showed video of Trump’s White House remarks suggesting fraud and premature victories, just as some traditional news channels did. But Google placed an “information panel” beneath the videos noting that election results may not be final...
