
When it rained misinformation during election season, it poured policy updates from tech platforms. Too bad none of them thought to bring a sturdy umbrella in the first place.

The Mozilla Foundation, a nonprofit that advocates for a secure, open internet that supports democracy, has released an interactive timeline showing the misinformation policy changes online platforms made before, during, and after the 2020 election, and how those actions intersect with major political and cultural events that spurred misinformation. Mozilla is most widely known for its Firefox browser, but the nonprofit also tracks online misinformation in an effort to improve tech policies.

[Image: The year in misinformation policies. Credit: Screenshot / Mozilla]

Mozilla will release a series of blog posts detailing findings from the tool, but a major takeaway is that platforms like Facebook, Twitter, and YouTube were largely reactive, rather than proactive, when it came to making and (crucially) enforcing major policy changes.

"Creating the visualization made it obvious that there were bursts of activity," Jon Lloyd, Mozilla's lead researcher for the project, said. "A lot of the changes that platforms were making were happening after millions of Americans had already cast their ballots, or, a lot of the time, they were even coming after election day itself."

In October 2020, Mozilla launched a campaign pressuring social platforms to limit misinformation. That included a table that logged and compared every major platform's misinformation policies. The need for a third-party compendium showed how poorly platforms handled their misinformation strategies: it was woefully unclear what the most up-to-date policies were, and who was doing what.

"It was very hard to determine what platforms were doing, because they were releasing statements, they had policy documents, but also they were releasing blogs, there was nothing more in place," Lloyd said. "What we wanted to do is just make sure that all of those changes that they were making in order to address this misinformation problem were in just one central resource that was easy to parse."

The table turned out not to be enough. It did not capture the upheaval that came with Trump's refusal to concede the election, the Capitol Hill riot — and the deluge of bans and updates that followed.


To broaden the scope in terms of both time and context, researchers constructed a timeline spanning October 2019 to January 2021. It charts when Facebook, Twitter, TikTok, and YouTube intervened against misinformation, and when notable political and misinformation events took place.

Users can see policy changes and enforcement actions (both of which fall under the umbrella of what the researchers call "interventions") taken by all four platforms at once, or one platform at a time. Click on a particular name, like Facebook, to see data from just that company, or hover over points on the graph to see the interventions a platform took at that time.

On the timeline, you can see events like the first Trump impeachment (remember that?), Trump's first claims of mail-in-ballot fraud, election day, notable polls of public sentiment, and the Capitol Hill riot: specific moments that, according to the data, served as flashpoints for change on social media.

Those events are plotted alongside the platforms' interventions. Presenting the information this way makes it easy to see how platform action often followed, or directly reacted to, notable events. For example, amid mass protests against police brutality after the killing of George Floyd, Twitter began putting warning labels on Trump's tweets for glorifying violence.

"Tracking each change by the platforms, we noticed that they often came in a flurry in response to something that was making the news, often something Trump may have said," Lloyd said.

The trend lines of the graph are telling, too. While misinformation policies and enforcement actions began 2020 as a trickle, they snowballed in early January 2021 around the time of the Capitol Hill riot. Many of those interventions involved banning Trump.

The timeline ends in January 2021, but the story of how platforms handle misinformation is far from over. Mozilla researchers found that platforms have already begun rolling back misinformation protections they put in place during the election, which Lloyd sees as an admission by platforms that "they know that type of political content is a major vector for misinformation, but in reversing the ban they're really saying that they're willing to put their own profits above the health of our democracy and our society."

Recently, YouTube said that it would lift Trump's ban when there was a lowered risk of violence. But Lloyd sees this move as backpedaling, potentially off a cliff.

"YouTube said it will restore Trump's account when the rest of violence falls, which I guess for me begs another question," Lloyd said. "They've done such a bad job of assessing that risk of violence so far. So how can we possibly trust them to make the right call again?"

If the new timeline is any indication, we can't.

