Twitter is experimenting with adding brightly colored labels directly beneath lies and misinformation posted by politicians and public figures, according to a leaked demo of new features sent to NBC News.
Twitter confirmed that the leaked demo, which was accessible on a publicly available site, is one possible iteration of a new policy to target misinformation it plans to roll out March 5.
In this version, disinformation or misleading information posted by public figures will be corrected directly beneath the tweet by fact-checkers and journalists who are verified on the platform, and possibly other users who will participate in a new “community reports” feature, which the demo claims is “like Wikipedia.”
“We’re exploring a number of ways to address misinformation and provide more context for tweets on Twitter,” a Twitter spokesperson said. “Misinformation is a critical issue and we will be testing many different ways to address it.”
The demo features bright red and orange badges for tweets that have been deemed “harmfully misleading.” The badges are nearly as large as the tweet itself and are displayed prominently directly below the offending tweet.
Examples of misinformation included a false tweet about whistleblowers by House Minority Leader Kevin McCarthy, R-Calif., a tweet about gun background checks by Sen. Bernie Sanders, I-Vt., and a tweet by an unverified Twitter account posting a doctored video of House Speaker Nancy Pelosi, D-Calif.
The leaked demo also shows an example of medical misinformation, including an example about the coronavirus by a verified Twitter account.
The impending policy rollout comes as the 2020 election season is ramping up, with Twitter playing a central role in some of the daily give-and-take between the candidates. On Thursday, former New York City Mayor Mike Bloomberg’s campaign posted an edited video that made it seem as if there had been a long pause after he asked, during Wednesday’s Democratic debate, whether any of the other candidates had ever started a business.
Last month, Twitter announced a new policy to ban tweets that “deceptively share synthetic or manipulated media that are likely to cause harm,” such as deep fakes.
In one iteration of the demo, Twitter users could earn “points” and a “community badge” if they “contribute in good faith and act like a good neighbor” and “provide critical context to help people understand information they see.”
The points system could prevent trolls or political ideologues from becoming moderators if they too often differ from the broader community in what they mark as false or misleading.
“Together, we act to help each other understand what’s happening in the world, and protect each other from those who would drive us apart,” the demo reads.
Twitter reiterated to NBC News that the community reporting feature is one of several possibilities that may be rolled out in the next several weeks.
“This is a design mock-up for one option that would involve community feedback,” the spokesperson said.
In the demo, community members are asked whether a tweet is “likely” or “unlikely” to be “harmfully misleading.” They are then asked to estimate, on a sliding scale of 1 to 100, how many other community members will answer the same way, before explaining why the tweet is harmfully misleading.
“The more points you earn, the more your vote counts,” the demo reads.
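The demo does not specify how points would translate into voting power. A minimal sketch of one plausible reading, in which votes are weighted by a member's points and points rise or fall with agreement with the community consensus, might look like the following. All function names, weights, and penalties here are illustrative assumptions, not Twitter's actual design.

```python
# Illustrative sketch of a points-weighted community vote, based on the
# demo's description: "The more points you earn, the more your vote
# counts." The specific numbers and update rule are assumptions.

def weighted_verdict(votes):
    """votes: list of (points, says_misleading) pairs.

    Returns True if the points-weighted majority rates the tweet
    'harmfully misleading'."""
    misleading = sum(p for p, v in votes if v)
    not_misleading = sum(p for p, v in votes if not v)
    return misleading > not_misleading

def update_points(points, agreed_with_majority):
    """Hypothetical update rule: members who too often differ from the
    broader community lose weight, limiting the influence of trolls or
    ideologues over time. Points never drop below zero."""
    return points + 1 if agreed_with_majority else max(0, points - 2)
```

For example, two agreeing members with 10 and 3 points would outweigh one dissenting member with 8 points, since 13 > 8; a member who repeatedly votes against the consensus would see their weight shrink toward zero.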
Some other websites have successfully used community moderation to regulate their platforms. Information on Wikipedia has been moderated by anonymous users since the site's inception in 2001. Its pages are frequently vandalized by political actors during breaking news events, which can lead administrators with greater privileges to temporarily lock them down.
Reddit also has hundreds of volunteer moderators who set and enforce rules for its many communities.