Twitter explains its methods for identifying photos and videos that have been manipulated

On 5 February, the American social network presented its plan to combat doctored media, in order to prevent manipulation of public opinion, especially during elections.

There’s going to be a change on Twitter soon. From 5 March 2020, videos and photos shared on the social network may carry a warning label if its moderation teams judge that the media have been altered with the aim of misleading the public. In the most serious cases, the disputed content may even be removed.

Twitter’s announcement, made on 5 February, follows Facebook’s: in January, Facebook updated its moderation rules to better regulate artificial media, starting with “deepfakes”, i.e. faked videos constructed in such a way as to mislead the public. On 3 February, YouTube also signalled its commitment to fighting manipulation.

All these initiatives come at a particular political moment: the USA is absorbed by its 2020 presidential election, and these platforms have every interest in showing that they are acting and are aware of what is at stake, since in the previous election they were accused of having been blind to foreign, and more particularly Russian, influence operations.

What the label that Twitter will apply to certain media from 5 March 2020 should look like.

In Twitter’s case, the moderation framework for assessing whether a medium is “artificial and manipulated” revolves around three questions:

  • Is the media artificial or manipulated?
  • Is it deceptively shared?
  • Is it likely to have an impact on public safety or cause serious harm?

For each of these questions, Twitter looks at different elements. For the first, moderators examine whether the content “has been significantly altered in a way that fundamentally alters its composition, sequence, rhythm or framing”, and whether visual or audio information (subtitles, images or sound) has been added or modified.

A table summarizes the policy that Twitter is expected to follow on a case-by-case basis. The idea is not to blindly remove all content deemed artificial or manipulated, because it may be designed for humorous purposes or, paradoxically, for informative ones, as part of an awareness campaign on some theme, for example.

| Is the media artificial or manipulated? | Is it deceptively shared? | Is it likely to cause serious harm? | Outcome |
| --- | --- | --- | --- |
| Yes | No | No | The media can be labeled |
| Yes | No | Yes | The media may be labeled, or may be removed |
| Yes | Yes | No | The media is likely to be labeled |
| Yes | Yes | Yes | Removal of the media is very likely |
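The outcomes above follow a simple decision procedure. As a minimal sketch (a hypothetical helper, not Twitter’s actual implementation), the policy matrix can be expressed as:

```python
def moderation_outcome(manipulated: bool, deceptively_shared: bool, harmful: bool) -> str:
    """Hypothetical sketch of the policy matrix described above.

    Maps Twitter's three policy questions to the outcome the table
    suggests; this is an illustration, not Twitter's actual code.
    """
    if not manipulated:
        # Content that is neither artificial nor manipulated falls
        # outside the scope of this policy.
        return "no action under this policy"
    if deceptively_shared and harmful:
        return "removal very likely"
    if deceptively_shared:
        return "likely to be labeled"
    if harmful:
        return "may be labeled, or may be removed"
    return "can be labeled"
```

For example, a manipulated video that is both shared deceptively and likely to cause serious harm falls into the most severe category, while a manipulated but harmless parody would at most be labeled.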

Intermediate levers of action

Between labeling a medium and simply removing it, Twitter also has other tools at its disposal for intermediate action.

This could be a warning message shown to people about to share the contentious tweet or to signal approval by clicking “like”. The site may also reduce the tweet’s visibility on Twitter (it doesn’t say how), avoid recommending it, or “provide additional explanations or clarifications, if necessary, such as a landing page with more context”.

Several factors that the social network takes into account are presented in a translated blog post. This is not an exhaustive list of all the criteria considered in its assessment, but a general presentation of its methodology. To defuse future controversy, the site warns that it may “make mistakes” and asks for “patience” in the face of the “challenge” of combating misinformation.

Twitter asks for patience and warns that it may “make mistakes”.

“However, we are determined to do things right,” the site says, in order to avoid publications that “lead to confusion or misunderstanding, or suggest a deliberate intention to mislead people about the nature or origin of the content, for example by falsely claiming that it represents reality”. In France, the system will be politically relevant from the 2020 municipal elections onward.

One mystery remains, however, which Twitter does not address: isn’t this mechanism likely to further dull the critical thinking that Internet users should apply to what they see online? If the social network misses contentious content, that content should not be considered genuine just because it has not been labeled. But compared with the current situation, it is perhaps a lesser evil.
