Facebook, TikTok and Reddit have updated their fake news policies in the last few days. All the details
Facebook, TikTok and Reddit have updated their fake news policies in recent days, showing a certain sensitivity to the pressure surrounding the upcoming American presidential elections. And not only that: almost all of the web giants that distribute content, including Twitter and YouTube, are caught in the crossfire of authorities and public opinion, partly for spreading erroneous information worldwide on vital issues such as Covid-19 and ethnic conflicts.

WHAT FACEBOOK, TIKTOK AND REDDIT ARE DOING
In the case of Facebook, TikTok and Reddit, it is the first time the social media giants have taken a hard line specifically on banning deepfake content, typically video or audio manipulated using artificial intelligence (AI) or machine learning to intentionally mislead users.
More precisely, Facebook has announced a ban on sharing deepfakes that have been modified, for example, to misrepresent someone’s statements. TikTok said it will ban disinformation created to cause harm to users or the public, including content about elections or other democratic proceedings that has been manipulated with the aim of causing harm. Reddit, for its part, stated that it will ban accounts impersonating an individual or entity in a misleading or deceptive way, as well as deepfakes or other manipulated content “presented to mislead or to fraudulently attribute claims to an individual or entity.”

CONCERNS ABOUT DEEPFAKES
Concern about deepfakes began to emerge after the 2016 election and has since become a central talking point in ensuring a fair narrative of the facts. Hazel Baker, head of user-generated content newsgathering at Reuters, for example, told Axios last month that “ninety percent of the manipulated media we see online is actual video taken out of context and used to feed a different narrative.”
According to Axios experts, therefore, if social media platforms do not begin to deal more aggressively with ‘fake’ audio and video content, they risk seeing their timelines turn into containers of manipulated and completely unreliable news.

NO CLEAR PLANS TO COUNTER FAKE NEWS
Experts have been warning about the next era of deepfakes for some time, but the platforms have not yet drawn up clear and unambiguous plans to counter them, beyond some internal policy updates.
“We need to stop with the line ‘I don’t want to be the arbiter of truth,’” said Hany Farid, a professor at the University of California, Berkeley, and fake news expert, at an Axios event on the subject. “It’s nonsense.” Farid notes that all of these platforms already draw dividing lines, such as banning pornography, because they believe allowing such content is bad for business.

GROWING CHALLENGES IN THE COMING YEARS
According to Axios, however, platforms have been slow to figure out which manipulated media should be purged, which labeled, and which left alone. The challenges will only grow over the next few years as the tools for creating falsified video and audio become more powerful and easier to use, and as platforms hesitate to take drastic measures against disinformation in the face of Republican claims of anti-conservative bias embedded in Silicon Valley’s content moderation practices. “I think we have already wasted precious time due to the politicization of the problem,” said Nina Jankowicz, disinformation fellow at the Wilson Center.

CONCERNED USERS
Nearly three-quarters of regular internet users have recognized at least one of the top three online risks – fake news, cyberbullying and fraud – but fake news is by far the top concern, according to the Lloyd’s Register Foundation’s global risk survey.
Gallup pollsters conducted more than 150,000 interviews in 142 countries on behalf of the foundation and found that 57% of internet users, across geographies, age groups and socio-economic backgrounds, perceived false information or false news as the greatest concern.
People living in regions with high economic inequality, or with ethnic, religious or political polarization, tend to be the most concerned, which, according to the foundation, risks weakening social cohesion and trust. This was particularly evident in Malawi, Rwanda, Bolivia, Uganda and Senegal, where this concern was cited by over 80% of respondents.
Despite this, the survey also found a significant number of people who are not at all aware of the risk of disinformation. According to Richard Clegg, CEO of the Lloyd’s Register Foundation, this confirms that there is a clear security threat.

WHAT TO DO
So what should be done?
Of course, banning all manipulated media is complicated, given that video editing is a daily practice for major media outlets and that videos are often manipulated for satirical purposes as well. And that’s not all: the mere existence of video and audio manipulation technology also allows politicians to dismiss genuine but unflattering footage as false.
There are, however, several technological solutions for addressing the problem, including efforts to create authenticated content streams that verify material has not been significantly altered in processing, Axios experts concluded.
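To make the idea of authenticated content concrete, here is a minimal sketch, in Python, of how a publisher might sign a hash of a media file at capture time so that anyone holding the matching public key can later detect alteration. This is only an illustration of the general technique; it is not the scheme used by any of the platforms or projects mentioned above, and the key handling, function names and example data are assumptions made for the sketch.

```python
# Conceptual sketch of content authentication: sign a digest of the media bytes
# at publication, and verify the signature before trusting the file later.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_content(private_key: ed25519.Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the media so the signature can travel with the file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_content(public_key: ed25519.Ed25519PublicKey,
                   media_bytes: bytes, signature: bytes) -> bool:
    """Return True if the media still matches the signature, False if it was altered."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()      # hypothetical publisher key
    original = b"...raw video bytes..."             # placeholder for real media data
    sig = sign_content(key, original)

    print(verify_content(key.public_key(), original, sig))            # True: untouched
    print(verify_content(key.public_key(), original + b"edit", sig))  # False: altered
```

In a real deployment the hard problems are distributing and trusting the public keys and deciding what counts as an acceptable edit (compression, cropping), which is exactly where the industry efforts described above are focused.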
