Purge social networks
By Ashit Kumar Srivastava
Even as we have lived through a pandemic of epic proportions, human civilization faces another major threat: the infodemic.
An infodemic is the rapid, worldwide dissemination of information, accurate or not, though the term primarily refers to the spread of misinformation and disinformation. In the digital age, its impact cannot be overstated. Easy access to the internet and to digital platforms has made it simple for any individual to broadcast their thoughts, and even to spread rumours.
The idea of content neutrality, or content regulation, is to purge content of its directional character and non-factual basis. As social media platforms grow in popularity and become central to human expression, they are emerging as an alternative to conventional news outlets, be they news channels or radio shows. People increasingly use these platforms as a source of information. This is an alarming situation for any democratic country.
While the platforms have consistently maintained that they are neutral conduits with no role in the content they host, the fact that they process and prioritize information qualifies them as curators of it. Policy therefore demands that these platforms play a much larger role in dealing with 'fake news' and 'hate speech'.
Content neutrality, in relation to social media platforms, means ensuring that the content posted on them is unbiased. It is no surprise that most platforms deal with cases of fake news and hate speech at the administrative level. The question that keeps arising, however, is whether the measures in place are good enough to purge the platforms of malicious content.
Social media giant Facebook faces allegations of uneven enforcement of its community standards. It has been alleged that Facebook may lack the resources and linguistic capacity to moderate content in all 22 official languages of India, and therefore may be unable to control the spread of misinformation.
In fact, whistleblower Frances Haugen leaked documents, now known as the "Facebook Papers", revealing that the company had been aware of the spread of misinformation for years. In 2020, Facebook's head of India policy, Ankhi Das, resigned amid allegations of bias towards right-wing content.
It is not that no action has been taken at the platform level; sophisticated artificial intelligence-based mechanisms are being introduced to counter the growth of misinformation. However, the sheer size of social media and microblogging websites makes careful scrutiny imperative. This calls for better regulation to ensure the neutrality of content on the platforms.
This is the path most countries are following. However, applying content neutrality will always raise fundamental questions: about a piece of content's societal impact, its directional capacity, and whether the speech is opinion or serves as news.
Thus, any attempt to impose content neutrality will invite questions about the quality of the content and its inherent nature. Germany has already taken a step in this direction by enacting the Network Enforcement Act (NetzDG), which requires platforms to remove hate speech within a specified period. The deadline is 24 hours from receipt of a complaint, with a penalty of up to 50 million euros if the platform does not comply. Several free speech groups have denounced the NetzDG for its harsh penalty regime.
India, too, has framed the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The fundamental principle behind the 2021 IT Rules is to rein in the safe harbour clause, which social media platforms in India have invoked time and again to escape responsibility for third-party content published on their platforms. Notably, the preamble of the new 2021 IT Rules makes no mention of the safe harbour clause (Section 79 of the Information Technology Act, 2000).
The preamble makes it very clear that the new rules replace the old 2011 IT Rules, which were premised on the safe harbour clause.
Rule 3 of the 2021 IT Rules lists categories of objectionable information, including information that is patently false or misleading. Not only can government bodies flag such grounds to the platforms under Rule 3; individual complaints can also be directed to the platform under Rule 3(2).
Content neutrality thus becomes an integral part of the regulatory mechanism. What is interesting to observe, however, is that content cannot be regulated by a single entity. It takes more hands and more eyes, and the task of guaranteeing content neutrality cannot be left to the platforms alone.
Interestingly, the Supreme Court of India in Shreya Singhal vs Union of India (2015) observed that it would not be possible for an intermediary to judicially vet its platform for every piece of illegal content. Its duty is to take down unlawful content of which it becomes aware through a court order or an appropriate government direction.
Even Facebook (now Meta) has reiterated that it cannot act as a "super-censor" without violating the Supreme Court's decision in Shreya Singhal (Meta's submission in Maatr Foundation vs Union of India, pending before the Madhya Pradesh High Court), given that billions of pieces of content are posted on its platform every day.
What is needed is a synchronization of the efforts of platforms, consumers and governments to ensure content neutrality.
—The author is Assistant Professor of Law, Dharmashastra National Law University, Jabalpur