India's Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandating the labeling of AI-generated content on social media platforms. These proposed rules aim to address the increasing concerns surrounding deepfakes and misinformation spread through synthetic media.
The rise of AI-generated content has been rapid, permeating social media, advertising, and entertainment. This surge has brought with it worries about the manipulation of reality, particularly through deepfakes, which can be leveraged for malicious purposes like political propaganda, financial fraud, and reputational damage. Prime Minister Narendra Modi has described deepfakes as a new "crisis," emphasizing the urgent need for regulatory intervention.
The draft amendments outline key obligations for creators and platforms. Users uploading content to social media platforms such as YouTube, Instagram, and X (formerly Twitter) will be required to declare whether their content is synthetically generated. Platforms must ensure that AI-generated content carries dual visible markers: an embedded label or watermark within the content itself, and a platform-level label displayed wherever the content appears online. For visual content, the embedded label must cover at least 10 percent of the total surface area; for audio content, it must cover the initial 10 percent of the total duration.
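As an illustration, the 10 percent thresholds reduce to simple arithmetic. The sketch below assumes pixel dimensions for visual content and seconds for audio; the function names are hypothetical, not drawn from the draft rules:

```python
def visual_label_area(frame_width: int, frame_height: int) -> int:
    """Minimum label footprint for visual content: at least 10% of
    the total surface area, per the draft's visual-content rule."""
    return (frame_width * frame_height) // 10

def audio_label_duration(total_seconds: float) -> float:
    """For audio content, the label must cover the initial 10% of
    the total duration; returns that duration in seconds."""
    return total_seconds * 0.10

# A 1920x1080 video frame would need a label of at least 207,360 px^2,
# and a 2-minute audio clip would need its first 12 seconds labeled.
print(visual_label_area(1920, 1080))   # → 207360
print(audio_label_duration(120.0))     # → 12.0
```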
According to the draft amendments, social media platforms would have to require users to declare whether uploaded content is synthetically generated. Platforms would also need to deploy “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations. Where a declaration or technical verification confirms that content is synthetically generated, platforms must ensure this is clearly and prominently displayed with an appropriate label or notice.
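The labeling trigger described above can be sketched as a simple decision rule: content gets labeled if either the user's declaration or an automated check flags it. The detector score and threshold here are illustrative assumptions, not mechanisms specified in the draft:

```python
def label_required(user_declared_synthetic: bool,
                   detector_score: float,
                   threshold: float = 0.5) -> bool:
    """Platform-side labeling decision under the draft rules (sketch):
    label content if the user declares it synthetic OR an automated
    verification tool (modeled here as a 0..1 classifier score, a
    hypothetical stand-in) indicates it is."""
    return user_declared_synthetic or detector_score >= threshold

# User declares synthetic content: label regardless of the detector.
print(label_required(True, 0.0))    # → True
# User does not declare, but automated check flags it: still labeled.
print(label_required(False, 0.9))   # → True
# Neither signal fires: no label required.
print(label_required(False, 0.1))   # → False
```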
The definition of "synthetically generated information" includes any content "artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true". This encompasses AI-generated videos, audio, images, and text.
Non-compliant platforms risk losing their legal immunity under Section 79 of the IT Act. Intermediaries offering tools to create or modify synthetic content must embed permanent, unique identifiers or metadata into every piece of AI-generated material.
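One way a generation tool might satisfy the permanent-identifier requirement is to bind a unique ID to a cryptographic hash of the output at creation time. This is a minimal sketch under that assumption; the record's field names are hypothetical, and the draft does not prescribe a specific format:

```python
import hashlib
import uuid

def make_provenance_record(content_bytes: bytes, tool_name: str) -> dict:
    """Build a provenance record for a piece of AI-generated material:
    a unique identifier plus a SHA-256 digest tying the record to the
    exact bytes produced. Field names are illustrative only."""
    return {
        "identifier": str(uuid.uuid4()),  # unique per generated item
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
        "generator": tool_name,
        "synthetically_generated": True,
    }

record = make_provenance_record(b"example synthetic image bytes", "demo-tool")
print(record["synthetically_generated"])  # → True
```

In practice such a record would be embedded in the file itself (for example in image metadata) rather than kept alongside it, so that the identifier travels with the content when it is re-uploaded.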
These proposed rules have sparked discussions among creators and industry stakeholders. While companies like Meta and Google already have some form of AI labeling, enforcement has been inconsistent. The new rules aim to provide a more standardized and rigorous approach.
India's proposal aligns with similar AI-labeling initiatives in other regions, including China, the EU, and the U.S. China, for example, has already rolled out AI labeling rules requiring providers to clearly identify AI-generated material. These global efforts reflect a growing consensus on the need to address the challenges posed by AI-generated content and deepfakes.
While the draft amendments aim to enhance transparency and protect users from misinformation, enforcement challenges, such as reliably detecting unlabeled synthetic content at scale, along with potential gaps in regulatory capacity, could complicate implementation and compliance. It remains to be seen how effectively these rules will curb the misuse of AI-generated content and safeguard the digital landscape.
