In response to rising concerns about misinformation and the potential misuse of AI-generated content, India has proposed amendments to its Information Technology (IT) rules, mandating the labeling of AI-generated content on social media platforms. The proposed changes aim to increase transparency, ensure accountability, and safeguard users from deception in the digital space.
The draft amendments to the IT rules require social media platforms to obtain a user declaration regarding whether the information they are uploading is synthetically generated. These platforms must also deploy "reasonable and appropriate technical measures," including automated tools, to verify the accuracy of such declarations. If the declaration or technical verification confirms that the information is synthetically generated, the platforms must clearly and prominently display a label or notice indicating this.
The proposed rules also introduce visibility and audibility standards. For visual content, the label must cover at least 10% of the total surface area; for audio content, the notice must play during the initial 10% of the total duration. Furthermore, intermediaries that offer computer resources for creating or modifying synthetically generated information must ensure that such information is labeled or embedded with permanent, unique metadata or an identifier that enables immediate identification of the content as synthetically generated.
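To make the thresholds concrete, the arithmetic they imply can be sketched as below. The function names and the full-width-banner layout are illustrative assumptions, not part of the draft text.

```python
def min_label_area(width_px: int, height_px: int, percent: int = 10) -> int:
    """Minimum label area, in pixels, to cover `percent`% of a frame.
    Uses integer ceiling division to avoid floating-point drift."""
    return -(-(width_px * height_px * percent) // 100)

def banner_height(height_px: int, percent: int = 10) -> int:
    """Height of a full-width banner meeting the area threshold: a strip
    spanning the full width covers width * h pixels, so h only needs to
    be `percent`% of the frame height."""
    return -(-(height_px * percent) // 100)

def audio_notice_seconds(duration_s: float, fraction: float = 0.10) -> float:
    """Length of the opening segment that must carry the audio notice."""
    return duration_s * fraction
```

For example, a 1920x1080 frame would require a label of at least 207,360 square pixels, such as a full-width banner at least 108 pixels tall, while a 120-second audio clip would need the notice within its first 12 seconds.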
The Ministry of Electronics and Information Technology (MeitY) has stated that the rules would "ensure visible labeling, metadata traceability, and transparency for all public-facing AI-generated media". The ministry is seeking public and industry feedback on the draft framework by November 6, 2025.
The government's move is motivated by the growing misuse of generative AI tools to spread misinformation, impersonate individuals, and potentially influence elections. With nearly one billion internet users, India is a market where the stakes are especially high: AI-generated deepfakes and fake news could incite violence and conflict.
These proposed regulations are in line with similar actions taken by other countries, such as China and the European Union. Experts have noted that the 10% visibility threshold is among the first explicit attempts globally to prescribe a quantifiable visibility standard. If the policy is approved, AI developers and social media platforms would need to embed automated labeling and metadata tagging systems to mark synthetic content at the point of creation. Social media firms may lose their safe harbor protection if violations are not flagged proactively.
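A minimal sketch of the kind of metadata tagging at the point of creation that such a system might involve is shown below. The sidecar-JSON schema, field names, and helper functions are assumptions for illustration; the draft does not prescribe a specific format.

```python
import hashlib
import json
import uuid

def tag_synthetic(content: bytes) -> str:
    """Return a JSON metadata record marking `content` as synthetically
    generated, binding a unique identifier to a hash of the content."""
    record = {
        "synthetically_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "identifier": str(uuid.uuid4()),  # unique per generated asset
    }
    return json.dumps(record)

def is_tag_valid(content: bytes, tag_json: str) -> bool:
    """Check that a tag matches the content it claims to describe."""
    record = json.loads(tag_json)
    return (record.get("synthetically_generated") is True
            and record.get("content_sha256")
            == hashlib.sha256(content).hexdigest())
```

Binding the identifier to a content hash means the tag cannot simply be copied onto unrelated media, which is one way a "permanent unique" identifier could survive redistribution checks.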
