In response to the rising threat of deepfakes and synthetic content, the Indian government is considering amendments to its IT rules to address misinformation and election manipulation. The Ministry of Electronics and Information Technology (MeitY) has proposed new legal obligations for AI and social media firms to label AI-generated content. The proposed rules aim to increase transparency and accountability in how AI-generated content is shared online.
The government's concern stems from the increasing misuse of generative AI tools to cause user harm, spread misinformation, manipulate elections, or impersonate individuals. With nearly a billion internet users, India faces a high risk of harm from manipulated media, especially given its diverse population where misinformation can incite communal tensions or disrupt democratic processes.
The proposed regulations mandate that AI developers and social media platforms label any content generated by AI. Social media companies would also be required to ensure that users declare if they are uploading deepfake material. The IT ministry has prepared draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft defines synthetically generated content as information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true.
The draft rules require platforms to label AI-generated content with prominent markers and identifiers, covering a minimum of 10% of the visual display or the initial 10% of the duration of an audio clip. Significant social media platforms (those with 50 lakh, i.e. five million, or more registered users) will bear greater accountability for verifying and flagging synthetic information through reasonable and appropriate technical measures. Platforms will also be required to obtain user declarations confirming whether uploaded content has been created using AI tools and to implement technical mechanisms to verify authenticity.
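The 10% thresholds described above can be expressed as simple arithmetic. The sketch below is a hypothetical illustration only: the function names, the pixel-area reading of "visual display", and the seconds-based reading of audio duration are assumptions for illustration, not language from the draft rules.

```python
def required_label_area(width_px: int, height_px: int) -> int:
    """Minimum label size for visual content: 10% of the display area.

    Pixel-area interpretation is an assumption; the draft rules do not
    specify how the 10% of "visual display" is to be measured.
    """
    return (width_px * height_px) // 10  # integer floor of 10%


def required_label_duration(total_seconds: float) -> float:
    """For audio, the label must cover the initial 10% of the clip."""
    return total_seconds * 0.10


# Example: a 1920x1080 video frame and a two-minute audio clip.
print(required_label_area(1920, 1080))   # 10% of 2,073,600 pixels
print(required_label_duration(120.0))    # first 12 seconds of audio
```

Under this reading, a full-HD frame would need a label occupying at least 207,360 pixels, and a 120-second clip would need the disclosure within its first 12 seconds.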
IT Minister Ashwini Vaishnaw stated that the government has been receiving requests to act against synthetic content and deepfakes in order to curb misinformation. The government aims to ensure that users know whether something is synthetic or real through mandatory labeling.
To avoid overreach, MeitY clarified that these obligations will apply only to content publicly shared or published on social media platforms, not to private or unpublished material. The rules also clarify that the definition of information in the IT Rules now explicitly includes synthetically generated data, so AI-created misinformation, defamatory content, or fraudulent impersonations will be treated no differently from their real-world counterparts under the law.
If content turns out to be AI-generated, the platform must ensure it is clearly labeled or accompanied by a visible notice. Platforms that remove or restrict access to harmful synthetic material based on user grievances or internal detection will continue to enjoy safe harbor protections under Section 79(2) of the IT Act. The ministry has opened the draft for public consultation, inviting feedback from stakeholders, industry players, and citizens until November 6, 2025.
India is not alone in bringing in rules to control AI-generated deepfakes. Governments worldwide, including the United States and the European Union, are weighing or enacting similar labeling requirements.
