India's AI regulation plans: Addressing deepfake proliferation through stricter guidelines and enhanced oversight.

India is planning to tighten regulations around artificial intelligence (AI) to combat the increasing threat of deepfakes and AI-generated misinformation. The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, marking India's first comprehensive effort to regulate synthetic media. The proposed changes aim to empower users to distinguish between authentic and synthetic content while ensuring the accountability of social media platforms.

The draft regulations require all AI tools and major social media platforms to prominently label AI-generated content. Platforms will be prohibited from letting users suppress or remove these identifiers, making it harder to disguise the origin of AI-generated material. Significant social media intermediaries, the category the rules use for large platforms, must ask users to declare whether content they upload is synthetically generated before publication. These platforms will also be required to deploy automated detection systems to verify such declarations, and all verified or declared synthetic content must carry clear labels or visible notices.

MeitY has formally defined "synthetically generated information" as content "artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true". This definition brings AI-generated material under the same obligations as unlawful online information.

Under the draft framework, companies offering AI generation tools must embed permanent visible watermarks or metadata identifiers on all synthetic content. For images and videos, labels must cover at least 10% of the display area, while audio content must carry identifiers during the first 10% of playback duration. The ministry has clarified that these obligations will apply only to content that is publicly shared or published on social media platforms, not to private or unpublished material.
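The 10% thresholds above are straightforward proportions. As a minimal illustration (the function names below are hypothetical, not taken from the draft rules), a platform checking compliance could compute the minimum label size and audio-identifier window like this:

```python
def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: at least 10% of the total display area,
    per the draft rule for images and videos."""
    return (width_px * height_px) // 10

def audio_identifier_window(duration_s: float) -> float:
    """Length (in seconds) of the opening segment that must carry the
    identifier: the first 10% of playback duration, per the draft rule."""
    return duration_s * 0.10

# Example: a 1920x1080 video frame and a 5-minute audio clip
print(min_label_area(1920, 1080))        # 207360 px (10% of 2,073,600)
print(audio_identifier_window(300.0))    # 30.0 seconds
```

This is only a sketch of the arithmetic the draft describes; the rules themselves do not prescribe any particular implementation.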

Platforms that fail to comply with these rules risk losing safe harbor protections under Section 79 of the IT Act, 2000, and could face regulatory penalties. However, platforms that remove or restrict access to harmful synthetic material in good faith, based on user grievances or internal detection, will continue to enjoy safe harbor protections under Section 79(2) of the IT Act.

The ministry has invited public feedback on the proposed amendments, with submissions due by November 6. According to the ministry, the amendments aim to promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies.

These proposed regulations come at a time of growing concern over the misuse of AI technologies. Deepfakes and other forms of synthetic media have the potential to cause user harm, spread misinformation, manipulate elections, impersonate individuals, and facilitate fraud. By implementing these new rules, India aims to mitigate these risks and ensure a safer and more transparent online environment.


Written By
Aahana Patel is a detail-oriented journalist who approaches sports coverage with analytical depth and creative flair. She excels at turning key moments and performances into compelling narratives. With a focus on fairness, accuracy, and emotion, Aahana’s work resonates with both casual fans and seasoned followers. Her mission is to make every story memorable.

© 2025 DailyDigest360