India's directive: Social media platforms must expedite deepfake removal to combat disinformation threats.

India is cracking down on deepfakes and AI-generated misinformation with new, stricter regulations for social media platforms. The Ministry of Electronics and Information Technology (MeitY) has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, setting a three-hour deadline for platforms to remove flagged deepfakes and other AI-generated content. These changes will take effect on February 20, 2026.

The updated rules mandate that social media companies such as Facebook, Instagram, and YouTube clearly label all AI-generated content, ensuring that such synthetic material carries embedded identifiers. Platforms are also prohibited from permitting the removal or suppression of AI labels or their associated metadata once applied. The government will treat AI-generated content on par with other information when determining unlawful acts.

MeitY has formally defined AI-generated and synthetic content as any audio, visual, or audiovisual information created or altered using AI that appears real or authentic. Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition. The rules explicitly prohibit AI-generated content that includes child sexual abuse material, non-consensual intimate imagery, false documents, or misleading depictions of real individuals or events.

To ensure compliance, social media platforms must deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative, or deceptive AI-generated content. Platforms must also regularly warn users about the consequences of violating rules related to AI misuse, issuing such warnings at least once every three months. The government has also shortened user grievance resolution deadlines to ensure faster responses.

The government's enforcement push is initially focused on leading social media intermediaries with five million or more registered users in India. These platforms must now ask users whether their content is AI-generated before it is uploaded and deploy automated tools to cross-verify the content's format, source, and nature. Content flagged as synthetic must carry a visible label.

These new regulations reflect growing concerns about the misuse of AI and deepfakes to spread misinformation, damage reputations, manipulate elections, and commit financial fraud. By mandating labeling, faster takedowns, and proactive monitoring, India aims to create a safer and more transparent online environment.

© 2026 DailyDigest360