Union Minister for Information and Broadcasting and IT, Ashwini Vaishnaw, has recently emphasized the government's commitment to strengthening the legal framework to combat the growing threat of deepfakes and misinformation. Speaking in the Lok Sabha, Vaishnaw addressed the severe implications of fake news and AI-generated content on democracy, highlighting the urgent need for stricter regulations and faster enforcement mechanisms.
Vaishnaw acknowledged the rapid evolution of misinformation, manipulated content, and deepfake videos, particularly with the increasing speed of AI-driven content creation and distribution. He noted that certain online ecosystems are deliberately bypassing constitutional values and parliamentary laws, necessitating decisive intervention.
The government has already implemented provisions mandating the removal of flagged content within 36 hours of reporting. Furthermore, a draft rule specifically targeting AI-generated deepfakes has been circulated for public and institutional consultation. Vaishnaw also expressed appreciation for the Parliamentary Committee's detailed report, led by Nishikant Dubey, which offered recommendations for structural improvements to the legal ecosystem governing digital misinformation.
Addressing concerns about balancing freedom of speech with democratic responsibility, Vaishnaw assured lawmakers that the government is approaching this issue with sensitivity. The aim is to protect democracy without compromising free expression, recognizing the empowering role of platforms for citizens while also acknowledging the avenues for harm they can create.
The rise of AI-enabled image and video manipulation has intensified the need for regulation. Deepfakes can sway public opinion, distort electoral narratives, and raise national security concerns. Vaishnaw stressed the importance of robust rules and compliance to ensure accountability from tech platforms, influencers, and content creators. The government is actively consulting with stakeholders on deepfake regulation, signaling that updated, technology-centric laws may soon follow. The priority remains strengthening institutions and rebuilding public trust, which Vaishnaw described as fundamental to social stability in the digital age.
The proposed amendments to the Information Technology (IT) Rules aim to extend the rules' applicability to AI-generated content, treating it like any other content when it is obscene, infringes intellectual property, deceives users, or impersonates someone. Intermediaries are expected to exercise due diligence regarding synthetic content, and significant social media intermediaries (SSMIs) will be required to identify the first originator of information and deploy technology to detect unauthorized synthetic content.
Platforms that enable the creation or modification of synthetic information must ensure that all such content is clearly labeled with a permanent, unique metadata tag or identifier, covering at least 10% of the visible surface area for visual media or the initial 10% of the duration for audio content. Altering, suppressing, or removing these labels is prohibited. SSMIs must also obtain user declarations on whether uploaded content is synthetically generated and use technical measures to verify these declarations.
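In practical terms, the 10% thresholds translate into simple minimums a platform could compute at upload time. The sketch below is purely illustrative: the draft rules do not prescribe any particular formula, and the assumption here is that "10% of the visible surface area" means 10% of total pixel area and "the initial 10% of the duration" means the first tenth of the clip's length.

```python
def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum label area in pixels for visual media, assuming the
    proposed 10% threshold applies to total pixel area (an assumption,
    not language from the draft rules)."""
    return int(width_px * height_px * coverage)

def min_label_duration(total_seconds: float, share: float = 0.10) -> float:
    """Minimum initial labeled duration for audio content, assuming the
    10% threshold applies to total running time."""
    return total_seconds * share

# Example: a 1920x1080 frame would need a label covering >= 207,360 px;
# a 60-second audio clip would need labeling over its first 6 seconds.
print(min_label_area(1920, 1080))   # 207360
print(min_label_duration(60.0))     # 6.0
```

How such a label is rendered, embedded as metadata, or verified against tampering is left to the consultation process; this only shows the scale of the proposed requirement.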
India is also exploring technical solutions to combat deepfakes. Under the IndiaAI mission, projects are underway to develop deepfake detection tools, including frameworks by IIT Jodhpur and IIT Madras, IIT Mandi and Himachal Pradesh’s Directorate of Forensic Services, and IIT Kharagpur.
These efforts align with a global trend of addressing deepfakes through legislation and regulation. Countries like the US, South Korea, China, and the UK have implemented various measures, including criminalizing deepfakes in specific contexts, imposing identity verification requirements, and mandating labeling of AI-generated content. The European Union's AI Act also requires labeling of synthetic content.
