Global Tech Reviewing AI-Labelling Mandate as India Curbs Deepfakes
The global tech industry is closely evaluating India's new policy mandating clear labels for AI-generated content on social media platforms. Designed to combat the rising threat of deepfakes and misinformation, the policy is pushing tech giants to reassess their approaches to AI governance in a key market.
India's Ministry of Electronics and Information Technology (MeitY) announced the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, seeking to establish legal guardrails around synthetically generated information. The action was prompted by rising concerns over deepfakes, in which AI is used to create convincing but false audio, video, and images that can be weaponized to spread misinformation, damage reputations, manipulate elections, or commit financial fraud.
Under the proposed regulations, social media platforms must ensure that AI-generated content is clearly labelled. For visual content, the label should cover at least 10% of the display area, while for audio content, it should fall within the first 10% of the playback duration. These labels should be visible, audible, non-removable, and easily recognizable. Platforms will also need to require users to declare whether their content is synthetically generated, and to deploy technical tools to verify those declarations.
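To make those thresholds concrete, here is a minimal sketch of how a platform might check them. The 10% figures come from the draft rules themselves; everything else, including the function names and data structures, is an illustrative assumption rather than anything prescribed by MeitY.

from dataclasses import dataclass

@dataclass
class VisualLabel:
    width_px: int   # label width in pixels
    height_px: int  # label height in pixels

def visual_label_compliant(frame_w: int, frame_h: int, label: VisualLabel) -> bool:
    """Check that the label covers at least 10% of the display area."""
    frame_area = frame_w * frame_h
    label_area = label.width_px * label.height_px
    return label_area >= 0.10 * frame_area

def audio_label_compliant(total_duration_s: float,
                          label_start_s: float,
                          label_end_s: float) -> bool:
    """Check that the audible disclosure sits within the first 10% of playback."""
    window_end = 0.10 * total_duration_s
    return label_start_s >= 0.0 and label_end_s <= window_end

# Example: a 1920x1080 video needs a label of at least 207,360 square pixels,
# and a 60-second clip must finish its audio disclosure within the first 6 seconds.
print(visual_label_compliant(1920, 1080, VisualLabel(640, 360)))  # True (230,400 px^2)
print(audio_label_compliant(60.0, 0.0, 5.0))                      # True

How such thresholds would be measured in practice, for example against varying screen sizes or resized uploads, is one of the open questions platforms are expected to raise during the consultation.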
This initiative follows growing concerns in India about the misuse of AI. During India's 2024 general elections, fake videos of politicians spread across social media, eroding public trust and posing risks to democratic integrity. A deepfake video of an actress also went viral in 2023, highlighting the potential for harm to individual privacy and reputation. The government aims to make the internet "open, safe, trusted, and accountable for everyone" through these proposed amendments.
Several major AI firms, including Anthropic, OpenAI, and Perplexity, have announced expansion plans in India. These companies and others are now scrutinizing the labelling mandate, which requires platforms to mark AI-generated content with prominent labels and identifiers, and to verify and flag synthetic information to curb user harm from deepfakes and misinformation.
India's move is part of a broader global trend toward AI accountability, though approaches vary across countries. China has implemented stricter rules requiring platforms to detect watermarks and to prompt uploaders to declare AI-generated material. The European Union's AI Act takes a risk-based approach. The United States lacks a comprehensive federal AI law but has state laws targeting deepfakes and their misuse.
Experts emphasize the need for carefully designed regulatory safeguards to prevent misuse of such provisions, which could inadvertently restrict legitimate expression or artistic uses of synthetic media. Balancing authenticity and accountability with freedom of speech is crucial for the success of this framework. The public and industry stakeholders have until November 6, 2025, to provide feedback on the draft rules.
