India is taking a multi-faceted approach to regulating Artificial Intelligence (AI), primarily focusing on transparency, user protection, and platform accountability. Recent developments include mandates for labeling AI-generated content, swift removal of objectionable material, and leveraging existing legal frameworks.
The 10% Visibility Rule (Shelved)
An earlier draft of the rules proposed that AI-generated visuals carry watermarks covering at least 10% of the screen area, so that such content would be easily identifiable. The proposal was ultimately shelved after pushback from industry stakeholders, including the Internet and Mobile Association of India (IAMAI) and members such as Google, Meta, and Amazon, who argued that a fixed 10% watermark was too rigid and difficult to implement consistently across content formats.
Permanent Metadata and Labeling
Despite shelving the 10% visibility rule, India is moving forward with other transparency measures. The core requirement is that platforms clearly label all "synthetically generated information" (SGI) so that users can instantly recognize it. This covers AI-generated or AI-altered audio, video, and visual content that appears real or authentic. The rules define SGI as content "artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information appears to be real, authentic, or true." Routine editing such as color correction, noise reduction, compression, or translation is exempt, provided it does not distort the original meaning.
Furthermore, platforms must embed persistent metadata and unique identifiers that trace the content back to its origin; these labels and metadata must not be modified, suppressed, or removed. Before a user uploads content, the platform must ask: "Is this content AI-generated?" Platforms must also deploy automated tools to cross-verify the content's format, source, and nature, and any content flagged as synthetic must carry a visible disclosure tag.
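The upload-time workflow described above can be sketched in code. This is an illustrative assumption only: the function names, the `Upload` type, and the decision logic are hypothetical, not anything the rules prescribe, but they capture the sequence of user declaration, automated cross-verification, and disclosure labeling.

```python
from dataclasses import dataclass

# Hypothetical sketch of the upload-time SGI workflow. The names and
# the detection heuristic are illustrative assumptions, not anything
# specified in the rules themselves.

@dataclass
class Upload:
    content_id: str
    user_declared_ai: bool   # answer to "Is this content AI-generated?"
    has_ai_metadata: bool    # stand-in for what automated tools might detect

def classify_upload(upload: Upload) -> dict:
    """Decide whether content must carry a visible SGI disclosure label."""
    # Content is treated as synthetic if the user declares it, or if
    # automated cross-verification of its format/source/nature flags it.
    is_synthetic = upload.user_declared_ai or upload.has_ai_metadata
    return {
        "content_id": upload.content_id,
        "label_required": is_synthetic,
        # Persistent metadata must trace the content to its origin and,
        # once applied, must not be modified, suppressed, or removed.
        "embed_persistent_metadata": is_synthetic,
    }

result = classify_upload(Upload("vid-001", user_declared_ai=True, has_ai_metadata=False))
print(result)
# → {'content_id': 'vid-001', 'label_required': True, 'embed_persistent_metadata': True}
```

A real implementation would replace the boolean flags with actual provenance checks (for example, inspecting embedded content-credential metadata), but the control flow mirrors the rule: declaration and automated verification both feed the labeling decision.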
Three-Hour Takedown Rule
The updated rules also mandate strict takedown timelines for objectionable content. Platforms now have just three hours to act on certain lawful removal orders, down from the previous 36-hour window; other compliance timelines have tightened as well, with a 15-day window cut to seven days and a 24-hour deadline halved to 12 hours. The government has clarified that AI-generated content used for unlawful activities will be treated like any other illegal content. Platforms must prevent their services from being used to create or disseminate synthetic content involving child sexual abuse material, obscene or indecent content, impersonation, false electronic records, or material linked to weapons, explosives, or other illegal activities. They must also warn users at least once every three months about the penalties for misusing AI content.
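The tightened timelines translate directly into deadline arithmetic a compliance system would perform. The sketch below is illustrative only: the order-type names are hypothetical labels for the windows mentioned above, not categories defined in the rules.

```python
from datetime import datetime, timedelta

# Illustrative mapping of the takedown windows mentioned above.
# The order-type names are hypothetical, not terms from the rules.
TAKEDOWN_WINDOWS = {
    "lawful_removal_order": timedelta(hours=3),   # reduced from 36 hours
    "seven_day_window": timedelta(days=7),        # reduced from 15 days
    "expedited_deadline": timedelta(hours=12),    # halved from 24 hours
}

def takedown_deadline(order_type: str, received_at: datetime) -> datetime:
    """Compute the compliance deadline from the time an order is received."""
    return received_at + TAKEDOWN_WINDOWS[order_type]

received = datetime(2025, 1, 10, 9, 0)
print(takedown_deadline("lawful_removal_order", received))
# → 2025-01-10 12:00:00
```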
Legal Framework and the DPDP Act
India is leveraging existing legal frameworks rather than enacting standalone AI legislation. The Information Technology Act, 2000 remains the primary law governing digital platforms, while the Digital Personal Data Protection Act, 2023 (DPDP Act) governs the collection, processing, and storage of digital personal data. The DPDP Act mandates consent for personal data processing, imposes purpose-limitation and data-minimization requirements, and empowers the Data Protection Board to investigate harms from AI-driven profiling. Its rules on consent and lawful processing mean that personal data used for AI training or model development must be collected with explicit consent or on another legal basis, ensuring AI systems do not rely on datasets obtained without user awareness or approval.
Aim and Challenges
These measures aim to support the growth of India's AI ecosystem while ensuring responsible AI deployment. The government's approach is guided by principles of trust, people-first design, innovation, fairness, accountability, understandability, safety, resilience, and sustainability. However, challenges remain, including data privacy concerns, skill shortages, and ethical considerations. Some civil society groups have expressed concerns that the rules may expand state access to personal data and reduce transparency for citizens. Despite these challenges, India is committed to fostering technological trust through digital public infrastructure and tackling socioeconomic issues with a bottom-up approach to AI.
