Label all AI media: Draft rules to fight deepfakes
Meity has invited public feedback on the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
The government on Tuesday released draft regulations that would require all artificial intelligence tools and major social media platforms to prominently label AI-generated content, marking India’s first comprehensive attempt to regulate deepfakes and synthetic media.

The ministry of electronics and information technology (Meity) has invited public feedback on the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with submissions due by November 6.

In its official notice, Meity said the amendments were prompted by growing concerns over AI misuse. “With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (commonly known as deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” the ministry stated.
It added: “Recognising these risks, and following extensive public discussions and parliamentary deliberations, MeitY has prepared the present draft amendments.”
Under the draft framework, companies offering AI generation tools must embed permanent visible watermarks or metadata identifiers on all synthetic content. For images and videos, labels must cover at least 10% of the display area, while audio content must carry identifiers during the first 10% of playback duration.
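
The 10% thresholds reduce to simple sizing arithmetic. As a rough illustration only, since the draft prescribes no implementation, the sketch below shows how a platform might stamp a banner covering exactly one-tenth of an image's area using Python's Pillow library; the label wording, placement, and styling are all assumptions, not requirements of the draft.

    from PIL import Image, ImageDraw

    MIN_COVERAGE = 0.10  # draft rules: label must cover at least 10% of display area
    LABEL_TEXT = "AI-GENERATED"  # hypothetical wording; the draft does not fix the text

    def stamp_label(path_in, path_out):
        img = Image.open(path_in).convert("RGBA")
        w, h = img.size
        # A full-width bottom banner whose height makes its area exactly
        # MIN_COVERAGE of the frame: w * (h * 0.10) = 10% of w * h.
        banner_h = max(1, int(h * MIN_COVERAGE))
        overlay = Image.new("RGBA", (w, h), (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0, 160))
        draw.text((10, h - banner_h + banner_h // 3), LABEL_TEXT, fill=(255, 255, 255, 255))
        Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

    # Usage: stamp_label("upload.jpg", "upload_labelled.jpg")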
The proposed amendments set out, for the first time, a formal definition of “synthetically generated information” as content “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.”
By introducing this definition, the government seeks to bring AI-generated material under the same due diligence and takedown obligations that currently apply to unlawful online information.
Platforms will be prohibited from enabling users to suppress or remove these identifiers, making it harder to disguise the origin of AI-generated material.
Platform obligations
Significant social media intermediaries, the large platforms designated under the IT Rules, must ask users to declare whether uploaded content is synthetically generated before publication and deploy automated detection systems to verify such declarations, the proposed rules state.
All verified or declared synthetic content must carry clear labels or visible notices, allowing users to distinguish authentic media from manipulated material.
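
In outline, the obligation amounts to a declare-then-verify gate at upload time. The sketch below is purely illustrative: the draft names no detection technology, and every identifier here (check_synthetic, the 0.5 threshold) is an assumption made for the example.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        content_id: str
        user_declared_synthetic: bool  # declaration collected before publication

    def check_synthetic(content_id):
        # Stub for a platform's automated detector, returning a 0-1 score.
        # A real intermediary would plug its own classifier in here.
        return 0.0  # placeholder only

    def must_label(upload, threshold=0.5):
        # True if the item must carry a visible "synthetically generated" notice.
        if upload.user_declared_synthetic:
            return True  # declared content is labelled regardless
        # Undeclared uploads are still checked by automated detection.
        return check_synthetic(upload.content_id) >= threshold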
Platforms that fail to comply risk losing safe-harbour protections under Section 79 of the IT Act, 2000, and face potential regulatory penalties.
The safe harbour provisions for intermediaries will remain. A new proviso to rule 3(1)(b) clarifies that platforms removing synthetic content through grievance redressal mechanisms will retain their legal protections from liability over such content.
The ministry said the rules aim to promote user awareness, enhance traceability, and ensure accountability while maintaining an “enabling environment for innovation in AI-driven technologies.”
It emphasised the rules apply only to publicly available content, not to private or unpublished material.
Dhruv Garg, partner at the Indian Governance & Policy Project, said: “It is interesting to note that India has implicitly chosen to regulate the generative AI platforms as intermediaries giving them plausible safe harbour protections. While some other jurisdictions have already established regulations around disclosures and labelling, it is essential that these requirements balance transparency with need for scalability, innovation and creative expression.”
Ambiguity over AI text
While the definition of “synthetically generated information” is broad enough to cover AI-generated text, such as content produced by chatbots or AI writing tools, the draft rules do not specify whether such text is excluded or how it should be labelled.
The regulations detail labelling requirements only for visual content (minimum 10% of surface area) and audio (first 10% of duration), leaving unclear whether text would require visible disclaimers, metadata tags, or some other mechanism.
This gap is not unique to India. The European Union’s AI Act, whose transparency obligations take effect in 2026, similarly mandates labelling for “synthetic audio, image, video or text content” but provides no specific guidance on how text should be marked beyond requiring machine-readable formats. China’s labelling measures, which came into force in September 2025, are more prescriptive, requiring visible labels such as “AI-generated” to be placed at the beginning, middle, or end of AI-produced text, alongside metadata embedded in files.
In its explanatory note, the ministry said concerns over such content have been raised in Parliament, prompting it to issue advisories in recent years urging social media intermediaries to act against deepfake-related harms.
(With inputs from Binayak Dasgupta)