Draft regulations on AI-generated content: securing the digital space
Draft regulations on AI-generated content are a first step toward curbing deepfakes and synthetic media
The just-published draft regulations on Artificial Intelligence (AI)-generated content mark India's first statutory attempt to address deepfakes and synthetic media. The proposed amendments to the IT Rules will require AI tools and social media platforms to label manipulated content, a response to concerns over poll interference, misinformation, and impersonation. The framework gets several things right. It provides legal clarity by defining "synthetically generated information" for the first time in Indian law, bringing AI-generated material under existing takedown obligations, and laying out specific labelling requirements. The proposed rules state that images and videos must carry labels covering at least 10% of the display area, and that audio must be identified during the first 10% of playback. Enforcement rests on two sets of entities: companies that create such products, and platforms that host user-generated content. Platforms must ask users to declare whether uploaded content is synthetically generated, deploy automated detection systems for verification, and take down content when it is flagged through grievance redressal mechanisms.
These are important first steps. Deepfakes of public figures have been used for scams (the Union finance minister was recently targeted) and to create sexually suggestive videos of celebrities that stop short of explicit content, evading stricter moderation. Tools such as Sora and DALL-E evolve almost quarterly, producing increasingly convincing but entirely fictional images and video clips. Celebrities have sought court injunctions against the unauthorised use of their "likeness", recognising how easily their face, voice, or mannerisms can be replicated without consent.
But this first step will need stronger follow-through. The draft's definition, "all synthetically generated information", appears to cover AI-generated text, such as ChatGPT output, yet it offers no guidance on how such content should be labelled or fingerprinted. The draft is also silent on how the Centre plans to address media created by underground tools, where safeguards against nudity or gore are easily breached. These gaps may require standalone legislation, an approach many jurisdictions are adopting.
The consultation period offers an opportunity to address these gaps. More avenues to curb harm, including text-based misinformation, must be considered, while steering clear of heavy-handed approaches that stifle AI adoption and innovation. India is right to act proactively, but a domain as complicated as AI demands precision. The harder work lies ahead: building public digital literacy and institutions of media trust, the only true antidote to a slide into alternative realities where facts are increasingly contested. The draft is a foundation; the rest of the legislative structure must now be planned.