India’s deepfake problem: Scale and urgency
The incident of a celebrity becoming the victim of a deepfake was a wake-up call for policymakers. It demonstrated how generative Artificial Intelligence (AI) can be weaponised against a person's image and reputation, and underscored the need to counter the proliferation of synthetically generated content such as AI-generated images, audio, videos and deepfakes. The Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (Draft Rules) come at a moment when misinformation on global social media platforms is at an all-time high: deepfakes worldwide have surged from 5,00,000 in 2023–24 to more than eight million in 2025–26. India, now the world’s largest connected society, is especially vulnerable. Nearly 75% of Indians report having encountered deepfakes online, and one in four have come across political deepfakes only to realise later that they were fabricated. Cases have grown by 550% since 2019, with projected losses running into tens of thousands of crores. India features among the countries most vulnerable to deepfake harms.
This context makes the amendment well-intentioned and well-timed to contain invasion of privacy, defamation and loss of dignity. Yet the Draft Rules, in their present form, may be setting up a regulatory mechanism that is too broad and difficult to implement. The draft defines synthetic content as anything “created, generated, modified, or altered using a computer resource” that appears authentic. In practice, this sweeps far more than malicious manipulation into the scope: routine photo editing, colour correction, animated explainers, parody, satirical reels and memes are all caught.
By regulating all synthetic content instead of targeting harmful or deceptive manipulation, the system will drown in volume. Based on current interpretations, the definition could be broad enough that a very large share of online content may require labelling. Platforms may be forced to handle an unmanageable flood of declarations and labels, making genuine threat detection harder rather than easier. This can also create label fatigue, where users become indifferent to AI tags simply because everything is labelled.
The rules mandate detection of such content even though detection technologies are still nascent, unreliable, non-interoperable and a potential risk to privacy. The verification process may be challenging as it could require platforms to collect detailed user data such as behaviour patterns, device fingerprints, editing tools and content histories, raising concerns of over-surveillance. Such verification requirements may also intersect with the DPDP Act, since broad scanning would test principles such as purpose limitation, storage limitation and data minimisation. These requirements also raise compliance costs, which will hit smaller platforms hardest, deepen the dominance of a few big tech companies, stifle innovation and push startups to geo-fence India out, weakening India’s goal of building an indigenous AI ecosystem.
Under pressure to avoid liability, platforms may then over-remove content, since there are no legal standards for what count as “proportionate and reasonable” technical measures of fake detection, affecting satire, dissent, political commentary and general criticism.
The e-commerce sector shows how this overly broad definition can pull even ordinary, harmless digital practices into a framework meant for harmful deepfakes. Product images on online marketplaces often rely on basic editing techniques such as lighting adjustments, background removal or AI-assisted clean-ups to clearly display items. Under a broad interpretation of the draft, even these could fall within the scope of synthetic content and may require visible labels or watermarks. This can clutter product listings, reduce visual appeal and undermine the shopping experience. Smaller sellers who depend on polished visuals to attract buyers may face higher compliance costs, while marketplaces would need to build verification and audit systems that add little real value.
Compared to emerging global practice, these Draft Rules take a more expansive approach to synthetic content. The draft mandates that synthetically generated content carry prominent labels covering at least 10% of the screen area for visual media, or a disclosure lasting at least 10% of the duration for audio media, along with persistent metadata and uploader declarations. Many global regimes targeting deepfakes and AI content, such as Singapore’s Online Criminal Harms Act (OCHA), Australia’s Online Safety Act and the UK’s Online Safety Act, concentrate their strongest obligations on clearly defined harms such as election interference, impersonation, non-consensual intimate imagery and criminal harms. By regulating all synthetic content regardless of its risk, including harmless AI-assisted edits to benign images, videos or audio, the rules depart sharply from this emerging global risk-based governance approach. This makes them prone to over-regulating routine digital expression while losing focus on genuinely harmful manipulation.
India has built a strong digital system based on trust and consent. The real deepfake problem needs a solution that creates trust, not noise, cost and confusion. We need sharper tools, advanced technology, digital awareness and literacy, and a mechanism that is technically feasible and viable to fight malicious deepfakes without stifling creativity, commerce and criticism. Effective regulation demands accuracy, restraint and a focus on real harm. Without that, the regulations risk becoming an example of good intentions derailed by poor design.
This article is authored by Alkesh Kumar Sharma, member, Public Enterprises Selection Board (PESB) and former secretary, MeitY.

