Establishing predictable AI liability in India
This article is authored by Nayan Chandra Mishra, research assistant, New Delhi.
As Artificial Intelligence (AI) increasingly permeates everyday life in India, the risks of misuse and AI-driven harm are rising in parallel. This has pushed policymakers to consider how these emerging risks can be tackled through a predictable, liability-focused legal framework. A crucial first step is to identify the liability principles on which such a framework should rest. A practical approach lies in leveraging India's existing laws, extending their interpretation to encompass AI-related harms. This article, therefore, outlines how a graded approach to liability can allocate responsibility across the AI value chain while keeping the system predictable and flexible.
Employing existing laws successfully requires clarity on who the actors are and what roles they play at different stages of AI development. The AI value chain involves multiple actors with differentiated responsibilities, and unlike traditional manufacturing, it is difficult to achieve modularity (components that are connected yet separable) within it.
While comprehensive discussion on defining the AI chain in India is scarce, Western researchers have identified four types of actors: infrastructure providers (e.g., data centres, chips, and cloud services), developers (who build foundational models), deployers (who build AI applications on top of foundational models and interact directly with users), and end users (commercial and non-commercial). With their roles and responsibilities clearly demarcated, these actors interact with one another to deliver AI systems to end users.
Once the AI chain is identified, the apportionment of liability becomes predictable based on the type of harm that has occurred. The next step is to bring equitability to the imposition of the most relevant law. To achieve this, this article identifies four sequential levels of liability, in order of preference:
* Contract law
* Tort law
* Civil law
* Criminal law
These categories broadly encompass the types of law applicable to different harms. Given that some AI harms are unknown or not covered by existing law, this gradation will give regulators and courts clarity in providing relief to victims. While each case will present its own factual nuances, a graded approach involves analysing every layer sequentially to clearly identify the basis of the actors' obligations and the corresponding liability for the harm that occurred.
Contract law is the first point of legal contact, as contracts set the rules of interaction among multiple actors at different stages. For instance, an Indian AI application developer (Startup X) may contract with a foundational AI model developer and other service providers, such as data providers, auditors, project management tools, and risk mitigators, to acquire specialised services and fulfil safety obligations. The foundational model developer can, in turn, indemnify Startup X against third-party suits over copyright infringement or misleading datasets. Contract law thus provides the requisite flexibility by outlining the roles and responsibilities of the parties across the scenarios that may arise, and this specificity can guide courts in taking appropriate action within existing legal bounds.
Tort law provides the next line of redress where contractual clauses are silent. In the absence of a specific statute, it allows courts the flexibility to impose liability based on the peculiar facts and circumstances of a case. Given that AI is still in its early stages and actors have not yet developed expertise in identifying and mitigating its risks, tort law can act as a lodestar for courts navigating the learning curve, and for actors to correct their unintentional mistakes and avoid future liability.
Moreover, within tort law there is a gradation of liability based on the gravity of harm, starting from mistake and negligence and extending to strict or absolute liability. Strict and absolute liability cut through the complexity of the AI chain to impose liability even where the actors are not negligent. This approach is relevant because it shifts the burden of proof from the plaintiff to the defendant, recognising that the affected party is unlikely to know the internal intricacies of the value chain.
While contract and tort law can address a majority of issues, the third and fourth layers are activated when the harm is specific, intentional, and has a tangible societal impact. Examples include misinformation, copyright infringement, or grave personal harm, such as a user's suicide after interacting with a chatbot. The imposition of civil law will primarily involve the civil procedure code, consumer protection law, intellectual property laws, data protection law, and information technology law. This statutory civil law mechanism allows regulators and courts to impose compensation, injunctions, and corrective measures without needing a new AI-specific statute.
Criminal law, by contrast, is more expansive and includes general laws, such as the three criminal codes, alongside special laws protecting women and children or dealing with money laundering and national security. It should, however, remain a last-resort layer, triggered only where the conduct involves knowledge, intention, or a high degree of recklessness. Over-criminalisation risks making the process itself the punishment, creating a chilling effect on innovation and AI development.
We must recognise that no gradation is foolproof; each suffers from limitations inherent in the technology and from existing implementation challenges. The application of these laws in India has historically been lacklustre, often resulting in the process itself becoming the punishment. Liability principles and their implementation are, therefore, two separate worlds with connected nerves. Only when both operate in harmony will these principles deliver the necessary predictability in the actions of actors, regulators, and courts.