Govt committee lays down guidelines for AI governance in India
India's AI Governance Guidelines propose practical measures for accountability, grievance redressal, and human oversight, aiming for safe AI deployment.
The government-appointed committee behind the India AI Governance Guidelines has set out practical recommendations for industry and regulators, along with a detailed accountability framework, grievance redressal mechanisms, and an action plan for how India should implement AI governance over the coming years.
“This [report] makes the rollout of AI in a very dynamic, safe and innovative way. We need to see how we take it forward especially when they say that artificial general intelligence (AGI) is just two years away,” said Principal Scientific Adviser Ajay Kumar Sood. “We should be seeing how really we prepare ourselves for AGI… I don’t think it can be linearly scaled the way it is happening… to keep on pumping more GPUs. The oceans will boil before we do that. That cannot be the answer.”
The committee's recommendations, which are not currently enforceable in law, urge companies developing and deploying AI systems to comply with existing laws and with voluntary principles on privacy, fairness, safety and transparency. It recommends that firms update their service terms to reflect accountability commitments, maintain audit trails, publish transparency reports, embed human oversight where appropriate, and build privacy-enhancing and bias-mitigation tools directly into their systems.
Regulators, meanwhile, are encouraged to take a proportionate, risk-based approach, focusing on harms that threaten life, livelihood or safety, and to coordinate across ministries and agencies through the committee-proposed AI Governance Group (AIGG) to ensure consistent oversight.
The committee says accountability should be clearly distributed across the AI value chain. “Accountability should be clearly assigned based on the function performed, risk of harm, and due diligence conditions imposed. Accountability may be ensured through a variety of policy, technical and market-led mechanisms,” it notes.
Organisations are expected to update internal policies and governance structures to define responsibilities at each stage of development and deployment. The report recommends publishing transparency reports, maintaining audit trails, and offering grievance channels that are easy for users to access.
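To make the audit-trail recommendation concrete, here is a minimal Python sketch of what a tamper-evident log entry for an individual AI decision might look like. The `audit_record` helper and its field names are illustrative assumptions, not a schema prescribed by the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, inputs: dict, decision: str,
                 reviewer: str | None = None) -> dict:
    """Build one append-only audit-trail entry for an AI decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "decision": decision,
        "human_reviewer": reviewer,  # None when no human was in the loop
    }
    # Hash the entry so any later tampering is detectable during an audit.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return entry

# Example: log a loan decision reviewed by a named analyst.
log = [audit_record("credit-scorer-v2", {"income": 52000}, "approve",
                    reviewer="analyst-17")]
```

Hashing each entry is one common way to make logs tamper-evident; production systems typically also chain digests or write to append-only storage.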
“The report is clear that while these measures are essential foundational steps, they are not alone sufficient. Transparency reports offer value by subjecting company practices to public and peer scrutiny, and updated internal policies signal corporate commitment to responsible AI. However, in the absence of additional mechanisms, such as third-party audits, external certification, accessible grievance redressal channels, and regular government oversight, the impact of such measures may be limited. The guidelines thus recommend that transparency and voluntary compliance be complemented by market incentives, sectoral guidance, and the development of independent evaluation and grievance mechanisms, thereby creating layers of accountability that are both flexible and enforceable if necessary,” said Jameela Sahiba, Associate Director at The Dialogue, a tech policy think tank.
The report also calls for human oversight in high-risk AI systems. This includes building “human-in-the-loop” features at key decision points so that AI outputs can be reviewed or overridden by human judgment. In fast-moving contexts where direct human oversight may not be possible, the committee recommends safeguards like circuit breakers, automated checks and system-level constraints. It also calls for regular monitoring, testing and audit trails in critical sectors to ensure that AI systems operate within defined limits and that potential risks are detected and managed early.
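As a rough sketch of how a "human-in-the-loop" checkpoint and a circuit breaker might fit together in code, consider the following. The thresholds, names and the `human_review` callback are assumptions for illustration, not designs from the report.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CircuitBreaker:
    """Suspends automated decisions after repeated anomalous outputs."""
    max_failures: int = 3
    failures: int = 0
    tripped: bool = False

    def record(self, anomalous: bool) -> None:
        self.failures = self.failures + 1 if anomalous else 0
        if self.failures >= self.max_failures:
            self.tripped = True  # halt automation until humans reset it

def decide(ai_output: str, confidence: float, breaker: CircuitBreaker,
           human_review: Callable[[str], str]) -> str:
    # Key decision point: low confidence or a tripped breaker routes the
    # output to a human, who can accept or override it.
    if breaker.tripped or confidence < 0.8:
        return human_review(ai_output)
    return ai_output

# Example: a low-confidence output is escalated rather than auto-applied.
breaker = CircuitBreaker()
result = decide("flag_transaction", 0.62, breaker,
                human_review=lambda out: "cleared_after_review")
```

The 0.8 confidence threshold is an arbitrary placeholder; in practice such limits would be set per sector and risk level, in line with the report's risk-based approach.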
On grievance redressal, the report says companies should set up clear and easy-to-use complaint systems, separate from incident reporting. These should be available in multiple languages, respond within fixed timelines, and be accessible to all users, including those with limited digital skills. It adds that feedback from complaints should be used to fix problems and reduce future risks. The report also urges regulators and the proposed AIGG to create common formats and escalation procedures so that grievances, especially in critical sectors, are handled quickly and consistently.
“The idea is to recognise that there are different players that are going to be across the value chain and the AI ecosystem. So we have to think about implementing greater liability and make sure that the existing laws are visibly and consistently enforced so that they know, when they are introducing AI, that they are in compliance of existing laws,” said Balaraman Ravindran, Professor at IIT Madras and chairman of the committee.
The committee also outlines a phased action plan to carry out its recommendations. In the short term, it calls for setting up the AIGG, the Technology and Policy Expert Committee (TPEC), and an AI Safety Institute (AISI), while developing risk frameworks, voluntary commitments and clearer liability norms. The committee recommends that the AISI, recently established under the IndiaAI Mission, act as the main body responsible for guiding the safe and trusted development and use of AI in India.
In the medium term, the report suggests implementing common standards on safety and fairness, operationalising the national AI incidents database, and starting regulatory sandboxes in high-risk domains. Over the long term, the committee envisions full integration of AI with India’s Digital Public Infrastructure and expanded international collaboration on AI safety and policy.
While the recommendations are not binding, the report notes that they are intended to guide both public and private actors. “These recommendations are advisory in nature. The Government of India may consider their implementation through appropriate policy measures, standards or regulations,” it says.