AI regulation must reinforce trust | Hindustan Times


Updated on: Jul 04, 2025 04:36 PM IST

This article is authored by Hemant Krishna, partner and KS Roshan Menon, principal associate, Shardul Amarchand Mangaldas & Co


Representational Image (Pixabay)

The emergence of novel AI regulations is remodelling commercial relationships, particularly in the financial sector. Recent SEBI regulations require AI deployers to build privacy and security safeguards around the AI they use. These rules were followed last week by a consultation paper portending guidelines for using AI in the securities market. Relatedly, the Reserve Bank of India is mulling over a framework for responsible AI in the financial sector. Such responsibility, it notes, shall draw not only from conventional financial regulation but also from the nascent domain of AI ethics.

Theoretically, the interplay of ethics, privacy and security inspires trust. Privacy-preserving systems encourage greater data sharing; ethical systems augment access to services. Credit-scoring systems that deploy AI, once adjusted for bias, will likely connect more people to loans. A secure robo-advisor can encourage individuals to bring a diverse array of investment-related queries to AI systems. Taken together, these interventions help firms establish pathways to build novel, personalised commercial services.

Such pathways are optimally formed, however, in the presence of narrowly tailored obligations that enforce privacy or ethics. A closer examination reveals that our first significant wave of AI regulation has opted for an alternative path, focused on enforcing principles, and not obligations, as part of law. This approach risks regulation going overbroad, stifling the capacity to build safe commercial relationships. It therefore requires our careful consideration.

In 2024, the SEBI published the Consultation Paper on AI Governance. At the heart of the paper was the understanding that AI systems had come to play an increasing role in market analysis and portfolio building. This, it noted, necessitated regulatory tweaks to ensure that users remain protected during the uptake of such systems.

Consequently, the Consultation Paper recommended amendments to three regulations. These regulations covered five stakeholders: intermediaries, stock exchanges, clearing corporations, depositories and participants. For context, an intermediary facilitates investment into capital markets, such as a stockbroker, a merchant banker or a credit rating agency. A clearing corporation clears and settles trades on a stock exchange. A depository acts as a custodian for securities. 

The amendments recommended that these stakeholders be accountable for three aspects of the AI lifecycle. First, the stakeholder is responsible for complying with all applicable law, ostensibly in the context of the use of the AI system. Second, the stakeholder is responsible for the output generated by the AI system. It appears that this responsibility arises only in the event of reliance; a malfunctioning AI system is subject to corrective intervention if the stakeholder does not rely on the output produced. Third, stakeholders are liable for the AI system's use of data. To this end, they are responsible for the privacy, security and integrity of stakeholders' data. This obligation seemingly extends to both personal and non-personal data, and to the entire AI lifecycle, irrespective of how the stakeholder utilised the system.

Data privacy and output provenance are laudable governance objectives. However, framing these objectives almost as open-ended principles risks harm and unfairness. In particular, three features of the rules would benefit from reconsideration and strategic input.

First, regulators may consider offering more specific guidance on privacy and security. While requiring entities to ensure privacy and security is a reasonable object of law, the SEBI does not clarify the meaning of these terms within the context of the regulation. In the absence of specific direction, firms must contend with applying and interpreting privacy under both the SEBI's regulation and India's enacted data protection law. Non-alignment between the two risks requiring firms to travel beyond the data protection law.

Second, and relatedly, the regulations could be clearer. Observably, the regulations do not define privacy, security or integrity. In the absence of such definitions, or lodestar guidelines that help firms interpret these terms, the requirements do not lend themselves to uniform interpretation among businesses. They also risk imposing a highly onerous compliance burden.

Third, diligence under the regulations is assigned to the last actor in an AI lifecycle. While convenient, reliance on the final actor to ensure system-wide integrity shall prove difficult and costly to enforce. Consider M, an online trading platform. M relies on an AI-powered identity-verification platform to verify the credentials of User A, who wishes to use M. Per the regulations, M must ensure that data belonging to User A is safeguarded even while it is being processed by the identity platform. This creates the need for invasive oversight, diminishing trust in the relevant AI ecosystem.

For privacy, regulators may consider alignment with the Digital Personal Data Protection Act, 2023. Security safeguards may be aligned, similarly, with those under India’s Information Technology Act, 2000 or other international best practices. 

Remedying the law's approach to diligence is more difficult, because diligence must not be reduced to an 'if not X, then Y' formula. Regulators will benefit more from an approach that distributes AI governance obligations across the AI ecosystem. This approach codes responsibility into all actors involved in building AI systems, baking trust into the value chain. Concurrently, a more effective approach would provide stakeholders with clear guidance on what constitutes sufficient safeguards, such as implementing contractual obligations, conducting continuous risk assessments, maintaining audit trails, and certifying compatibility with industry standards. By offering such guidance, the regulatory framework can enable stakeholders to understand with greater clarity and certainty which measures are appropriate in various contexts, thereby reducing the need for excessive oversight and fostering greater trust throughout the AI value chain. As thinking around this develops, a consultation paper could study how value-chain governance works in jurisdictions such as the European Union or the state of Colorado.

A principle-led AI governance framework is a good idea. However, regulation must animate these principles through targeted and well-defined obligations. Such animation serves as a first step towards building a trustworthy AI ecosystem where each actor assumes clear and proportionate responsibilities.  

