‘Earlier machines replaced labour; these replace thought’: AI ethicist Nell Watson | Hindustan Times

By Gowri S
Updated on: Oct 24, 2025 05:18 PM IST

As artificial intelligence sets about mirroring more of what makes us human, what guardrails do we need to put in place? What risk factors should we watch for?

What is it we should be worrying about when we worry about AI?

A still from Her (2013). Even assuming AI never crosses over into sentience, we must prepare for the illusion of it; and for the fallout of that ‘deception’.

“The danger isn’t defiance, but untethered competence,” says Nell Watson, 42. “Loss of control won’t manifest as rebellion but as drift — systems pursuing our goals too efficiently, in unanticipated, unapproved ways.”

Watson is a former systems engineer at QuantaCorp, and ex-executive consultant on philosophical matters at Apple. She is a doctoral researcher in emerging technologies at the University of Gloucestershire, and author of Taming the Machine: Ethically Harness the Power of AI (2024). She is also head of the European Responsible Artificial Intelligence Office, a private consultancy that advises companies on how to implement the EU’s AI Act.

Artificial intelligence will seek to achieve the goals we give it, but won’t “care” how, she adds.

Meanwhile, attempts to optimise such models could corrode human values to a far greater degree than previous eras of technology have done. After all, where earlier machines replaced labour, these replace thought, Watson says.

What can we do about it, now, in the early years? Excerpts from an interview.


* Machines have long powered our world. Is this time really different?

Previous machines replaced labour; these replace thought.

In such a scenario, dependence shifts from physical to cognitive.

We are already seeing the erosion of epistemic sovereignty, or our capacity to discern truth and trust our knowledge.

The danger isn’t that machines will “hate us”, but that we will outsource the faculties that make up our humanity: judgment, curiosity, responsibility.

A former systems engineer, Nell Watson is also the head of the European Responsible Artificial Intelligence Office, a private consultancy that advises companies on how to implement the EU’s AI Act.

* Should that be our primary concern when it comes to AI?

Well, most people simultaneously overestimate AI’s drama and underestimate its depth.

It is not cinematic superintelligence that concerns me most, but quiet systems already reshaping perception, preference and opportunity — without oversight.

The issue isn’t malevolence; it’s indifference.

Machines optimise what we specify, not what we intend. For this reason, ethical control needs to be part of the engineering, not a philosophical afterthought.

As a systems engineer, I recognise that complex systems fail in complex ways. This is why the “ethics” of AI must be architected from the ground up, not retrofitted as compliance theatre. This is simply good systems design.

Responsibility isn’t decoration. It is the scaffolding of trust.


* What would you say is most crucially missing from our governance of this industry?

Three architectural gaps demand immediate attention:

1) Constitutional alignment: Systems need explicit moral operating systems, or constitutions that define behaviour in ambiguous territory. A medical diagnostic agent, for instance, should embed Hippocratic principles directly in its decision architecture, not infer ethics from data patterns.

2) Capability-weighted governance: Governance should scale with operational capability and independence, just as aerospace laws quickly began to differentiate, for instance, between drones and airlines.

3) Agency transparency: We must have clear declarations, required by law, of who is behind media or communications: a person, a machine or a combination of both.

The goal should be to facilitate transparency, understanding and trust during online interactions, thereby strengthening provisions for public safety and security.

This would, of course, require guardrails that evolve as fast as the systems they constrain, which would mean fusing engineering, ethics and policy.


* Instead, we aren’t really governing how these systems are built at all, are we?

We are at the stage of infrastructural lock-in. Core areas such as utilities, logistics and healthcare are so AI-dependent that manual fallback could become impossible. Meanwhile, soft power is consolidating around whoever controls the training infrastructure…


* That’s a concern too, isn’t it?

Compute concentration is the new capital concentration. Frontier models cost billions to train, creating barriers resembling early industrial monopolies.

This transcends wealth. It is the power to define truth, credibility, permissibility.

We should have treated information integrity — and energy efficiency — as fundamental requirements from the start, not as afterthoughts this many years in. Without intervention, we now face epistemic pollution and ecological depletion simultaneously, with these programs eroding both trust and the biosphere.

What we need to aim for now is agency transparency: clear labelling of human, machine, or hybrid interactions, standardised through industry norms.


* Is that possible? Would it be something like the standards framed as the industrial revolution picked up pace – for factory floor safety, product safety, registrations and licences?

Absolutely. And this is urgently necessary.

The industrial revolution birthed mechanical safety codes. The cognitive revolution demands ethical ones. We can construct regulatory parallels: design certification for AI systems, liability frameworks resembling product warranties, and continuous-monitoring regimes as we have in the aviation industry.

Governance must evolve from punitive compliance to preventive architecture.

The blueprint exists. What is missing is the political will to implement it with the rigour we once applied to fire codes and seatbelts.


* Back to epistemic sovereignty, our capacity to discern truth and trust our knowledge: in what ways are we already seeing this slip, aside from the most obvious one, fake news?

Cognitive atrophy is a major issue. When models anticipate needs too perfectly, they let curiosity and critical thought wither. Recommender systems narrow taste; predictive text shapes speech; decision aids dull discernment.

These aren’t catastrophes but quiet degradations; the automation of agency itself.

The long-term psychological impacts of AI on human beings are another major issue of growing concern.
