
Neural Dispatch: Microsoft Copilot’s failed intrusion on LG TVs, and looking back at AI in 2025

Published on: Dec 24, 2025 05:23 PM IST

The biggest AI developments, decoded. December 24, 2025.

Cognitive warmup. Microsoft seems to be living in a bubble, doesn’t it? There is little sign of any realisation, or even an attempt at introspection, about how badly the company has dropped the ball on AI. It claims as much as 30% of its code is now written by AI, yet has shipped a series of botched Windows 11 updates in recent months that broke critical functionality on millions of PCs. Aspirations of turning Windows 11 into an “agentic OS” drew a backlash as soon as those dreams were posted on X, something I classified as chaos at the time.

The image of Microsoft Copilot appearing on LG TVs, shared by a user on Reddit

The never-ending saga now adds Microsoft’s Copilot and LG’s webOS TVs.

The TV maker recently rolled out Copilot to users’ TVs in a way that made it impossible to disable or uninstall the AI. First, LG quietly installed it, hoping no one would notice. Second, why not give users a choice? After the backlash, LG says it will now be possible to delete the Copilot shortcut from your smart TV home screen, and that the next webOS update will allow Copilot to be uninstalled completely. I’m afraid AI debris like this will still be discovered long after the hype has subsided.

PAST PERSPECTIVE

2025 wasn’t the year AI became sentient, autonomous, or unstoppable. It was the year the excitable, loud AI industry discovered where the real bottlenecks lie: trust, relevance, supervision, economics, integration, geopolitics, and electricity. Models supposedly improved. Claims of intelligence and smartness grew louder. Yet reality quietly caught up, and the supposedly smartest beings on the planet were left floundering more often than not.

OpenAI’s GPT-5 and an illusion of “thinking by default”

OpenAI, amid much fanfare over its late-summer release, positioned GPT-5 as the model where reasoning became intrinsic. It was said to be smarter than anything before it, and claims of PhD-level intelligence were dropped regularly in the months leading up to launch. Multistep problem-solving, tool use, project coherence: AI finally learning to think, not just respond, we were told.

“Thinking by default” for GPT-5 refers to its new architecture, which uses an internal router to automatically send complex tasks to deeper reasoning models, moving towards step-by-step problem-solving. It was supposed to underpin unprompted agentic actions and a shift to intelligent conversations, integrating reasoning, large context windows, and multimodality.
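To make the routing idea concrete, here is a minimal, hypothetical sketch of what such a dispatcher could look like. The model names, the keyword heuristic, and the threshold are my own assumptions for illustration; OpenAI has not published its router’s internals.

```python
# Hypothetical sketch of "thinking by default" routing.
# Model names, the scoring heuristic, and the threshold are invented
# for illustration; this is not OpenAI's actual implementation.

FAST_MODEL = "fast-chat"           # assumed cheap, low-latency model
REASONING_MODEL = "deep-reasoner"  # assumed slower, step-by-step model

def complexity_score(prompt: str) -> float:
    """Crude stand-in for a learned classifier: count signals that the
    task likely needs multi-step reasoning."""
    signals = ["prove", "step by step", "plan", "debug", "calculate", "compare"]
    hits = sum(word in prompt.lower() for word in signals)
    return hits + len(prompt) / 2000  # longer prompts lean towards reasoning

def route(prompt: str) -> str:
    """Send complex tasks to the deeper reasoning model, everything else to the fast one."""
    return REASONING_MODEL if complexity_score(prompt) >= 1.0 else FAST_MODEL

print(route("What's the capital of France?"))               # -> fast-chat
print(route("Plan and debug a multi-step data migration"))  # -> deep-reasoner
```

The point of the sketch is simply that the “thinking” is a dispatch decision layered on top of ordinary inference, which is why the behaviour can feel deliberate without the underlying prediction machinery changing.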

The reality: GPT-5 didn’t invent machine reasoning; it mostly hid the scaffolding. What felt like “thinking” was still probabilistic inference, just better orchestrated and more confidently packaged. The real shift was psychological: users likely stopped questioning outputs because the model sounded so confident and deliberate.

It’s also dangerous, since GPT-5 tried its best to normalise trust, even though the AI industry is quite far from solving hallucinations, verification, or accountability.

Microsoft Copilot integration in shambles

Microsoft’s $13-billion OpenAI bet was supposed to pay off through Copilot, in the form of AI woven into every Office app, the Windows OS, and enterprise workflows. The narrative was simple: Microsoft would own AI’s integration layer while competitors fought over models, and thereby dictate ever more expensive Microsoft 365 subscriptions.

The reality: Copilot became a case study in premature and haphazard productisation. At certain points in Outlook on the web, you could see the Copilot icon twice within the same interface, which would lead one to believe that every team within Microsoft has targets for Copilot integration of some sort. You couldn’t blame enterprises that paid premium subscriptions and felt these integrations were half-baked. Microsoft’s rush to monetise its OpenAI trump card meant shipping them before genuine productivity gains, if any, could be locked in.

A bigger miss, in my book: Microsoft’s exclusive positioning around OpenAI’s models began to unravel with a partnership with Anthropic, announced in late 2025, to bring Claude models to Microsoft 365. The ‘Copilot everywhere’ strategy looks less like an inevitability and more like expensive technical debt.

DeepSeek and a geopolitical AI scramble

At the beginning of 2025, China’s DeepSeek-V3 arrived as a wake-up call for AI companies worldwide. It redefined competitive performance, built at a fraction of the training costs that had defined AI models till then. Add a geopolitical touch: it was built in an era of export restrictions. The message was clear: AI development couldn’t be contained through chip bans alone. For AI companies, there was truly something to worry about. Little did we know, that set the tone of a rather stressful 2025 for them.


The reality: DeepSeek exposed uncomfortable truths about the AI race. First, compute efficiency matters as much as raw scale. Second, regulatory moats are weaker than Silicon Valley assumed. Third, the global AI landscape is fragmenting faster than any single jurisdiction can control.

Satya Nadella wrote about the Jevons paradox (an economics concept: as efficiency causes the cost of using a resource to drop, total consumption of it can rise). OpenAI’s Sam Altman insisted such competition would be “invigorating”.
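As a rough illustration of the Jevons argument applied to AI compute (all the numbers below are invented purely to show the arithmetic, not estimates of real usage):

```python
# Invented numbers, purely to illustrate the Jevons paradox:
# a 5x efficiency gain cuts the cost per query, demand grows 10x
# in response, and total spend on compute rises instead of falling.

cost_per_query_before = 0.010   # dollars per query (assumed)
queries_before = 1_000_000      # queries per day (assumed)

cost_per_query_after = 0.002    # after a 5x efficiency gain (assumed)
queries_after = 10_000_000      # demand grows 10x as AI gets cheaper (assumed)

print(cost_per_query_before * queries_before)  # 10000.0 dollars/day before
print(cost_per_query_after * queries_after)    # 20000.0 dollars/day after
```

That, in essence, is the argument being made: cheaper AI leads to more AI use, not less spending.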

A collective brave face is one thing, but the hits simply kept coming.

By late 2025, DeepSeek wasn’t just a model; it was proof that AI competition had become genuinely multipolar, at a time when most AI companies and AI hardware makers hoped the world wouldn’t see through their attempts at circular economics.

Circular funding carousel

Speaking of which… it soon became apparent that AI is staying afloat by deploying a lifeboat called the circular funding loop. Think of it this way, as just one example: OpenAI signed a $300-billion deal with Oracle Corp. to power OpenAI’s AI infrastructure. Oracle is providing that by spending billions on Nvidia’s chips, while Nvidia itself plans to invest up to $100 billion in OpenAI, which in turn has committed to using Nvidia’s systems to build 10 gigawatts of data-centre capacity. It’s not illegal, but it’s definitely creative accounting dressed up as organic growth.

The reality: This isn’t just about Nvidia or any particular AI company. They all seem equally in on it. It’s about an entire ecosystem where growth metrics have been beautifully decoupled from actual value creation. Everyone is measuring inputs (compute spent) instead of outputs (problems solved). What happens when someone demands ROI? That’s the knife twist, and AI companies are simply kicking the can down the road. Because in 2025, they convinced a lot of businesses worldwide that this is still the ‘spend money to figure out AI’ phase, not the ‘tangible AI results’ phase.

AI, say hello to physics

Throughout 2024, AI discourse centred on model capabilities. By mid-2025, it shifted to infrastructure limits: electricity grid capacity, water usage, cooling constraints, land permits, and, of course, lots and lots of investment (with some government protection if it all goes bust, thank you very much).

The reality: This is the first time the AI hype train has encountered non-negotiable constraints. You can’t prompt your way out of power shortages. You can’t scale compute faster than regulators approve substations. AI’s future is now shaped as much by the power grids it wants to tap into, and by climate policy, as by the latest model architectures. Silicon Valley is learning that physics always has the last word. Hopefully, in 2026, common sense will enter the fray too.

 