LLMs may be the hype, but cybersecurity doesn’t work on hype: Sujatha S Iyer
In a world where AI is increasingly powering both hackers and defenders, ManageEngine’s Sujatha S Iyer details why explainability, responsibility, and human oversight will decide who stays ahead
“How much success or how much return on investment an organisation gets from artificial intelligence (AI) will depend on how digitally mature it is,” Sujatha S Iyer, Head of AI Security at ManageEngine, a division of Zoho Corporation, minces no words as she talks about the hype and reality of AI, particularly large language models (LLMs), and enterprises trying to catch the wave. Iyer, who has worked in many roles within Zoho Corporation over the years, makes it a point to note that none of their AI products, including Zia, the home-grown LLM released this summer, are trained on customer data — just commercially licensed open-source data sets. The LLM comes in three variants, with 1.3 billion, 2.6 billion and 7 billion parameters, and can be tuned to the specific context of where it’s deployed.

Iyer talks to HT about using AI to combat threats that are gaining sophistication through their own use of AI to create malware and phishing attempts, the shifting balance between attack and defence, whether AI needs human oversight, and which industries in the Indian context are most at risk from AI-powered cyber threats. Edited excerpts.
Q. Cybersecurity has always been a game of cat and mouse. With AI now powering both defence and attack, how do you see the balance shifting in the next few years?
Sujatha S Iyer: The landscape has changed so much; people are much more security and privacy aware. And organisations have also woken up to the fact that security is no longer just a checklist on their to-do lists. Any security lapse puts a huge dent in their reputation, not to forget the monetary fines that come with regulations and so on. So the shift has been towards taking a proactive stance rather than being reactive. The idea is to try to stop an incident even before it runs its full course, rather than doing a root cause analysis of what led the security mishap to happen.
AI has transformed so much over the years. Just as AI is being used in the hands of defenders, it is also powering the attackers. Case in point: phishing emails and malware have changed from what they were a decade back. In the past, phishing emails, which I’m sure you have also come across, were poorly worded. Today, if someone is trying to impersonate a leading bank, they will ensure the same theme is followed, and that the wording is right. In the pre-AI era, it took more time to draft a convincing phishing email; now, with AI, it can be done at scale. With hyper-personalised phishing emails landing in your inbox, just sticking to the good old methods of combating security attacks would mean we’re in trouble.
LLMs may be the hype, and they do give you the power to create a lot of content, but in defence, you don’t go by the hype. What we have done at ManageEngine is take a cautious approach on where to use an LLM, and where to prefer traditional statistical machine learning techniques. The reason is that security, especially in the enterprise landscape we operate in, is one field where black-boxing is not acceptable. We cannot just say there’s an 80% chance that something detected is malware without giving a reason behind it. It has to be backed up with reasons. The consumer landscape works differently: if a wrong friend recommendation comes up on your social networking site, it is easy to brush aside. That is not acceptable here.
The models that we choose, plugged in everywhere from endpoint management to monitoring, must be accurate. They are carefully curated, and we ensure they are explainable. When a model explains that a certain issue may point to malware because it is meddling with the system’s registry keys, creating multiple copies of itself, or because files are being encrypted at a very rapid rate — that explanation gives the user faith in the adoption. The shift I see will be more towards proactiveness, using AI to combat AI.
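To make the explainability Iyer describes concrete, here is a minimal sketch of a detector that returns human-readable reasons alongside its verdict. The behavioural signals, thresholds and weights are hypothetical illustrations, not ManageEngine’s actual detection logic.

```python
# A minimal sketch of an explainable malware verdict: every signal that
# contributes to the score also contributes a human-readable reason.
# Signal names, weights and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ProcessBehaviour:
    registry_writes: int          # modifications to system registry keys
    self_copies: int              # copies the process made of its own binary
    files_encrypted_per_min: int  # rate of file-encryption events observed

def assess(b: ProcessBehaviour, threshold: float = 0.6) -> tuple[bool, float, list[str]]:
    score, reasons = 0.0, []
    if b.registry_writes > 10:
        score += 0.3
        reasons.append(f"meddling with system registry keys ({b.registry_writes} writes)")
    if b.self_copies > 3:
        score += 0.3
        reasons.append(f"creating multiple copies of itself ({b.self_copies} copies)")
    if b.files_encrypted_per_min > 50:
        score += 0.4
        reasons.append(f"encrypting files at a rapid rate ({b.files_encrypted_per_min}/min)")
    return score >= threshold, score, reasons

flagged, score, reasons = assess(ProcessBehaviour(42, 6, 120))
if flagged:
    print(f"Likely malware (score {score:.1f}): " + "; ".join(reasons))
```

The point is the structure, not the numbers: every signal that raises the score also produces a reason an analyst can verify, rather than an unexplained probability.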
Q. Which industries are the most at risk, and is the Indian scenario any different from the global threat landscape?
Sujatha S Iyer: I’ll say every sector is at risk. Among the high-stakes sectors, BFSI (Banking, Financial Services, and Insurance) and healthcare have very stringent regulations. The RBI and SEBI have some of the strictest regulations, for example, and in healthcare there is HIPAA (the US Health Insurance Portability and Accountability Act) compliance in the US. In those sectors any mishap is a huge dent, and there are a lot of monetary fines and a lot of regulations. What makes India interesting is that it is one of the fastest growing economies, and in terms of population too. Digitalisation is very rapid: more than a billion people, almost the cheapest data plans anywhere in the world, and more and more people with a smartphone. But the real question is, is everyone really aware? Case in point: hospitals. Most of them have adopted an internal portal for their employees to log in. But how many employees really know whether the login page they are visiting is genuine, or whether phishing is in progress? Most likely the phishing attempt would look the same as the genuine web page.
Not everyone is going to have that attention to detail, considering we are such a rapidly growing economy and digitisation is happening at an unprecedented speed. One more thing that makes India very unique is the number of local languages we have. Every few hundred kilometres, we may have a different language, or even the same language with different dialects. A lot of tailor-made attacks are happening in these languages. Phishing emails aren’t just coming in English or Hindi; they’re coming in Tamil too, for instance. That’s the reason we hear so much about banking frauds and OTP frauds, as attacks are also done in local languages.
In an enterprise context too, it is close to impossible to have people sit and manually review everything. That’s where AI helps. The government is also taking all the right steps in this direction. The DPDP (India’s Digital Personal Data Protection Act) is coming up beautifully, and an initial version is expected to roll out soon. It covers things extensively, in fact much more than the GDPR.
Q. You’ve spoken about “using AI to combat AI.” Could you unpack what that means in practice—what kind of AI tools can realistically counter AI-driven phishing, malware or other adversarial tactics?
Sujatha S Iyer: When I say combat AI using AI, it’s not about having one model that does it all. At every step, it has to be a multi-layered approach, with AI coming in and helping you be proactive. I’ll walk you through an example. At ManageEngine, we have multiple solutions for monitoring, endpoint monitoring and detection, as well as security information and event management. Let’s start with a login when an employee comes to the office. Just because the credential is valid, you don’t let someone gain access to your system, because the credentials could have been compromised. A lot of us use the same password for a lot of websites, and that’s only human. Given that password compromises and data breaches happen, it is important to validate a suspicious login, which may come from a different time zone or a different machine than usual.
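A minimal sketch of that kind of login validation, assuming a per-user profile of previously seen time zones, machines and login hours (the field names and the simple set-membership checks are hypothetical, for illustration only):

```python
# A minimal sketch of context-aware login validation: a valid credential alone
# is not enough; the login context is checked against the user's history.
from dataclasses import dataclass, field

@dataclass
class LoginProfile:
    usual_timezones: set[str] = field(default_factory=set)
    usual_devices: set[str] = field(default_factory=set)
    usual_hours: set[int] = field(default_factory=set)  # hours of day seen before

def login_risk(profile: LoginProfile, timezone: str, device_id: str, hour: int) -> list[str]:
    """Return the reasons a login looks suspicious; an empty list means routine."""
    reasons = []
    if timezone not in profile.usual_timezones:
        reasons.append(f"unseen time zone: {timezone}")
    if device_id not in profile.usual_devices:
        reasons.append(f"unseen machine: {device_id}")
    if hour not in profile.usual_hours:
        reasons.append(f"unusual hour of day: {hour}:00")
    return reasons

profile = LoginProfile({"Asia/Kolkata"}, {"laptop-0042"}, set(range(9, 19)))
flags = login_risk(profile, "Europe/Berlin", "laptop-0042", 3)
if flags:
    print("Step-up authentication required:", "; ".join(flags))
```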
The next step is to identify whether, once a user has gained access to the system, their usage is deviating from their standard routine. That’s where AI-powered browser security tools come into play. They profile the links that you visit, without tracking you. Is there any possibility of the user being phished? Is it a genuine page? Third, a user may visit a page and download some materials, such as executables or applications that they might want to install. What if one of them turns out to be malware? That’s where endpoint detection and response tools come in, monitoring the apps being downloaded in the background.
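As a toy illustration of that link profiling, the sketch below flags a few coarse signals of a lookalike page. Real browser-security tools rely on far richer models; the heuristics and the known-brand list here are assumptions made up for the example.

```python
# A toy illustration of profiling a link before the user trusts it.
# The heuristics and the brand-to-domain mapping are hypothetical examples.
from urllib.parse import urlparse

KNOWN_BRANDS = {"leadingbank": "leadingbank.com"}  # hypothetical brand -> real domain

def phishing_signals(url: str) -> list[str]:
    host = urlparse(url).hostname or ""
    signals = []
    if not url.startswith("https://"):
        signals.append("no HTTPS")
    if host.count(".") >= 3:
        signals.append(f"deeply nested subdomains: {host}")
    for brand, real_domain in KNOWN_BRANDS.items():
        # The brand name appears in the hostname, but it is not the brand's domain.
        if brand in host and not host.endswith(real_domain):
            signals.append(f"lookalike of {real_domain}: {host}")
    return signals

print(phishing_signals("http://leadingbank.secure-login.portal.example.in/login"))
```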
Q. Do you believe AI will ever be trusted enough to make autonomous security decisions, or will human oversight always be non-negotiable?
Sujatha S Iyer: The malware today is much more intelligent than what was circulating a few years ago. The AI models try to recognise the malware’s behaviour in advance, because any malware or ransomware will try to create havoc in an infected system — probably create multiple processes of itself, attempt data exfiltration, encrypt files and so on. The model tries to learn this sort of behaviour, and gives you an immediate alert that there is something suspicious in the system. Once you get an alert, you can immediately quarantine the system instead of letting the malware spread to other systems on the organisation’s network. At every layer, there is an AI model coming in to help you strengthen your security. This is what I mean when I say you use AI to combat AI attacks that are becoming much more sophisticated.
Let’s say there’s a financial transaction happening, and there’s an anomaly detection model running to catch any deviations in these transactions. You usually transact to the tune of ₹500 or ₹1,000, but all of a sudden there’s a transaction of around ₹50,000, and even multiple ones at that. The AI model immediately alerts the financial institution, saying this is a very high deviation from the baseline. The question is, do you block this transaction right away?
There are two cases here. Assume it’s a high net-worth customer; if they ask why you blocked a transaction, the response cannot be “my AI model said so”. You will be at risk of losing your customer, but you also don’t want a fraudulent transaction going through.
So that’s where the human comes in, because they know the domain better, and can call the customer’s relationship manager or branch manager to check if the customer had intimated an expected transaction. That’s when they will be able to take a better call on whether to block the transaction or not. AI helps with proactiveness, but human intervention is what delivers the best value in the human-AI partnership.
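The baseline deviation Iyer describes can be sketched with a simple z-score over an account’s recent transaction history. The threshold of three standard deviations is a common rule of thumb, not a product setting, and real fraud models use many more features; the sketch only shows why a ₹50,000 transaction stands out against a ₹500 to ₹1,000 baseline.

```python
# A minimal sketch of baseline-deviation detection: score a new transaction
# against the mean and spread of the account's recent history.
import statistics

def is_anomalous(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero on flat history
    z = (amount - mean) / stdev
    return z > z_threshold

history = [500, 800, 650, 1000, 700, 900]   # typical transactions, in rupees
print(is_anomalous(history, 50_000))         # True: flag for human review
print(is_anomalous(history, 1_200))          # False: within the normal range
```

Note that the flagged transaction is routed to a human for review, as Iyer describes, rather than being blocked automatically.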