AI on the rise

Ramprakash Ramamoorthy, director of AI research at ManageEngine, discusses the increased role of AI in ManageEngine’s solution offerings

How has the inclusion of AI unfolded across your strategy and your product lines?

Last year, we talked about getting AI on the edge. Now, we are adding more models to the edge. We started off with ransomware. Now we have models for malware. We have user and entity behavior analysis happening on the edge. And we see AI spreading horizontally across the stack. AI is no longer a unique feature; it has become a key ingredient in all our products. So, there is no product without at least one AI feature today. And this year, we have been seeing faster adoption, especially in the Middle East. The Middle East is the fastest-growing adopter in terms of the number of people using the features, and it has the highest adoption of AI across our products and across the regions we cater to.

We have added more features to the conversational assistant. Previously it was all about read actions, such as giving a status report. Now write actions have also been enabled, for example, being able to apply patches to all vulnerable computers, restart servers under the domain, and so on. This has become possible only with the confidence we gained. Users also have a feel for how to use it properly, so that nobody goes and gets the wrong patch applied, and so on. This of course comes with authentication, so that only privileged users can do this. So, with our conversational assistant, called Zia, we have made a lot of progress. Over the last year, many of the features added have seen mainstream adoption. For instance, in our monitoring stack, we had a special AI-based anomaly report. Now that has become the default report.
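To make the privilege gating concrete, here is a minimal, illustrative sketch of how write actions issued through a conversational assistant can be checked before execution. The action names, roles, and fields are assumptions for the example, not Zia's actual API.

```python
# Illustrative sketch: read actions are open, write actions require an
# authenticated, privileged user (hypothetical action/role names).
READ_ACTIONS = {"get_patch_status", "list_vulnerable_computers"}
WRITE_ACTIONS = {"apply_patches", "restart_servers"}

def handle_command(user, action):
    if action in READ_ACTIONS:
        return f"{action}: ok"                                   # read-only, low risk
    if action in WRITE_ACTIONS:
        # write actions are allowed only for authenticated admins
        if user.get("authenticated") and user.get("role") == "admin":
            return f"{action}: executed on behalf of {user['name']}"
        return f"{action}: denied (privileged users only)"
    return f"{action}: unknown action"

print(handle_command({"name": "priya", "role": "admin", "authenticated": True},
                     "apply_patches"))
print(handle_command({"name": "sam", "role": "technician", "authenticated": True},
                     "restart_servers"))
```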

Can you elaborate on how AI helps detect anomalies?

If we consider the monitoring suite or line of products, Applications Manager, Operations Manager, and even Site24x7, which help monitor all entities in your organization, traditionally there would be a three-sigma rule to find anomalies. But here, every monitor and every entity is different; some will have a weekly seasonality, and some will have a monthly seasonality. With AI using past data, we can identify what is normal at a given point in time and what is anomalous. For example, consider the number of failed logins per minute: 10 failed logins per minute on a Monday morning around 9 am is normal, but the same thing on a Saturday morning at 3 am would need an alert, because somebody is trying to brute force their way into my system. This contextual relevance comes when we learn from past knowledge.

This is on a single variable, but we can also do this with multiple variables, learning the dependence across these variables and how they change with each other. With multiple variables to consider, you can be more effective. If, let's say, you're monitoring a machine, a CPU at 80% might not be an anomaly, but CPU at 80%, RAM at 80% usage, and free disk space at 5% could be. The combination is clearly an anomaly. So we are able to identify which combination is an anomaly, and given how we have always set up precedents on Explainable AI, we will be able to say why we flagged this as an anomaly and why you will have to go look into it. Whatever action you take here is recorded and put on a paper trail; you are answerable to your boss, so you need to at least maintain that paper trail. So Explainable AI has also helped adoption, from a user and entity behavior analysis perspective as well. If a user, let's say, logs in from a particular time to a particular time on different days of the week, but suddenly the weekend changes from Friday to Sunday, in a non-AI world the changes would need to be manually configured to account for the new weekend; here, the system will auto-learn from the patterns. Moreover, things are constantly changing with remote work and hybrid work, and the AI system is more flexible in accommodating the changes. Finding the needle in the haystack becomes very easy with these powerful AI-based anomaly reports.
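The idea can be illustrated with a minimal sketch: learn a per-context baseline (here, per weekday and hour) for each metric, score new values against it, and flag combinations of metrics that are jointly unusual. This is a simplified assumption-laden example, not ManageEngine's actual models.

```python
# Minimal sketch of context-aware anomaly scoring (illustrative only).
from collections import defaultdict
from statistics import mean, stdev

class SeasonalBaseline:
    """Learns the mean/std-dev of a metric per (weekday, hour) bucket."""
    def __init__(self):
        self.history = defaultdict(list)

    def observe(self, weekday, hour, value):
        self.history[(weekday, hour)].append(value)

    def z_score(self, weekday, hour, value):
        samples = self.history[(weekday, hour)]
        if len(samples) < 2:
            return 0.0                      # not enough history: assume normal
        mu, sigma = mean(samples), stdev(samples)
        return 0.0 if sigma == 0 else (value - mu) / sigma

# Single variable: 10 failed logins/min is normal on a Monday at 9 am,
# but the same count at 3 am on a Saturday scores far from its baseline.
logins = SeasonalBaseline()
for week in range(8):                        # simulated past weeks
    logins.observe("Mon", 9, 10 + week % 3)  # busy weekday mornings
    logins.observe("Sat", 3, week % 2)       # near-silent weekend nights
print(logins.z_score("Mon", 9, 10))          # small (~ -1): normal
print(logins.z_score("Sat", 3, 10))          # large (~ 18): brute-force alert

# Multiple variables: flag only when the *combination* is unusual, e.g.
# CPU 80% alone is fine, but CPU 80% + RAM 80% + 5% free disk is not.
def combined_anomaly(z_scores, threshold=2.0, min_metrics=2):
    flagged = {name: z for name, z in z_scores.items() if abs(z) > threshold}
    # the per-metric scores double as a simple "why" behind the alert
    return len(flagged) >= min_metrics, flagged
```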

Please elaborate on Explainable AI

AI models even today are 90% accurate; we started off with 75% but slowly moved to 90%. I need to understand why an AI model has flagged something as an anomaly. And if I understand that, I will be able to take appropriate actions, and that will be added as feedback to the model itself. The way AI has evolved, it is very correlation-heavy and does not have the notion of causation behind this ‘why’ part. For example, force equals mass multiplied by acceleration, but you can also say acceleration equals force divided by mass, so you don’t know what is causing what. In general, the assumption we take is that whichever event happens first causes the event that happens later. This might be true 90% of the time. But in complex interactions, like monitoring solutions, where there is a network effect and a cascade effect, the first thing is not necessarily the cause, just as symptoms don’t cause diseases although they are heavily correlated. With Explainable AI, we are trying to understand the causation and present it as a decision-making point for the user. The user can also automate that by saying: if your model is more than 80% confident, you make the decision; if you’re 70% confident, you must let a human look at it, have a human in the loop, click a button, and then get it done; if you’re less than 2% confident, don’t even consider it. So that brings in the trust factor of AI. That, in fact, is one of the reasons we see adoption to be high. After we added the explanation, we saw adoption increasing even for the very same use cases.
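A minimal sketch of that confidence-gated workflow is below. The 80%, 70%, and 2% thresholds are the examples given in the interview; the "log only" middle band and the function and field names are illustrative assumptions, not product behavior.

```python
# Illustrative confidence-gated routing: auto-remediate, ask a human, or ignore.
def route(finding):
    c = finding["confidence"]
    why = finding["explanation"]              # the Explainable AI output
    if c >= 0.80:
        return f"AUTO-REMEDIATE: {finding['action']} ({why})"
    if c >= 0.70:
        return f"HUMAN-IN-THE-LOOP: approve '{finding['action']}'? ({why})"
    if c >= 0.02:
        return f"LOG ONLY: {finding['action']} ({why})"   # assumed middle band
    return "IGNORE"                            # below the consideration floor

print(route({"action": "restart service", "confidence": 0.91,
             "explanation": "CPU, RAM, and disk jointly anomalous"}))
print(route({"action": "apply patch", "confidence": 0.73,
             "explanation": "unusual login window for this user"}))
```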

Is Explainable AI an additional element in terms of pricing?

We don’t charge separately for AI, and of course there’s no standalone pricing for explanations. We try to add explanations wherever possible. For example, in our ransomware model, we still don’t explain why we marked something as ransomware, because that might be too complicated for the user, and the neural network we have deployed does not support any form of causal explanation. For anomaly detection, forecasting, outage prediction, and root cause analysis, we have given explanation elements. Wherever technically feasible, we have added explanations, even in our OCR stack, and there are no separate charges for them.

How does all the R&D in AI convert into commercial value?

Over the years, there has been a lot of fear of missing out on the AI wave amongst our customers. One thing we want to enable for our customers is identifying the next best action at every point, whether that is in your CRM or your helpdesk, and giving a recommendation on what to do next. That will be the key value add for the users. And we are in the business of automating repetition, automating mundane tasks so that the human is productive elsewhere. For example, in service delivery, how many users refer to the knowledge base before raising a ticket? Now, if I can have a chatbot where I can ask the question, and if there is no answer in the knowledge base, it hands the question over to a human, that ensures productivity for both the person asking the question and the person answering it. It all becomes a matter of productivity: putting your human capital to better use and automating repetitive stuff. So that is where we see the value coming. I see AI becoming an integral part of the ecosystem; it’s no longer one groundbreaking technology that goes through cycles of summers and winters. Now it’s mainstream and you must embrace it. AI is not going to replace a human, but a human who has access to AI tools is going to replace a human who does not upgrade themselves.
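That deflect-then-escalate flow can be sketched in a few lines: answer from the knowledge base when a match exists, otherwise raise a ticket for a human agent. The matching logic and data shapes here are illustrative assumptions only.

```python
# Illustrative knowledge-base deflection with human handoff.
def handle_question(question, knowledge_base, agents):
    q = question.lower()
    hits = [answer for keywords, answer in knowledge_base
            if all(k in q for k in keywords)]
    if hits:
        return {"answered_by": "bot", "answer": hits[0]}
    # no knowledge-base match: hand over to a human, who spends time only
    # on the questions the bot cannot answer
    return {"answered_by": "human",
            "ticket": {"question": question, "assignee": agents[0]}}

kb = [(("reset", "password"), "Use the self-service portal to reset it."),
      (("vpn",), "Install the VPN client and import the profile from IT.")]
print(handle_question("How do I reset my password?", kb, ["agent-1"]))
print(handle_question("My laptop screen flickers", kb, ["agent-1"]))
```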
