Breaching new AI frontiers

Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, discusses the company's focus on enabling digital maturity and developing LLMs.

What should be the current focus of IT and AI strategies at organizations, and how is ManageEngine helping to address this?

We're focusing on enabling AI maturity, which comes down to harnessing the ROI from AI investments. Over the last three to four years, AI has been heavily hyped and pushed in a lot of organizations. Last year, we saw consumer-focused generative AI take the world by storm. Since then, much of the discussion has been about whether AI is going to take jobs away or replace people, but the real question is where the applicability lies in enterprises, especially within IT. While there's a lot of hype about using AI in the product, in the company, and in the processes, many organizations still struggle to reap the benefits of AI as expected. Reaping those benefits requires digital maturity, which is a rolling target.

How do you attain digital or AI maturity?

The first step is process maturity: document everything and digitize everything. Even today, say an incident happens in an IT department. You may have digital tools, but the story of how you rolled back a change or how you fixed the issue often isn't documented, because it's not part of your process. So do a process study and ensure your processes are completely digitized. The second step is to make your data ready, because AI is very data-focused. If you get your processes right, your data will automatically be AI-ready. Very simple things, like using consistent date formats and ensuring your data stack is analytics- and AI-ready, are very important (a small sketch follows below). The third aspect is how you use your current automation. Do you have all the alerts in place? Do you have a way to monitor these technologies?

So process maturity, data maturity, and finally your typical alerts and analytics are the three aspects to take care of. That is when your AI will become mature and start delivering ROI.
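To make the consistent-date-formats point concrete, here is a minimal Python sketch (an editorial illustration, not ManageEngine code) that normalizes the mixed date strings that typically accumulate across IT tools into a single ISO-8601 format. The format list is an assumption; extend it to match whatever your own sources emit.

```python
# Minimal sketch: normalize mixed date strings into ISO-8601 so the data
# stack is analytics- and AI-ready. KNOWN_FORMATS is an illustrative
# assumption; add whatever formats your own tools produce.
from datetime import datetime

KNOWN_FORMATS = ["%d/%m/%Y", "%m-%d-%Y", "%Y.%m.%d", "%b %d, %Y"]

def normalize_date(raw: str) -> str:
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue  # try the next known format
    raise ValueError(f"Unrecognized date format: {raw!r}")

for raw in ["31/01/2024", "01-31-2024", "2024.01.31", "Jan 31, 2024"]:
    print(raw, "->", normalize_date(raw))  # all become 2024-01-31
```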

Today, the goal is very simple: find out what the potential solution could be and, at every given point in time, what the next best course of action is, so you can decide whether to go ahead. It is highly contextual and very connected. That is where the enterprise will see value in AI.

Can you discuss your focus on LLMs?

At ManageEngine, we are building our own large language models and fine-tuning foundational models to assist at these different touchpoints. It won't be a general-purpose model; there is a lot of context and a lot of first-party data access built into it. We recognize that large language models are very efficient and have emergent capabilities that are not available in narrow models. We want to use these emergent capabilities in places where they make an impact, but at the same time, they are expensive. You need lots of GPUs, but we don't want to pass the GPU tax on to our customers. The idea is to use them contextually. I see these large language models becoming a commodity: there are so many open-source foundational models that anybody can access, but the value is in contextually combining these models to give very relevant, contextual suggestions.

If you look at all the foundation models out there, they are most likely trained on the same training data. You can give a prompt to generate a video, but you end up creating similar content because we are all reliant on the same data. You cannot differentiate on the models alone. Rather, the value is in context: combining these models with your first-party data access. We are fine-tuning these foundational models with specific contexts. We call it contextual intelligence, and it leads to decision intelligence as well.
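As an editorial illustration of what combining a foundation model with first-party data can look like (a generic sketch, not ManageEngine's actual pipeline), the following retrieves the most relevant internal records for a query and folds them into the prompt. Retrieval here is naive keyword overlap rather than embeddings, and `call_llm` is a hypothetical stand-in for whatever model endpoint is used.

```python
# Minimal sketch of grounding a general-purpose model in first-party data:
# retrieve relevant internal records, then pass them as prompt context.
# Keyword-overlap retrieval is a deliberate simplification; a real system
# would use embeddings. `call_llm` below is a hypothetical placeholder.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?:").lower() for w in text.split()}

def retrieve(query: str, records: list[str], k: int = 2) -> list[str]:
    q = tokenize(query)
    scored = sorted(records, key=lambda r: len(q & tokenize(r)), reverse=True)
    return scored[:k]

def build_prompt(query: str, records: list[str]) -> str:
    context = "\n".join(f"- {r}" for r in retrieve(query, records))
    return f"Using only this internal context:\n{context}\n\nAnswer: {query}"

# First-party data the foundation model has never seen.
incidents = [
    "INC-1042: Mail server outage resolved by rolling back patch 4.2.",
    "INC-1077: VPN latency traced to a misconfigured load balancer.",
    "INC-1101: Disk alerts on db-03 cleared after log rotation was fixed.",
]

prompt = build_prompt("How was the mail server outage fixed?", incidents)
print(prompt)
# answer = call_llm(prompt)  # hypothetical call to the fine-tuned model
```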

How can top management be convinced of the ROI of AI?

When you invest in process maturity, data maturity, and your analytics and typical automation, you will see a baseline improvement in metrics like mean time to resolution, the number of incidents you have caught, or the average downtime. An even better way to put it is to quantify it in terms of revenue (a small example follows below). For instance, instead of saying we experienced downtime for two minutes a day, give the board the business impact in terms of the potential revenue lost because of the outage caused by the incident. Inform them that if we were to invest in the newer technologies we have today, we could potentially avoid such instances. One way is to link it with dollars and show the impact, because IT has moved from the back office to the boardroom. Nobody wants an IT failure and the reputational loss that comes with it. So how do you provide reporting that helps navigate such challenges? That's why we have built a very comprehensive analytics platform where you get data from your services, your security platform, and your monitoring platform, all converging into analytics. And that data becomes available via natural language.

You can run simulations and what-if analyses, ask questions in natural language, surface highlights, and so on. You can combine insights from your business, your IT, and your operations, because all three have become very agile with the cloud. We see all these possibilities for businesses converging into one big analytics pool.
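A back-of-the-envelope sketch of the link-it-with-dollars suggestion above. All figures are made-up placeholders; substitute your own revenue and downtime numbers.

```python
# Translate downtime into estimated revenue impact for a board report.
# Every number here is an illustrative assumption.
annual_revenue = 50_000_000          # assumed: USD per year
revenue_per_minute = annual_revenue / (365 * 24 * 60)

downtime_minutes_per_day = 2         # the "two minutes a day" example above
annual_downtime = downtime_minutes_per_day * 365

estimated_loss = annual_downtime * revenue_per_minute
print(f"~${estimated_loss:,.0f} potential revenue at risk per year "
      f"from {annual_downtime} minutes of downtime")
# -> ~$69,444 potential revenue at risk per year from 730 minutes of downtime
```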

How have you improved Zia, the AI Assistant?

Zia is now bringing in the power of large language models. Last year, all our models were narrow models or small models; I would even say we didn't have medium models. Narrow models could do just one thing at a time, like anomaly detection or forecasting. Then came smaller language models, which are multimodal and can contextually extract relevant information. While these are not groundbreaking, they are subtle refinements that enhance the productivity of employees.
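For context, a narrow model of the kind described can be as simple as a single-purpose anomaly detector. The z-score sketch below is an editorial illustration, far simpler than a production model; the threshold and data are assumptions.

```python
# Minimal sketch of a "narrow model" that does one thing: flag anomalies
# in a metric series using z-scores. Threshold and data are illustrative;
# real detectors handle seasonality, drift, and much larger windows.
import statistics

def find_anomalies(values: list[float], threshold: float = 2.5) -> list[int]:
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# e.g. response times in milliseconds, with one obvious spike
latencies = [120, 118, 125, 122, 119, 121, 950, 123, 117, 124]
print(find_anomalies(latencies))  # -> [6]
```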

How do you see your AI focus from the perspective of go-to-market engagement and sales opportunities?

Today, every company is looking at its AI strategy. When they engage us as a vendor, they want to see what we can offer them in terms of AI. There are more sales conversations because of this maturity. And with the emergence of ChatGPT, we are also seeing increased adoption in terms of AI usage. Previously, AI was just an item on the purchase checklist, but now we are seeing the usage graph going up.

What have the challenges been on the R&D front?

It's been a very bumpy ride. Because things change so fast, it's very difficult to stay up to date. For example, with the larger language models, we were able to see the value because they have emergent behavior, which narrow models didn't have.

Building our own large language models requires a lot of computing, which involves working with Nvidia and AMD to get their cutting-edge hardware into our data centers. It is expensive. We have built expertise in training AI models and collecting data sets over the last 12-13 years, but the newer challenge is hardware. How do you get all these GPUs? How do you ensure they work in tandem? How do you ensure the switches in your data center are capable of sharing data at the speed at which the GPUs can execute the computing? We have had to upgrade. Now, high-performance computing has become an integral part of the development process. It's no longer a game of just data scientists. It involves data scientists, application teams, and customer-facing product management roles deciding what to do next and how to contextually integrate, and, very importantly, hardware that has to work very hard in the background.
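To illustrate the concern about switches keeping up with GPUs, here is a minimal diagnostic sketch, assuming PyTorch with the NCCL backend on a multi-GPU node launched via torchrun. It times the all-reduce collective at the heart of data-parallel training; it is a generic illustration, not ManageEngine tooling.

```python
# Minimal sketch: time repeated all-reduce operations to estimate whether
# the interconnect keeps pace with GPU compute. Assumes PyTorch + NCCL;
# launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import time
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # ~256 MB of float32, roughly the gradient payload of a mid-sized model
    tensor = torch.randn(64 * 1024 * 1024, device="cuda")

    for _ in range(5):            # warm-up iterations
        dist.all_reduce(tensor)
    torch.cuda.synchronize()

    iters = 20
    start = time.time()
    for _ in range(iters):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()
    elapsed = time.time() - start

    if dist.get_rank() == 0:
        gb = tensor.numel() * 4 / 1e9  # bytes per all-reduce payload
        print(f"~{iters * gb / elapsed:.1f} GB/s effective all-reduce throughput")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```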
