BTOES Insights Official
December 27, 2021

RPA & IA Live - SPEAKER SPOTLIGHT : Proprietary AI algorithm driven “Zero Touch” fault resolution for mobile network alarms


Courtesy of Nokia's Rajesh Rao, below is a transcript of his speaking session on 'Proprietary AI algorithm driven “Zero Touch” fault resolution for mobile network alarms' to Build a Thriving Enterprise that took place at BTOES RPA & Intelligent Automation Live Virtual Conference.



Session Information:

Proprietary AI algorithm driven “Zero Touch” fault resolution for mobile network alarms

RPA teams are often challenged by the limitations of the existing RPA toolset, which cannot automate knowledge-based tasks. Many RPA products have incorporated the magic terms "AI" or "Analytics"; however, on closer examination, one quickly finds that most solutions are either too narrow, or that the AI/ML capabilities are misrepresented.

In 2017, our RPA CoE at Nokia set out to create a general solution for Nokia RPA 2.0, which would integrate Nokia's AI/Analytics platform (used as the "brains") and RPA technology (used as the "hands") and bring in the capability of automating judgement-driven complex processes. We selected this use case as our pilot because it had the cognitive complexity to necessitate deep-learning algorithms, had sufficient volumes to produce a viable business case, and was scalable across multiple customers.

This session will summarize this story of conceptualizing, implementing, improving and achieving success, including lessons learnt during this journey.

Key Takeaways

  • How to visualize and set-up a general solution for RPA augmented with AI/ML algorithms
  • Business case set-up & managing organization’s expectations/response – what is unique for such systems
  • Nuances and operational challenges of implementing such systems
  • Outlook for such systems

Session Transcript:

Hello, everyone, welcome back to RPA and Intelligent Automation Live. During this session, like the previous ones, we will take your questions. For any of you who are joining a session for the very first time: there is a Questions box on your control panel, and that is where you go to submit questions to our presenters. Those questions will be relayed to the presenter during the live Q&A at the very end of the presentation. So if you are joining us for the first time right now, please get acquainted with the Questions box; note that the presenter cannot see your questions until the very end, when I relay them to him.

We have had very engaging presentations and interactions through the Questions box, and I am looking forward to another great presentation with Rajesh Rao. Rajesh currently heads RPA and conversational AI chatbots at Nokia. He is responsible for globally implementing these technologies to deliver extreme automation benefits. Prior to this role, Rajesh was heading platform automation for the HSE House Business Services division, where he delivered automation solutions to multiple large global companies. He has held global leadership positions in these areas with people and P&L responsibilities, including board positions. He holds a business management diploma from XLRI and a Bachelor of Technology from the Indian Institute of Technology, ..., India. Really a pleasure to have you here. I am going to let you share your presentation and go on. Thank you.

Thank you, Jose.

Is everybody able to see my screen?


Great. Thanks. So this is a story that I am going to tell in four chapters. The first chapter will be around how we used standard RPA to solve part of a very key process in Nokia's operational landscape.

The second chapter is: once we had solved that problem using the standard RPA products and methods that were available, we wanted to do more. What was the journey like in terms of figuring out how to do more? I will also tell you some more details about the problem itself.

That is the second chapter. The third chapter is the more interesting one, which details how we solved the problem, what some of the results were, and what the outlook for such problems looks like.

And I will round it off with the fourth and last chapter, which is more around reflections.

Sitting back from the three-year journey that we took around this, what were some of the distillates from the learnings that we had? So, without further ado, let me launch into this, just to give you a sense of the locale and the atmosphere in which this story occurs.

So this happened at Nokia, and Nokia, as you know, is a technology giant. It has a history of innovation.

It is an innovation pioneer even today: with 5G, it is leading the pack with breakthrough technologies.

One sense you can get of Nokia's internal innovation culture is from the prizes it has won: nine Nobel Prizes in total, and four Turing Awards.

The Turing Award is the highest prize one can get in areas of computing. There is actually even an Oscar: that was way back in the 1930s, but Nokia has won an Oscar as well.

The process itself is the main character of this story. This process occurs in an area of the Nokia business called network monitoring, which is part of Nokia's services business. I will talk more about the process in the subsequent chapters.

So, diving into the first chapter, which is the road to the beginning of the tunnel, right?

Essentially: how we started out solving this particular process with the standard RPA products that we had onboarded at Nokia, and what happened up to the point when we solved the problem using those standard products.

Now, the process itself is alarm monitoring. You and I, all of us, use cell phones for both voice and data.

And this is backed by typically very robust, large networks of mobile towers, which have lots of devices.

What happens is that within this mobile voice and data network, if there are errors, alarms are generated. These are essentially fix-me alarms: they signal that there is something wrong in the network.

And then, in the next step, once these alarms are generated, they are all collected centrally in a system.

Then they are passed on to a team of human operators, essentially a Level 1 (L1) team, whose job is to correlate the alarms coming from the network and create trouble tickets.

Now, some of these trouble tickets are solved by the L1 team themselves; these are the easy problems and the easy alarms.

The ones that the L1 team cannot solve, they pass on to the more expert team, which is the Level 2 (L2) team in our operational parlance.

What we wanted to do was to automate the part that is circled on the screen: alarm correlation.

So, take a bunch of alarms, correlate them to a single problem, and then create a ticket out of those alarms. We wanted to automate this, and also the part of the resolution itself that the L1 team was doing.

And with the standard product that we had onboarded way back in 2016 at Nokia, we solved this problem.

If you look at some of the results that came out of this, in terms of the number of tickets being handled automatically by the robot, we had an increase of 40% more tickets, and this is a reflection of the average handle time.

That was shortened because of using robots.

And in terms of turnaround time, we were able to solve almost 50% more tickets.

Now, once we had solved this (and this was one of the first implementations of the standard RPA product that we had done within Nokia in this particular area), we were very pleased with what we had done. However, we then started looking at the process we had automated.

In this particular process of alarm monitoring, we had automated, using the standard RPA product, only about 30% of the process. There was still the 70% of the process that was not automated, even though the standard RPA product was giving great results.

And we were able to achieve a commensurate reduction in the number of human beings, who could be redeployed to other processes. So the question for us was what we could do to grab the rest of the pie: the rest of the automation potential sitting out there, beyond the 30% that we had automated using RPA.

And that straightaway brought us to what, on reflection, I can only call a variant of the seven gates of hell.

Now, we really had only two big problems, or maybe only two gates, but let me talk a bit more about that.

So, going back to the process that I showed you earlier: on the left side of the slide, the light peach colored box that you see is the part that we solved using the standard RPA 1.0 product, the part that we wanted to automate first.

Now, on the right side of this slide you see the L2 team. We wanted to see how we could automate the work the L2 team was doing: solving the hard problems, the hard tickets, the tickets for which the solution was not easy.

To tell you a little bit about the problem-solving heuristics and methods the L2 team was using: essentially, this is a very experienced team, with a lot of expertise and a lot of training, and a lot of what they do is not codified. It is problem solving on the fly, backed by the experience, expertise, learnings, and training that the team has.


But the thing was that automating this part would have been extremely rewarding, because all telecom networks, globally, need this alarm monitoring process.

So almost everybody in the industry, at least by 2020, has automated the first, L1 part, and I believe some would have automated parts of what I am going to talk about. But we are talking about 2017 here.

Right. So the rewards for automating the process were going to be very, very high. Instead of 30% of the L1 team's work, we would have automated something like 60 to 80% of the L1 team's work, and almost 30% of the L2 team's work.

Now, to give you a sense of the size and complexity of the problem that we had set out to solve (and this is the first gate I will stop at): on your screens, you see two boxes at the top right.

This is information about two of the alarms that the network might generate as errors. Let us take the box on the left. What you see is some sort of a flow diagram.

Now, this flow diagram is one of the solutions, just one of the solutions, for resolving an alarm.

Now, there could be scores of these flow diagrams that one would potentially need to work through, given an alarm.

Right and similarly, for this box on the right.

So, again, this box has a resolution state diagram. This is one of the states of the resolution, and there can be multiple resolution states for solving a particular alarm.

Now, this is just the first part of the problem. Actually, the problem is more complex, but that is out of the scope of this particular session. The second part is that, even on an 80/20 principle, the number of alarms one would have to handle to automate the L2 team's work would be in the hundreds.

Most of us do RPA, and we have seen such diagrams in the process modelers that most of our products provide. But imagine coding, in RPA 1.0, hundreds of alarms, each of which can have scores of solution states.

Practically, you cannot have a viable business case if you wanted to do it with standard RPA. And the problem that we were trying to solve was how we could replicate what the L2 expert team was so easily doing.

So in a sense, we wanted to capture part of the expertise that was enabling the L2 team to solve this problem just by thinking, and to codify that into a computer-solvable problem.

At the same time, 2017 was a big year, a step change so to speak, in the marketplace for RPA products. Almost everybody had started talking about the kind of cognition, the kind of AI and machine learning driven automation, that their products have. So we started searching the market. There were two threads to the search: one was how we would solve this problem using AI, so we started looking at AI; the other was looking at what these products had to offer. Look at the bottom right of your screen.

You see a summary of one of the analyst reports, presented here in a different form.

Now, if you look at this chart, some products are even described as cognitive computing. This is our research, so it is our reflection on the products themselves; there may be more to the products since 2017.

Our research showed that either these products did not have the kind of cognitive capabilities we were looking for, or, if they had, the applicability was very narrow and certainly not applicable to us. One product, I would like to mention, came extremely close to what we wanted to solve.

But, of course: close, but no cigar.

Now, ... uses graphs, essentially, to solve problems. It has some genuine AI built into it, but the applicability we found would have been in solving more infrastructure-type problems, rather than the telecom network problems that we wanted to solve.

So what did we have then, after months of demos, scores of reviews with analysts, and many man-months spent in research? We basically had this: it was a very uphill task, and we did not really have a solution.

Our thinking at that point had come to a place where everything was looking rather bleak, and this is where we came to a realization. This realization is what brought us out of the tunnel, out of the problem state that we were in.

The realization was that Nokia also has artificial intelligence and machine learning capability in-house.

This was not an aha moment, so to speak; there was no eureka. It was a slow realization, coming to the point that there was nothing off the shelf in the market, and that if we really wanted to do anything about this problem, we would have to develop something from the ground up.

And that was quite enthusing, in the sense that we were dealing with unstructured data, with patterns in the data set of solutions. The decision, then, was to kick off the development of the algorithm. At that point in time, we were even thinking of how we could do self-learning: what it would look like if we wanted self-learning, self-designing robots with broad applicability.

Especially in complex areas like networks and high technology areas.

Now, this slide is going to be a bit of an anticlimax, because I am not going to get into the details of exactly how we developed the algorithms: first, because of trade secrets, and second, because you would very quickly realize what bumbling fools we were in the beginning. What I am going to show you is a distillation of the experience of the period in which we solved the problem: the solution design for the POC, and then the eventual product design, in some sense.

If you wanted to undertake something like this, you would do three things.

You would start by doing a POC, or an MVP, a minimum viable product.

The goal here is for the technology folks to convince the business that the technology works. And then you have to run a parallel thread, showing that what you are trying to do is not a one-off.

So you choose a problem that is scalable. In our case, the problem was definitely scalable across multiple customers:

alarm monitoring is needed for every network that Nokia manages, right?

So there is scalability in the problem, which promises high benefits as far as this particular use case is concerned. And then we combined it with other potential problems that could be solved by this formulation, where RPA is used only for orchestration, not for the logic part: as we like to call it, using RPA as the hands and the algorithm as the brain.

There are scores of such problems, even in your company and in your industry. If you look at what you can solve through this combination, you will definitely find a lot of examples. That brings me to number two.

Number two is that these problems should yield you a business case. That is key.

Then, third, of course, is to go and deliver, once you have convinced the leadership and the management and secured funding. Now, one point that I will emphasize now, and will emphasize again, is to set the right expectations.

What I am trying to say here is that any AI/ML-based solution is inherently statistical in nature.

There is nothing like 100% in AI.

This lack of predictability can manifest itself in delivery timelines being missed, or in results not coming up to what you would like within the timelines you have promised. So setting the right expectations with the business is extremely important.

There are some points to remember. You have to scope the POC just large enough to show that it works, and you combine that with a larger business case, of which the POC, the technology demonstration, is just a small part.

You have to select the algorithm well. And then, once you start solving the problem (as happened in our case from 2017 until last year), you have to be prepared to change the algorithms if the results are not coming out the way you want or expect.


The algorithm that is currently implemented for this particular problem bears no resemblance to the first ... algorithm that we implemented.

There has been that much change in the algorithm, not only in terms of fine-tuning, but in the fundamental way the algorithm was designed and in the algorithms that were picked.

And the third one is more around how you do the drumbeat around the company.

You have to memorialize the POC that you are doing and the results that you get from it.

Again, like I said, even with the POC you should demonstrate that there is business benefit, either operational or, if it works in your case, also hard benefits. Memorialize the POC.

Then, let me come to how the working prototype for the initial POC in 2017 looked.

Alright, so there are five boxes that you see. The first box here is the ticketing system; this is the place where the tickets are created from the alarms by the RPA 1.0 solution. Follow my pointer.

The second is the RPA robot itself, which is doing the orchestration: the quote-unquote hands.

The third one is the AI engine, which is actually generating the solution using the algorithm.

The fourth one is Operations Team. And I'll come to it, why they are part of the solution flow itself.

The fifth is the network elements where the alarms were generated, the place where the errors occur.

In the first step, the RPA robot goes and fetches the alarm details from the ticket,

and then passes them on to the AI engine, which provides the solution.

In our case, you have to remember that even in the POC you cannot simulate a country-sized, global-sized network, so essentially whatever we were doing, we were doing on live networks. And live networks basically means that if your solution goes ahead and brings the network, or part of the network, down, or creates an issue, that is definitely not acceptable. In practical terms, it would mean, for example, that calls would drop.

So we put in a safety step, which was an authorization step. At the POC stage, any solution that the AI engine recommends, the RPA robot presents through a web form to the operations team.

If the operations team authorizes it, then the RPA robot executes that solution. In our case, the solution essentially comprised firing a bunch of MML commands to solve the problem.

Alright, so this is the working prototype of the POC that we created.
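To make the flow concrete, here is a minimal sketch of that authorization loop. Everything in it (function names, the alarm type, the command strings) is my own illustration, not Nokia's actual implementation; the real system used a commercial RPA product as the hands and a proprietary deep-learning engine as the brain.

```python
# Illustrative sketch of the POC flow: the RPA robot fetches alarm details,
# the AI engine proposes a fix, a human authorizes it, the robot executes.
# All names and data here are hypothetical.

def ai_engine_recommend(alarm_details):
    """Stand-in for the AI engine: map alarm details to a command sequence."""
    # In the real system this was a learning algorithm; here, a lookup table.
    known_fixes = {
        "CELL_DOWN": ["LOCK_CELL", "RESET_CELL", "UNLOCK_CELL"],
    }
    return known_fixes.get(alarm_details["alarm_type"], [])

def poc_flow(ticket, authorize):
    """One pass of the POC; returns the commands executed (possibly none)."""
    alarm_details = ticket["alarm"]                # step 1: robot fetches details
    solution = ai_engine_recommend(alarm_details)  # step 2: AI engine recommends
    if not solution:
        return []                                  # no recommendation: stays manual
    if authorize(solution):                        # step 3: ops team authorizes
        return solution                            # step 4: robot fires the commands
    return []

# Usage: an operator who approves everything
ticket = {"alarm": {"alarm_type": "CELL_DOWN"}}
executed = poc_flow(ticket, authorize=lambda s: True)
```

The essential design point is the `authorize` hook: on a live network, no AI-recommended command sequence is executed without a human in the loop.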

Now I will show you the pilot, the first base use case that we implemented, which was then scaled up. What you see on your screen is slightly complicated.

So I will go in sequence; pay attention to where the circles and the bullets come in, and to where my pointer is.

In the first step, the ticket is pulled and the alarm details are extracted by the standard RPA 1.0 robot.

This guy here is the RPA 1.0 robot. Depending on the ticket details, there can be some very basic actions that this robot can take that solve the problem.

For example, if your computer misbehaves, one thing you can always try is restarting it. Basic actions like that, resetting or in other words restarting, the RPA 1.0 robot takes.

Now, if there is no such rule-based solution available in RPA 1.0, no basic solution, then all the problem details are passed on by the RPA robot to the AI engine.

The AI engine: this is step number two. Two scenarios can occur. The first is that the engine has a solution and is 100% certain of it. Like I mentioned before,

this is a blue moon with pink dots, so this never happens.

More likely, the AI engine will recommend the solution that has the highest degree of confidence.

It then passes that on to this person, at circle number three, for approval.

If the person approves,

then the human operator confirms the solution, and the solution is sent to the robot for auto-execution.

And then the robot carries out the auto-execution.

What might also happen is that the human operator does not like the most probable solution offered by the engine. The human rejects the solution and does the manual execution, solving the alarm by acting directly on the network element.

The next two steps are important steps in this flow, and they involve learning. Whatever alarm or ticket was not solved by the AI engine, and for which the human proposed a solution:

that solution is passed back to the AI engine,

in steps six and seven, for learning. This is where the feedback loop is created and continuous learning happens.

Now, in all of this, whether it is 3 or 3A, 4A, or 6 and 7, the recommendation is to automate as much of it as possible. The key part is steps six and seven: if you can automate those, that is great.
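The pilot flow just described, a rule-based fast path, then a confidence-ranked AI recommendation, with human corrections feeding steps six and seven, might be sketched like this. All names, structures, and the feedback format are hypothetical illustrations, not the actual Nokia design.

```python
# Hypothetical sketch of the pilot flow: try a basic rule first, then the
# AI engine's highest-confidence recommendation, and record any human
# correction as a new training example (the steps 6 and 7 feedback loop).

RULE_BASED_FIXES = {"NODE_HUNG": ["RESTART_NODE"]}   # RPA 1.0 style rules

def handle_ticket(ticket, ai_engine, operator_review, feedback_log):
    alarm = ticket["alarm_type"]
    # Step 1: basic rule-based fix, if one exists
    if alarm in RULE_BASED_FIXES:
        return RULE_BASED_FIXES[alarm]
    # Step 2: AI engine returns candidate solutions with confidence scores
    candidates = ai_engine(alarm)                    # [(commands, confidence), ...]
    commands, confidence = max(candidates, key=lambda c: c[1])
    # Step 3: the human operator approves, or supplies their own fix
    approved, manual_fix = operator_review(commands, confidence)
    if approved:
        return commands                              # step 4: auto-execution
    # Steps 6 and 7: record the human's fix for continuous learning
    feedback_log.append((alarm, manual_fix))
    return manual_fix

# Usage: the operator rejects the AI's suggestion and fixes the alarm manually
log = []
result = handle_ticket(
    {"alarm_type": "CELL_DOWN"},
    ai_engine=lambda a: [(["RESET_CELL"], 0.7), (["REPLACE_CARD"], 0.2)],
    operator_review=lambda cmds, conf: (False, ["UNLOCK_CELL"]),
    feedback_log=log,
)
```

The `feedback_log` is the piece the talk singles out: every human override becomes labeled training data, so the engine improves precisely on the cases it currently gets wrong.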

OK, so what do you do when you have done a POC, you have a solution in your hand, and the engine you have created is sort of working and giving the solutions that you like? You essentially project-manage the heck out of the pilot. That is what we did.

Again, the thing to remember is that in every other aspect, an AI/ML solution is like a normal software development project, or any RPA implementation, except for the fact that the solution it provides is probabilistic in nature.

Which means that even at runtime you might have situations where results do not come, or come late, or never come, and you need to make fundamental changes to your algorithm or your input. So again, setting the right expectations with the business and operations is key in RPA 2.0-type solutions.

OK, so then some results. At runtime, our solution essentially predicts each of the commands to an accuracy of 90%.

Like I said, revisiting the problem statement and the solution: the network is generating errors, and either the solution or a person is fixing each error by firing certain commands to the network.

For a single command prediction within the solution set, the accuracy is about 90%.

To solve an alarm end to end, typically 6 to 8 different commands have to be fired in the right sequence.

There, our accuracy hovers around 60 to 70%. Believe me, this is a big result.
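A back-of-envelope check (my own arithmetic, not a figure from the session) shows why 60 to 70% end to end is a big result: if each of the 6 to 8 commands were predicted independently at 90% accuracy, the whole sequence would succeed only about 43 to 53% of the time. The observed accuracy beats naive compounding, which suggests the engine's per-command errors are correlated: when it gets one command in a sequence right, it tends to get the rest right too.

```python
# Naive compounding of per-command accuracy over a 6-8 command sequence.
per_command = 0.90
six_cmds = per_command ** 6    # all 6 commands correct, if independent
eight_cmds = per_command ** 8  # all 8 commands correct, if independent
print(f"{six_cmds:.2f} to {eight_cmds:.2f}")  # prints "0.53 to 0.43"
```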

This is exactly how it looks at runtime, in terms of how we track and monitor. Just focus on the top part of this chart.

There are dark blue lines, and there are gray lines.

The dark blue line is essentially whichever alarms are in scope, and the gray lines are the alarms that are solved by the robot. So, pretty much good.

So, then reflections, right.

In terms of reflections, let me round it off. Let me start by proffering what a standard general solution for RPA 2.0 might look like: a solution in which there is RPA,

augmented by an algorithm as the brains. There are two parts to the solution: one is the learning process, and the second is the execution process. And there are four actors here.

One actor is the human operators. The second is the AI engine itself.

The third is the RPA robot, your standard RPA robot. And then you have the human expert, who does the solving when the engine does not offer a solution.

The solution that the human expert has provided is then recorded and cycled back to the AI engine.

You start with the manual transactions that are being processed by the human operators.

Your first step, once you have decided how you will solve the problem in terms of selecting your initial algorithms, is to collect data. This will form your learning dataset, which will help the algorithm learn its way to your final solution. You can collect this data manually; in our case, these were logs, which could be collected manually. You can automate that log collection. Or, in your own process cases, many RPA products today come with process recorders, and you can use those recorders to collect this learning dataset.

Use that training dataset to help your algorithm learn.

You then end up with an AI/ML engine that is capable of providing solutions for whatever learning set you had.
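As a minimal illustration of that learning step (the proprietary algorithm is not disclosed, so a simple frequency table stands in for the actual model), the flow from collected logs to a solution-proposing engine might look like this:

```python
# Build a training set of (alarm type -> command sequence) pairs from
# collected resolution logs, and "learn" by memorizing the most frequent
# resolution per alarm type. A stand-in for the real learning algorithm,
# showing only the data flow.
from collections import Counter, defaultdict

def train(logs):
    """logs: iterable of (alarm_type, command_sequence) from past fixes."""
    counts = defaultdict(Counter)
    for alarm_type, commands in logs:
        counts[alarm_type][tuple(commands)] += 1
    # The "model": the most common resolution seen for each alarm type.
    return {alarm: max(seqs, key=seqs.get) for alarm, seqs in counts.items()}

# Usage: two logs agree on a fix, one disagrees; the majority fix wins
logs = [
    ("LINK_FAIL", ("CHECK_LINK", "RESET_PORT")),
    ("LINK_FAIL", ("CHECK_LINK", "RESET_PORT")),
    ("LINK_FAIL", ("REPLACE_CARD",)),
]
model = train(logs)
```

Whatever the real model is, the interface is the same: feed it labeled resolutions, get back a function from alarm to proposed command sequence.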

Then the fun part, execution, starts. In the execution process, a transaction is initiated by the RPA robot.

In our case, it was the RPA robot fetching the alarm ticket, and the alarm details in the ticket, from the ticketing system.

Then the solution is fetched by the RPA robot from the AI engine, and the RPA robot executes the solution.

You may have an approval step here, but in the case we are discussing, executing means acting on the network elements to solve the problem. If the solution is not acceptable under the controls that you put in, the approver step or whatnot, the problem then goes to a human expert.

If a solution exists, of course, it does not come to that; but if a solution does not exist, the expert provides one and solves the problem. Whatever actions the expert took to solve it are recorded and fed back to the learning part of your AI system.

Thereby you create a feedback loop and keep enhancing your AI/ML engine.

And so this is how a general solution might look.

So what are the key takeaways?

The first key takeaway, the biggest one, is that you need to get the business to buy in to the fact that your AI/ML solution will never be 100%. The current thought is that, just as human beings in any process or action can produce damage, so can AI systems.

You have to accept that the AI solution will do some damage, and put controls around it and steps to mitigate it, so that you can handle it. This buy-in is important.

This is not an easy sell to a business that is used to deterministic solutions. But you can convince them by the steps that I mentioned: do a POC, show that the technology works,

and put in safety steps, like approvers, etcetera.

The second one is data. So any AI solution essentially depends on having a lot of data.

And so in our case, what was very important was to have a practical collection strategy.

Typically operations are not staffed to collect lots of data for you.

In our case, the alarm volumes in monitoring are so high that this collection is itself a task. The recommendation, then, is that from the get-go you have a very practical collection strategy with a lot of automation.

Today, our collection strategy has automation built in.
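An automated collection strategy can be as simple as parsing the operators' command logs into labeled training examples as they are produced, so nobody has to collect data by hand. This sketch assumes an invented log format; only the idea of continuous, automated harvesting is the point:

```python
# Hypothetical sketch of automated training-data collection: scan raw
# operator logs for executed commands and write (alarm, command) pairs
# to a CSV that the learning pipeline can consume. The log format here
# is invented for illustration.
import csv
import re

LOG_LINE = re.compile(r"^(?P<alarm>[A-Z0-9_]+)\s+EXEC\s+(?P<cmd>.+)$")

def collect(log_lines, out_path):
    """Write recognized (alarm, command) pairs as CSV; return how many."""
    rows = []
    for line in log_lines:
        match = LOG_LINE.match(line.strip())
        if match:  # ignore lines that are not command executions
            rows.append((match.group("alarm"), match.group("cmd")))
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return len(rows)

# Usage: one command execution and one noise line
n = collect(["CELL_DOWN EXEC RESET_CELL", "operator logged in"], "training_data.csv")
```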

The third one is delivery expectations. I have been harping on this; it is extremely important, not only to the operations on the ground, but also to the business that is funding you.

There may be scenarios in which they will not get the results in the time that you promised.

The fourth one is business case.

So you have to have a business case right from the POC stage. If possible, your POC itself should show some hard benefits.

Always remember that, depending on how automation is organized within your company (how RPA is organized, or how the AI/ML teams are organized), you may be the RPA team trying to do AI/ML, which is not your domain. So this partnership with the business, and showing hard business benefits, is extremely important.

The fifth one is around choosing the right RPA product. Like I mentioned before, the basics obviously have to be covered.

But some RPA products, for example, come with process recorders.

This can help your data collection immensely, because collecting data is a continuous process.

Environments change, inputs change, sometimes the process changes, KPIs change, and then you have to keep fine-tuning your AI algorithm as time goes by. It is not a static thing.

So collecting data is extremely important.

The sixth one runs slightly counter to what I have been saying all along: between the time we started and today, a lot of cloud-based AI/ML platforms have become available. Depending on what capabilities you have in your company, you may choose to build the algorithm yourself, and you may have to, because your problem may be very unique.

Every problem feels unique, but yours may well have a solution that is already available.

So potentially, you can consider using these ready-made platforms. I can give you an example.

Some of the big ones are Azure Cognitive Services or IBM Watson services. Even there, the offerings are not extremely generalized, so you have to pick the right problem you want to solve and match it to the cloud solution that is available.

Depending on your problem, you can potentially save yourself a lot of the pain that you would otherwise have. So those are the six key takeaways.

If six is too much, then just remember these two.

Your first use case has to bring in hard business benefits. Your POC has to deliver some clear operational benefit beyond the technology demonstration.

And then you top that off with, potentially, a scale-up plan or other viable plans. Don't promise too much, and don't end up in a hurry.


As they say, it is better to promise something and deliver it than to promise something more and fail to deliver. So choose your problem very carefully,

both in terms of business benefits,

and also in terms of how solvable that problem is going to be. The second thing I would like you to take away from this is to set the right expectations, from the get-go, with the business.

I am at the end, and I had a very nice Dilbert cartoon that I wanted to put here.

But, unfortunately, the corporate masters of Dilbert have licensing agreements, so you can go to dilbert.com yourself and search for "artificial intelligence", and you will get a bunch of them.

That ends my presentation, and I think we can take some questions if the audience has any.

That's fantastic. Fantastic. Rajesh, thank you so much for such depth and detail on the approach. It was very insightful, very helpful for understanding the behind-the-scenes of the technology, the implementation, and the approach you took. I have to ask, first of all: when I look at what you have done, I see parallels with potential applications for IT support. Do you think a solution like the one you have developed here can be applied to IT support?

Yes, in principle it can be. To give you an example from IT support: infrastructure monitoring.

IT infrastructure monitoring deals with similar problems. You have to routinely do housekeeping tasks like cleaning up memory overloads, etcetera. So yes, one can. But for infrastructure monitoring, there are already really good solutions available.

Depending on the problem, one may not need to develop an algorithm from the ground up; there are many very, very good products available today.

Very good. I'm going to pose some questions from the audience here. Susanna asked whether you assess how the human operators feel about confirming or rejecting the RPA proposals. There is that process step where they confirm or reject the engine's proposed choices. How has the people side of this change been, in a broader context?

The people side of this change is not very different from any automation project that you do, and that is the general problem of automation.

In the sense that you have people who are doing work, and then you automate their work. So this is not specific to this project.

And the solutions that exist for the general problem apply here too. You have to figure out a reskilling plan, and your communication is extremely important.

In our case, these operators are essentially the masters of this automation, of this AI engine, if I may put it that way, so that aspect is taken care of. But yes: reskilling is one thing, communicating well is another,

and then moving people toward more value-add tasks is something that I would recommend.

That was a great question. And VJ asked a question about the naming of the bots: are there any recommendations for naming the bots?

No, no.

We didn't have any names for these bots, but you can certainly come up with some really great ones.

Louis Philippe had a question about SLA compliance: how does the solution perform in terms of SLA compliance?

That is, again, an excellent question. Let me touch upon that aspect of the solution and the way we set it up.

Each of these trouble tickets also has a priority on it. In our case they ranged from priority one, the highest, down to the lowest, priority five, and our solution was only handling priority four and priority five.


We have not come to a point, not only with our solution but in general in the industry, where you would risk priority one tickets on a live network.

Of course they have stringent SLAs, but if you go wrong in solving a priority one ticket, you can actually bring the network down.

The priority four and five tickets had much more relaxed SLAs.

So in case the AI engine missed the solution, it would, of course, pass the ticket on to the human operator.
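The routing scheme just described can be sketched roughly as follows. The function and return values are my illustration of the described logic, not the actual production code.

```python
from typing import Optional

def route_ticket(priority: int, ai_solution: Optional[str]) -> str:
    """Route a trouble ticket per the scheme described above:
    P1-P3 always go to humans; P4/P5 go zero-touch only when the
    AI engine actually has a resolution for the alarm."""
    if priority <= 3:
        return "human"          # too risky to automate on a live network
    if ai_solution is None:
        return "human"          # AI engine missed: hand over to operator
    return "ai-zero-touch"      # P4/P5 with a confident resolution

print(route_ticket(1, "restart_cell"))  # human
print(route_ticket(5, "clear_alarm"))   # ai-zero-touch
print(route_ticket(4, None))            # human
```

Because the lax P4/P5 SLAs leave room for a human fallback, a missed prediction costs time rather than network availability.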

Very good, very good. We had a question about the probabilistic approach; hold on, let me take a look at what was asked. You talked about the probabilistic component of the system, and the attendee is curious how you explain that probabilistic component to stakeholders. Do you have a simple analogy to explain it to people in the business, and how do you deal with explaining this probabilistic approach to stakeholders?

Well, in our case, you go first and create your sponsor, so to speak: the person who will evangelize on your behalf. You have to pick somebody who understands this inherent nature of AI algorithms, that your solutions will always have a confidence level attached to them.

Once that person is on board, then together you can go and say: this is an inherent feature of this technology, if I may use the term feature, that your solutions are not going to be 100%.

So there is a leap of faith that your business and operations folks need to take in terms of working with AI/ML in general,

and then you counterbalance that with the business benefits that they can potentially get.

Finding that sponsor who understands, and then taking the help of that sponsor to carry the message through the organization, would be the way I recommend.

In our case we were lucky, in the sense that our business sponsor, Mr. Dhingra, was quite proficient in this particular area.

That's good. It certainly helps a lot when you have a champion like that behind the project. Exactly. We have time for one more question here, and I'm going to pick this one from Camera, who is asking if you can elaborate a little more on the data strategy you used for the machine learning algorithms. Did you use historical versus real-time, current data? What type of data did you use to train your algorithms?

Like I explained, it is a combination.

To begin with, you have to use historical data to bring the algorithm to a point where it can start solving the type of problems that you have defined. So we defined: these are the tickets that we want to solve.

We used the historical data for those particular tickets and alarm types to train the algorithm. And then, as time went by, a feedback loop was created, which constantly increases the learning on the fly.

The graph that I showed with lots of bar charts, the bottom one, which I didn't go into in this session, is about how that learning happens in real time. So it is a combination of both: initial training on historical data, with the problem definition you have, and then, as one goes along, a continuous automated feedback loop.

I would stress that point: not depending on people, but creating an automation that feeds the data back, which is what we have today.
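The two-phase data strategy just described, bootstrap on historical tickets, then keep learning from an automated feedback loop, can be sketched with a deliberately toy, dependency-free classifier. Everything here (the class, the keyword-overlap scoring, the alarm strings) is an illustrative assumption; the actual Nokia platform uses deep-learning algorithms and is not public.

```python
from collections import Counter, defaultdict

class IncrementalAlarmClassifier:
    """Toy keyword-frequency model that can keep learning on the fly."""

    def __init__(self) -> None:
        # per-fix keyword frequency tables, updatable at any time
        self.word_counts: "defaultdict[str, Counter]" = defaultdict(Counter)

    def learn(self, alarm_text: str, fix: str) -> None:
        """Absorb one (alarm, confirmed fix) pair, historical or live."""
        self.word_counts[fix].update(alarm_text.lower().split())

    def predict(self, alarm_text: str) -> str:
        """Pick the fix whose past alarms best overlap this alarm's words."""
        words = alarm_text.lower().split()
        return max(self.word_counts,
                   key=lambda fix: sum(self.word_counts[fix][w] for w in words))

clf = IncrementalAlarmClassifier()
# Phase 1: bootstrap from historical ticket data
clf.learn("cell down at site A", "restart_cell")
clf.learn("memory overload on node B", "clear_memory")
# Phase 2: automated feedback loop feeding confirmed fixes back in
clf.learn("cell down at site C", "restart_cell")
print(clf.predict("cell down at site D"))  # restart_cell
```

The design point is that `learn` is the same call in both phases: the automated feedback loop is just phase-one training that never stops.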

Very good, ... one more time, thank you so much, on behalf of everybody attending today, for a wonderful presentation, with great insight into a very challenging but rewarding project that you have led at Nokia. This is really terrific. Thank you so much for sharing it with us.

Thank you. Thank you again for giving me the opportunity, and thank you very much for your support.

Thank you. Ladies and gentlemen, that finishes this session, and we'll start the next one at the top of the hour. You do not want to miss it: we're going to have Raissa K., Solutions Architect and specialist in RPA, talking about many different aspects of scaling RPA and RPA solutions.

There are a lot of very interesting topics in his presentation, including the big question: does robotic process automation take jobs away? He has done a bit of an evaluation on that, which he is going to share with us. So, as we close this session, don't close your browser; close the GoToWebinar window. When you do that and close your control panel, a popup box will appear with a short survey for any feedback on this session, or any additional questions you may have; you can place them there. We'll see you again at the top of the hour; five minutes before the hour, we'll restart with the new session.

Thank you very much.


About the Author

Rajesh Rao,
Global Head of the RPA & Conversational AI (Chatbot) CoEs, Nokia

Rajesh currently heads the RPA & Conversational AI (Chatbot) CoEs at Nokia. He is responsible for globally implementing these technologies to deliver extreme automation benefits. Prior to this role, Rajesh was heading Platforms/Automation for HCL’s Business Services Division, where he delivered Automation solutions to multiple large global companies.

Rajesh brings strategic and operational expertise in Technology & Process led Transformation, Automation, Outsourcing and Emerging Technologies, and has held global leadership positions in these areas with people & PnL responsibilities, including board positions.

He holds a Business Management diploma (PGDBM) from XLRI and a Bachelor of Technology from Indian Institute of Technology (IIT) at Kanpur, India. He is a trained Six-Sigma Black Belt and a certified Information Security Lead Auditor for ISO 27001.

