Courtesy of Micro Focus's Michael Procopio, below is a transcript of his speaking session, 'Five Steps to Improving Business Processes by Delivering AIOps to Build a Thriving Enterprise,' which took place at the BTOES iBPM Live Virtual Conference.
AIOps, artificial intelligence for IT operations, can significantly improve IT operational processes and, thus, business applications and processes.
According to Gartner, 40% of organizations will be maximizing AIOps by 2023! Join us and learn how to implement our five-step strategy, including:
What is AIOps – from a simple definition to the impact on your IT operations
Hello, everyone. Welcome back to Intelligent Business Process Management Live, where technology meets people and process. Our next guest is going to talk about the technology piece, so I'm very excited about welcoming Michael Procopio, who's going to talk to us about five steps to improving business processes by delivering AIOps. Michael has over 20 years of experience in the industry. He's a product marketing manager for AIOps at Micro Focus, and a wealth of knowledge. We're really looking forward to your presentation, Michael.
Well, thank you, Jose, Joseph.
Well, let's get to it.
So, processes, of course, are becoming more and more digital, and what I'm going to talk about is basically what happens as we move these processes from old ways of working to new ways of working, basically digital.
You've got to monitor these processes, because, if you don't, you're in danger of not being able to deliver the process that you so carefully designed, and brought into production.
And I'm also going to talk about how automation becomes very important in keeping those processes up.
So, this slide shows the DevOps processes, I'm guessing that many of you have heard about DevOps. But I'll just give a quick background.
Um, in the old days, developers would write their code and basically just hand it over to the operations team to deploy, without much collaboration ahead of time.
So in the Dev part, what happens is you either take an existing manual process that you have (you know, an example might be filling out an HR form on paper and then sending it in) and move it to the point where HR hands you a URL and says: go here, log in, fill out everything on the web form. So that's an example of moving an existing process.
An example of what we would call a digital-first process might be something like Uber. You couldn't run Uber on paper, even if you wanted to. So, once you've designed your process, or modified your process, then it goes to somebody to do detailed design.
Probably in co-ordination with you as the process owner, then it gets coded up.
Then it gets tested. I didn't put this in the diagram, but basically, when it gets to the test phase, this is where the dev team will bring operations into the fold.
And once that's done, it moves into deployment at this point.
The operations team has some knowledge about what's going on.
They start monitoring the process.
And I've never seen a process that at some point didn't fail or perform poorly.
And in some cases, that's going to be because of the, the code that was written.
In some cases, it's going to be because of the underlying technology that is running that process. So, when one of those occurs, it moves over to find the problem and fix the problem. Sometimes, operations can fix the problem.
For example, you've got an issue with a server; I'll show you an example of a problem with a database near the end of the presentation. Sometimes, as we said, it's the code. If it's the code, then it goes back to the Dev team to fix.
Um, if it is something that the operations team can take care of, then they fix it. Then the next goal for operations should be to go ahead and optimize what happens in the infrastructure, and automate the fix.
And I'm going to show you, an example of a company that saved $4 million in 2019 by implementing this automation process.
OK, with that, here's what I'm going to cover.
OK, I am going to talk a little bit about RPA and how these AIOps steps can help with RPA.
Because I'm sure RPA is a huge topic for this event.
It's been around for years, but it seems to have become all the rage in the last, I don't know, year or two. I'm going to show you some examples of how AIOps has helped folks keep their processes up and running, and then we'll take questions.
So first, I wanted to give you a little background on AIOps.
AIOps has gone through some iterations.
Um, it started out, well, I should say the precursor to it all was big data analytics, and this happened around 2005. And big data, at the time, basically said: I've got more data than I can handle on one computer or in one database.
And so I need to figure out some way to crunch through all that data. And then, the beginnings of machine learning were applied to that.
In 2014, Gartner, which is pretty much the leading analyst in the area, coined the term IT operations analytics.
And this was basically the application of big data to IT operations.
The big issue with IT operations analytics was that it only worked from a historical point of view.
So, in this big data analytics field, they would apply operations on the data to find insights, but those insights took quite a while to find. And so you did get to benefit from it, but not in time to actually fix a problem.
Then, in 2017, Gartner revised this whole thing and called it AIOps, which is artificial intelligence for IT operations. And the big shift there was, we got enough processing power to be able to work this in real time.
So, basically, you could now have the system find a problem and provide you with information that you could use to fix the problem while it was actually happening. And the most recent thing is what we call multi-domain AIOps.
And I'm going to talk about that in detail.
So, from a more practical point of view, what is AIOps? This is also research from Gartner.
Basically, it boils down to this, collect every piece of data that you can in the IT operations world, and put it somewhere that a machine learning algorithm or algorithms, plural, can act on it.
And analyze it. And provide you with some kind of insight.
These happen to be the four big ones. Digital experience monitoring, which, for digital processes, is one of the most important things.
Um, this term is another one that's fairly recent, and the idea of digital experience is not only humans: we've now gotten to a point where machines are interacting with our processes.
And so it's the measurement of that experience, whether it's a human or a machine acting.
Networks, very important. Application performance monitoring, which is actually the term that digital experience monitoring grew out of, includes not only user experience monitoring, looking at it the way a human would, but also looking at any of the infrastructure with regard to that application. And then, of course, IT infrastructure monitoring.
So those are the four biggies right now that that play into this.
So, analyst firm EMA did some research and looked at what you should be looking for in an AIOps solution.
Um, the first one is cross-domain sources in high volumes.
So, by definition, if you're talking about very low volumes of data, humans can deal with it just fine.
When you're talking about huge volumes of data, and I mean hundreds of thousands or millions of pieces of data that might come in in a day, no humans that I know of can deal with that.
Now, we're going to talk about this cross domain capability, multiple types of data.
So in the early days, it was events and metrics.
So, my CPU is at X percent, or the response time of an application was one second versus the expected baseline.
Subsequently, it has included log files, because log files are where many applications put their error messages.
Self-learning, that's pretty obvious, and the ability to support multiple infrastructures. I'm sure you all know, and probably in your organizations, there's been a move to the cloud.
Interestingly, if you've done much reading about IT and COVID-19, all the predictions say that COVID-19 is going to push cloud adoption much faster than we've seen before.
So here is a technology maturity model, and we're going to be getting into a process maturity model next, but I just wanted to go through this.
So in the beginning, you look at event correlation, then you move up to detecting patterns.
Then you go to baseline and anomaly detection.
Anomaly detection is very interesting.
And one of the things that's almost impossible for humans to do is looking at multiple metrics, comparing them to baselines of what's normal, and letting you know that something's happening that probably shouldn't be.
And also proactive alerting, which is basically taking a look at these patterns and predicting the path of a pattern to see how it might affect things before the actual effect happens.
Then you move into the Machine Learning, and, finally, integrating all of this with trouble ticketing systems, and doing risk analysis.
So these are the steps that one would go through, or at least historically have gone through, to get to where we are today.
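To make the baseline-and-anomaly-detection step above concrete, here is a minimal sketch. The window size, threshold, and latency values are invented for illustration; a real AIOps product learns baselines continuously across many metrics at once.

```python
# Minimal sketch of baseline-and-anomaly detection over one metric stream.
# Hypothetical data and thresholds; a real AIOps tool learns these itself.
from statistics import mean, stdev

def find_anomalies(samples, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing baseline
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady response times (ms) with one sudden spike at index 7.
latencies = [100, 102, 98, 101, 99, 100, 103, 450, 101, 99]
print(find_anomalies(latencies))  # → [7]
```

The same comparison is trivial for a machine and nearly impossible for a person watching dozens of metrics at once, which is the point the maturity model is making.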
So, now, with that background on AIOps, I want to do a quick poll. So, Jose, if you could put up the poll.
And what I'm curious to see is: are you on an AIOps journey?
So, if you can, indicate the level of your plans: whether you don't have any plans, or you've already started implementation, or, if you are working on it, some timeframe of when you are looking to do implementation.
Go ahead and cast your vote on one of these five options. We have about 25% of you who have voted so far.
We'll give you another few seconds here to cast your vote: where are you on your AIOps journey?
Very good. Very good. Most of the responses coming in. Outstanding.
I'm dying to know, I'm closing it soon here, Do it quickly, alright. Got it.
I'm closing and sharing those results, and I'll read them back to you, Michael. What we have here is that 54% of respondents are investigating right now.
And 25% was the next one, category four: no plans. So: investigating, 54%; no plans, 25%. Then 11% have started implementation, 7% plan to begin in the next couple of years, and 4% plan to begin in the next 12 months.
OK, that's great to know that so many of you are in the investigation stage, and no one should feel bad about not starting implementation, according to Gartner.
Um, only about 5% of organizations right now are implementing AIOps, but by 2023 or 2024, I can't remember which, it should be up to 25%. So, for those of you who are investigating, you are right in with the major crowd.
OK, so let's now go through the five steps to getting to a full AIOps solution.
So, the first one is monitoring and consolidating.
And most of you, I'm sure, are already doing at least some of this.
So, I split the monitoring up into the front end and the back end.
So on the front end, what I mean by that is looking at it from the user's point of view, or, as we've commented with DEM, it could be a human user, or it could be a bot or other digital process.
Looking at that, just a quick example of that.
If you're using an app, and you ask it for a map, and it's not Google Maps, it's probably going to put a request into Google Maps to get that.
So there's a case where a digital process is talking to another digital process. And you want to measure that responsiveness.
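As a tiny sketch of measuring that responsiveness, here is one way to time a call from one digital process to another. The function being called and the 500 ms target are placeholders, not anything from the talk.

```python
# Sketch: time a dependent service call and check it against a target.
# The callable and the 500 ms target are hypothetical examples.
import time

def timed_call(fn, target_ms=500):
    """Run fn(), returning (result, elapsed_ms, within_target)."""
    start = time.monotonic()
    result = fn()
    elapsed_ms = (time.monotonic() - start) * 1000
    return result, elapsed_ms, elapsed_ms <= target_ms

# Stand-in for a request to a mapping service.
result, elapsed, ok = timed_call(lambda: "map tiles")
print(result, "within target:", ok)
```

A digital experience monitor does essentially this, continuously and from many locations, for both human-driven and machine-to-machine requests.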
And APM, application performance management, as I talked about, takes into account not only digital experience monitoring, but can also look at some things in the infrastructure.
Um, either from the outside, and there are products that can do that, or it combines things with infrastructure monitoring, which is at the bottom.
So, infrastructure monitoring, and NPMD, network performance and two other letters I've honestly forgotten.
But anyhow, networks, as we all know, are an important capability.
And each of the tools that you implement almost undoubtedly has a database that it uses to store whatever it's doing.
And if we stop at this point, these are what we basically call silos.
So you can have one team doing APM, another team doing DEM, et cetera, and they're in their own little silos.
And the first step is to integrate that siloed data into a tool that can consolidate those events and look at the performance metrics.
And I'll refer to that at some point as a single pane of glass.
This can shift your operations capability from having to look at many screens, one for each of the tools, to being able to look at a single screen where all the collected data lives. And that gives two advantages.
If you have a single-pane-of-glass system, it can compare events; that's one major benefit that we'll talk about in just a second.
But the other one is, as I said, that people only have to look at one screen, and even if they're the ones who have to do the comparison, the two things they're comparing are right there.
Now, just to give you a sense of how many silos one might have: EMA, in one of their studies, determined that on average, 23 monitoring tools were used.
Now, one of our systems integrators told us that one of their customers had over 100 tools that they were using to monitor their IT environment.
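As a sketch of what this consolidation step amounts to, the snippet below pulls events out of each tool's own format into one common schema, so a single screen (or a single algorithm) can rank them together. The tool names, fields, and thresholds are hypothetical.

```python
# Sketch: consolidating events from siloed monitoring tools into one
# normalized stream, the "single pane of glass" step.
# Tool formats, field names, and thresholds are invented for illustration.
def normalize(source, raw):
    """Map each tool's event format onto one common schema."""
    if source == "apm":      # e.g. {"app": ..., "latency_ms": ...}
        return {"domain": "application", "target": raw["app"],
                "severity": "warning" if raw["latency_ms"] > 1000 else "info"}
    if source == "infra":    # e.g. {"host": ..., "cpu_pct": ...}
        return {"domain": "infrastructure", "target": raw["host"],
                "severity": "critical" if raw["cpu_pct"] > 95 else "info"}
    raise ValueError(f"unknown source: {source}")

def consolidate(feeds):
    """Merge per-tool feeds into a single event list, worst first."""
    rank = {"critical": 0, "warning": 1, "info": 2}
    events = [normalize(src, e) for src, feed in feeds.items() for e in feed]
    return sorted(events, key=lambda e: rank[e["severity"]])

feeds = {
    "apm":   [{"app": "drug-checker", "latency_ms": 2400}],
    "infra": [{"host": "db-01", "cpu_pct": 97}],
}
for event in consolidate(feeds):
    print(event["severity"], event["domain"], event["target"])
```

With 23 (or 100) tools, each `normalize` branch would be one adapter; the payoff is that correlation then happens over one stream instead of 23 screens.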
OK, so I indicated that there's a lot of benefit from just doing this consolidation.
And here are some examples of the benefits you can get from doing event consolidation with some correlation capabilities built in. You can see these are pretty significant. So, Vancity, which is a credit union in Canada: a 70% improvement in root-cause analysis.
By the way, most of the time spent fixing a problem is spent finding where the problem is.
And then you can see a bank in Turkey: a 30% improvement in mean time to repair.
And I'm sure you all know the cost of downtime for your application, or you've certainly seen the numbers. It can get into the hundreds of thousands of dollars per minute.
OK, when I talked about the technology maturity curve, I mentioned that we would have one on people and process.
This is it. It comes from analyst firm Ovum, and in the handout you will get access to their report, which I pulled this from.
So, at its simplest, this is: you crawl, then you walk, then you run.
On the left-hand side, you know, basically, each individual person is doing whatever they want.
If they have a favorite tool, that's what they use.
Um, then, as you progress, you get teams who settle on what it is they want to use; then you get some cross-team capability, DevOps.
And so, where the ops teams are talking to the dev team, and IT service management, basically a trouble-ticketing system, starts coming into play, then you move into a full DevOps capability,
where, basically, before any application ever gets put into place, well, into the coding phase, operations teams are brought in to provide their perspective on what it is they need.
And then ultimately you get to this nirvana, customer-centric and operations-oriented, where you've got a full AIOps solution and life is wonderful.
Very few organizations that I've seen are at that level. So most of them are sitting between the purple stage and DevOps.
OK, so we talked about introducing AI, and I mentioned the domain-agnostic model, or cross-domain. We're going to get to that in a second.
But before you get there, most likely you're going to be at this domain-centric model.
That is, the tools that you have, as they progress in their evolution, are adding AI.
And so, for every one of these tools, or tool categories, that I've listed here, I've seen those vendors start to implement AI and machine learning. And when I say AI, I can basically translate that to machine learning,
because, in truth, AI is a hugely broad topic, covering everything from visual analytics to speech analytics to emotion analytics.
What we're really talking about is machine learning and deep machine learning.
It's basically looking at data; it can find patterns and then alert you when things don't match the patterns it determined are normal.
So this is the next step that many companies go to.
Now we get to a more AI ops centric model. And this is what's called the domain agnostic model.
And the two key elements of this are that all of those siloed databases are now, instead, consolidated into a data lake. A data lake, by the way, is just a really big database, or a Hadoop kind of structure, that deals with structured and unstructured data.
What I haven't mentioned so far is that a lot of the AIOps solutions take into account social media data. So you might have Twitter feeds coming in, looking for negative keywords about your application as one of the alert metrics.
We never really want to be notified by human beings that there's a problem; we'd like to catch it before that. But as one of my old bosses liked to say, human beings are the last step in quality control.
Once you've got all this data together, then you apply the AI, and you can understand the benefits pretty quickly. I will show you an example of this, where I can now start comparing, for example, network data and server data to what users are experiencing.
Put those all together and then provide some actionable insights.
So, the final step in the AIOps journey is automation.
Interestingly, when the analysts started talking about AIOps, they did not include automation.
Over about the last 6 to 12 months, if you read the analyst reports on AIOps, they have now started talking about automation.
That is something that we felt was very important from the beginning.
So, basically, once you've gained that actionable insight, you want to trigger automation to fix the problem that the insight identified.
And we're going to see an example of a company that did this as I mentioned before.
OK, lest you think that people are afraid of doing these automated actions:
in another study that EMA did,
they asked, you know, to what degree are you going to allow AI to drive automation without human intervention.
And as you can see, 96% are willing to use automated actions, and those are split between half who will let it go by itself
and the other half who want to have a human at least look at it.
And then you can see only 5% don't allow automated actions,
um, except when they've all been, you know, worked as a batch process, and a human is the one to actually initiate those actions.
So this idea of doing automated actions, driven by insights that come from somewhere, is being accepted.
And, you know, I think if we were to take the survey five years from now, those numbers would have shifted in favor of doing it without human intervention, with just human review.
So, I've mentioned this a couple of times: we have a large oilfield services company
that's one of our customers. And, by the way, they just presented these results at our big user conference; those presentations are online for viewing, so if anybody's interested in listening to that presentation, you actually get to find out who the company is. They go through all the steps that they took to get here.
This was all driven for them by the recession that happened in, I can't remember, 2000 or 2008.
But in any case, because of that recession, their revenue really dropped, and they needed to do a major headcount reduction. And they realized the only way that was going to be viable for the company was to start automating.
And so they went through the process of identifying the most common problems that already had solutions.
So, in other words, the siloed teams had already written the scripts. Then they went and implemented those in a more standardized system, and they just kept chipping away at the problem.
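The pattern they describe, reusing fixes the siloed teams had already scripted, can be sketched as a simple runbook lookup that routes known alert types to their remediation and escalates everything else. The alert types and handlers here are hypothetical stand-ins for real scripts.

```python
# Sketch: route known alert types to existing remediation scripts,
# escalating anything unrecognized. Alert names and handlers are
# hypothetical; real remediations would call out to actual tooling.
def restart_service(alert):
    return f"restarted {alert['service']}"

def clear_temp_files(alert):
    return f"cleared /tmp on {alert['host']}"

RUNBOOK = {
    "service_down": restart_service,
    "disk_full": clear_temp_files,
}

def remediate(alert):
    """Run the known fix if we have one; otherwise escalate to a human."""
    handler = RUNBOOK.get(alert["type"])
    if handler is None:
        return "escalated to on-call"
    return handler(alert)

print(remediate({"type": "disk_full", "host": "app-07"}))
# → cleared /tmp on app-07
```

Growing the `RUNBOOK` one known problem at a time is exactly the "chipping away" the speaker describes; coverage of the common cases compounds into the savings quoted next.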
And here are the results for 2019. So, they've been working on this for four years.
When we first talked to them, their savings in their first 12 months of operation was $1.2 million.
In this latest presentation they gave, they're up to $4 million. And while it's not listed here, you can imagine that if you can fix 97% of the known problems fully automatically (and, by the way, this is one of the companies that does not require human oversight),
that is going to reduce your downtime significantly.
And I'm not sure whether that number is factored into the $4 million or whether that's really just the man-hours.
And they know these numbers with certainty, because one of the things they did when they built all this automation was to build a dashboard that tracked every automation item that ran, and knew what the human cost was to implement that.
OK, so, Jose? It's time for poll number two.
And what I want to find out here is, of all the things that I've talked about, not all of which are technically AIOps, but they're on the journey to AIOps, which of these your organization has implemented.
So, you know, do you have a single-pane-of-glass solution, something that collects events and metrics and consolidates those? Are you doing digital experience monitoring, or what you might know as application performance monitoring, or APM, because that was the old name?
Are you able to correlate and reduce alerts?
And then we get to machine learning, and: have you actually done automation of your IT processes? I'm sure many of you have implemented RPA.
So, ladies and gentlemen, the poll is active right now, so start casting your votes. Again, in this case, you can select all that apply to your context in your organization: which functions are you currently implementing?
I'm going to let you have a little bit more time here to cast your votes.
We have about half of you who have cast votes at this point.
Just five more seconds and we'll close.
Very well very well.
So, here we go.
I'll close it, and let's now share these results.
So, what we have here, Michael, for 'which functions are currently implemented': the biggest category by far is IT process automation, at 67%. The second, at 29%, is machine learning and AI, then digital experience monitoring / APM, monitored and integrated, then alert correlation and reduction at 21%, and single-pane-of-glass event and metric consolidation at 17%. So it looks like a real big focus on IT process automation, at 67%, and then a fairly even distribution across the other categories.
I've got to say, I am surprised at how many people... I'm not surprised by the process automation, because, frankly, process automation started when pretty much the first server admin wrote a script to fix a problem.
What I am surprised about is how many people are in machine learning and AI, given that the AIOps number was so low. But so be it; that's good news. OK, I mentioned I'm going to talk about RPA and AIOps. On the left here,
You can see, this is a typical RPA implementation.
So you have robots at the bottom, and you've got the servers.
So, first off, those robots have to run on servers.
And then you've got tasks that are orchestrating all of these robots. Um, and all of this stuff, of course, runs on an infrastructure.
Uh, the robots themselves will almost guaranteed write log files to indicate how the process that they're running is going.
And some of those are going to be error messages. So there are a few things here.
First off, your AIOps solution should be able to discover all of the new services that you've put in.
It should be able to discover the robots as they start up,
and then automatically start monitoring those.
It should also be monitoring the log files. And there are two benefits to monitoring the log files.
One is, it may be able to correlate if there are any common errors and give you an alert, an insight, that, hey, this error you're seeing is very common,
um, and it might be something you want to look at.
The other thing, and this basically goes back to the centralized single pane of glass, is that the people who write these processes most likely don't work 24/7, or their group doesn't.
And so, if there is something that's critical that happens, it can also provide information to an alerting system that, if required, could wake somebody up to fix something.
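The log-monitoring benefit just described can be sketched as a scan that counts recurring error signatures in robot logs and surfaces the ones that repeat. The log format, regex, and threshold below are assumptions for illustration.

```python
# Sketch: scan robot log lines, count recurring error signatures, and
# surface any error that repeats often enough to matter.
# The log format and the min_count threshold are assumed for illustration.
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+(.*)")

def common_errors(log_lines, min_count=2):
    """Return (message, count) pairs for errors seen min_count+ times."""
    counts = Counter()
    for line in log_lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return [(msg, n) for msg, n in counts.most_common() if n >= min_count]

logs = [
    "2020-05-01 02:14 INFO  robot-3 task started",
    "2020-05-01 02:15 ERROR timeout contacting invoice API",
    "2020-05-01 02:31 ERROR timeout contacting invoice API",
    "2020-05-01 02:40 ERROR login page layout changed",
]
for msg, n in common_errors(logs):
    print(f"ALERT: '{msg}' seen {n} times")
```

An alert like this is what would feed the on-call notification path the speaker mentions, rather than waiting for the RPA team to read their own logs the next morning.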
OK, I mentioned I would give you a real-life example of a process that AIOps fixed.
So here we're going to talk about a drug interaction application.
So, back in the old days, when a doctor would prescribe something for you, he'd look up that drug in a book and see what drugs should not be used in combination.
Of course, quite a while back, that was moved to an app that runs on a phone. I know that's what my doctor does: he pulls out his iPhone and, you know, puts in the various medications I'm taking when he wants to give me a new one. So we had this situation happening at one of our customers.
They had this periodic situation where the app was just slowing down, and the doctors were calling in saying, you know, hey, I can't use the app, it's running so poorly that I've gone back to my book.
So what they did was, they put a digital experience monitor out to test the application.
And rather than it being sort of a random occurrence, they now knew exactly when it occurred.
And by having the AIOps system take that information and look at all of the things that supported that service, or that business process, what they found was that an index in a database was getting stale. And that's what was slowing things down.
Now, again, I said before that finding the problem is the biggest part of fixing the problem.
And here's a perfect example of that.
The fix was so simple.
They simply wrote a batch process that ran in the middle of the night and refreshed the index. Problem solved.
So there's an example of the payoff of getting all the data in and looking at it together.
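The kind of nightly batch fix described above might look like the sketch below. It is shown against SQLite purely so it is runnable; the customer's actual database, schema, and scheduler are not specified in the talk.

```python
# Sketch of a nightly batch job that rebuilds stale database indexes.
# SQLite's REINDEX is used here only to make the example runnable;
# the customer's real database and schedule are not known from the talk.
import sqlite3

def refresh_indexes(db_path):
    """Rebuild every index in the database so lookups stay fast."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("REINDEX")  # rebuild all indexes
        conn.commit()
        return True
    finally:
        conn.close()

# In production this would run from a scheduler (e.g. cron at 02:00);
# here we just exercise it against an in-memory database.
if refresh_indexes(":memory:"):
    print("indexes refreshed")
```

The notable part is how small the fix is once the AIOps correlation has pointed at the right component; the monitoring did the expensive work of finding it.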
So just to wrap up, I'll just talk quickly about some of our capabilities.
So, looking at the bottom, I talked about being able to collect a lot of data.
And you can see all the collectors that we have, not to mention we have third-party integrations, and we support over 200 tools and technologies.
Our COSO, which stands for Collect Once, Store Once, is the data lake that I mentioned.
And then we have a lot of analytics that sit on top of that.
And then finally, we provide many visualizations for both business and IT users.
So this little one here, that is a railroad company. And what that shows is the number of tickets sold and the rate at which they're being sold, and compares them to some basic IT metrics.
So a business person could look at this, and if a drop in ticket sales matches a slowdown in the IT infrastructure, they can go pound on their IT people to get it fixed.
So, finally, a real-life example of how this all works.
Um, TIM Brasil is a telecom provider, and I'm sure you, like me, have noticed that when you're calling in to many companies
now, what you get is an interactive voice response system. They were able to figure out what was causing misdirected calls,
that is, calls pointed to the wrong agent, and reduce that by 40%.
And I know about 15 years ago I saw a study that said every time somebody has to pick up the phone, that's about $25, and I'm sure that number has gone way up. And it's not only that cost; just think about the frustration.
I know I get incredibly frustrated when I get pointed to the wrong place, and then that person has to send me somewhere else, and frequently I have to get back into a queue there.
So, a lot of benefit, again, to being able to look at and evaluate these problems.
So, with that, Jose, I'm going to pass it over to you for questions.
And while that's going on, I'll just mention that these are the various assets that we're going to have as a handout.
So, just send in your questions.
Terrific. Terrific presentation, Michael. Thank you for sharing that with us. I want to start by asking you, based on some of the poll survey results that we saw here today, about the group that's getting started on AIOps. They have not really gone deep into it; they're getting started. And you have presented so many great use cases and have so much experience implementing AIOps. What is the best way of starting if you're new and you have some basic infrastructure in place, but you have not really leveraged what you have shown so far? What are the best practices to get started?
OK, so, um, as I mentioned in step one, that's probably the easiest way to get started, which is to integrate the tools that you have.
So, some people will come in with basically the be-all-end-all solution and tell you: yeah, just get rid of everything you have.
That's just not practical for any organization I have ever met.
So you need to take the attitude that you're going to integrate the tools that you already have and bring those together.
And as we showed, that brings huge benefits.
The next thing is to start a pilot project with an AI/machine learning solution, and get a very well-defined use case. You want to work on one use case. And, as we talked about in the organizational changes, you want to get an executive to buy into this.
Because as you move from your use case to broadening this out, you're going to be hitting more and more organizations, and that's going to require an executive sponsor. So those are the two places I would start.
That's great, that's great. Great insights, Michael! I'm going to ask you, Michael, to stop sharing your presentation, if you can, so that we can see you on this screen.
And while Michael is doing that, I'm still monitoring the questions coming in on the panel here. So please feel free to continue to submit your questions, and I'll relay them to Michael. The next one here is about definitions and ideas of AIOps, Michael: are these definitions and ideas of AIOps converging? Do you see them converging right now?
So, basically, when this started out, as I showed you in the evolution model,
we didn't even really call it AIOps.
And various analysts have, you know, provided their own views on this.
And I think there are a couple of things that have happened.
One is, the definition of AIOps keeps growing; they're not subtracting anything.
But it's as if the analysts talk to somebody, and somebody says, well, what about this? And they go, oh, that's a good idea, I'll add that.
And so what we've seen is more and more data types being brought in. A perfect example, and I didn't spend much time on this, is the topology of your IT environment. That originally wasn't taken into account. What they've come to realize is that having the topology information makes it much easier to solve the problem, and that's something the machine learning algorithms can use to speed up the process. The other one, which I did mention, is automation.
So in the early days of analysts writing about apps, there was no mention of automation. And I don't know whether they're each reading each other's papers and going, Oh, yeah, that's a good idea. I'll include that.
But yes, there, there is now a pretty agreed upon definition of what AI ops is. And it boils down to, you've got to do all sorts of infrastructures, public, private, cloud, as well as on prem. You've gotta take in all the data types that are available.
And you've got to be able to automate the fixes, once you've got the insights.
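That three-part definition (cover every infrastructure, accept every data type, automate the fixes once you have an insight) could be sketched in code, purely as an illustration. All class, method, and source names below are hypothetical, not from any actual AIOps product:

```python
# Illustrative sketch of the three-part AIOps definition:
# (1) ingest events from any infrastructure, (2) accept multiple
# data types (alerts, metrics, topology), (3) run an automated
# fix for each insight. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Event:
    source: str   # e.g. "aws", "private-cloud", "on-prem"
    kind: str     # "alert", "metric", or "topology"
    payload: dict


@dataclass
class AIOpsPipeline:
    events: list = field(default_factory=list)
    remediations: dict = field(default_factory=dict)  # source -> fix callable

    def ingest(self, event: Event) -> None:
        """Accept events from any infrastructure and any data type."""
        self.events.append(event)

    def analyze(self) -> list:
        """Toy 'insight': list the sources that raised any alert."""
        return sorted({e.source for e in self.events if e.kind == "alert"})

    def remediate(self) -> list:
        """Run the registered automated fix for each insight."""
        return [self.remediations[s]() for s in self.analyze()
                if s in self.remediations]


pipe = AIOpsPipeline()
pipe.ingest(Event("aws", "metric", {"cpu": 93}))
pipe.ingest(Event("on-prem", "alert", {"msg": "disk full"}))
pipe.remediations["on-prem"] = lambda: "cleaned temp files on-prem"
print(pipe.remediate())  # ['cleaned temp files on-prem']
```

The point of the sketch is only the shape: ingestion is deliberately source- and type-agnostic, and remediation is wired to insights so the fix fires without a human in the loop.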
Very good, very good. We have time for one quick final question here. And this one has to do with a business that's not a large business and does not have a very large IT workgroup. The question is related to staffing: do they need to have a data scientist on staff to get value out of AI, and out of AIOps?
So this is one of the big differences between AIOps and the use of AI in general in a company.
When you look at AI used for, say, sales modeling or financial modeling, you have pretty much no place to start. So you have no choice but to get a data scientist to look at the data and try to figure out, OK, what kind of model are we going to build?
When you're looking at AIOps, the space that you're trying to analyze is pretty well defined.
You're gonna get alerts. You're gonna get metrics.
You're gonna get topology data, and what you're looking for are, basically, performance problems.
Because if something goes down hard, you usually know about it pretty quickly from traditional monitoring. And all of that's been built into these systems. Some systems require a little bit of training.
Other systems, like the one that we provide, are configuration free.
You just install it.
Let it run for about a week, and it will start being able to figure out what's going on in your environment and start providing insights. But basically, it's a data scientist in a box.
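The "let it run for about a week" idea Michael describes is baseline learning: summarize a week of metric samples, then flag values that fall well outside that learned normal. A generic sketch of the idea follows. This is not Micro Focus's actual algorithm; the three-sigma threshold and the function names are illustrative assumptions:

```python
# Hedged sketch of week-long baseline learning for one metric:
# learn a mean and standard deviation from historical samples,
# then flag new samples more than 3 sigma from the mean.
# Generic illustration only, not any vendor's real algorithm.
from statistics import mean, stdev


def learn_baseline(samples: list) -> tuple:
    """Summarize a week of metric samples as (mean, stdev)."""
    return mean(samples), stdev(samples)


def is_anomaly(value: float, baseline: tuple, n_sigma: float = 3.0) -> bool:
    """Flag values outside mean +/- n_sigma * stdev."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigma * sigma


# A week of response-time samples (ms), hovering around 100.
week = [98, 101, 99, 102, 100, 97, 103, 100, 99, 101]
baseline = learn_baseline(week)

print(is_anomaly(100, baseline))  # False: within the learned normal
print(is_anomaly(150, baseline))  # True: a performance problem
```

The "data scientist in a box" claim amounts to this loop running unsupervised across every metric the system collects, so nobody has to hand-design a model per metric.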
That's impressive. Michael, thank you so much for sharing your thought leadership and your practical experiences with us today. It's been really instructive and insightful to see what you have shown us today, so thank you very much. Well, let me just say thank you so much for inviting me, and I hope this was useful to everyone.
All right, ladies and gentlemen, we will close the session. There will be a short survey that pops up when you close the session, so we appreciate your feedback. And you do not want to miss our very last session of the day: we're going to have Vic, who is the head of quality capabilities in consulting at Nokia, talking to us about how to identify and resolve points of failure in business processes. Vic is an expert, a seasoned professional and leader in this area, and we're really looking forward to his presentation. So we'll see you back at the top of the hour. Thank you.
Product Marketing Manager, Operations Bridge, Micro Focus
Michael has over 30 years’ experience in IT infrastructure marketing in high-tech organizations. He has launched ten application, system, and network monitoring products.
Michael has held positions as a solution architect, technical marketing manager, and product manager before his current role as product marketing manager. He has a BS in Electrical Engineering and Computer Science. You can find him on Twitter @Michael_IT2.