By BTOES Insights Official, October 22, 2021

Process Mining Live - SPEAKER SPOTLIGHT: From Process Mining to Process Intelligence: How to get more from your process data.

Courtesy of ABBYY's Richard Rabin and TriNet's Carolyn Dobie, below is a transcript of their speaking session on 'From Process Mining to Process Intelligence: How to get more from your process data' to Build a Thriving Enterprise that took place at the Process Mining Live Virtual Conference.


Session Information:

From Process Mining to Process Intelligence: How to get more from your process data

In today’s changing business environment, continuous process improvement is a must. To remain profitable and competitive, there’s no room for inefficiencies. That’s why so many leaders are turning to process mining. It lets you make real-time process improvement decisions based on facts — not emotions. 

In this session, we’ll show you how to move from diagnostics to continuous process improvement. Hear from Carolyn Dobie, Business Operation and Automation Manager at TriNet and Richard Rabin, Process Intelligence Lead at ABBYY as they discuss:

  • An established method to get the most out of process mining
  • First-hand lessons learned as TriNet embarked on this journey
  • The challenges and opportunities that are often overlooked in process mining plans

Session Transcript:

Introducing Richard Rabin for you: he has worked with process intelligence for over 10 years. He is the Product Marketing Manager for ABBYY Timeline Process Intelligence. He will speak today on both typical blind spots in process intelligence and often overlooked opportunities presented by process intelligence. Now, during Richard's presentation, we're going to have a featured appearance by Carolyn Dobie, who is the Business Optimization and Automation Manager for TriNet. Carolyn will talk about her experience with bringing process intelligence into the organization, their methodology, and lessons learned from that experience. Richard, it's a real gift to have you with us. Thank you so much for sharing your expertise with our global audience today.

Thank you very much.

What we're going to cover today, as was just mentioned: we're going to start by talking about some of the typical blind spots in process mining and what you can do to eliminate them. Carolyn will talk about a methodology for getting started, and I'll come back and talk about some of the often overlooked opportunities that process intelligence provides. So, first, when we talk about process intelligence, we're really talking about process mining plus some additional capabilities. Frequently, we are talking about process mining combined with task mining. And when we talk about the analyses offered by process mining, process intelligence offers some additional numerical analysis types that don't rely on a visualization of the process flow, so it can handle more complex processes than process mining is typically good for.

We'll also talk about opportunities, including the fact that with process intelligence you have the ability to do real-time monitoring of your process, so that you can do both operational and compliance monitoring. There is also an opportunity to do predictive analytics. So, let's get into a little bit more detail on all of these.

So, of course, when you start with process mining, the general idea is that you want to pull all the data artifacts from all of your relevant systems of record into one area, so that you can then recreate what the process flow looks like.

It's more than just the equivalent of pulling all the data into a data warehouse, because the process mining or process intelligence tool will then format all that information in a way that lets you both visualize the flow of the process and define process behaviors, and be able to analyze whether a given process instance follows those behaviors. So the combination of the visualization and the numerical analyses that are available is what gives you total process visibility on top of having pulled all of your information into a process intelligence engine.
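As an illustration of what such an engine does under the hood, here is a minimal sketch, not ABBYY Timeline's actual implementation, of reconstructing a directly-follows view of a process from a merged event log. The column names and data are invented for the example:

```python
from collections import Counter
import pandas as pd

# Toy event log merged from several systems of record.
events = pd.DataFrame({
    "case_id":   ["A", "A", "A", "B", "B", "B"],
    "activity":  ["Received", "Reviewed", "Closed",
                  "Received", "Reviewed", "Closed"],
    "timestamp": pd.to_datetime(["2021-01-01", "2021-01-03", "2021-01-07",
                                 "2021-01-02", "2021-01-04", "2021-01-09"]),
})

# Count each directly-follows pair: these are the edges of the flow diagram.
edges = Counter()
for _, case in events.sort_values("timestamp").groupby("case_id"):
    steps = case["activity"].tolist()
    edges.update(zip(steps, steps[1:]))

for (src, dst), count in edges.most_common():
    print(f"{src} -> {dst}: {count} case(s)")
```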

Now, when we talk about blind spots, the first blind spot I would refer to is that, in many cases, not all the information you need is in one of your systems of record.

It may be a document focused process.

And some of the information may be in your documents. So if you have unstructured data in a document, the first thing you should plan for is this: if any of that data is important to you, for dimensional analysis or other reasons, you need to make sure that you've got access to the unstructured data as well as what you would typically get out of the systems of record.

The next area we'll look at is that, when you're looking at the results in process mining, you might find that there are areas where there appear to be gaps, where there's nothing going on for a fairly long period of time.

As was mentioned, I've been doing this for over a decade now. And in the early portion of that, people would talk to me and ask: well, what do I do about the information that's not recorded in any of these systems?

And at that point, the answer was, well, if it's not recorded in any of these systems, there's nothing we can do about that.

Well, that's not the case anymore. Now, with task mining combined with process mining, we're able to record the user's desktop activities and use these desktop activity recordings, these log files, to understand what's going on during that period of time. Now, of course, there are many things that need to be dealt with. There are privacy concerns, so we need to be sure that we're only recording the appropriate applications that we care about.

We need to make sure that we have a way of obfuscating or hiding the confidential data that might be recorded. And beyond that, of course, the real secret sauce is that the recording is a record of the clicks and the typing actions, but what you really need is a way to go from that low-level recording up to higher-level task definitions. There are algorithmic and machine-learning tools in task mining that help with that. What you end up with is both the process mining information coming from systems of record and log files, and the task mining information recorded off of user desktops, so that you get a more comprehensive view of the overall process than was previously available.
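To make the obfuscation step concrete, here is a hedged sketch of masking confidential values in a recorded desktop event before upload. The event structure, field names, and patterns are hypothetical, not how ABBYY's task mining recorder actually works:

```python
# Illustrative sketch: redact sensitive values from a recorded desktop
# event before it leaves the machine. Patterns and fields are hypothetical.
import re

SENSITIVE = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),      # US SSN pattern
    (re.compile(r"\b\d{13,16}\b"), "<CARD>"),             # card-like numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),  # e-mail addresses
]

def obfuscate(recorded_text: str) -> str:
    for pattern, placeholder in SENSITIVE:
        recorded_text = pattern.sub(placeholder, recorded_text)
    return recorded_text

event = {"app": "CRM", "action": "typed",
         "text": "jane.doe@example.com card 4111111111111111"}
event["text"] = obfuscate(event["text"])
print(event)  # text becomes '<EMAIL> card <CARD>'
```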

The next thing to mention, as far as the different types of analysis that are offered: when you're looking at process mining, it typically focuses very heavily on the schema, the idea that if I take all of the events for my process flow, I can put them together and create a single visualization that shows the key steps of the flow in my process.

And these can be very informative. They can be augmented with timings and counts and other information; you can not only annotate them, but also show the flow, as it's moving from one system to another, with animation. But the limitation of process mining here is that when you have a process that's fairly straightforward, with a low number of different options for how the process can flow, you end up with a diagram like this, and it's very useful. But as you get a more complex process, you end up with a larger diagram that can become unwieldy, so you start using filtering tools to remove parts of it. And, of course, the limitation there is that as you start removing some of the less-used cases, you might be losing information that's required. So we start looking for what else we can do to eliminate that blind spot, where you don't have the ability to visualize complex processes.

Well, one option is to use not just a schema but what we refer to as a path diagram, where each column represents one of the possible flows through your process.

Typically on the left-hand side, you might see the most common process flow, and as you move to the right, you see less common process flows. You can also annotate this type of diagram. You can't see the details here.

But you'll note there are various numbers above each of these flows.

Well, in this case, that might be used to show: what is the number of times we followed this path versus another path? Or what is the average amount of time our process takes when we follow one path versus another? Or you might want to include other metrics and evaluate: when I do it this way versus that way, what are the results in those various metrics? But you still have the problem that if I'm talking about an ad hoc, case-management type process, where there's a lot of human decision-making at each step, you may have virtually every process instance flowing a little bit differently from every other one, and you have no easy way of putting them together in one schema diagram or a small number of different paths. So then we start to look at other forms of numerical analysis, where you focus on ways of looking at the data that don't rely on going back to this picture of the flow. In this case, for instance, we might want to look at a bottleneck:

scan through all of the different steps and show me those that are taking the longest. Or you might want to pick a particular transition from one step to another and see a histogram showing that.

Most frequently, this happens very, very quickly, but on rare occasions it can take a lot longer. Or you might want to use dimensional analysis: take this same information showing how often this transition takes various amounts of time, and break it down by a dimension such as operator. Clearly, you can't see the details of these charts, but you should be able to see that the results are very different from one operator to another. So, in other words, you focus in on the part of the process you want to explore, you do dimensional analysis to break it down any way you want, and you start to get toward a root cause. You can see, in this case, that different operators might account for differences in behavior. And there are many other ways that you can analyze aspects of your process that don't rely on any kind of schema or diagram. You might define different process behaviors, which we'll talk about in a moment, and use them in different ways. Or you might have tools that allow you, once you've refined your view and you're looking at the cases you care about, to look at a given case in detail and look at the sub-processes happening within that case,

and how the different attribute values change as you go through the process. These are other ways of analyzing your process data that work equally well regardless of how complex the process is.
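Here is a minimal sketch of those two numerical analyses, a bottleneck ranking and a breakdown by operator, computed directly from an event table without any schema diagram. Column names and data are assumptions for the illustration:

```python
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["A", "A", "B", "B", "C", "C"],
    "activity": ["Submit", "Approve"] * 3,
    "operator": ["Kim", "Kim", "Lee", "Lee", "Kim", "Kim"],
    "timestamp": pd.to_datetime([
        "2021-05-01", "2021-05-02",    # case A: 1 day
        "2021-05-01", "2021-05-09",    # case B: 8 days
        "2021-05-03", "2021-05-04"]),  # case C: 1 day
})

# Duration from each event to the next event in the same case.
log = log.sort_values(["case_id", "timestamp"])
log["next_ts"] = log.groupby("case_id")["timestamp"].shift(-1)
log["step_duration"] = log["next_ts"] - log["timestamp"]

# Bottlenecks: which steps take longest, on average, to reach the next step?
print(log.dropna().groupby("activity")["step_duration"].mean())

# Dimensional analysis: the same duration broken down by operator.
print(log.dropna().groupby("operator")["step_duration"].mean())
```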

And, of course, you'll want to create custom dashboards, where you can take the metrics that are important to you and lay those out, so you've got an interface designed for your users to show them what they need. There are many different forms of analysis that can be made available in this environment.

But you do away with the concern of having to filter out data in order to generate useful information. Regardless of how complex your process flow is, with process intelligence you still have the ability to get the level of information you need.

So next, we're going to have Carolyn, and she's going to talk about some of the methodology and lessons learned within her organization.

So, let me bring this up for you.

Thanks, everyone, for joining us today.

I'm Carolyn Dobie. I'm a Process Optimization and Automation Manager at TriNet.

This presentation is for educational purposes only.

TriNet provides clients with legally compliant HR guidance and best practices. TriNet does not provide legal, tax, or accounting advice. Next slide.

TriNet is a professional employer organization that provides small and medium-sized businesses with full-service HR solutions tailored by industry. To free SMBs from HR complexities, TriNet offers access to human capital expertise; benefits; risk mitigation and compliance; payroll; and real-time technology. From Main Street to Wall Street, TriNet empowers SMBs to focus on what matters most: growing their business.

So let's talk about challenges with business transformation.

Looking back on 2020, the business world is changing faster than ever. Technological innovation is growing at an explosive rate.

So, we really are seeing tremendous explosion in technology.

But this transformation isn't simple.

Think about this.

What would it take to understand a process that has hundreds of ..., 500 or more colleagues?

Almost 300,000 work orders, 15 departments, five versions of the service model, seven different configurations of the product, and eight locations around the world with multiple time zones.

What would it take to deeply understand that process?

The traditional approach to process improvement may not scale fast enough these days.

We all need to be sure that we're focusing on the right opportunities that are aligned with customer needs, and that we can implement change and validate results.

Traditional approaches to process improvement can be time-consuming, expensive, and require specialist skills.

Prioritization often relies on opinion instead of fact-based measurements. In many cases, impact is not measured.

In order to successfully transform, it is best practice to use data to identify, prioritize, improve, and validate business transformation.

Technology like process mining may be used to accelerate business transformation.

Next slide, please.

So, next, we'll take a look at some of the features you may want to consider when looking at process mining tools.

Here are some features that may influence your decision.

As you're looking at tools, you may need the ability to create process maps automatically, gather process data across silos from multiple systems, and associate the data with the map.

Having done this myself manually, I know this is a common feature of process mining, but this is just amazing.

So they have the ability to automatically create the map, get data from multiple business units, and multiple systems, and bring it together into one process map with data.

You have to ask if you need that.

Do you need the ability to measure discrete steps of the process, as well as measuring the process end to end?

Do you need the ability to compare performance by dimension?

I may want to compare by product: how does this product compare to that one? Maybe I want to look at it by market, or location, or team, or time period: before I made a change versus after I've made a change.

You may want to measure process duration and cost.

You may want to filter on those as well.

So you may want to narrow in on a particular market, or narrow in on a particular duration.

Can I look at the process and say: show me things that take more than three days, or show me things that cost more than this dollar amount?

Can I include or exclude specific events?

You may want to consider if the tool provides easy identification of process bottlenecks.

Does it make process variation visible?

Can you easily see the detail related to process instances?

So, for example, if I look at a process and I want to look at what's happening when the process is taking three or more days, can I actually go into the individual transactions and see the detail? Can I see the dimensions? Can I see the duration of the process step by step? These are things to evaluate.

You may need the ability to measure compliance with standard process and generate alerts.

You may want to consider the ability to extract and load data manually or through automation.

Regardless, consider looking for a solution that provides the right tools to answer the questions you need to answer.
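To make those filter questions concrete, here is a minimal sketch of the same ideas applied to a per-case summary table. The data and column names are invented for the illustration, not a feature of any particular tool:

```python
import pandas as pd

cases = pd.DataFrame({
    "case_id":       [1, 2, 3, 4],
    "duration_days": [1.5, 4.0, 7.2, 2.0],
    "cost_usd":      [120, 980, 310, 45],
    "market":        ["EU", "US", "US", "EU"],
    "events":        [["Open", "Close"], ["Open", "Escalate", "Close"],
                      ["Open", "Escalate", "Close"], ["Open", "Close"]],
})

# "Show me things that take more than three days..."
slow = cases[cases["duration_days"] > 3]

# "...or that cost more than this dollar amount", narrowed to one market.
expensive_us = cases[(cases["cost_usd"] > 500) & (cases["market"] == "US")]

# "Can I include or exclude specific events?" Exclude escalated cases.
no_escalation = cases[~cases["events"].apply(lambda ev: "Escalate" in ev)]

print(slow["case_id"].tolist(),          # [2, 3]
      expensive_us["case_id"].tolist(),  # [2]
      no_escalation["case_id"].tolist()) # [1, 4]
```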

Now, let's talk about best practices for process mining.

Best practice in process improvement begins with diagnosis, and then moves to implementation.

We don't move to implementation until we fully diagnose the problem, and we have a full understanding of current state.

As we move forward, you can see the alignment between the DMAIC process improvement model and the two stages, diagnosis and implementation.

Next slide, please.

So, in the diagnosis stage, there are three phases: define, measure, and analyze.

And these phases are supported by the questions related to what, where, and why?

What problem are you trying to solve?

What is happening? What is your target condition?

Where are we performing above and below expectations? And why is the problem happening, and why does performance vary?

Next, please.

So now we're going to take a look at how process mining can be leveraged to support that model, our approach to process improvement. Let's dig into the diagnosis.

In any improvement effort, best practice is, first, to define the problem you're trying to solve and the target condition you need to achieve.

The target condition is just an explicit expression of process requirements. How does the process need to perform?

Now this is independent of SLAs.

Or what people, what leaders, may be willing to commit to. It's really thinking outside the box and challenging ourselves: how does the process need to perform?

The interval tool supports this, as it allows for a standard definition of your measurement, which can help you gain alignment with your stakeholders.

So when we know what problem we're trying to solve, we'll go to the next slide, please.

And we know our target condition, then we can focus on what is actually happening. Best practice is to dig deeper to understand current state.

On the left, you can see an example of the path tool, which makes process variation visible.

We can see different variants of the process, side by side.

In the middle, you can see a picture of the schema tool, And this is an example of a process map with data overlaid on the process map.

And on the right-hand side, you see the interval tool.

The interval tool allows for end to end measurement, or you can measure discrete steps in the process.

So now we understand what is happening, what our problem statement is, what our target condition is, and where performance varies. Best practice now is to perform root-cause analysis.

This is where we seek to understand why performance varies and why the target condition isn't being met.

The timeline tool, shown on the left, allows you to look at each process instance.

So, if you look, I know it's kind of small, but there are icons with little dots, and each line represents a process instance.

If you click on one, the picture on the right pops up, and you can see that instance of the process, you can see the values of the dimensions.

You can see the duration between steps, and you can even sort the timelines that you've filtered down to by performance.

So you can put the longest ones on top and the shortest ones on the bottom, and go through them one by one and see what you see.

The Timeline tool also allows you to filter on such things as: show me where a particular event happens more than a certain number of times.

So if this event were something I was particularly interested in, I could go in and say, hey, show me where it happens between one and three times.

So, following the DMAIC model:

Now that we deeply understand what's actually happening, where performance is varying, and why the problem is happening, then we can move to the implement stage.

Next, please.

In the implement stage, we have two phases: improve and control.

Improve is about taking that root-cause analysis and creating hypotheses as to what will fix the problem, choosing one or two, running a pilot, and then measuring to validate results.

Once you have your measurements, you can determine: did I hit my target condition? So let's say my process, when I started out, was taking 10 days.

And where I need it to be is three days.

So I have my ideas around what's going to fix it. I run a pilot with one or two, and I measure; maybe that change gets me down to eight days.

So I'm not quite there yet, so I want to iterate and run another experiment.

And I keep iterating until I hit my target condition.

So the control phase comes into play in two ways.

When you find something that works, you can implement and standardize the change that worked.

And you can set up your inspection and monitoring.

I think if you reflect on your personal life, you can realize how hard it is for change to stick.

So let's say you wanted to improve your fitness level, or you wanted to eat more vegetables; it's not something that just sticks because someone says to do it.

I'm sure, over the years, like me, you've had e-mails come through that say: OK, from now on, do this.

There are some people who will get that e-mail, and they will do it from now on, and they will do it until you tell them to stop.

From my experience, there are people who will do it until they notice that you're not paying attention, and then they'll stop. And, also from my experience, there are people who will never do it.

So if you've gone through this work to determine what actually improves your process, you need to make sure that those changes are going to stick.

So let's move on and see how process mining supports us.

So, best practices include creating your hypothesis, which includes the measurement plan.

And this is where something like the interval tool is helpful, because you can get in and be very specific about your measurement: I want to go from the first instance of this event to the last instance of that one. That's my definition.

And because I can do that, when I run my pilot, I can look at the results and say, I think I'm good.

And that allows me to estimate the improvement.
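As a sketch of that interval definition, measuring from the first instance of one event to the last instance of another for each case, here is one way it could be expressed over an event table. Everything here, names and data alike, is illustrative:

```python
import pandas as pd

events = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B"],
    "activity": ["Request", "Review", "Review", "Request", "Review"],
    "timestamp": pd.to_datetime(["2021-04-01", "2021-04-03", "2021-04-08",
                                 "2021-04-02", "2021-04-04"]),
})

def interval(case, start="Request", end="Review"):
    # First instance of the start event to last instance of the end event.
    first_start = case.loc[case["activity"] == start, "timestamp"].min()
    last_end = case.loc[case["activity"] == end, "timestamp"].max()
    return last_end - first_start

# A consistent, repeatable measurement that can be re-run after each pilot.
print(events.groupby("case_id").apply(interval))  # A: 7 days, B: 2 days
```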

So we've implemented a change.

Then you want to measure to validate results.

The dashboard tool allows metrics to be saved, and automatically re-used as changes are implemented.

The DMAIC methodology is an iterative approach to process improvement.

Once changes are implemented, the process is repeated until the target condition, or, said another way, the business requirements, are met.

In the control phase, best practices include slowly rolling out the changes that worked on a broader scale, and monitoring to ensure the changes stick and performance is maintained.

To recap: the diagnosis stage contains three phases, define, measure, and analyze, supported by the questions related to what, where, and why. What problem are you trying to solve?

What is happening?

What is your target condition? Where are we performing above and below expectations? Why is the problem happening, and why does it vary?

The implementation stage contains two phases: improve and control, supported by the actions.

So, as we talked about earlier, business transformation is not simple.

If you think about the real world, it's not simple.

However, following best practices for process mining can accelerate your process optimization with stronger cross-functional alignment.

Best practices to achieve this include a well-structured, fact-based problem statement, which allows stakeholders to align on the problem or problems that need to be solved.

Prioritization based on data, instead of opinions.

Deeply understanding current state before moving to implementation, and using data to validate improvements.

And, lastly, I want to talk about some use cases for you to consider where you may be able to have improved business outcomes.

Consider providing data from process mining to support your business case, to prioritize process and technology changes.

Consider validating the performance of software automation.

And consider ways to use process mining in processes that you might not think about at first.

What vital behaviors can you discover about some of your processes, such as a sales cycle?

So, I would like to thank you for attending this presentation. That concludes my portion of the slides.

Thank you, Carolyn.

So, the next thing I would like to cover is some of the frequently missed opportunities that go along with process intelligence. Keep in mind that we've talked about the fact that we brought all of our data from all of our different systems of record into a single engine that can then build a model, sometimes referred to as a digital twin, of what's been going on across all of those systems. Now, the benefit of having this model is that in one place we can see all the information about our process. And because we're getting this information continuously updated from the various systems of record, we have what's essentially a real-time view of what's going on within our environment.

And that allows us to specify automated monitoring rules. You might be looking for operational conditions.

You might be looking for compliance monitoring. But if you can describe a process behavior, we can then use that behavior to monitor 100% of the cases that are coming through our system. And this is important, because what we have found from our clients is that it typically takes a fairly large staff to monitor even a small percentage of their cases manually.

But in this way, we can monitor 100% of cases and only pass specific instances off to individuals when they match some criteria we're looking for, as far as what behaviors we're interested in. So we define the behaviors of interest, we monitor the data as it's coming in, and we can send out notifications as soon as we see that a particular behavior is present.

This could be in the form of an e-mail or a text message to an individual. But it's also worth noting that when we act on these behaviors, not only can we notify an individual, we can also invoke a service: a web service, or a digital worker, a bot.
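A minimal sketch of that monitoring loop, checking every case against a defined behavior and escalating only the matches, might look like the following; the rule, names, and notifier are hypothetical stand-ins:

```python
from datetime import datetime, timedelta

def violates_sla(case_events, limit=timedelta(days=3)):
    """Behavior rule: case opened but not closed within the limit."""
    opened = next(ts for act, ts in case_events if act == "Opened")
    closed = [ts for act, ts in case_events if act == "Closed"]
    return not closed or closed[-1] - opened > limit

def notify(case_id):
    # Stand-in for the e-mail / SMS / web-service / bot invocation above.
    print(f"ALERT: case {case_id} matched the behavior")

incoming = {
    "C-101": [("Opened", datetime(2021, 6, 1)), ("Closed", datetime(2021, 6, 2))],
    "C-102": [("Opened", datetime(2021, 6, 1)), ("Closed", datetime(2021, 6, 9))],
}
for case_id, case_events in incoming.items():  # monitor 100% of cases...
    if violates_sla(case_events):              # ...escalate only the matches
        notify(case_id)                        # -> ALERT: case C-102 ...
```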

You can automatically call a service to handle these situations as they arise. Or you can take it a step further.

Start out with the fact that we have this idea of a digital twin of our operations, with continuous updating.

If you add to that the idea of a neural network that's been trained to recognize the behaviors within your process, you can now do predictive analytics over this same information. Now, what we have found is that this most frequently becomes useful when you have multiple possible outcomes from a given process. As an example, some of the work we've done has been with a hospital, where we've looked at some of the data from the emergency department. Now, in an emergency department, a patient might be released to go home, or they might be released with the assumption that they are then going to be admitted to a room in the hospital.

Well, if you've got those two outcomes, then, using predictive analytics sitting on top of a process intelligence framework, you're able to take an outcome of interest, such as admitted to the hospital, and associate a probability with it, let's say 80%.

What happens then is that every time we update the process intelligence model with every new piece of information that comes in, we re-evaluate the probability of that particular outcome for every case as it develops. As soon as something happens such that the system believes there's an 80% chance of admission, we can immediately notify someone or call a service to begin preparing for the likely outcome that someone is going to be admitted. So what this means is that you can monitor a situation as it's developing and act before something happens, so that you can take what might have been a problem and deal with it before it becomes a problem. That's the opportunity for predictive analytics layered on top of process intelligence.
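The pattern Richard describes could be sketched like this, with a toy scoring function standing in for the trained model; the threshold, event names, and probabilities are all invented for illustration:

```python
ADMIT_THRESHOLD = 0.80

def admission_probability(case_events):
    # Toy stand-in for a trained model: score rises with certain events.
    score = {"Triage": 0.30, "Lab ordered": 0.55, "Abnormal result": 0.85}
    return max((score.get(e, 0.0) for e in case_events), default=0.0)

case_events = []
for event in ["Triage", "Lab ordered", "Abnormal result"]:
    case_events.append(event)                # new information arrives
    p = admission_probability(case_events)   # probability re-evaluated
    print(f"after {event!r}: p(admit) = {p:.2f}")
    if p >= ADMIT_THRESHOLD:                 # act before the outcome occurs
        print("-> notify bed management: admission likely")
        break
```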

The next thing I want to focus on, an area that I find fairly interesting, is the idea of what we can do with process behaviors. Now, to be clear, there are various dimensions associated with each instance of a process: what department is this happening in, where is a given step happening, for what customer, in what month?

Now, it's important to notice that, in addition to these, what I'll refer to as static dimensions, you also have the idea of process behaviors that you can define for a given process instance. Behaviors might include things such as: what events are included, skipped, or repeated? What is the sequence of events, or the timing between events? And, of course, along with all of these, you also have the idea of being able to look at the attribute values attached to these events. So, what does this really mean when we talk about behavior? Well, one way of looking at it: a behavior might be a situation where, for instance, a citation is issued, and the problem is residential, and the department is community life,

but there was no issue-found event before that citation was issued.

And, ideally, you should be able to define these behaviors graphically, making it easy for any of the system users to define any behavior of interest to them.

And, of course, behaviors can get more complex: there is a case opened with no issue reported, and after the case is opened, an issue is found, and then work is completed; but it took more than 10 days for the work to be completed after the issue was found, and that work-completed event occurred more than once, at least twice.
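In the product this kind of behavior would be defined graphically; purely as an illustration, the more complex behavior just described could be encoded as a predicate over a case's ordered events, something like the following (the event names and exact rule encoding are assumptions):

```python
from datetime import datetime, timedelta

def matches_behavior(case_events):
    acts = [a for a, _ in case_events]
    if "Case opened" not in acts:
        return False
    opened_i = acts.index("Case opened")
    if "Issue found" in acts[:opened_i]:   # no issue before the case opened
        return False
    after = case_events[opened_i:]
    issues = [ts for a, ts in after if a == "Issue found"]
    completions = [ts for a, ts in after if a == "Work completed"]
    return (bool(issues) and len(completions) >= 2           # at least twice
            and completions[-1] - issues[0] > timedelta(days=10))

case = [("Case opened",    datetime(2021, 3, 1)),
        ("Issue found",    datetime(2021, 3, 2)),
        ("Work completed", datetime(2021, 3, 5)),
        ("Work completed", datetime(2021, 3, 20))]
print(matches_behavior(case))  # True: completed twice, >10 days after issue
```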

So, the point is that you can define very specific behaviors or protocols based on this capability, and you can define multiple different behaviors and use them together to get more complex behaviors. So, now that we've got the ability to define behaviors, how are these behaviors used?

Well, there's the obvious filtering. In all of the analytic tools we talked about before, you can turn on a behavior filter, and that gives me the ability to look at the outcomes from my process for only the subset that follows a particular behavior, or for those that don't follow that behavior. So it starts to get to the idea of being able to provide actionable insights, when you can see how the way you're doing something relates to the outcomes.

We've already talked about alerting, possibly by e-mail or SMS. Well, this is what we might be alerting on: when you spot a given behavior, I want to alert someone, or I want to take an automated action. Whenever this behavior occurs, I want to act on it.

And an aspect of this that I think is worth considering is the idea of using these behaviors as an analytic dimension.

So, for instance, I want to be able to see how the results, the outcomes, vary based on how a process is done. If I look at the outcomes broken down by month, for instance, well, it's great for reporting to be able to see the different outcomes each month, but does it help with making better decisions? When you look at the results based on how a process was done, how the outcome depends on the way the process was done, it's my contention that that can provide actionable insights you can't get in any other way.

So what might that look like? Well, in the emergency department example, we might start by looking at all of the process instances that follow my standard protocols: they do the right steps in the right sequence, and they do them with the appropriate timing. You can then look at a variety of metrics and say, whenever I'm looking at those that follow the standard protocol:

I see the average number of events per case. I see the average time per case.

But, interestingly, you can also see things like: what is the average patient outcome whenever we follow that protocol? Now, to be clear, the outcome wasn't known at the time the process was executed. But any data that has the process ID as one of its fields (the patient encounter ID here, or whatever process ID you're using for what you care about, which might be an insurance claim ID) can be linked back to the process instance and analyzed in this way. So, in this case, we can see, when you follow the standard protocols, what the average patient outcome is, based on some numerical score that's been set up.

But when we define a different behavior, an expedited protocol, that might have different timing constraints or different actions you should be taking.

Yes, we see that there were more steps and it took longer. But we can also see that the overall patient outcome was significantly better when you followed the expedited protocol.
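A minimal sketch of using the behavior as an analytic dimension: tag each case with the protocol it followed, then compare event counts, durations, and outcome scores across the groups. The data here is invented purely to mirror the pattern described above:

```python
import pandas as pd

cases = pd.DataFrame({
    "encounter_id":    [1, 2, 3, 4, 5, 6],
    "protocol":        ["standard"] * 3 + ["expedited"] * 3,
    "events_per_case": [6, 7, 6, 9, 10, 9],
    "hours_per_case":  [3.0, 3.5, 3.2, 4.4, 4.8, 4.6],
    "outcome_score":   [0.71, 0.65, 0.74, 0.92, 0.88, 0.90],
})

# More steps and more time under the expedited protocol, but a clearly
# better average outcome: the kind of actionable contrast described above.
print(cases.groupby("protocol").agg(
    cases=("encounter_id", "count"),
    avg_events=("events_per_case", "mean"),
    avg_hours=("hours_per_case", "mean"),
    avg_outcome=("outcome_score", "mean"),
))
```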

So, this is the ability to use a behavior as an analytic dimension. Or, in another domain, consider credit card onboarding.

You might look at, when you follow the current guidelines, a variety of process metrics. But maybe what I really care about is what percentage of the cards applied for get all the way through to activation. I can see what that is. But I can also look at proposed guidelines: even before you do the experiments that Carolyn was talking about, I can come up with an idea of the guidelines I think should be in place and look at all the cases that happen to follow those.

I can see I got roughly a 10% increase in the percentage of cards activated, which may be considered significant.

So, the thing to consider about using behaviors as an analytic dimension is that this is not something where you've got to define the behaviors before you collect the data and set up an experiment.

After all the data is collected, any end user reviewing the data can define any behavior or protocol, using tools similar to what I showed a moment ago, to define the behaviors they care about, and then see how the actual results or outcomes vary based on the way the process is conducted and whether it follows a particular behavior or not.

So, in summary: we started out talking about some of the blind spots in process mining that can be fixed with process intelligence, things like how you handle information from manual tasks, or tasks that are so highly variable they don't easily map down to a schema without filtering out too much data.

How do you handle unstructured data? You might need a framework that allows for unstructured data.

Carolyn talked about some of the business transformation challenges and best practices, and also about the methodology they used to get started with process intelligence. And, finally, I talked about some of the things you should consider beyond just doing discovery, which clearly is a major value of process mining and process intelligence: the ability to automate process monitoring and even act on it, to predict issues before they become problems and be able to act on those, and, finally, to define process behaviors and see how behaviors relate to outcomes, to give you actionable insights. So, thank you very much for your time. And now we have some time for questions.

Richard, thank you. Terrific presentation, and great participation from Carolyn as well, providing insights. A number of different questions have come up here. I would start with Brendan Nelson's question on the applications of process mining. Can you share some of your experience, across the multiple segments of industry and clients you work with, on the use of process mining for things like internal audits, or by regulators for compliance audits, or by QA/QC functions for quality purposes? Can you talk a little bit about those types of use cases?

Well, I think what you'll find is that those types of use cases occur across pretty much any process area, though there clearly are some businesses that are earlier adopters

and will pick this up more quickly; more highly regulated industries that are concerned with compliance will have a lot of use for this. But when you're looking at being able to view performance, look for outliers, and look for inefficiencies, that really works across pretty much any process area in any field.

Another question that emerged related to process mining versus task mining and the combination that you showed there, which is a great way of presenting it, followed up by the very methodical, Lean Six Sigma-style approach that Carolyn described of really trying to understand the problem before you just dive into automation.

So, very sound advice. I just want to mention back to you that the feedback from the audience is very positive on having that discipline as a setup. So I assume, and I want to make sure you can confirm this, that when you see implementations of process mining that are successful,

that kind of fundamental work is being done well by the organizations: the processes are being clearly defined, the needs are being clearly defined, you have a clear target, and then there's evaluation using the define, measure, and analyze process, followed by improvement and control later on. Can you validate that that's the case, that you really see a significant difference in outcomes when you take that disciplined approach, versus, oh, let's just implement process mining here?

Well, of course, you see the benefits in those outcomes.

And a part of this relates back to the fact that there's a lot of emphasis on automation.

And if you simply find a manual task and automate it, you don't necessarily know if you're getting the value out of that in the context of the larger process, the value that you might expect. So the ability to measure what you're doing before you do an automation and after you do the automation allows you to see if automating that individual step really has the overall process benefit you would expect. And this relates to a point many people have spoken on:

when you automate a bad process, you end up with a bad process that runs more quickly. So it's important to understand your process before you do the automation, and to be able to measure the results, not just of the step that's being automated, but of that step in the context of the larger process.

Very well. Jim Griffin asks a related question on this topic: how long do you typically spend on this diagnosis stage, the define, measure, and analyze phases, before you go into the implementation stage? I have to assume that's very context-specific, but what is your experience with that?

It certainly is very context specific, and it could be as short as a couple of weeks. But the thing to be aware of when you're talking about the different phases is that the actual analysis of the data might be very quick.

One of the major challenges I hear from everyone we work with is that it's actually the preparation of the data that is one of their biggest challenges. That's why we include some ETL capabilities in the tool: if you get data delivered to you that's not in the format you need, you don't have to go back and get an IT person to use an ETL tool to clean up the data.

You want the ability to do at least some level of massaging and preparing the data within the process intelligence tool itself, because the time it takes to get the data and prepare it is going to be one of the larger aspects of the effort.
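The kind of light-touch data preparation he's describing, getting raw extracts into the case/activity/timestamp shape a process intelligence tool expects, might look like this in a generic setting (the column names and formats are assumptions, not Timeline's ETL):

```python
import pandas as pd

# Raw extract with inconsistent naming, casing, and one unusable row.
raw = pd.DataFrame({
    "Ticket #":  ["T-1", "T-1", "t-2 ", None],
    "Step Name": ["open", "CLOSE", "Open", "Close"],
    "When":      ["2021-06-01 09:00", "2021-06-02 17:30",
                  "2021-06-03 08:15", "not a date"],
})

clean = (
    raw.rename(columns={"Ticket #": "case_id",
                        "Step Name": "activity",
                        "When": "timestamp"})
       .assign(case_id=lambda d: d["case_id"].str.strip().str.upper(),
               activity=lambda d: d["activity"].str.title(),
               timestamp=lambda d: pd.to_datetime(d["timestamp"],
                                                  errors="coerce"))
       .dropna(subset=["case_id", "timestamp"])  # drop unusable rows
       .sort_values(["case_id", "timestamp"])
)
print(clean)
```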

The other thing that's critical to what you just described is that it's not enough to have the business units deliver the data to a center of excellence dealing with process intelligence and say, tell me what you can find with this. A lot relies on having people who understand the data, who know what dimensions you should be exploring and what results you care about.

So, yes, there's a question of how long it's going to take to analyze the data. It might be a matter of a small number of weeks; it could be longer if the data is very complex. But what's critical to the success of that is making sure that within the team working on it, you have people who are familiar with the business process and the data being used, so you're not stumbling around trying to understand the data. What you really want to do is understand the process.

That's great insight, Richard, and thanks for sharing that, because an earlier question asked about, you know, data mining before implementing intelligent process automation and process mining.

And there were a number of other questions and comments here that you just addressed with that reply, so I appreciate that very much.

Chief Park asked a question about the process intelligence solution, specifically what you have there, asking about the general cost of implementing solutions like that. I have to admit that's a very tough question, because, again, it depends on the context and what's required.

But how does that process intelligence solution work with your product? Maybe it's part of a suite: the modules that you presented to us, are they integrated, are they separate, how do they work together?

So all the things that I've talked about have been part of Timeline.

It's all a single product that handles process mining, process intelligence, and task mining, all within one tool.

Now, that of course is part of the larger company, ABBYY, and there is relevance to what I was talking about: when you're looking for the data that supports the conclusions you're trying to reach, it's more than just what's available in fields in various systems of record. In many cases there are aspects of unstructured data in documents. So, in our case, we have easy access to the unstructured data.

If you're using a process intelligence solution that's not linked to such a capability, then you need to be thinking about what other elements of the environment need to be brought in to handle things like unstructured data, or the artificial intelligence prediction capabilities, which are also built in as a standard part of Timeline.

When I said it's all one product, I do want to elaborate just a tiny bit more: yes, the task mining is part of the same product, but it will have separate components, a recorder that might be loaded onto individual desktops where we want to record activity, or a task mining service that will be used to obfuscate data before uploading it into the cloud. So there may be various software components, but it's all part of one integrated product.

Thank you. Terrific, Richard. Richard, thank you so much. You know, you're a global thought and industry leader in this area, and the work you're doing with ABBYY is fantastic. The participation from Carolyn provided real application and insights on the practical side of what needs to be done to achieve greatness in this area. Thank you to you, and thank you to Carolyn, for sharing your expertise with our global audience today.

And thank you to you for setting this up, so we can share our knowledge.

Thank you. Thank you, Richard. Ladies and gentlemen, that's Richard Rabin, leader at ABBYY in process mining and process intelligence. What a fantastic journey through the different components of effective implementation of process mining in organizations. So, we're going to take a break now.

Make sure to take a break and re-engage at the top of the hour, because at the top of the hour we'll bring you a special treat. We're going to do a review of great, enduring organizations and how they successfully implement intelligent business process management as a foundation for successful process mining.

So, very much connected to the topic, and to the examples that Carolyn talked about: you have to have a methodology, you have to have governance, you have to have a structure in place so that the technology can be applied to create value. Because, as one of my great friends in the US Air Force, the leader of a very large-scale technology deployment there, put it: the right technology is great for enabling you to create value out of your processes, but the wrong technology can also make stupid happen at the speed of light, and we don't want that. So how do we build governance? How do we build intelligent business process management, so that we can bring in the right process mining technologies to create the most value in the shortest time and by the simplest means? That's what we're going to be covering in our next session. We're going to look at great, enduring organizations across several industries that have done this well, and we're going to share their best practices with you. So, I'll see you back at the top of the hour for a very interesting journey into the foundation for process mining success. Take a break now, and we'll be back soon.


About the Author

Richard Rabin,
Product Marketing Manager, Process Intelligence,
ABBYY.

Richard Rabin is the Product Marketing Manager for Process Intelligence at ABBYY, a Digital Intelligence company. He works closely with global enterprises to help them better understand and optimize their business process workflows and bottlenecks, select the initiatives that will yield the most business value from intelligent automation, and assess how those initiatives will impact overall operational excellence. Richard has a remarkable academic background in Computer and Information Science and AI, and more than 35 years of software engineering expertise. He previously worked as a Senior Solutions Consultant at Appian, where he led sales of Appian's digital transformation platform, primarily in the pharma and financial services industries. Before that, he ran his own consulting business, providing services for Kofax Insight in the areas of business intelligence, process intelligence, and behavior-based analytics.


About the Author

Carolyn Dobie,
Business Operation and Automation Manager,
TriNet.

