Courtesy of SAP's Rahul Lodhe, below is a transcript of his speaking session, 'Challenges in realising AI use cases for enterprises and the way forward', delivered at BTOES From Home.
Challenges in realising AI use cases for enterprises and the way forward
As we know, the success rate for AI projects is very low. Many use cases reach a successful PoC phase, with an ML model producing desirable outcomes on sample data; but when moving them to production, real data starts coming in from various sources (e.g. S3, ERP, social media), and that is where the challenges start. With changes to the model during training, inference accuracy goes down, and dataset versioning and model explainability become important. This session discusses systematic approaches for scaling AI use cases.
It's a pleasure for me to present at the conference, and I'm delighted to have so many participants joining from different places. I would like to share my experience with machine learning and AI, and what I have gathered working with different customers so far on this journey. I thought it would be a good idea to share that with a broader audience so we can have a good discussion on this topic.
So, I have been working on artificial intelligence for the last four to five years, and I've seen it change quite a lot. As various industry reports show, many customers struggle to get the best out of their data. There are data sets coming from the traditional systems they have been running their business processes on, and now the data landscape is also transforming with new kinds of data sources, mostly coming from the cloud. So companies typically have a lot of data, but they always struggle to bring it together and get it into the right shape.
There is a notion that many data science projects fail to reach production because of the challenge of not being able to maximize the value of the data.
Another challenge we see when an organization gets started is the complexity of the landscape, and everyone talks about that. The landscape has become pretty siloed: some data sets sit in one place, the finance data is in one system, the HR application works differently, and your CRM is in yet another place. So the question is how that complexity can be addressed, and most customers struggle with it, because you can only get a good data science experience if you can bring all the data sets together, aligned at the same time. Many companies are struggling with how to solve this complexity of data across different landscapes.
A third challenge we also see: many companies treat AI as an important business issue. They feel that AI projects are important, that they can be operationalized, and that they can drive the business transformation journey. They are confident that these capabilities could make a difference in their own areas. But at the same time, they also admit that their ability to implement such projects is not as high.
So many projects get started but fail at a certain phase, for various reasons. Another challenge is compliance. Data protection and privacy are becoming important, and data governance is becoming an important part as well. The cost of compliance can be as much as $7,000 per employee, and non-compliance with data protection regulations can cost an organization around $40 million annually. So when we talk about bringing in an AI project that is data driven, it also has to come with strong compliance.
Companies should look at how to address these problems holistically, reimagining the business processes they have been working with.
The first part is to get the maximum value out of your data. In order to do that, you need connectivity to the data sets, and you need to look at how you transform that raw data into reusable data sets. Everything starts with how you get value out of your important data sets, and how the data is curated, given how diverse it is in format. You should also look at the usability of the data: there will be many attributes, such as company or customer IDs, used across different data sets, and you have to look at how you bring them together and operationalize them for machine learning.
The second important part is how you simplify your data. Data should be in an analyzable format, you should be able to correlate data across the different landscapes, and it is important to simplify the life cycle of the data for the developer: once data comes in, at what point does it become obsolete? It is important that you unify and connect the data across your different landscapes, make the correlations among them understandable for your development team, and define the life cycle for all your data.
Next, look at how you can run machine learning experimentation. Once the data is ready, the question for companies is: how can I set up machine learning experimentation, how do I build the model, and how can I explain the results to the data stewards? Many times, when you build your model during experimentation, you cannot say why a particular prediction of the model is going in a particular direction. If you cannot explain the reason for a prediction, the trustworthiness of the model goes down. So you need something like an explainability concept implemented, where you can show why the results are what they are and confirm that they are correct.
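As a minimal sketch of one explainability technique (permutation importance; the model and feature names here are made up for illustration, not from the talk), you can measure how much a model's predictions move when one feature's values are shuffled:

```python
import random

# Toy stand-in "model": a hypothetical scoring function over two features.
# In a real project this would be your trained model's predict().
def model(row):
    return 2.0 * row["pressure"] + 0.1 * row["temperature"]

def permutation_importance(predict, rows, feature):
    """How much predictions shift when one feature's values are shuffled:
    a larger shift suggests the model leans on that feature more."""
    shuffled = [r[feature] for r in rows]
    random.Random(0).shuffle(shuffled)  # seeded for reproducibility
    diffs = []
    for row, new_value in zip(rows, shuffled):
        perturbed = dict(row)
        perturbed[feature] = new_value
        diffs.append(abs(predict(perturbed) - predict(row)))
    return sum(diffs) / len(diffs)
```

Running this on a small sample would show "pressure" scoring higher than "temperature" for the toy model above, which is the kind of evidence a data steward can sanity-check against domain knowledge.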
You should also look at the automation part. Data processing has to be automated, and here you look at how automation and scaling take place, so that the AI use cases are maintainable and adoptable rather than requiring manual work; at some point the manual work becomes a pain for the DevOps team. Another important part is staying compliant. Compliance is important: look at how the data you are using is governed within the organization, so that only the right people have access. These are some of the things you need to look at carefully before you start the project.
You also have to look at where the benefits will come from. When we look at the scaling part, it is about the data you have, and transforming the different business processes with it. It is not only data coming from one source: you look at IoT data, data already available in your data warehouses, and how your business applications are getting transformed at the same time. So rather than focusing only on artificial intelligence over simulated data, you have to leverage the data you have in your traditional systems and your transactional systems as well, and look at it from that perspective.
So, let's look at how we approach scaling AI with this data.
Typically, so far, AI has not been able to live up to the hype, despite the huge demand in the market. A lot of projects are coming up, and there are different challenges: the complexity challenges we discussed, but also a huge resource challenge. We see a lot of people getting into data science and wanting to work in AI engineer or machine learning engineer profiles, but there is still a big gap that companies face.
Going forward, we see areas where AI is going to redefine business. The first part is how you use machine learning and deep learning, and scale AI across your enterprise. One place where disruption is happening is image recognition, for example imaging in your manufacturing process, or wherever you have a huge amount of data that needs to be processed with predictions built on top of it. That's where you see a lot of model building and AI adoption.
The second important area being disrupted is intelligent process automation, what we call RPA. Robots have already started taking over parts of your manual processes: for example, an unattended mailbox gets monitored, and a bot on your system takes a call on whether a particular process has to be executed or what automation needs to be done, work that you previously did manually. There has been disruption happening in these areas, and we will see more and more in the coming days.
The third part is interactivity. We have already seen how chatbots have changed business. In the COVID situation, a lot of chatbots have been used in multiple places before you talk to a human. There is also the personalized experience: everybody would like their own style of user interface, and voice assistants have been transforming the way we bring automation into our day-to-day life. You will see more and more automation happening in interactivity and user experience; that is the third space where AI is going to be disruptive.
There have also been a lot of predictions from surveys, for example that 92% see AI as critical for building personalized customer experiences, and that 60% of routine tasks will be automated.
There is also talk of a $3.9 billion revenue opportunity that enterprise AI could lead to by 2022, as we see more and more automation in companies. Now let's look at the personas that are typically involved when we work on such projects from the enterprise perspective, and the challenges each of them faces when getting started. We definitely have the CIO, who is looking at cost and risk. Then there is the data scientist: many data science projects rely on open source, and open source is a central component in the middle of all this, but the landscape has been evolving so fast that it is hard to keep pace with new open source components. The third persona is DevOps. As soon as a PoC is developed by the data scientists, they would like to bring it into production, but mostly it is not documented or supported, or the security review is missing. The machine learning operations, or MLOps, persona is not very comfortable with data scientists bringing things into production directly in a way that may not comply with the enterprise security strategy. Because if they are using open source products that have security flaws, or lack the required certification, there is always the chance that something could leak out. There also needs to be education and training of the DevOps and other teams. So these challenges come from across the organization.
So, before getting into the solution, let us understand what the enterprise AI we are talking about is. Enterprise AI readiness is about the business value on one side, while also addressing the complete enterprise landscape. For example, when you build a particular PoC, you might get some data coming from Excel sheets that your data engineers have curated and given to you. Based on that data, you build some models and get some awesome results, but things will be entirely different when you want to productize it.
When you want to productize, you mostly have to think about what action can be triggered out of the inference that your model gives you. At that point, you need to be very clear about how you orchestrate your data along with your model output. Typically, when we talk about enterprise AI, it is about the business value you are looking for out of your data science experimentation, not only the particular output of your model. You get that business value when you connect it with your business processes: you are connecting back to the ERP system, or back to your chatbot, and so on. When you bring in those kinds of connections to address the complete cycle, that is where enterprise AI is different from the normal data science workflow you see in a simple Jupyter notebook kind of solution.
In order to achieve that, we first definitely need data management: getting the right data from the source, and then orchestrating that data. The second part is managing your development. To manage your development, you'd like to use data science tools like Jupyter notebooks, your Python libraries, and the libraries that are popularly used for experimentation. But you also need to maintain the life cycle of your experimentation and of the open source and third-party components you are using.
So you need a management environment where you can manage the complete experimentation part, and when it is handed over for delivery or deployment, there has to be scalability. Today, let's say I'm doing it for one region or one particular outlet; tomorrow that has to scale to, let us say, ten cities or a hundred stores. You need a scalable approach, where I can take the same experimentation, with similar parameters, and scale it out to different data sets.
Next, we have to take advantage of information management, because unless we can discover the data, it will be difficult to make the right assessment of what kind of experimentation to do. From a discovery perspective, you always want to know what different data sources are available. You need tooling which can go directly into the data source, get into the metadata of the data, and look at what kind of structure the data has, and that for not just one data set, but multiple data sets coming from different places. That is why you need tooling where you can orchestrate the data, compose it, and create a pipeline.
In this orchestration, let us say you are getting data from different data sources on the cloud, and some sources could be on premise; some could be cloud object stores or social networks. You need a composition environment where you can create the pipeline and orchestrate the data, and at the same time you should be able to document the data as well. Over a period of time our tooling also has to be improvised in order to cater to the new kinds of data coming up. We have quite traditionally been using RDBMSs, but now that has to be transformed to different kinds of tooling as well.
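As a minimal sketch of the composition idea (the step names and record fields below are hypothetical, not a specific SAP tool), a pipeline can be modelled as an ordered list of transformations applied to every incoming record:

```python
# Minimal pipeline-composition sketch: each step is a plain function,
# and the pipeline applies the steps in order to every record.
def extract(record):
    # Stand-in for pulling a record from some source system.
    return dict(record)

def normalize(record):
    # Harmonize a field name that differs across landscapes.
    if "cust_id" in record:
        record["customer_id"] = record.pop("cust_id")
    return record

def enrich(record):
    # Stand-in enrichment step: annotate the record.
    record["source_count"] = len(record)
    return record

def run_pipeline(steps, records):
    out = []
    for rec in records:
        for step in steps:
            rec = step(rec)
        out.append(rec)
    return out
```

For example, `run_pipeline([extract, normalize, enrich], [{"cust_id": 7}])` yields `[{"customer_id": 7, "source_count": 1}]`. Real orchestration tools add scheduling, retries, and lineage on top, but the composition model is the same.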
Importantly, the data that is available has to be governed: who should have access to the data you are creating? Because this data will be coming from multiple sources, you have to ask what compliance applies. The data should be anonymized or otherwise sanitized so that it does not carry any personal information. So be sure to have those checks and balances in place before you use the data for experimentation. That is where you can leverage the existing information management tools.
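One such check can be sketched very simply (the field names are made up for illustration): mask the personal fields of a record before it ever reaches the experimentation environment, using a one-way hash so records can still be joined on the masked value.

```python
import hashlib

# Fields treated as personal data in this sketch.
PII_FIELDS = {"name", "email"}

def anonymize(record):
    """Replace PII values with a stable one-way hash so records can
    still be correlated across data sets, but the raw value never
    leaves the source system."""
    safe = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            safe[key] = digest[:12]
        else:
            safe[key] = value
    return safe
```

Because the hash is deterministic, the same person maps to the same token in every data set, which preserves joinability for the data scientist without exposing the personal data itself.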
Importantly, we now come to the foundation of building the AI project: we need to think of it as a streamlined assembly line, where all three personas, your data engineer, your DevOps person, and your data scientist, work together in order to generate the complete value of the solution.
The first important part is to connect: you use standard connectors to connect to structured data, unstructured data, or streaming data. Then you enable the machine learning activities on top of it.
After orchestrating the data, we need data preparation as well as the machine learning experimentation. So we need infrastructure where you get the data and can run the experimentation on it, and at this point the data scientists also have a good handshake with the data engineers directly. So you are not working in isolation, based on the Excel sheets you are handed; instead, you get the real data coming from the system through the orchestrated pipelines, which is then used for the machine learning experimentation and the training of the model.
Then we look at it from the scaling point of view. You deploy the model and then monitor the model performance, while also continuing the experimentation and retraining the model at some point. That is what we look for in the scaling part of the data life cycle, and scaling the data life cycle is as important as building the complete flow where you are monitoring the performance of the model over time.
So it is not only about creating a model and getting the output: it is model deployment, and regularly validating the model performance in production to catch the drift of the model and the data, and watching how the accuracy of the model evolves over a period of time. That needs to be monitored, with retraining whenever the risk appears. The most important part of the picture is that you should be able to establish the consumption of your model's output. Once you have an output coming out, that output has to be consumed in a certain way: for your business process automation, or you are sending the output to a visualization that users can use for their purposes; at the same time it could be the personalization engine for your chatbot, or it goes back to your ERP system for triggering an additional process, or feeds data back to the UI.
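A very small sketch of the drift check described above (the threshold and statistic are illustrative assumptions, not a prescribed method): compare a summary of the live input data against the training baseline and flag the model for retraining when it moves too far.

```python
def mean(values):
    return sum(values) / len(values)

def drift_score(train_values, live_values):
    """Relative shift of the live mean against the training mean
    for one input feature."""
    base = mean(train_values)
    return abs(mean(live_values) - base) / (abs(base) or 1.0)

def needs_retraining(train_values, live_values, threshold=0.2):
    # Flag the model when the input distribution has shifted by
    # more than the chosen (here: arbitrary) threshold.
    return drift_score(train_values, live_values) > threshold
```

Production monitoring would track many features and richer statistics (quantiles, population stability, label drift), but the pattern is the same: a baseline captured at training time, a live window, and an alert threshold.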
It is very important to understand that within enterprise AI, just bringing in experimentation and scaling will not solve your problem on its own. Anything and everything you build has to follow a complete, consumption-based model, where the output of your experiment leads to triggering some business process or adding value; otherwise it will not be a very fruitful outcome.
If you follow this process, you also give transparency to everyone involved: the data scientists know exactly what is being done, because everyone is involved in the process and following the complete path. From a persona point of view, everyone can look into the complete process. Let's look at one example of how you can achieve this: predictive quality maintenance in a manufacturing plant. In this example, we have three different data sources coming together. The problem we are trying to solve is this: we have a manufacturing mould creating a piston or crankshaft, a very precise part which comes out of the mould.
While building it, the part needs to be inspected. The current process is that someone manually measures it and decides whether this particular part is right or wrong. Here we are trying to automate that process: a camera creates the images, and feature extraction from the images, based on a machine learning model, checks that the exact holes are there and that the dimensions meet the precision the part needs to be built to. On top of that, there is IoT data coming from the pressure sensors, which says whether the pressure applied in the mould looks right. The data also has to be put in context with the material master data, so that the material properties can be taken into account; and this data goes back into the system, creating the predictive analytics that say how reliable the component is.
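As an illustration only (all thresholds and field names are invented for the sketch, not taken from the talk), the combined decision over the three sources described above could look like:

```python
def inspect_part(image_defect_score, pressure_reading, material_spec):
    """Combine the three sources into one verdict: the image model's
    defect score, the IoT pressure reading, and the allowed tolerances
    from the material master data."""
    lo, hi = material_spec["pressure_range"]
    pressure_ok = lo <= pressure_reading <= hi
    image_ok = image_defect_score < material_spec["max_defect_score"]
    return "pass" if (pressure_ok and image_ok) else "reject"
```

The point of the example is not the rule itself but that the verdict needs all three silos at once: drop any one input and the automated inspection degrades back to a partial check.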
In the current scenario, the landscapes are not completely connected. What do we see? The pressure sensor produces data which is typically stored in a Kafka kind of environment, with the streaming data stored somewhere else. For the images and the machine learning on them, you would historically use Hadoop or an object store as the data store, and then do the image processing on top of it. And then you need to integrate the analytics or build your application. All these environments are not connected, and unless you have a pipelining or orchestration tool, you will always have the challenge of getting siloed output and never reaching a production-ready solution, because the solution stays disjointed.
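To make the disconnect concrete, what the orchestration layer has to do across the two stores is conceptually a key-based merge; the store contents and keys below are hypothetical stand-ins for the Kafka stream and the object-store batch results:

```python
def join_silos(stream_records, image_scores):
    """Merge streaming sensor records with batch image scores by part id,
    as a pipeline/orchestration layer would do across the two stores."""
    by_part = {r["part_id"]: dict(r) for r in stream_records}
    for part_id, score in image_scores.items():
        by_part.setdefault(part_id, {"part_id": part_id})["defect_score"] = score
    return by_part
```

In practice the join runs continuously over windows of streaming data rather than over in-memory lists, but the silo problem is exactly this: until both sides land in one connected place, the merge cannot happen at all.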
What we would suggest instead is to set up event streaming and create the pipelining for your data to bring it together, and then move it to the enterprise data platform, which can make it a complete, production-grade solution where, in real time, you can run inference in the right way. That is where you need to look at how to break the silos. From the assembly-line point of view, you are not only solving one problem: you are building the end-to-end application and feeding the results back.
Importantly, you need to embrace open source technologies as well, and take advantage of Kubernetes-style environments that give you scalable infrastructure you can run on the different hyperscalers. Docker and Kubernetes deployments, which are mostly agnostic across the hyperscalers, are the best way to utilize the environment and scale applications at any time. At the same time, from the open source point of view, look at how you can use Python, integrated into Jupyter notebooks, to create your complete data science experience, and make the data you need available as pipeline objects, so that you can leverage it in your business processes. Also, for the models, you pay for what you use: with that scalability, whether in the cloud or on on-premise infrastructure like Kubernetes, you can scale only when you need to, rather than blocking resources up front.
In order to get started on your journey, what we advise is this: you should have comprehensive information management across your enterprise. That is where you get started, connecting all your data sources. The next important part is visualization built on those data sources, where you understand how your data sets are composed and which data sets are relevant to the work, so you can analyze which of them would be useful for the initial experimentation of the machine learning use case.
Then you look at the deployment of machine learning and deep learning to the production stage: set things up so that you can deploy the machine learning models, make your pipelines ready, and then start your pilot projects. The last and most important part is focusing on how you take the machine learning output and connect it back into the business processes, so that you are able to take actions on top of it. This is the way, I would say, to work through the challenges from the technical point of view, and I have seen projects succeed when they follow the complete path rather than stopping partway.
Yep, that's it from my side. Back to you.
Fantastic, fantastic, what a masterclass on the applications and uses of Artificial Intelligence in the Enterprise, Rahul. We had lots of questions that came in during our session. I'm going to ask you to stop sharing your presentation, so that they can see both of us on camera. Perfect, perfect. So, lots of questions coming in here. Thank you so much, great insights on that presentation. Let me look through the question list that we got here.
We start with Harish. Harish says: thank you for taking the time to provide these insights from your experience. I would like to get your viewpoint on what you're observing: are you seeing more consolidation of data into data lakes, or a move towards connecting to the data with efficient pipelines rather than collecting it? What are you seeing in terms of best practices around data collection: data lakes, or more efficient pipelines? What are your perspectives on that?
Yep. From my experience working with customers and solving machine learning and data problems, we see that data lakes have been becoming cheaper day by day, and the performance when you access data from the data lake is also good. Many times, customers would like to do that, and what we recommend to them is consolidation of their historical data into the data lake. Let us say you are using an in-memory database or an expensive data store for your transactional data sets: we would recommend consolidating that into the data lake, and then it becomes a central place where you can access the data at a faster pace.
Especially when you are talking about a machine learning or AI kind of environment, where much of the data is images or streaming data, which comes in frequently and is quite big from a processing point of view, we have seen a lot of usage in the last two years of this kind of data moving into the data lake. We would also recommend that: moving more of the data storage to the data lake is the best practice going forward. And there is a lot of tooling being developed now; the different hyperscalers provide different data lakes that can be accessed seamlessly and orchestrated into a pipeline from a consumption point of view. So yes, there is a lot of momentum from big data towards data lakes: you store the data there, get the data sets you require, and then you can zoom into what you need for your experimentation.
Very good, very good. The next question comes from Mercedes-Benz ..., who is calling in from Germany today. How do you establish data ownership in your organization? She is interested, really, more in the structure and governance side of things: what are you seeing in the marketplace in terms of data ownership, and establishing that data ownership in organizations?
Great question, and very good to hear. This has been quite a challenge in the industry: who owns the data in the organization? Because when we talk about enterprise-grade organizations, the data does not sit in one place: IT owns one part, and another part is with the business. Especially when data protection and privacy come into the picture, we should not have very specific personal data stored for a long time, and we also need anonymized data for the experimentation. So that is how I would look at this problem of data ownership.
In a diverse suite, where if it's a private and personal data, that has to be owned by the respective business and IT along with that. But we need to segregate the data, and God made such a way that some part of Theta and beta animation become very, very important part of our data. And you have to, if you don't have those truly in your organization, I think you have to get the best out of your, your data. Because many times, to do the successful projects, you really do need to work with the, with the actual personal data that people as. But just to get us it is. What it is that your IT would have, which would use for the experimentation. and other one is the business feature, which will be more from the, from your specific code.
Which can be, I would say though, personal data and more sensitive data, that that could be keep it to them. So that's why I would look at the data. And also, when you code the data, I think it's important that you also called it not only your table structure, but also row level of beta Goldman says, in this case. So better governance.
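One common way to produce the anonymized experimentation copy mentioned above is to pseudonymize the personal columns. The sketch below uses a salted one-way hash; the table, the salt, and the column names are all illustrative, and real deployments would manage the salt as a secret and consider stronger techniques (tokenization, differential privacy) depending on the risk:

```python
import hashlib

import pandas as pd

# Toy customer table; "email" is personal data owned by the business.
df = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "spend": [250.0, 90.0],
})

SALT = "rotate-me"  # illustrative only; store real salts in a secret manager

def pseudonymize(value: str) -> str:
    """Salted one-way hash so IT can join and experiment without raw PII."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

# The business keeps df; IT receives only the pseudonymized copy.
safe = df.assign(email=df["email"].map(pseudonymize))
```

Because the hash is deterministic for a given salt, the pseudonymized column can still be used as a join key across the experimentation data sets.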
Very good, very good.
Along these lines, there are lots of questions related to governance and organizational structure. Specifically, Antonio Chiesa asks you to talk a little about the organizational challenges that need to be addressed in order to use the AI capabilities you're presenting more effectively. As you introduce AI capabilities into organizations, beyond the data ownership we just discussed, what other organizational-level challenges do you see that need to be addressed for the effective deployment of AI?
Definitely, that's a very good question. There's a whole topic I have presented at conferences on the organizational challenges around AI and what it takes to make these projects successful. It's not only about having the technology; many times the technology is the easier part, but you also need very strong sponsorship from your organization, and these projects demand a lot of patience. They are not like your normal development projects, where you define the scope, get started, and see the expected output coming out in subsequent sprints.
It doesn't happen overnight; from an organizational perspective you need to be patient. Many times you also see failures early on: you start a POC but don't get the desired result, because the data sets you thought you could bring in are simply not available, and the effort ends up as pure exploration.
So from an organizational point of view, there has to be some amount of management education when you start. I have seen two kinds of setups implemented in the industry. One is mostly centralized: a central pool of data scientists and an experienced team with the expertise, who take on big projects from the business. They usually don't have much business domain knowledge, so a lot of back and forth happens to get the data and insights from the right sources. The other setup I have seen is where the data scientists are very closely aligned within a particular line of business and are part of that business structure, which gives much closer alignment.
Either way, data scientists are a small part of the whole: out of, say, a hundred people working on AI in an organization, only a core group will be data scientists embedded in a particular line of business. So the first important thing I would suggest is to educate your management and set the expectation that not every project you take on is going to succeed; a cost-benefit check also has to be done. The second part is the organizational setup: you need to get closer to the business, because it has to be a business-driven outcome. Many times a use case looks good, but you cannot match it to a business outcome, and the end-to-end connectivity never happens.
That connectivity can only come with close collaboration. If the data science team is siloed and sitting somewhere else, you may do a fantastic job of getting output from the data science project, but it cannot feed back into the business processes, or it isn't aligned with business processes that are too complex to handle; then it might also fail and be of no use. So: strong alignment with the business.
Very good, very good, excellent insights. We'll have time for one more question. This next one comes from Medan, who first says: really great session.
Thanks for providing these insights. He asks about the portion of the presentation where you talked about re-imagining processes and automating business processes. He would like to hear a little more about that: any recent experience of how that re-imagining and automation of business processes is being done, in the context of AI applications?
Yes, I have been involved in quite a lot of these kinds of projects rebuilding business processes. One example I can talk about is from a manufacturing-oriented company, and it's a real case that I can make simple.
We talk about all these Industry 4.0 kinds of processes. In the current business process there was a lot of manual activity, and quality always depended on testing. If you are familiar with a manufacturing setup, say an automotive company building engines: you always test them. There is the cold test, where you measure the voltage and put pressure on the components, and the hot test, where you actually put fluid inside the engine and verify that it performs against the right KPIs. The hot test takes a lot of time and is expensive; most of the quality cost of building an engine goes into it.
What we did for this organization was look at all the parameters recorded from the cold test, which is very cheap: the measurements of the engine, the pressure applied to the components, all the IoT kind of data. We cleaned the data up properly so it could feed the model, did all the data analysis, and then built a machine learning model which tells you: only if this particular engine's cold-test readings cross certain thresholds do you really need to go for the hot test.
Based on that, on the order of 60 percent of engines no longer needed the hot test, and for a company producing engines at automotive volumes the quality-cost savings are very large. This is what we call re-imagining the business process: you challenge the status quo, look at your pain points and where your cost sits, and use this kind of data exploration to take that cost out.
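In the spirit of the engine example above, here is a minimal sketch of such a cold-test classifier. Everything in it is invented for illustration: the feature set (voltage and pressure), the synthetic labels, and the 0.5 routing threshold are assumptions, not the model the speaker's team actually built:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic cold-test measurements per engine: [voltage draw, pressure].
X = rng.normal(loc=[12.0, 3.0], scale=[0.5, 0.2], size=(200, 2))
# Synthetic label: 1 means the engine historically failed the hot test.
y = (X[:, 0] > 12.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Route only the engines the model flags as risky to the expensive hot test;
# the rest skip it, which is where the quality-cost saving comes from.
needs_hot_test = model.predict_proba(X)[:, 1] > 0.5
```

In production, the routing threshold would be tuned against the cost of a missed defect versus the cost of an unnecessary hot test.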
Terrific. Thank you so much for sharing your insights. I know it's late in Bangalore right now, and you've been so generous with your time and your expertise. We really appreciate that on behalf of the entire global audience. It's a real privilege to have you with us sharing your expertise. Thank you very much.
Thank you very much for the opportunity, and to the great audience for the great questions. I'm looking forward to following up.
Thank you.
Ladies and gentlemen, this concludes our first segment. At the top of the hour, we're going to have another terrific session from Mondelez International on how Enterprise Architecture is defining the way they operate globally, so you do not want to miss that. We will close this session now; once it closes, you can fill out a short survey with any feedback you have on this specific session. Also, after today's session I will be answering questions on LinkedIn: there is a post that has been shared with you, or you can look under my profile on LinkedIn, where you can leave your comments about today's presentations, and we'll address some of the questions there. Very good. We'll see you at the top of the hour. Thank you.
Senior Director, SAP Artificial Intelligence
Engineering leader with more than 18 years in world-class companies, with extensive experience in designing, developing, and testing enterprise software products. Expertise in domains such as Analytics & Business Intelligence, IoT, and Machine Learning.
Responsible and accountable for driving global missions across Product Development, Processes, and People. Has managed end-to-end life cycles of on-premise and cloud enterprise software product components, from whiteboard designs to maintenance releases.
Experienced in building world-class teams from the ground up, as well as in driving transformations of inherited teams, building teams with innovation as their DNA and improving employee engagement and trust scores. Extensive experience working with geographically distributed teams, managing a development organization of managers, architects, product owners, and program managers.
Excellent balance of technical, managerial, and entrepreneurial skills.
Thrives in a 'start-up like' work environment.