Courtesy of CloudFactory's Tristan Rouillard and Abyss Solutions' Adam Petruszka, below is a transcript of the webinar session on 'AI Innovation in Industrial Asset Management' to build a thriving enterprise.
AI Innovation in Industrial Asset Management
How to Leverage Aerial Inspection Data with Humans in the Loop
Challenging economic times are driving heavy industries to adopt AI to dramatically reduce the cost of asset inspection and management. How? By unlocking hands-free insights from high-quality aerial and geospatial computer vision (CV) images, point clouds, orthomosaics, and digital twin data. Attend this webinar to dig into practical use cases and uncover considerations for adoption, including the role of humans in the loop (HITL) in training and sustaining AI/ML models. You'll walk away with insight into:
Key trends behind industrial asset inspection and management
CV application use cases across various industries
Steps to implement projects efficiently and effectively
Benefits of a HITL strategy to accelerate AI projects
Good morning. Good afternoon.
Thanks for joining us today for a session on AI Innovation in Industrial Asset Management.
We're pleased to have with us today Tristan Rouillard, who is the VP of Machine Learning Solutions with CloudFactory, and also Adam Petruszka, who's Director of Business Development with Abyss Solutions.
So before turning it over to Tristan to get us started, just a little bit about CloudFactory.
CloudFactory is a global leader combining people and technology to deliver AI automation solutions. The company has been around for over 12 years.
We've been delivering solutions to over 700 companies globally, with a workforce of over 7,000 data analysts, operating across several continents.
So I'm really pleased to be able to introduce you right now to Tristan, and I will turn it over to him.
So yeah, I'll be brief.
Before joining the CloudFactory team recently, I was one of the co-founders of Hasty.
It's a machine learning platform specifically oriented around vision, around helping machines see the world the way humans do. We do this by enabling adaptive and agile methodologies; we'll touch on that a little bit more later.
And before that, I was working in the German manufacturing sector doing quite a few projects related to the topic for today, so I'll be sharing from that particular experience.
Hi, I'm Adam Petruszka with Abyss Solutions. Abyss was founded in 2014 in Australia.
We've since grown to a global company. My experience beyond Abyss is working in the high-tech industry. I spent a number of years at large high-tech companies, and during my time at Hewlett-Packard I actually spent a number of years helping commercialize technologies out of HP Labs. So, again, a lot of experience there with emerging technologies such as the Internet of Things, big data, AI, and machine learning. I'm now applying a lot of what I learned back in those days at Abyss Solutions. Next slide.
So Abyss Solutions, as I mentioned, was founded in 2014. We're growing rapidly, we're now a global company, and we are leading the charge around inspection at scale in a number of different verticals. Our biggest verticals are energy and water infrastructure, but we're also playing in the agriculture space and exploring other industries. We're all about getting complete insight to customers about the condition of their assets, and faster insights mean quicker realization of value in their maintenance programs.
And ultimately, when they get to quicker insights and better delivery of maintenance, they end up with safer and healthier assets for their operations.
All right, thank you. So, I guess I'll go through a quick intro on Industrial asset Management.
The goal of this is more or less just to establish common ground, so that we can dive into some of the topics that follow.
So, to start with, just a definition of what industrial asset management is.
It's the continuous process of controlling physical assets to minimize cost while maximizing the reliability and availability of those assets.
The subject here is often distributed assets, or assets in a single place,
where there's hardware that needs to be maintained.
Yeah, I guess, that covers that really well.
So, at least now we know what we're talking about when we talk about IAM as a topic.
What's the underlying global trend? Why is this topic relevant today?
What has led to us talking about it in a webinar?
There are a few macroeconomic trends, if you will, that have aggravated a situation where we need smart solutions. Working through the list: there are aging industrial and civil assets globally.
I think we've all seen it:
power lines are a common one, as are wind turbines.
Solar panels are all installed, but aging, and need to be maintained.
With that, there's the change in job demands; the younger generations seem to be less willing to do those kinds of jobs, and there's less supply.
In many countries I do believe this tends to be a fiscal issue, but it remains an issue that fewer and fewer people are willing to do jobs like maintaining power lines.
There's rising labor costs and limited skilled technicians and inspectors.
What do I mean by this? As we've seen, at least in Germany,
and talking from the experience that I've got, so as not to make a blanket statement,
there is an aging population of experts who know how to maintain a lot of these complex assets and who are at a stage where they're nearing retirement.
And there seems to be little to no succession planning In terms of, say, apprentices that would be working underneath them to take over that responsibility moving forward.
And I do believe that this is reflective a fairly large global trend that is happening.
On top of that, we have the growth in renewable energies and civil assets. Solar panels take up space; it's hard to get around that.
Wind farms are things that also take up space, need to be maintained on an ongoing basis, and are distributed, and it's not trivial to do this.
So it's not only traditional industries, say oil and gas, but the newer renewable industries that have the same problem of having to be maintained.
And that's not to mention the regulations and environmental policy. You know, there is mounting pressure to look after the planet.
And there's increasing scrutiny to do that in a proper way.
So, the five factors combined mean that this is a critical topic to solve now, and to do it well.
And it's also something that needs technology to do it well.
So, to segue to the next slide and talk about the technology: what is happening now that's unlocking this as a use case for us?
There are six sort of major breakthroughs in the last 5 to 10 years that have made leaps and bounds, making this more accessible, affordable, and deployable than ever before.
It ranges across a few facets. Hasty is a vision AI company, and we work with cameras on a daily basis.
The drop in the cost of a camera sensor, and the proliferation of these sensors, is massive. In the past, it might have been a different sensor that someone would use to do a given job;
the fact that camera sensors are simply so cheap these days means a lot of companies tend to switch to using vision for that.
That's created quite an opportunity in terms of having an install base to turn these applications on. There have also been massive improvements in machine learning tooling, as well as in the models; some models are improving on a daily basis.
Machine learning engineers have a full-time job just keeping up with breakthroughs in techniques and methodologies. But it's not only that; it's also the tooling. The software is getting better.
It seems to be following the same parallel as the DevOps world, where DevOps had fairly archaic tools to begin with, and then they got more sophisticated and easier to work with as time went by. We have the same parallel in machine learning.
Then drones, or rather the range of drones.
A lot of the companies that we've dealt with in the agriculture and power sectors simply didn't have the range they needed, or the payload they could put on a drone, to be able to do these kinds of tasks. Now they are able to, which means you can do these kinds of inspections, which is great.
Then processor improvements: NVIDIA and Intel have both been releasing new chips that are doing better and better work and have ever-stronger processing power, which is important. The final one, I think, is the one that is the most elusive, which is the combination of people and process.
I'll come back to this topic later on, but the process of how to do things, and how to work with this technology and do it reliably, is still being established.
It is almost there, though, and it means that teams are doing this far more successfully than they were in the past.
So we'll touch on that a little bit later on.
How big is the market?
Just some research to say: big, and growing at a healthy 15% CAGR.
I guess it's also early days, you know.
A lot of the innovations in industry, and technologies that are affected by this, are still coming up on the innovation adoption curve,
so we want to see those growth rates go up, rather than down, in the future.
Right, we're going to jump into it quick poll now, just to get a sense for the room.
If you'd take a second to participate, for folks who are interested, we'd love to get your feedback on this poll.
The question should be up in the tool.
The question is: what phase are you at regarding adopting AI
for industrial asset management? There are four options there.
A, are you interested in learning? B, exploring and evaluating solutions? C, currently in trials or beta?
Or D, do you have solutions in production?
So, give you a few more seconds to enter and be curious to see the results.
Alright, the results are up, and it looks like we've got a lot of interested-in-learning and exploring folks.
I'd like to turn it over to Adam at this point in time.
Sorry, Adam, you're still on mute there.
Sorry, I thought I'd clicked that.
OK, I'm going to talk today about a real-world case study in the oil and gas industry, where we're basically using the Abyss technologies for autonomous inspection.
So, next slide.
OK, so in the real world today, there are limitations with the visual inspections that are done for asset integrity and asset management. One of the first things is that the inspections are performed by humans, and every human is different in their productivity and their interpretation or perceptions. And on top of that, let's face it, every human can have a bad day.
So, as a result, you end up getting different levels or different subjective interpretations of the condition of an asset.
Additionally, these assets are so large that no one person can maintain the level of concentration needed to cover them in an accurate and effective way. In addition, some of these areas are very difficult to actually access on the asset, so that's another challenge. As a result, most of the annual inspections that are done in the oil and gas industry cover anywhere from 10 to 20% of the asset in a given year.
And then so they basically do kind of a rolling inspection overheat over the years to get a complete picture of the asset.
So as a result, maintenance programs are very conservative in how they address challenges. For example, for any issues identified, they will go out and sandblast and paint the entire piece of equipment that was identified as having an issue, as opposed to targeting the paint campaigns to a particular piece of the asset.
That's one example. Other issues or challenges they face: even with the rolling inspection, people still miss things. Things are not identified, and that can lead to downtime as well as accidents.
Actually, with one of our customers, when we did the first inspection, we found damaged piping that wasn't due for inspection for another two years. It was so far degraded
that it was likely to lead to a loss of pressure containment and a spill on that particular platform. That right there sold the customer on the solution.
All right. Next slide.
OK, so this is the inspection process. The first part of our process is to go out and capture the data on the asset. We do that through scanning, through photogrammetry, visual (cameras, essentially) capturing the asset, as well as lidar and sonar. We have some underwater camera systems so that we can take images and give back the health of that asset from subsea all the way to the topsides, especially in the offshore deepwater oil and gas world.
We use a variety of vehicles: ROVs for subsea, aerial drones in some cases for structures on high, terrestrial laser scanners with a human crew, as well as sonar if the water conditions are too turbid for actual visual inspections. Then we take all of the data that's captured and stitch it together into a single model of your asset that has both the visual imagery and the point cloud from any lidar that we capture: a complete model of your asset, essentially a digital twin.
Secondly, we then analyze that data. The first thing we do is contextualize it.
If anyone's familiar with a digital twin today: if you have a digital twin, it's great, we can walk through it, you can see all the equipment on the asset. However, you can't easily identify what something is. One of the benefits of our solution is that we contextualize. We will basically label and tag every pipe, every flange, every vessel, structural members, rails, and grates that are on an asset. So as you walk through and click, you can see what that piece of equipment is, as well as any associated metadata you have around that asset: its service class, what size piping it is, what the corrosion allowance or wall loss allowance is for that particular type of material, what the material is, et cetera.
So when you're clicking through, you can get a better sense of what that asset is.
Then we let loose our AI and machine learning models, both computer vision as well as some additional models we have that look at the topography and the point cloud, to identify the various damage mechanisms that might arise on an offshore oil and gas platform. External corrosion is the most likely, but we can handle other damage mechanisms.
We then classify, based upon the customer's rating system, into heavy, moderate, and low corrosion.
Then, as you see in this image in the middle, we will mask those pieces of equipment, piping, et cetera, where that corrosion was identified, and we have achieved high accuracy levels: severe corrosion we can identify at a 97% accuracy rate, moderate at 95%, and low at 92%. Those are quite high machine learning model results, so we've been very effective in identifying issues and providing value for our customers in that way.
And then, finally, what we do is we help our customers act. So we're integrating into their workflows, helping them prioritize the damage found by the AI models.
Then we can help them identify the pieces of equipment, structures, et cetera, with the highest level of damage. So, for example, for external corrosion, we would prioritize those pieces of piping, flanges, and pressure vessels with the highest coverage of severe corrosion first, and then we can also rank by total corrosion across severe, moderate, and low. What they get is a prioritized list of where the highest damage classes are, so they can nominate those pieces of equipment for painting or for replacement
in a very targeted fashion.
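As a rough illustration of that prioritization step, here is a minimal sketch in Python. The field names, percentages, and tie-breaking rule are illustrative assumptions, not Abyss's actual schema or logic: equipment is ranked by severe-corrosion coverage first, then by total corrosion.

```python
# Hypothetical sketch of the prioritization described above: rank equipment
# by coverage of severe corrosion first, then by total corrosion across all
# severities. Field names and numbers are illustrative, not Abyss's schema.

def prioritize(equipment):
    """Sort so the items with the most severe corrosion coverage come first."""
    return sorted(
        equipment,
        key=lambda e: (
            e["severe_pct"],                                     # primary: severe coverage
            e["severe_pct"] + e["moderate_pct"] + e["low_pct"],  # tie-break: total corrosion
        ),
        reverse=True,
    )

assets = [
    {"tag": "P-101", "severe_pct": 2.0, "moderate_pct": 10.0, "low_pct": 5.0},
    {"tag": "V-202", "severe_pct": 8.5, "moderate_pct": 3.0,  "low_pct": 1.0},
    {"tag": "F-303", "severe_pct": 2.0, "moderate_pct": 20.0, "low_pct": 9.0},
]

for item in prioritize(assets):
    print(item["tag"])  # V-202 first: highest severe-corrosion coverage
```

The maintenance team would then nominate items from the top of this list for targeted sandblasting and painting, rather than repainting whole lines.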
So that's critical for us, as we're trying to get our customers insights faster, more targeted, and more accurate. The other thing I want to mention here is that the computer vision and machine learning models follow a standard.
As they crawl through the imagery, they are not taking breaks, they're not getting distracted, they're 100% focused. So you get an objective level of analysis that you typically don't see from human inspectors.
Ultimately, what we're trying to do is give them objective results that they can use to target and optimize their spend on maintenance and fabric maintenance (painting), such that they're optimizing their spend while increasing the level of health and safety of their asset or offshore platform.
OK, next slide. The video will give you a taste of how the Abyss product works.
OK, so that gave you a quick taste of how our product works. You can actually see how the corrosion masks were applied, and the fact that we can then identify critical lines that they need to address on that particular platform.
Now, here are the actual inspection benefits that this customer obtained. This customer has approximately 10 platforms in the Gulf of Mexico, and this is what they quantified, working with their asset integrity engineers that have the budget for inspection. So this is where they quantified their benefits; there are other benefits over and above just these quantified ones.
As you can see here, the key benefits are coverage of the asset.
Typically, they do 15% of the lines per year, and cover it all over the course of about six years. Now we're getting 95%-plus coverage of all the lines.
Then, when we do the inspection instead of a visual inspection by a human, it's a computer vision, and then any inspectors that they have in the office can then walk through the model themselves, and do a follow-up inspection based upon the results that were identified.
Additionally, the biggest benefit for these folks was that they reduced their inspection budget by 50%, so that was a key benefit to them. And then one thing here that may not make sense for a lot of folks: what they call POB, persons on board. There's a limited amount of space on these platforms to bring people out, a limited number of people they can safely have on a platform, and most of these platforms are running at a very high utilization of POB. So freeing up POB means that, instead of sending inspectors out,
they can have, you know, production engineering out there, maybe enhancing or upgrading equipment on that particular asset, or even adding new risers or new lines to increase the production of that particular platform. So that's another benefit, an unquantified one.
You'll also see this leads to things like reduced helicopter flights, and a helicopter flight out to a platform in deepwater Gulf of Mexico can run $20,000 to $25,000 per flight, so that's a significant savings over time. Plus, you end up with increased equipment uptime, and you end up with reduced painting, because you can identify the critical lines, and in our tool you can actually measure.
You have measurement capability, so you can say, OK, from this flange, over 20 feet, sandblast and paint that particular area of the line, but don't paint the entire line.
That then starts to save money on the actual painting, where today they would say, go out and paint this line, and paint the whole line.
So that's another benefit.
Additionally, another benefit: when they're doing engineering work, when they're doing upgrades and adding equipment onto the platform, they typically send an engineering team out to examine and do some rough measurements to figure out if the equipment will fit, et cetera. They can actually do that visually in the Abyss model, so again, they don't have to send a team offshore to eat up POB and helicopter flights, et cetera.
So those are the quantified benefits, plus the non-quantified benefits.
Now, next slide.
This kind of says it all from an end result perspective.
So this customer, over the course of a couple of years, was able to reduce the risks to the platform. If you look at the top, you'll see the red and orange, where they had pieces of equipment and lines that were in a bad state.
Through targeted focus on those high-risk items, they were able to reduce the number of critical items they had to address, and then they could get to painting. You see the end result: a much, much healthier platform, fewer lines that were corroded, fewer critical items. That means a safer platform, more uptime, and essentially increased economic throughput or output of that particular asset.
Excellent. Great insight, Adam, appreciate that.
At this point in time, we'll run the second poll.
and we'd love to get your insights on this question that we've got here.
So the poll should be coming up: what challenges are you facing getting your ML into production?
Option A, getting a model to generalize to real life and deal with data shift; option B, deploying models to multiple devices; C, creating accurate, clean training datasets; D, collecting relevant data; or E, getting SMEs to work with your ML teams.
Let's see, the results.
Alright, the results have been pushed through.
So I guess I'll turn it over to you, Tristan, for any comments on the results. It's showing that A, C, and D are the top ones that were selected here.
Yeah, thanks, Cheryl.
I guess the summary there is: data, that is hard.
I don't think we're surprising anyone by saying this;
it's confirmed by the audience.
Yes, thanks. So, picking up on that topic, it's quite an apt intro to what I'm going to be talking about.
Ultimately, before starting Hasty, and after starting Hasty, we were involved in a ton of machine learning projects.
I've seen hundreds, across various applications, a lot of them in asset management.
And the goal here is to just share a little bit about what we've seen work, not work, some potential best practice that we've seen. So just setting the scene in the background for that.
There was a hype in 2017 that AI would eat the world.
Everything would be semi-automated; machines would take all of our jobs.
That didn't happen.
The failure rate for machine learning projects is something that we've been grappling with for a long time.
When we started Hasty, the statistic we found in research was that 65% of projects fail.
Then we found research later on, in 2019, saying it was 87%. We saw another research paper last year that said 85% or 89%, somewhere around there. So that seems to be the trend: it's just hard to do.
It is changing, though; we're seeing teams do it successfully, and do it successfully at scale, which is different from a POC.
And Andrew Ng famously talked about data-centric machine learning in 2021.
You know, it was a big hype and a big topic
for a while, and it seems that that's now been accepted:
data-centric machine learning is the dominant approach over model-centric. We'll talk a bit about that in the next slides, data-centric versus model-centric.
The point being that if you're not looking after data, and you're not curating accurate training datasets, you're not going to be successful.
And, you know, despite the challenges, we're still seeing 32% of companies invest in AI. So it seems the problems we're solving are real; they're not going anywhere.
How to do it is what's hard.
What we need to do is start establishing best practices, and reliable processes and methods, to work with the technology.
I'll hint at that in the next slides.
So this is about the data-centric versus model-centric debate.
Historically speaking, especially in research, machine learning teams tend to focus a lot on the model.
That's because, if you're doing research at Stanford, for example, you have a static dataset and you're trying different machine learning models, approaches, and methodologies to get the best performance on something that's static. It was very common to use the COCO dataset.
COCO was a static dataset, and everyone would benchmark their latest approach or technique against COCO and then publish a research paper to show how they'd improved on previous research.
So data and models were treated like two separate worlds.
We've seen that picture perpetuated into the corporate world, where we've seen large, cutting-edge
tech companies whose only link between the training data and the machine learning team's algorithms is e-mail:
the annotation team e-mails the training data to the machine learning team, which is insane.
Because at the end of the day, the algorithm reflects the data that it's been trained on.
It's a bit more complicated than that but, by and large, it's a reflection of the data it's trained on.
Those worlds need to be looked at in a more dynamic fashion.
The interplay between them needs to be focused on, because it's really that interplay that makes this whole endeavor work. And what's more exciting: it's when you understand what the data is doing to the neural network, and you understand how those two relate to one another, that you start having fun.
So, personally, I don't love the argument of data-centric versus model-centric, because I do see them as two sides of the same coin, and they should be treated as such.
With that in mind, this is the historical approach that many teams would take when deploying machine learning applications.
They define a business case and the needs. In our case, it's maintenance of assets and reducing the cost to maintain those assets. They approach a poor machine learning engineer and say, how would you solve this problem?
The engineer designs some sort of approach based on their experience, or what they've done in the past.
They collect images and label the images.
They do the annotations, train the model, test the model, deploy the model, get it into production, and then realize: oh, we defined our taxonomy incorrectly. We were treating this thing as two separate label classes where it should have been one label class.
Or we should have added an attribute there, or we should have used instance segmentation with masks instead of bounding boxes for object detection.
And so they then have to go back to the drawing board and start labeling data all over again.
But that's only once they've tried to deploy the model.
And that means this iteration cycle is often 3 to 7 months.
We can imagine that if you built software today with iteration cycles and feedback loops of 3 to 7 months, you would expect that team to fail; they do not iterate quickly enough. And specifically, when we talk about iteration and feedback:
It's feedback from the model. You need to get feedback from the model to understand how the model is understanding the data.
And that's what's been missing in this approach.
So what we've done at Hasty is create an application where the model and the training data live in the same world, the same environment.
We actually annotate with the model, and we retrain the model based on what's being annotated. So what you're essentially building is a feedback loop, or a data flywheel,
within the annotation environment, where the more data you label, the better your model gets; and the more you can automate the annotation process, the more data you can label.
And that's an accelerating virtuous cycle that we've created.
You've now got the opportunity to understand:
are we getting automation?
Is the neural network understanding the subject matter within the images?
If it's not, you get immediate feedback that your annotation strategy, what you're using to annotate the data,
is potentially not optimal, and you can adjust in real time.
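To make the loop concrete, here is a minimal, hypothetical sketch of that model-assisted annotation flywheel. Everything below is a toy stand-in, not Hasty's actual API: the "model" is a dict that memorizes labels, the `oracle` stands in for the human annotator, and the automation rate tracks how often the model's proposal was accepted as-is.

```python
# Toy sketch of the annotate-with-the-model flywheel described above.
# Assumptions (not Hasty's real API): the "model" is a dict that memorizes
# labels, and `oracle` stands in for the human subject-matter expert.

def flywheel(images, oracle, retrain_every=2):
    """Label images with model assistance, retraining as the dataset grows."""
    model = {}     # starts out knowing nothing
    dataset = []   # (image, label) pairs accumulated so far
    accepted = 0   # proposals the human accepted without correction
    for i, image in enumerate(images, start=1):
        proposal = model.get(image)   # model proposes a label (or None)
        label = oracle[image]         # human confirms or corrects it
        accepted += (proposal == label)
        dataset.append((image, label))
        if i % retrain_every == 0:
            model = dict(dataset)     # "retrain" on everything so far
    automation_rate = accepted / len(images)
    return dataset, automation_rate

# Repeated images show the virtuous cycle: later proposals get accepted.
oracle = {"img1": "corrosion", "img2": "clean", "img3": "corrosion"}
images = ["img1", "img2", "img1", "img3", "img2", "img1"]
dataset, rate = flywheel(images, oracle)
print(f"automation rate: {rate:.0%}")
```

The point of the sketch is the shape of the loop: as the dataset grows and the model retrains, more proposals are accepted, so each human hour labels more data.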
The other side of it is that we've got an application where, essentially, every project gets to the point where you don't know where the next performance step is going to come from.
It might be that you need to just label more data; that's common, and often the case. But it also might be that you need to clean up what you've labeled in the past.
Labeling data is a very manual process and thus error-prone, which means that often those errors can hinder the performance of models, and you need to be able to clean them up.
So we've got a flow for that, where we've got neural networks that go back and check what's been labeled in the past to find potential mistakes, so that you can clean them up. You might need to test a different model architecture. You might need to change a hyperparameter, or augment the data in a different way.
So one needs to be able to jump between these tasks ad hoc to find the next performance increase.
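The "neural networks that check past labels" idea can be sketched, very roughly, like this. Everything below is an illustrative assumption, not Hasty's implementation: a trained model re-scores the stored annotations, and items where a confident prediction disagrees with the stored label are queued for human re-review.

```python
# Hypothetical sketch of model-assisted label cleanup: flag annotations
# where a confident model prediction disagrees with the stored label.
# `fake_predict` is a toy stand-in for a trained model.

def find_suspect_labels(annotations, predict, confidence_threshold=0.9):
    """Return items whose stored label conflicts with a confident prediction."""
    suspects = []
    for item in annotations:
        predicted_label, confidence = predict(item["image"])
        if confidence >= confidence_threshold and predicted_label != item["label"]:
            suspects.append(item)
    return suspects

def fake_predict(image):
    # Stand-in for a real model's (label, confidence) output.
    lookup = {"a.jpg": ("corrosion", 0.97),
              "b.jpg": ("clean", 0.55),
              "c.jpg": ("corrosion", 0.95)}
    return lookup[image]

annotations = [
    {"image": "a.jpg", "label": "clean"},      # confident disagreement: flag it
    {"image": "b.jpg", "label": "corrosion"},  # disagreement, but low confidence: skip
    {"image": "c.jpg", "label": "corrosion"},  # agreement: skip
]

flagged = find_suspect_labels(annotations, fake_predict)
print([f["image"] for f in flagged])  # only a.jpg is queued for re-review
```

A human then re-checks only the flagged items, which is far cheaper than re-auditing the whole dataset.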
But the other part of this that's very critical is getting to production, actually deploying a machine learning model.
You need to have a feedback loop where you can understand how the model's performing in the wild, and you can fetch instances in the wild, in production, where the model hasn't performed.
You get them back to the beginning of your ML development flow, whether that starts at annotation (it probably does),
annotate the data, retrain the model so it understands what it missed, and deploy a new, updated version of the model.
This is what we would then refer to as the data flywheel in production.
It's the sort of gold standard that people are working towards.
And in order to do that effectively, you need this annotate-train-deploy pipeline to be as smooth as possible, with as few bottlenecks as possible.
So yeah, I guess I was foreshadowing this slide. Essentially, you've got two virtuous cycles that one wants to try to get into in any typical data flywheel, with
humans in the loop at key points, right? You're going to get your data and label it initially; that's obviously got a human in the loop.
And you'll clean up the data and deploy a model in the wild. Now, once it's deployed in the wild:
The critical step here is to do inference monitoring the smart way.
Often it will be done with a form of active learning, where you can look at certain model metrics to understand how unsure the model is about the inferences it's making.
I would go beyond looking at confidence scores;
relying on them alone is a very dangerous approach that we see many teams take for monitoring inferences.
But once you've flagged an image and said, OK, this is something we're not understanding, you probably want to have some sort of human in the loop to review it,
confirm it, and put it back into the development cycle to close the flywheel.
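As a baseline illustration (with the caveat above that confidence scores alone are not enough), here is a tiny sketch of flagging unsure inferences for human review. The margin between the top two class scores is used, which is a common, slightly more informative signal than the raw top score alone; the class names and threshold are made up for the example.

```python
# Minimal sketch of confidence-based inference monitoring, one basic form of
# active learning: route predictions where the model is unsure to a human
# review queue. Class names and the 0.2 margin threshold are illustrative.

def needs_review(class_scores, margin_threshold=0.2):
    """Flag an inference when the top two class scores are too close."""
    top, second = sorted(class_scores.values(), reverse=True)[:2]
    return (top - second) < margin_threshold

inferences = [
    {"id": 1, "scores": {"severe": 0.90, "moderate": 0.08, "low": 0.02}},  # confident
    {"id": 2, "scores": {"severe": 0.45, "moderate": 0.40, "low": 0.15}},  # ambiguous
]

review_queue = [inf["id"] for inf in inferences if needs_review(inf["scores"])]
print(review_queue)  # only the ambiguous inference is sent to a human
```

The human-confirmed labels from the review queue then flow back into the training set, which is what closes the flywheel.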
How smoothly and efficiently you can get that flywheel to spin will be the critical success factor that often determines whether or not a team will be successful in deploying machine learning.
Mike Tyson has the quote where he says, everyone has a plan until they get hit in the mouth.
It applies here in the sense that your model will get hit in the mouth.
You're going to deploy a model trained on a sample of data, which, by definition, means it will see things in production that it will not have seen in the sample it was trained on.
And it's not a question of if it will fail, it's when it will fail. And what do you do about this when it fails?
If you can answer those questions well, and adapt quickly,
you'll be successful in machine learning.
So, what now? I guess the question is, what are the next steps?
From the first poll, there were a lot of teams interested and curious, just starting their journey with Vision AI, and the AI world in general.
The critical first step: identify workflows in your business that require experts to inspect something.
So, we work in vision, and the hint that I always give teams is: do you have somebody in your company that needs to go and look at something to make an assessment?
So, in Abyss's case, as Adam showed, they look at the wear and tear that's happening in the plant,
to visually understand: is there something that needs to be repaired?
Or you want to look at a crop to understand certain things, like the growth stage or disease in that crop. Anything that requires visual information to be assessed by experts in your company means you have something that could potentially benefit from Vision AI.
You then need those SMEs. The person that's actually doing that work today is the person that you need to collaborate with in order to train a neural network, because that is the person that understands: if you're looking at this, what part of it is visually important to make the decision?
They need to apply their judgement, and that judgement needs to be translated into how you label the data, which then creates the training data you need to train the neural network. So working with those SMEs is an underrated step, and very seldom done, actually.
Pair them with
a machine learning engineer.
Then you can have this wonderful friction of: well, what if we tried to do it this way? No, that would not work, because of this reason.
And you need that back and forth between the subject matter expert, the machine learning engineer, and potentially other operational teams that are involved. Then look at how you can implement the system to augment the existing team.
Very often, you know, there's a question of what happens to the systems and teams that are there today: can you just replace them? Because if you replace them, what happens with a failure? Who takes responsibility for the failure, et cetera?
I think that's too drastic a step.
We often make the mistake of not looking at AI as an ally or a colleague, and instead looking at it as a threat or a substitute.
It won't be there to substitute. Even in the example of radiology,
a lot of people say: oh, well, doctors will be useless there; they'll be replaced by AI.
You won't need one to do radiology assessments anymore.
That's not necessarily the case.
In most use cases, the AI is augmenting the doctor: either to assess a larger sample of data and make a decision off that, or to validate what the doctor said.
I think one of the KPIs they track is: if a doctor has given an assessment and the AI agrees, the patient is significantly less likely to go for a second opinion.
But the company doing that is obviously looking at how they can augment their existing workforce with AI, and that's the mindset, or the approach, we need to have when working with this tech.
Thanks, Tristan, and then Adam, for great insights and practical examples of how AI is transforming industrial asset management. I'd like to encourage you: if you've got any questions, pop them into the Q&A and we'd be happy to answer them.
We have some time.
Tristan, a question here for you on shifting from model-centric to data-centric:
how long a process is this? Any guidance, based on your experience, on getting started?
Yeah, the length of the process, I think, depends on the team.
The number-one piece of advice we tend to give teams is: if you're the machine learning engineer, label data.
A lot of machine learning engineers don't want to do it. It's not glamorous work. It's grunt work. It's hard to do. It's tedious.
But it is critical, because it's how you build empathy for what your neural network is required to learn.
And in doing that, you tend to see edge cases and realize: OK, maybe I didn't define this as well as I should have,
or we could define this in a different way. So the first thing I'd say is: label the data.
The second thing is to invest in having a data asset that is clean and accessible.
Those are two components of a data asset; you could do an entire webinar just dedicated to that topic, but it's super critical for a team to have clean, accessible, relevant data to train neural networks on.
So those would be the two main things.
Adam, we've got a question here for you.
Can you share an example of another industry you've been able to apply your solution to, outside of oil and gas?
Certainly, there are actually two industries. One is the water infrastructure space. We've done projects in ports and marine terminals, inspecting underwater pilings, the underside of docks, the deck level of the docks, as well as any cranes up above the dock, and building models and damage-class assessments for those. So that's one industry. We've also been working in the agriculture space.
So actually, I'm involved with one right now, a rice farm that has thousands and thousands of acres under cultivation. We're working to do several flyovers with drones early on, when the plants are first sprouting, to do what they call stand counts:
how many plants are actually germinating. If there are areas of fields where plants don't seem to be coming through and germinating, they can go out and improve the watering, or increase fertilization in those spaces. Then the follow-up, as the plants get larger, is periodic flyovers for weed detection. If certain weeds are growing that can be detrimental to the crop, they need to know where they are and send people out to remove those weeds. So those are two other industries in which we're applying our computer vision and machine learning methodology.
That's great to hear. Definitely lots of applications, and even more to come.
Yeah, and one thing I'll follow up with there, just to emphasize what Tristan said: the subject matter experts in that industry are critical. We have data scientists that are very smart people, but they know nothing about rice farming, and I know nothing about marine terminals. So we work with our customers, and we've also hired on board some subject matter experts ourselves from those industries, so that we can actually build relevant, applied machine learning models that solve specific problems. These aren't academic exercises; they're economic exercises that produce value for our customers.
That makes a lot of sense.
Tristan, this question is for you. There are multiple research papers reporting high failure rates for AI projects, all the way up to the 87% mentioned here.
What are common causes for this, and how do you mitigate those causes?
Yeah, thanks, Sarah. I guess
it's the challenge, or complexity, that comes with implementing a system that needs to learn on the job.
Which it does. I think the past approach was: train a neural network, or an AI, until it gets to 100% accuracy,
and then, when it's 100%, or as good as possible, you deploy. You try to build a silver bullet, and silver bullets don't work, especially not in this world.
Whereas now there's a deepening understanding, as I mentioned earlier, that you need to deploy a system that's adaptive,
which has components in it that I think are heavily underestimated. It's not train, deploy, and forget; it's train, deploy, monitor, get feedback, improve.
And train, deploy, monitor, feedback, improve as a workflow is often not taken into account,
to teams' detriment.
That's one. The second one is, I guess, something that I alluded to earlier.
You know, a lot of machine learning engineers have an unenviable task.
This is a new industry and a new field, with new neural networks, new forms of annotation, new ways of doing things, and they're defining an approach up front by their best estimate, frankly.
And we're then looking back at that and iterating on it, potentially too slowly. But also,
sometimes you see teams that get a bit defensive about their approach, because, I guess, you get painted into a corner to some degree.
I think there needs to be a bit of comfort and empathy for the machine learning engineer: there is just limited experience out there.
And we need to be OK with the fact that these approaches will evolve over time. You're going to change your annotation strategy, and your taxonomy is probably wrong in your first attempt.
We spoke to a team,
I guess like the one I mentioned earlier, that does a lot of satellite imagery.
They say that they iterate on their taxonomy in a fundamental way at least seven times before getting close to something that looks representative.
That's how you define your label classes and your attributes, what you're actually looking at. They had to change that seven times, and that's a very experienced team
that knows what they're doing.
So, I think, yeah: just create room for experimentation at the beginning, and be understanding and empathetic about that. That would be the other one.
So, I don't see any other questions, I guess.
Any final words? Over to you, Adam.
I mean, I would just re-emphasize that, ultimately, we're about providing economic value.
So, you know, I think a lot of these machine learning and AI projects that fail don't necessarily bring in that subject matter expert, or fully understand the value proposition for a particular application well enough to prove that there's true economic value. And so it becomes, again, a science project.
Everything we do when we engage with customers is really drilling down, really investigating what their challenges are and what the economic value of solving those challenges is, before we engage in building models or solutions that address those problems.
Excellent. Tristan, any last thoughts?
Just to say, I'm really impressed with what Abyss is doing; it looks really fantastic. You know, that three-step process that they showed at the beginning:
each one of those steps could probably be a company in its own right,
and they're doing all three, and seem to have stitched it all together into one, across multiple industries. It's really, really awesome to see. Congratulations.
Thank you, Tristan and Adam.
Really appreciate your time and participation in our webinar today. And to the audience, thank you for taking the time to participate; we'd appreciate your feedback on how this went.
Thanks, and have a great day, everyone.
VP of Machine Learning Solutions,
Tristan Rouillard is the VP of Machine Learning Solutions at CloudFactory. In this role, he leads the company’s strategy and direction related to ML products and solutions offered to CloudFactory’s clients globally.
Tristan was one of the co-founders of Hasty, a data-centric vision AI platform focussed on making it easier to implement the ML flywheel in production. Hasty was recently acquired by CloudFactory in late 2022. Before founding Hasty, Tristan was the Head of the Venture Development team at WATTx, a manufacturing, industry 4.0 focussed incubator, where his team built the business models and go-to-market strategies for various early-stage ventures. He has also worked at various B2B startups in Berlin.
Tristan studied finance and accounting with an MBA from the European School of Management in Berlin, where he resides with his wife. Raised by a business-owning mother in South Africa, Tristan was coached on all things business from a young age around the dinner table. In a past life he ran a Scuba diving center in Jamaica for a couple of years.
Director of Business Development,
Adam Petruszka is the Director of Business Development at Abyss Solutions leading the North American sales team. He is responsible for sales strategy, account planning, and field sales within the region.
Adam has experience in the technology industry at companies such as IBM, Accenture, Compaq, and Hewlett-Packard. He has held roles ranging from solution architect to business development across various industries such as oil and gas, manufacturing, and transportation. During his time at HP Labs, Adam gained an appreciation for the practical application of machine learning to solve customer problems.