By BTOES Insights Official
April 26, 2021

IT Infrastructure & Cloud Strategies Live - SPEAKER SPOTLIGHT: Intelligently Leveraging Your Data: Why it is okay to "shift right" as part of your testing strategy

Courtesy of Beacon Street's Jon Szymanski, below is a transcript of his speaking session on 'Intelligently Leveraging Your Data: Why it is okay to "shift right" as part of your testing strategy' to Build a Thriving Enterprise, which took place at IT Infrastructure & Cloud Strategies Live.


Session Information:

Intelligently Leveraging Your Data: Why it is okay to "shift right" as part of your testing strategy

But how do we know we are testing the right use cases before deployment? What if we have complex business rules that preclude managing clear and easily executable test cases?

As testers, we don't usually have the time and the means to test everything, and sometimes we just don't know what we don't know.

With the rise of big data and our executives' obsessive attention to using analytics for decision making, we can also leverage data to assess system behaviors without having to replicate the end-to-end business scenarios.

In this talk you will learn about

  • specific tactics for mitigating risks
  • how to supplement your testing strategy by leveraging your organization's own data
  • what it means to provide confidence in software releases
  • specifics on how to apply this approach with detailed examples

Session Transcript:

Intelligently leveraging your data: why is it OK to shift right as part of your testing strategy? And we have Jon Szymanski with us, coming directly from Maryland today. Jon, great to have you with us. Jon is the Vice President of Software Engineering and QA at Beacon Street Services. Jon has over 25 years of experience in software engineering, managing and leading both development and testing teams.

He has worked for large, billion-dollar companies, as well as consulted with start-ups as they build out their technology stacks from scratch. Regardless of his title or role, his passion has always been focused on quality.

He recently finished the process of building an all-new QA team with a heavy focus on automated testing and CI/CD.

He's currently working on software architecture initiatives and new feature development. Jon, pleasure to have you with us. Thank you for sharing your expertise with our global audience today.

Awesome, thank you, Josie, and I'm glad to be here. Welcome, everybody. Today, we're gonna talk about, really, data. And what we're going to talk about is a journey that I've had over the last couple of years.

And hopefully you get a lot out of this. I mean, the whole point is to kinda give you some practical experiences and share those with you. So.

Here we go.

So a little bit about me: 25-plus years in the industry.

Currently, I'm part of Beacon Street Services, and that's a core services company that is in the business of supporting affiliates.

So we're a services group that supports other businesses, and these businesses are generating, you know, more than half a billion, probably close to a billion dollars this year in revenue. So pretty big companies. I do like to blog on the side, though; honestly, I haven't done that in quite a while, so don't hold me to that.

So, what I really want to cover today is to give you some background on our systems, some things that we went through on our journey in the last year and a half.

I want to talk about the challenges and the risks that we ran into.

And I really want to focus on, kind of, one strategy that was very helpful to us, and something that has been really, kind of, you know, I'm not gonna say life-changing, but it's really made a difference in how we operate our business. And then I do want to walk through some real use cases, because I think the benefit of actually seeing how we apply the strategy in our business is very important.

So, just to start with some background, I'll spend a few slides going through this. So, we, we built our system from scratch back in 2014 to 2016, and at that time, we were really just supporting one company, one brand. We then started to acquire other companies and affiliates and brands.

And so, from 2016 to 2018, we really moved our system into a multi-tenant structure, OK? At that point, we also added a DevOps team that was really focused on that, which was very helpful.

And then in the mid 2019 timeframe, we became a true services company, because we were supporting all of these different affiliates. We weren't just supporting one anymore.

And at that point, we added a QA team, but five years later, so we were a little bit behind the curve on that. And so what I'm going to talk about are some of the strategies and some of the things that we did really focused around quality and quality assurance.

So just really quickly, our technical architecture: you know, pretty standard sort of stuff, heavy into Java and Node. Most of our systems are cloud-based, so we're all into AWS, or Salesforce, that's our CRM. We use lots of different databases: RDS, Dynamo, Firebase, Snowflake, and then we have some reporting tools. But pretty, pretty standard structure here.

So to give you a flavor of you know, what we do and how we do it, just a little bit about what Beacon Street Services does.

So as a support Services company, we support businesses that deliver subscription based publications of financial information or software.

So, what does that mean? So, we're supporting businesses that are subscription-based, so there's a lot of e-mail; there's e-mail for marketing.

We have a CRM that we have to maintain and support.

Certainly, we have billing systems and operations, so we support that.

And then the subscription fulfillments, so again, a lot of e-mails, text messaging, and web site support.

So those are kind of the technical areas that we have to support. In 2019, I was charged with building a QA team from scratch, and in doing so, we had responsibility for testing. We had to build a strategy around quality.

We had to document system functionality and test cases. We had to establish regression and smoke tests. And really, my focus coming into this as an engineer is, let's see what we can do from an automated perspective. That was really key to us. And then, obviously, we had to assess quality. That was very important: how do we assess and report on the quality of our systems and our processes?

So, we built this automation framework, and that was great, pretty standard stuff. Again, we're using BDD and Selenium and POM and TestNG and Cucumber and all of that. So this is pretty standard.

And so, when we looked at the challenges of our system, this, the thing was that Salesforce is pretty, pretty large and complex. It gives you a lot of flexibility.

And so the way we structured Salesforce at the time was very batch heavy. So we had over 50 batch jobs that are running out of Salesforce. We had a mix of monolithic and microservices.

We had 20 plus Java jobs that we would run.

We had varied interconnect technologies, and we were really moving, or trying to move, more towards the microservices side. So we have a lot of Lambdas, Step Functions, and data streams. These are AWS things that are supporting serverless.

So pretty, pretty complicated system that we had, and a very large mix of batch jobs mixed in with real-time processing.

So batch heavy, job heavy.

Serverless code is easy to maintain, but it can be harder to test. You really have to have a good strategy about how to test it.

The other challenge that we ran into, and I'm sure others have run into this as well, is our environments.

So generally, you have, you know, development environments, test environments, staging environments, production environments, and in our case, we really only had a test and a production environment, which wasn't great. That was pretty difficult to deal with.

And again, the complexity of our systems, between the legacy that's moving into microservices and just all the different third-party cloud apps, multiple databases, we had things kind of all over the place in terms of the system complexity.

From an environment perspective, what we had was, you know, dev and QA shared the test environment.

And we were stepping on each other's toes. That wasn't great.

We still are that way today. And in our test and our production environments, data is not completely mirrored.

So a lot of times, the things we were testing in the test environment weren't really in the same configuration as they were in production, and that caused us problems in our assessment of quality and our ability to test.

And again, with the complexity we had, some features were not easy to test or maintain in our system.

So as the head of QA and engineering, I had to say, OK, well, what does our test strategy look like? We wanted to automate. We wanted to build out CI and CD. Well, really, we only focused on CI; we didn't do a lot of CD. We wanted to build confidence in our releases. We wanted to reduce risks. We wanted to harden our processes, improve our test data management.

I think yesterday, if you attended those sessions, you heard a little bit about managing test data, and how important that test data, or really data itself, is to your processes. And we also wanted to drive quality ownership across the technology team. It wasn't just the QA team's job to drive quality; it was everybody's job to drive that quality ownership.

That was important to us. But really, what we heard from our affiliates, because again, as a services company that is supporting the businesses, their main concern was they just wanted confidence in their releases. They just wanted to know things were going to work.

So looking at the QA considerations, we said, you know, how can we get started quickly with QA, how can we catch up with development, because we were five years behind the curve there. How can we make an immediate impact and show value? And that was important to us. We wanted to show value as QA to the organization, because, again, we were building this from scratch.

So we looked at our plan and we said, OK, well, short-term and intermediate tactics, what can we do? So, so, the first thing we said is, we've built this automation framework. We have this infrastructure. Let's increase the amount of testing we can do, the amount of coverage we can do, and the amount of automated coverage we can do. So, that was number one, let's just increase our coverage. Alright, let's give people greater confidence that we're testing as many things as we can test.

And so, we looked at that, and we figured we could probably get about 70% coverage and automation of all of our test cases, after about one year. That was our goal.

Let's get 70% coverage and automation. You wouldn't ever go to 100%, because there are just some things that you wouldn't automate. And there are certain things you wouldn't cover in testing, and that's OK, right?

You're never gonna get to 100%, and there's good reasons for that.

The other thing we said is, maybe let's focus on production. You know, our environments aren't quite in sync; you know, maybe there are things we can do in production that we can't do in test.

Maybe there are some jobs that run in production that don't run in test because of, you know, just the situation, maybe the data.

So, maybe, let's spend some time looking at production, maybe testing and production, maybe evaluating production.

And, you know, maybe, maybe that will help us get a little further along, OK.

And when we looked at that, we said, well, you know, maybe we could spend about two weeks looking at that, figure out how to set up, maybe, some testing in production, and we'll get some bang for our buck doing that.

So that was another, another way of approaching the problem.

So, just to give you a little bit of a refresher here: when I talk about data and shifting right, some people may not understand what I'm really talking about.

Let me kind of explain to you a little bit about the quality continuum and the quality mindset, and what it means in terms of upstream and downstream, and these are the concepts that we're here to talk about today.

So when you look at the QA continuum, you've got quality engineering on the left, and you've got quality control on the right, OK? Quality engineering on the left is what we consider upstream activities, OK? So what are you looking at in upstream activities?

You're looking at things like automation. You're looking at things that will improve your processes. You're looking at CI/CD, right?

So continuous integration, continuous deployment. You're looking at roles of, let's say, SDETs, your Software Development Engineers in Test. So these are QA people, but they're QA people that are more on the development side, that are building things. And then you're looking at test-driven development.

You're looking at infrastructure, and you're looking at, you know, strengthening your processes. And then, as you move across the software development life cycle, you get to the point where you release your code. And then, now, you actually have running code in your production systems, and these are the downstream components: the quality control. This is where you find issues, this is where you find problems in production. You have to think about, you know, bug detection, recidivism, things like that. Finding issues at the end of the continuum is much more costly than finding them in the beginning, where you can fix them quickly, OK?

So, this is the continuum: quality engineering, quality control.

And just a quick refresher, we just talked about, quality assurance is really focused on prevention.

Quality control is really focused on detection, right? And you want to spend as much time in the prevention side.

This is where we get Agile and these process-driven sorts of methodologies; it's always good to spend more time there.

So if you go back and you look at the previous bullet, we talked about how our affiliates wanted greater confidence in releases. This is what they asked for, and really, what it came down to, they kept saying to us: listen, we just want to trust the data. We just want to know things are working, and they're working correctly.

And so the aha moment for us was really, for me, you know, and a few people that, that I was working with at the time, was, you know, given our situation, we can't test everything. We don't have parity in our systems. We don't know what we don't know.

And we just had this aha moment.

We said, hey, we can shift some of our focus downstream, more towards the quality control side, and really focus on our data. The issue was the data: how can we focus on and assess our data in production to help us find things that we were missing in our testing, OK?

And that was, that was a great moment. So, going back to the continuum, when you look at it, the upstream activities we talked about, that's the shift left; the downstream activities are what we consider shifting right.

OK, again, it's more costly to shift right. It's not something you want to do as your primary strategy, but it is something to consider.

So the caveat is shifting left should always be your priority.

That should always be important, and the first thing you look at, but we can still take a holistic approach, and it is not an either-or proposition, it really isn't, right? You can do both. It's just a question of what that mixture looks like.

So, just to kinda give you a visual: let's say your goal is to be healthy, and you must have multiple approaches to that, right? It's not just one approach. So here's the visual for you, right? You want to be healthy. You want to be fit. You've got exercise, you've got the physical, you've got food and what you bring into your body, and then you've got your mental components. You know, you need your rest and your meditation and so forth. So all of these are important, right? It's not an either-or. And so I just want to put this visual into your head here.

So, when we went back and looked at our continuum, and we said, hey, you know, how do we want to approach this in terms of percentages, the way that I broke it down with my team was, I said, well, let's continue to spend, you know, a good strong 20% of our time on the pure quality engineering components, right?

So the design, the architecture, the templates, the processes, the CI/CD. Then let's spend about 70% of our time actually testing, because we were, at this point, building the QA team and we needed to test, so we did that. And then I said, let's spend about 10% on what we call inspection, right? And this is where we get into that shift right, and we knew we wanted to do some of that.

And this is also what I consider, you know, beyond test case execution, right? We're not running tests in production; we're doing something a little different. We're inspecting the data, we're assessing the data, and we're hoping to prove that there's confidence and trust in the data.

OK, so, the things we would do, we would ask questions like: you know, if customer service refunds a customer for credit, did we update the customer's total credit balance? Is that correct? Can they see it? Do they know what's in their account?

If we're running jobs, let's say a customer paid after a credit card failure. We took away their access when the payment failed, and when their credit card processed, did we give access back to the website? These are the kinds of questions we would ask. Are we doing these things correctly?
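
(Editor's note: to make this kind of end-state check concrete, a data quality query along these lines might look like the sketch below. The table and column names are purely illustrative, not Beacon Street's actual schema; the idea is simply that the query returns rows only when a stored balance disagrees with the underlying transactions.)

```sql
-- Hypothetical sketch: a stored credit balance should match the sum of credit transactions.
-- Rows come back only when the two disagree, which is the condition worth investigating.
SELECT c.customer_id,
       c.total_credit_balance,
       COALESCE(SUM(t.credit_amount), 0) AS computed_balance
FROM   customers c
LEFT JOIN credit_transactions t
       ON t.customer_id = c.customer_id
GROUP BY c.customer_id, c.total_credit_balance
HAVING c.total_credit_balance <> COALESCE(SUM(t.credit_amount), 0);
```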

And so shifting right became a very, very important part of our solution, and we call that DQR.

So DQR stands for Data Quality Reporting.

OK, that's what we called it. And we built a strategy specifically around Data Quality Reporting.

So, what we would do is, we would execute reports and queries against our production data warehouse, and there we would assess the end results of business processes. That's what we were doing.

We were saying: we know the end result of a process needs to be this; let's validate it, and let's validate it in production. That was very important to us. So, we took input from subject matter experts, from our engineering team, from support tickets, from our customer service team, and we defined a set of rules. And based on those business rules, we built, initially, 30 reports over one month. And when we ran those reports, we actually scheduled them as alerts.

We identified 20 plus issues not found in testing alone, OK.

So, just looking at the business rules, in the end state of the data, we found 20 plus issues that we weren't finding in our test cases, OK?

Now, a caveat to this is that this approach does depend on the state of your data. We have a pretty solid data warehouse. We have a master data management approach, so we're pretty solid in the data that we have. And that's important to make this work.

So, I'm gonna give you another visual, OK. So car example. So, when we think about data quality reporting, OK, this is the visual I want to put into your head.

When you build a car, you have things like your tires and your suspension, right? You've got your steering mechanisms. You've got your dashboard, you've got your electronics, you've got the transmission, you've got all these components of your car. And these are being built, and they are being tested, individually. So this would be kind of your unit tests, maybe some integration tests and some functional tests, but these things are being built along the way.

Then you get to a point where you actually build the car, right? So you've got your new car; it's been tested all the way, your car's built, and you have your car. And I do have a little caveat here. My apologies, I did draw this picture, and it's not the greatest thing, so I'm sorry for that.

But then two years later, you bring it to the emissions station. You get a little piece of paper. I actually have one now, for getting my car inspected next month.

So you have the emissions inspection every couple of years.

You bring it to the emissions station, you plug in your car, and it actually runs through a set of tests, and you get this feedback that says pass or fail.

So, to me, that's kinda what DQR is, right?

We've already built the car; there's nothing we're really going to do about that right now. We're in production.

We're running these processes, these systems, these jobs, and what we're really doing is looking at the data and saying, are we running OK? Did we pass, or did we fail? That's a lot of what we do.

So, a simple view of our data. We get a lot of data coming into our database. All of our data goes into Snowflake. It's all raw data. That's our master data management approach. Yours could be different, but again, that depends on where your data is.

And then, our kind of tool of choice is a tool called Looker. It's a BI software platform.

Explore, analyze, report. It's ad hoc reporting, advanced reporting, APIs galore. It's a lot like Business Objects, or Domo, or Qlik, or Tableau, or any of those tools. It really doesn't matter what tool you use.

But our tool is Looker. I really, really like it. I really like Snowflake.

It's made my life so much easier, and it's been working well for us.

The other thing about Looker, too, is we use Looker for everything. It doesn't matter: we use it for end-user reporting. We use it for our internal reporting. We run reports against Jira for internal reporting and analysis; we use Looker for that. We use Looker for our test case results and processing, and our metrics there. So, we use it for everything; it's really a great tool.

And just quickly, you know, again, it doesn't matter what the tool is, but, you know, Looker is pretty solid.

It allows data sources, filters, visualizations, data results. It's very, very easy to use.

I spent a lot of time in a prior job using Business Objects. That was fine, it worked well; universes were a little tricky at the time. But, you know, Looker has really worked well for us.

So, the next two slides, I just want to kind of walk through what it takes to set up the data quality reporting strategy, OK.

What does it actually look like and what do you actually do? OK, so step number one: identify the areas of concern and your priorities, OK?

For us, being a subscription-based business, or supporting those businesses, what we do is we really focus on the payment processing, right, collecting payments, and we also spend a lot of time on credits and refunds, kind of the opposite of that, right? Because a lot of people will purchase, and there's a 60-day or 30-day money-back guarantee. Sometimes it's cash, sometimes it's credit, but there's a lot of, you know, a lot of refunds coming back and forth.

We really wanted to focus our priorities around payment processing and refunds. Those were big for us, OK?

So identify your areas of concern. Second, define your data rules and your queries, OK? What are the most important things around your areas of concern?

Define what those data rules, those business rules, are, and come up with some basic queries of things that you want to address, that you want to know about in your production environment. So that's number two.

Number three, document them as test cases. Treat these as, you know, your QA team's test cases; document what you're doing and how you're doing it.

Because you may want to go back and adjust those later. And that's, that's pretty important.

Step two.

Once you've done those first three things, the next three things you want to do are: build your reports and queries. So go into your tool, whether it's Looker, or whatever tool you have, Business Objects, Qlik, whatever.

Build your reports, Test your reports.

And then, schedule your reports. So, again, we treat these reports as, kind of, like, checks on the system, and we actually set them up as alerts.

So if something happens, we want to know that something is either working or not working, OK?

So if you look here, this is what it looks like in our TestRail folder. TestRail is our test case management system. We basically built test cases, we identify them by titles, and then we actually have a note on whether they're scheduled as an alert or not. Sometimes we just run them daily, just as informational. So we treat them as test cases.

And this is what it looks like in the Looker folder. So pretty much the same thing. We built them in Looker.

And then we document them in our test case system.

So, we've got about 10 minutes. What I want to do now is kinda go through our use cases and why this has been so impactful to us, why it's been so important to us.

And then actually show you specific examples of things that we did with data quality reporting, OK? So, our first example, we're going to talk about payment failures, where we should remove the user's access to the website: they've missed a payment for their subscription, so they don't get access to the website anymore, OK? When you look at how we structure our systems, how we actually provide invoicing, payment, access, all that, it's really five jobs, OK?

We generate an invoice.

From there, we go into a post-invoice job, then we have a payment job. Then there might be a payment error, in this case.

And then we have a status job, and then we have the ability to give or remove access to the website. So it's really five jobs that are running in sequence here.

The thing that's hard for us is when you look at the payment job here, there's so much that can go into the payment processing, OK.

Things we have to look at include, you know: is the invoice posted? The payment type: is it a credit card or a check? What's the bill type? Is this a new sale? Is this a maintenance fee or a renewal? Is it a recurring charge, or are they paying on installment? Are they paying in full?

Is there credit with the customer?

Do they have a payment method or a card on file?

And, you know, did we update the balances correctly?

Did we send an e-mail or not? So, these are things that are important to assess, and it's really hard to test these if we have to go through and test every single one of these use cases individually. And we try to cover most of these.

It's really hard to make sure we cover every single scenario, OK?

If a payment fails, as you see here, we go into a status job.

And so the things we need to look at are, you know, did we put the subscription into expired state or suspended State?

Did we check that there was an installment, you know.

Did we move them into Bill suspend? You know, there's a lot of things we can do here. So again, we need to test all of these things, but it's very difficult to do every single scenario and then decide whether we remove their fulfillment or not.

So we took this example and we said, OK, let's build a data quality report around it. So we did that.

So the query that we built, as we said, we want to know, in this process, were there any payment errors?

And were those payment errors on invoices in the last 30 days?

And then, we want to look at the subscription and say, is the subscription status still equal to active, and is the customer still receiving access to the website? OK, so this would be a case where we want to know, based on a payment error, that we truly took away access for that customer, because that's really the intended business rule, right? You don't make the payment, you don't get access.

So, we're looking for an exception to that business rule. So, this would be the query for the data quality report. We built a data quality report.

We actually built two reports, one for installment-based payments, and one for non-installment-based payments. And this is kinda what it looks like. I mean, it's really this simple. We built a query and we said, you know: what's the balance, what's the due date, what's the status, what's the type?

And, you know, is it a condition where they should have had their access removed, but they didn't get it removed? OK, so this is the basic query.
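
(Editor's note: the query described here might be sketched roughly as follows. The table and column names are illustrative assumptions, not the actual schema, but the shape matches what is described: payment errors on recent invoices where the subscription is still active and still has website access.)

```sql
-- Hypothetical sketch of the "payment error but access not removed" check.
-- The report alerts only when rows come back, i.e. only on the negative case.
SELECT s.subscription_id,
       i.invoice_id,
       i.balance,
       i.due_date,
       s.status
FROM   invoices i
JOIN   subscriptions s  ON s.subscription_id = i.subscription_id
JOIN   payment_errors e ON e.invoice_id = i.invoice_id
WHERE  i.invoice_date >= DATEADD(day, -30, CURRENT_DATE)
  AND  s.status = 'Active'            -- should have been suspended or expired
  AND  s.has_website_access = TRUE;   -- access should have been removed
```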

We built it into Looker.

We set it up to be a daily run.

And we basically are looking to see if any results come back, If any results come back, then we have an issue.

We have a data issue, OK? And so you can see here, we set it up, scheduled to run daily, and it only gives us a report if there are results. We just want to know if there are results on the negative use case here.

And here's an example where it happens. So we had two subscriptions come back that met this criteria.

We'll call that a hit. And so we get this report in the morning, and we go and we investigate the subscriptions, and then we're trying to figure out, did we miss something in testing?

Um, is it a process issue? Is it a bug? We don't know.

In a lot of cases, in some cases, it could be a timing issue. Maybe the job ran too late, or too long, or it got stuck, and therefore we ran the report and we didn't get the intended result we wanted. But, you know, maybe it's a timing issue; we need to go investigate. We need to go and look and see why these two subscriptions did not go into, basically, suspend mode.

Um, so, yeah, so here's an example. We set it up to run. It's on a schedule.

We get two records back that we need to investigate.

So, let's go and look at Use Case two. So Use Case two is really kind of the opposite of that.

So let's say we took away Access from a customer, then they paid, now we should restore their access to the website, OK, so this is the inverse of what we just went through.

So in this case, we're looking for, again, payment success in the last 30 days on an invoice, where the subscription is still in bill-suspend, but it really should be active, because we should have given access back to the customer.

It's the same thing, right? We built two reports, one on installments, one on non installments.

We then set up the queries, very similar. It's just kinda the inverse.
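
(Editor's note: the inverse check might be sketched like this, reusing the same illustrative names as the sketch above: a successful payment on a recent invoice where the subscription is still in bill-suspend instead of active.)

```sql
-- Hypothetical sketch of the inverse case: payment succeeded but access was never restored.
SELECT s.subscription_id,
       i.invoice_id,
       p.payment_date,
       s.status
FROM   invoices i
JOIN   subscriptions s ON s.subscription_id = i.subscription_id
JOIN   payments p      ON p.invoice_id = i.invoice_id
WHERE  p.payment_date >= DATEADD(day, -30, CURRENT_DATE)
  AND  p.status = 'Success'
  AND  s.status = 'Bill Suspend';   -- should have flipped back to Active with access restored
```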

And in this case, we run the report. We wouldn't get alerted, because there are no results on this one. But if I run it manually, I would see there are no results, and this means everything's working correctly. I don't have to worry. There's no alert, there's nothing to investigate.

The process is working smoothly, OK?

Use Case three.

So this was a very important one to us, as well. So this is a sequence diagram, and this really kind of goes through a process where we have order forms where people can subscribe.

We send this information to a third-party application called Zuora, which is our billing system, which then sends information to Salesforce, and then we have a job that basically syncs data back to both systems, and then we can do reporting from there.

Now, what we found is that in many cases, there were failure points within the syncing process, OK? So this is really about syncing data back and forth between these systems.

And if you look at these asterisks here, these are processes we have no visibility into. These are third party cloud apps. We don't control these. But these are things that are important to the process.

And typically, when we have failures, it either fails at this point, or this point here, point number two.

And so, we wanted to build a data quality report around those kinds of difficult areas. So, what we decided to do was put an alert right after that second process.

So, in this case, what we wanted to know was: are there current subscriptions that, within the last four hours, did not sync correctly? Were we missing a brand ID or an invoice transaction type?

These are things that sync back into the Salesforce system, and if those values are missing, we can't do accurate reporting, OK? So this is one way we trigger an alert on a failed sync process.

OK, so we build the report. We build it in Looker. We build it and document it in TestRail. And the thing about Looker that's really cool is it can be as simple as just drop-downs, you know, build your queries with dates, times, booleans, things like that. But you can also do custom SQL. So, in this case, we actually built just a quick little filter with custom SQL. And we're looking for these values that are missing, OK, so these are null values, and that means the sync process failed.
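
(Editor's note: a custom SQL filter of the kind described here, looking for records that came out of the sync with required values missing, might look something like the sketch below; again, the table and column names are illustrative assumptions.)

```sql
-- Hypothetical sketch of the failed-sync check: recent subscriptions missing values
-- that should have synced back from the billing system into Salesforce.
SELECT s.subscription_id,
       s.created_at,
       s.brand_id,
       s.invoice_transaction_type
FROM   subscriptions s
WHERE  s.created_at >= DATEADD(hour, -4, CURRENT_TIMESTAMP)
  AND (s.brand_id IS NULL
       OR s.invoice_transaction_type IS NULL);   -- nulls here mean the sync failed
```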

And again, we run the report, and we did this one manually. There are no results. It means the sync is working correctly, so we're not worried about this. But if there were results coming back, then, yes, we would see a list of data.

We would get an alert, and we would go investigate.

And there are a lot of things that data quality reporting can do. We use it for things like: are we missing data values, are we missing fields? Anytime we expect the data to be full and complete in production, and there's missing data, we get an alert on that simple query. We can do it for potential fraud; you know, we have queries for that. We have it for failed jobs and data streams and processes, right? So we just set up these queries and say, we expect these things to exist in production, and when they're not there, we get an alert. And then we go investigate; we try to figure out what's going on. And, again, we only get alerts, the way we set them up,

if there's a problem, right? If things are running properly, we don't get an e-mail, we don't get an alert. We just know things are working correctly.
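
(Editor's note: the "failed jobs" style of check mentioned here could follow the same pattern; below is a sketch against a hypothetical job-run log, alerting when a scheduled job has no successful run in its expected window. The table names are assumptions for illustration only.)

```sql
-- Hypothetical sketch: scheduled jobs with no successful run in the last 24 hours.
SELECT j.job_name,
       MAX(r.completed_at) AS last_success
FROM   scheduled_jobs j
LEFT JOIN job_runs r
       ON  r.job_name = j.job_name
       AND r.status   = 'Success'
GROUP BY j.job_name
HAVING MAX(r.completed_at) IS NULL
    OR MAX(r.completed_at) < DATEADD(hour, -24, CURRENT_TIMESTAMP);
```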

So, today, as of April 21st, we have 76 data quality reports that we built. This is probably over the course of about a year.

52 of those data quality reports are set up as alerts.

So, we will get e-mails if there is a use case that fails. 33 of the 52 are actually tied to our alerting system, which is VictorOps, which will actually page development and say, hey, you know, go take a look at this. One thing we found, which has been very cool, is we build these data quality reports because we have concerns, and if we do get alerts, we investigate; we'll go take a look at them.

Generally, once we've fixed the root cause of those issues, very rarely do we see the alerts again.

So, although we have 76 data quality reports running, 52 as alerts, I don't really get alerts that often. Once we fix the root cause, things pretty much work pretty, pretty smoothly.

So, I'm not worried about, you know, lingering effects. That's the general case. And it just kinda gives us a sense of comfort, and it really allows us to trust the data, right? To know that our things are working correctly.

So wrapping up here in the next two minutes.

Hit the high points here, right? So we talked about shifting right. Shifting right is the quality control side of the continuum. It is more costly to find issues in production.

We know that, but I want to be very clear that quality control is not a bad word, right?

People give it a bad rap, quality control is important.

I think it's definitely part of your strategy and your solution. It really comes down to how much time do you want to put into that, right? Again, for us, I'm gonna say it's like 10%. I feel that's very important to us. We still spend most of our effort on the quality engineering side, which is the process side.

Quality control is not a bad word.

And the reason I say that is executives will focus on the data, right? It's all about data now.

We should also focus on the data, whether we're in engineering, QA, project managers, whoever it may be. We all need to focus on the data.

When we can't control areas in the early stages, like, for our example, we have a test environment that doesn't mirror production exactly, and we have concerns about that. We can't control that right now; we have an issue with our environments. But we can shift right.

We can shift downstream, and we can have a strategy to mitigate those risks. Very important.

And at least a little; again, we know there's a cost here, but we can do this a little.

Um, Data Quality Reporting, you know, bottom line, it provides our customers with the ability to say, we have confidence in our data, it's very important to us, and it's a quick win.

If you remember the graph, it was going to take us a year to build our automation coverage on our test cases. We built our first reports and had them out the door in two weeks, and we had 30 reports. And we were already making a huge impact; that was a quick win.

Leverage the tools you have. It doesn't matter what reporting tool you have; leverage what you have. Focus some time on data quality, not just on test execution; we talked about that.

Think beyond just running test cases, assessing data is part of everyone's job.

And I think you'll find that Data Quality Reporting, you know, it's been a big win for us, and hopefully, you know, you can take some of this and apply it to your environments, and hopefully you can get some quick wins there as well. So, I want to thank you for your time.

We're right on schedule, and I think we're gonna get ready for the Q&A. Here is my contact information; very happy to connect with anybody on LinkedIn.

I use it quite extensively and at this point, I think we're ready for some, for some questions.

Jon, thank you very much for an outstanding presentation.

Really highlighting how quickly you can become a world-class organization if you focus from the outset on doing the right things and getting the right results. Jon, can I ask you to stop sharing that presentation? Fantastic, thank you very much for that. I've already got some good questions coming in. Please don't hesitate to continue to send them in; we will ask them as we go forward.

Now, Jon, one question that seems to come up quite a bit is: you've obviously identified that you were quite behind the curve when this whole process initially started, and you were able to kind of move that bell curve forward very quickly, which worked well.

When you were kind of having the discussions with senior management and looking at what they were achieving, what led them to the position that they needed to implement this for their organization?

What led them to want to implement, like, the data quality strategy, is that it? Yeah, what led them to understand that it was something that was important for their business, and could get them to where they wanted to be ultimately.

Yeah.

You know, so, I think the way I'd phrase this question is like this.

So, it is something that internally, as an engineering team, we felt we needed to do, because we were so far behind, right? We knew we wanted to build automated test cases and build that automation into our CI/CD pipeline, which we did, but it was going to take us a year. Like, our plan was to do this over the course of a year.

We knew upper management was going to come back and say, you know, we need quicker wins. We need to know that there's confidence in our systems running. So, data quality was really the way to get us a quick win. And a continuous one, right? We still do it today. We keep building, you know, probably one or two data quality reports every month.

So, you know, what was important for us to do was to share and kind of promote this internally.

So, as an engineering team, we went to our affiliates, and we said: listen, this is a process that we're implementing. We think it's very important, we think we're going to get a very, very big win out of this, and a very quick return on investment. And, you know, we promoted it, we told upper management. I run a monthly QA results session for all the engineering team and all of upper management, OK? And what we do is, we go over the results of testing and we go over the results of our DQR reporting.

And we discuss our successes and our pain points, and we end with a lot of feedback from anyone who wants to, you know, have input. So, the thing is, we don't sit around and wait for someone to ask us how things are going. We're going to put it right in front of you. We do these 30-minute sessions, and once a month, we go over all the data. We go over the results. There's a lot of back and forth.

And so, you know, we're promoting ourselves. We're pushing the data in front of them and saying, here, look at what we did, look how important this is, and look how well things are running. And we also show them the cases where we've implemented Data Quality Reporting around issues that we were having.

And then, soon after, we see that those alerts go away, right? So we've actually fixed core problems in our system. So a lot of it is we're promoting ourselves. So when we do these QA results sessions with everyone, we open it to anyone who wants to attend, OK?

What we really say is, listen.

At the beginning of this process, and data quality reporting, we said: here's what we will do. OK, there are basically four things we did. We said: here's what we're going to do.

This is why we're going to do it.

This is how we're going to do it.

And this is when we're going to do it. And so, what we did very early on, in that first month when we built these reports, as we said, we were laying the foundation with our stakeholders.

And we're going to basically tell you these things, and we're going to measure our results against what we just told you we're gonna do. So, we built a strategy, we promoted the strategy, and we continue to promote it in these monthly sessions.

And we really engage everyone across the teams, including upper management, with how important this is, why it's a great return on investment.

Now, as you alluded to during your presentation, that, that right shift strategy isn't always appreciated.

At a senior level, because it's a, it's a specific, strategic choice.

It costs; there's a definite percentage to it that you wouldn't get by maybe focusing more on the left, or fixing a little bit around the center. How did you initially get buy-in at a strategic level to say: this is the right thing for us to do, and that's the right percentage for us to focus on?

Yeah, I think, you know, let's talk a little bit about the percentage, right? So most people are going to tell you you want to always shift left, right?

You want to be very focused on upstream processes, right? And I don't discount that. I think that is, that is hugely important.

So, as I said, you know, my QA team and myself, we spend, certainly, I would say, you know, 20% of our time on those activities. QA stays abreast of everything that's going on, from the grooming sessions to the intake of the backlog.

Once a Jira ticket, you know, in our case we use Jira, once that's picked up by engineering, we do a kind of an architecture design review session with QA. We actually start working on the test cases before we even get the code, right?

So we're already assigning QA to tasks upfront, to stay very heavily involved with, you know, the understanding of the design, the architecture. So, that is critical. Like, if we don't do that, and then we get code into the test environment that we don't understand, it's just a waste of time, right? There's too much back and forth.

So our mix, we said, is 20% on those front activities, those upstream activities, and 70% testing, and then 10% seemed to be the sweet spot for us in terms of what we could put into production-level inspection. That would give us the greatest ROI without spending too much time on the back end, because, again, it's more costly to do so. So that was the right mix for us. I think it depends on your situation and your company.

If you have really solid environments where everything is automated and mirrors each other from test to staging to production, and you have better confidence in that, maybe you spend 5%. Or maybe, if you're in worse shape than we are, maybe you spend 20% on those back-end activities. There is a cost to it, and I don't want to belittle that; there's a cost to doing it at the very end.

But I think you can find that sweet spot, and show that there's actually tremendous ROI if you do it correctly.

And at a sort of a board level, did you have to find a specific sponsor who was supporting what you did?

Or did you get a general cohesive view, in terms of: yes, it's right, everybody's on board? Or was it one or two individuals that were really pushing it?

Yeah, I think, for us, it was, it was a group thing, right?

So, from a technology team perspective, we had complete buy-in across our entire technical organization, OK? So, there was no issue there. What really made this happen for us was, at the same time that we were building out this data quality approach, we were restructuring our entire data infrastructure. So, we had a new VP come in who's managing the data aspects of our business, so, the BI and the data.

And, they implemented the master data management approach, right?

And that was transformational for us, because once that MDM was implemented in our organization, that made the ability for us to do DQR almost, like, you know, it was like a no-brainer, right?

It was so easy for us to access the production data in a raw state to do whatever we wanted with that data. So, we had a new BI VP come in, who was transformational, who worked really well with engineering.

We were all on the same page, all the way up to our CTO, and it was really easy to promote that then to the affiliates and say: we have a plan. We know what we're doing. It's a great plan. We're going to share with you everything we're going to do, from beginning to end. Here's our return on investment. So, I would say there was no pushback, right? Everyone was kind of on board to move forward with that.

Fantastic.

And, Jon, we could probably talk about this for a lot longer. Unfortunately, we've reached the end of our allotted period for this session. Jon, if anybody wants to catch up with you, I presume linking up on social media and following up after that would be great, and you would welcome that outreach. Would that be correct?

Absolutely, yeah, LinkedIn is the best way to go.

Fantastic. So, Jon, thank you so much for a really exciting and insightful presentation. We look forward to seeing you again in the series. Jon, thank you very much for joining us today.

Awesome. Thank you everybody, and have a wonderful day.

Thank you, Joan.

So, ladies and gentlemen, Jon Szymanski, giving a truly wonderful presentation on how you go from being at the back of the pack to the front of the pack, by focusing on the things which are really important for the business, and taking the risk. As Jon pointed out, sometimes going left is not always the right way to go. Now, up next, I'm really pleased to announce a world-class speaker joining us next ...

Magnus Penker, the CEO and founder of Innovation 360, who will be looking at how to win the business game. In this session, you'll get insights on how to get back in the game through playing and building. Now, listen to this award-winning author and thought leader: Magnus works with some of the most well-established businesses on the planet, and a number of the leaders that you see regularly around the world. He will explain how to remove blockers, activate, amplify, and ensure you're aligning for constantly riding and jumping those curves. So join us back here at the top of the hour to get some great insights from Magnus, and we look forward to seeing you there.

OK, everybody, look forward to seeing you then. Take care. Bye, bye!

pillar%20page%20line%201

About the Author

Jon Szymanski,
VP of Software Engineering & QA,
Beacon Street.

Jon has over 25 years of experience in software engineering, managing and leading both development and testing teams. He has worked for large, billion-dollar companies, as well as consulted with start-ups as they build out their technology stacks from scratch.

Regardless of his title or role, his passion has always been focused on quality. He recently built a new QA team, with a heavy focus on automated testing and CI/CD, and is now back leading several large-scale engineering projects. When not at work, Jon spends time with his wife, daughter, and dog, and watches sports.
