Courtesy of Target Corp's Krishna Seshagiri Rao, below is a transcript of his speaking session, 'Enabling organizations to trust their data', delivered at the iBPM Live Virtual Conference, Build a Thriving Enterprise.
Session Information:
Enabling organizations to trust their data
Most organizations rely on data to receive insights and gain a competitive edge. Others may need to periodically store data to adhere to compliance norms.
Whichever the case, for data to be useful, organizations need to have a clear strategy and adopt an efficient approach towards data protection, management and governance. This session will shed light on strategies to help you to build a strong data foundation based on trust and reliability.
Key takeaways include:
Session Transcript:
Joining us from Bangalore, India, we have Krishna Seshagiri Rao, who is a senior director of platform engineering at Bengaluru-based Target in India, an extension of the US retailer Target's headquarters operations. In this role, he leads platform engineering, one of the pillars of the data science team.
The team is responsible for building and managing in-house platform products to support enterprise reporting, data collection, data search, and AB testing.
Krishna joined Target 14 years ago and has worked in many leadership roles, with expertise in the merchandising, space planning, and supply chain domains.
Krishna, I'm very excited to have you with us. Thank you so much. I know it's late in India right now, and we're so grateful that you're taking the time to share your expertise and your journey with our global audience today.
Thank you, Josie, and thank you for the nice introduction.
Good morning, good evening, everyone. I hope all of you are well and safe.
So I am really, really excited to be here with you all, to share my perspectives on a subject that is very close to my heart: data trust, and how data trust plays a crucial role in enabling an organization to become data driven.
So, all over the world, if you look at how data analysts in different organizations spend their time, 70% of it is spent wrangling the data.
What I mean by that is, in order for them to answer a specific analytical question, they end up spending most of their time finding the right data that their organization already has at its disposal, making the necessary transformations to get the right sense out of it, and using that to really answer that specific analytical question.
There are gross inefficiencies in this overall experience for data analysts.
Rather than spending most of their time on complex activities like mathematical modeling, they end up spending time figuring out how to get the data they need to answer a specific analytical question.
Imagine the possibilities that we can really bring forward for the data analysts if we are able to really make these data wrangling efficient.
That is where data trust comes in.
Today, what I am going to do is walk you through an approach that we at Target have adopted, which really enabled us to help the entire organization understand the data, establish trust in the same, and make sure that we are able to leverage the right kind of data to answer specific questions, thereby driving both operational and strategic decisions.
So the topic is, how should we think about enabling organizations to trust their data?
So before we dive deep, let us spend some time to understand why trust is an important aspect.
So we all know that data is becoming a critical asset for all organizations, and it plays a pivotal role in enabling organizations to make strategic and operational decisions.
And data needs to be reliable.
Reliability is one of the key aspects of data for anyone to really trust it, right?
Unless the data is reliable and trustworthy, the individuals within the organization are not going to use the data in making any kind of decisions.
In such organizations, you would certainly be able to see the symptoms of the problem.
Like, even for answering a basic question related to the organization's performance, individuals within the organization will struggle to really find the right data that helps them answer the question, which means a lack of awareness of what kind of data they really possess.
The second biggest thing that you would notice is multiple copies of the same data existing, which means there is a lot of confusion amongst the individuals of the organization in understanding which copy of the data they should really be using to drive their work forward.
The third symptom that you would notice is that there is no way or means for the data consumer to understand the quality of the data they have at their disposal.
When I say quality, is that data actually complete?
Does it really represent the complete information that the business process produced as part of its operations? And does it really meet all the quality rules that the business process is supposed to comply with, in terms of bringing the data together for broader enterprise consumption?
There is no easy way or means for anybody in the organization to understand this.
Last, but not the least, we all know that failures happen, right, in any software, and when such failures happen, they impact the availability of the data for broader consumption.
The organization may not have the right kind of framework established to share transparent information about these kinds of failures.
Which means that the end users would end up using data which may be stale, and which would, in turn, have a huge impact on some of the work that they're doing.
So, you will be able to see these kinds of symptoms, and if you think about how to really resolve them, organizations need to invest in building trust in data, right? So, how should an organization really go about building trust?
So, the first step, when it comes to trust, is that the entire data ecosystem should have a well-defined definition for all the data elements that the organization publishes.
The definition of the data should be consistently understood across the organization.
So there isn't any room for misinterpretation when it comes to how the data should really be used in a specific situation.
The second thing that we have to focus on is not only making the data consumable; we also need to be transparent about where the data is coming from.
If I want to know a specific metric related to my business process, I should have the ability to understand what data elements have been put together to arrive at the metric,
and which sources of data are emitting the data elements that I am using in developing my metric.
And not only the upstream traceability; we also need to provide visibility on where all this specific data element is getting used downstream as well.
Moving on, we also need to have a well defined process of establishing service level objectives.
When I say that my data for this specific business process is going to be available at eight AM Eastern Time every day, on all seven days of the week,
I should have that established as a contract with my stakeholders, and make sure that I am making the data available as per the SLO.
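Although the session stays at the concept level, a data-availability SLO of the kind just described could be encoded and checked roughly as follows. This is a minimal sketch; the deadline and the fixed UTC-5 offset are illustrative assumptions, not Target's actual setup:

```python
from datetime import datetime, time, timezone, timedelta

# Hypothetical SLO: the dataset must land by 8:00 AM Eastern, every day.
# A fixed UTC-5 offset is used for simplicity; a real system would use a
# proper time zone database to handle daylight saving.
EASTERN = timezone(timedelta(hours=-5))
SLO_DEADLINE = time(hour=8, minute=0)

def meets_slo(landed_at: datetime) -> bool:
    """Return True if the dataset landed on or before the daily deadline."""
    local = landed_at.astimezone(EASTERN)
    return local.time() <= SLO_DEADLINE

# Data that landed at 7:30 AM Eastern meets the SLO; 9:00 AM does not.
on_time = meets_slo(datetime(2024, 1, 15, 7, 30, tzinfo=EASTERN))
late = meets_slo(datetime(2024, 1, 15, 9, 0, tzinfo=EASTERN))
```

In practice, a check like this would run after every load, and its pass/fail history would be published alongside the dataset so consumers can see how reliably the contract is being met.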
The fourth one is very, very crucial.
Understand what kind of quality constraints and quality expectations the data element is expected to meet, based on our conversations with the stakeholders.
And make sure that there is an easy way for us to measure how the data is measuring up against the quality rules that we have agreed upon with the stakeholders, and expose that as well.
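As a hypothetical sketch of how such agreed-upon quality rules might be measured and exposed (the rule names and record fields here are invented for illustration):

```python
# Hypothetical quality rules agreed with stakeholders: each rule is a
# name plus a predicate applied to every record of the dataset.
RULES = {
    "sales_non_negative": lambda rec: rec["sales"] >= 0,
    "location_present": lambda rec: bool(rec.get("location")),
}

def measure_quality(records):
    """Return the pass rate per rule, ready to expose alongside the dataset."""
    results = {}
    for name, predicate in RULES.items():
        passed = sum(1 for rec in records if predicate(rec))
        results[name] = passed / len(records) if records else 0.0
    return results

sample = [
    {"sales": 120.0, "location": "T-1001"},
    {"sales": -5.0, "location": "T-1002"},  # fails the non-negative rule
]
scores = measure_quality(sample)
```

Publishing these pass rates next to each dataset is one concrete way to let consumers judge, at a glance, how the data measures up against the agreed rules.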
And last, but not the least, every dataset, or every business process's data, should have very clear ownership defined.
That owner should, in fact, operate like a product owner, right?
We have been using product owner terminology in the application development world, but even in the data world, product ownership is really catching up.
Data also needs to be treated like a product, so that you are able to clearly establish the roadmap for the product, the life cycle for the product, and how effectively and efficiently the data can be used for answering specific questions within the organization.
So, these are the components that we need to keep in mind when it comes to how I should go about building trust in my organization's data.
Now, the next step in this process is: OK, we have understood these are the attributes that I need to keep in mind; how am I going to bring this to life?
So, if you think about the things that I articulated, it is nothing but answering specific questions related to data, starting with what the data is, when it is going to be available, where it is sourced from, and how reliable it is.
Once we are able to get these aspects figured out accurately, the next step is to make sure that we are able to catalog the entire data ecosystem of our organization.
Because unless you catalog, there isn't a way for you to really understand what your organization actually possesses in the form of data.
So, start with cataloging all the datasets.
After cataloging, make sure that the catalog is available for discovery.
Only then will the data elements be used across the organization.
You cataloged it. You made sure that the catalog is available for discovery.
The next step in that process: we spoke about some of the attributes of trust.
We have to establish metrics for traceability, data quality, data availability, and metadata about the data in terms of definitions, ownership, and so on.
And we have to expose those metrics for each data element, which helps the end user understand the trustworthiness of the data we are exposing.
So, these are the steps that we need to keep in mind when we are trying to build something to enable the organization to start trusting its data, and move along in its journey towards becoming data driven.
So now we know what attributes we need to keep in mind, and how we should really go about building trust in data. The next step is how everything is going to come together.
Right?
So, this is a very simple approach.
It sounds very simple, but it really requires a lot of investment to bring to life.
We all are so used to Google, right? We really cannot imagine a life without Google today.
And similar to how Google operates for the world, think about standing up a Google for your organization's data environment.
So, which means, you need to start with cataloging all the data sets.
When I say datasets, it's not only the dataset itself.
It includes the source system from where the data is sourced; the numerous jobs that really move the data from point A to point B; the infrastructure that is actually getting used, like the message queues and the microservices that move the data, or the infrastructure that is enabling the movement of data; and how the data is consumed through different reports.
All aspects of data need to be cataloged.
So, once we have this stood up, it would really help us realize one important aspect, right?
Like what I said in the beginning, data wrangling is a costly affair.
Data analysts are spending 70% of their time starting with finding the data, and so on and so forth. Once we have this live catalog, which can be discovered, searched, and so on,
you would certainly be able to realize a reduction in the time spent finding the right data that you need.
And you would also have a very good understanding of which datasets are becoming stale, and how you should manage the life cycle of those data elements that are not needed anymore by the organization.
Which means that you have a very clear understanding of your data ecosystem, and you are not investing undue resources in managing it, but appropriately managing the relevant data within your organization, and purging whatever is not required.
And once you have this live catalog, which is searchable, and making things easier for the entire organization, that trust will automatically increase.
Moving on, let me just dive deep into specific aspects of this catalog that I am talking about. What are some of the features that we all should keep in mind when we are building a data catalog?
Getting all the assets cataloged is the first step, but we also need to make sure that it is easy for everybody to search and also discover.
The third dimension, metadata enrichment, is also very important; I'll cover that in a bit.
But when you look at search, some of the basic functionality that we really need to enable for users who know what they want to find: we should be enabling text-based search, where whatever they enter in the search box brings up all the assets that match the search string.
And while showing the results, you can certainly use different aspects to rank your search results.
You can certainly look at how effectively that specific data element is getting used, and what kind of likes the data has.
You can use that as one mechanism, from the usage point of view, to sort the search results.
And you can also consider freshness and data quality as another set of attributes that you can use to order your searches.
And the other intuitive thing that you should consider is, depending upon the individual who is issuing the search, we can filter the data based on the level of permissions the individual has, and only show those results which the individual has permission to access.
Last, but not the least, consider providing recommendations: this searched dataset is mostly used with this other set of datasets.
For example, if the user is searching for, let us say, sales at a specific location, normally we know sales is also used with the inventory position at that location.
So, when the search is issued for sales, you can provide a recommendation like 'users of this dataset also use inventory', to help make sense out of the data.
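The ranking signals and the 'users also use' recommendation described above could be sketched like this. The weights, asset names, and co-usage counts are illustrative assumptions, not a description of any real implementation:

```python
def rank_results(assets):
    """Order search results by a blend of usage, freshness, and quality.
    All three signals are assumed to be normalized to the 0..1 range."""
    def score(asset):
        return 0.5 * asset["usage"] + 0.3 * asset["freshness"] + 0.2 * asset["quality"]
    return sorted(assets, key=score, reverse=True)

def recommend(dataset, co_usage):
    """'Users of this dataset also use...' based on observed co-usage counts."""
    partners = co_usage.get(dataset, {})
    return sorted(partners, key=partners.get, reverse=True)

results = rank_results([
    {"name": "sales_by_location", "usage": 0.9, "freshness": 0.8, "quality": 0.95},
    {"name": "sales_legacy", "usage": 0.2, "freshness": 0.1, "quality": 0.6},
])

# Co-usage observed in query logs: sales is most often joined with inventory.
co_usage = {"sales_by_location": {"inventory_position": 40, "store_master": 12}}
suggestions = recommend("sales_by_location", co_usage)
```

The exact blend of signals is a design choice; the point is that usage, freshness, and quality can all feed the ordering, and co-usage counts from query logs are enough to power a simple "also used with" suggestion.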
So now, moving to Discover.
So here we are trying to address the type of users who do not necessarily know what they are looking for, but they know they have to get into a specific space to discover what they actually want.
Like, if you think about an e-commerce site, right, you can go into specific categories and then browse for different types of items that you are really interested in.
So this browse kind of ecosystem helps us stand up a controlled environment within which the user can find what they want.
To enable that, when we are cataloging the data assets, we have to keep in mind that categorizing and grouping the data assets by domains, subject areas, and organizational hierarchy will go a long way in terms of enabling the discoverability of the data.
Last, but not the least, the metadata. There are three aspects of metadata that we all have to keep in mind when cataloging the data.
The first one is the metadata descriptors, which contain all the static information about the data.
When I say static information: the owner, the level of permission, the description, and how exactly it needs to be accessed, in terms of giving a sample query, and so on and so forth.
The next one is metadata relations.
This is where you use this information to establish the lineage between the sources of data and the actual dataset that the user is interested in, and also to provide a view on where exactly it is getting used across the organization.
That's the relationship.
And the last piece is the metadata events.
This is the one that gets updated on a very frequent basis: the timeliness, when exactly the data is going to be available on a daily basis,
the quality measures of the data, and how the usage has been trending, and so on and so forth.
So, that's the metadata event part.
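Putting the three dimensions together, a catalog entry might be modeled roughly like this. The field names are assumptions for illustration only, not an actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataDescriptors:
    """Static facts about a dataset: owner, access, description, sample query."""
    owner: str
    description: str
    access_level: str
    sample_query: str

@dataclass
class MetadataRelations:
    """Lineage: where the data comes from and where it is used downstream."""
    upstream_sources: list = field(default_factory=list)
    downstream_consumers: list = field(default_factory=list)

@dataclass
class MetadataEvents:
    """Frequently updated signals: timeliness, quality scores, usage trend."""
    last_available_at: str = ""
    quality_scores: dict = field(default_factory=dict)
    weekly_query_count: int = 0

@dataclass
class CatalogEntry:
    name: str
    descriptors: MetadataDescriptors
    relations: MetadataRelations = field(default_factory=MetadataRelations)
    events: MetadataEvents = field(default_factory=MetadataEvents)

entry = CatalogEntry(
    name="sales_by_location",
    descriptors=MetadataDescriptors(
        owner="merchandising",
        description="Daily sales rolled up by store location.",
        access_level="restricted",
        sample_query="SELECT * FROM sales_by_location WHERE sale_date = CURRENT_DATE",
    ),
)
```

Separating the rarely changing descriptors from the lineage relations and the fast-moving event signals matches the update cadence of each: descriptors change on ownership or definition changes, relations when pipelines change, and events on every load.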
If we make sure that we are taking care of all three dimensions when building the catalog, then you would certainly have a very good repository, which can be effectively used by the entire organization.
And it will become a lot more meaningful for everyone to use the relevant data to drive their body of work.
In essence, this catalog should not be a one-and-done thing; it should be a live catalog.
It should be constantly updated, and it should always expose all the relevant data, and it should be easy for everybody to access.
So, that's what this catalog should really meet from a requirements standpoint.
So, once we have this built, and we have all this information at our fingertips, think about the possibilities of what all I can really accomplish as an organization, right?
If you have this Google type of ecosystem stood up for your data environment, then you can certainly help the search and discoverability of the data in your organization.
We can provide a clear view of where the data has come from.
And by exposing the right set of trust metrics, you can help the user determine whether they can trust the data to be used in their respective body of work.
And you can also enable the users by offering help in terms of how they should really go about using the data.
And if there are specific security-related restrictions that you really want to add for the data, then you can provide that information in terms of how individuals should go about getting the right kind of access to this data.
So, all these things can be done without the individual in the organization needing to talk to multiple people, scramble to get the information, and spend a lot of non-value-added time getting to what they want.
So, in essence, how is this really going to help the entire organization? We spoke about trust.
We spoke about having this catalog of data assets, which is going to represent the information about the data consistently, and it is going to be accessible all the time.
This is going to play a huge role in enabling organizations to accomplish the vision of: how can I ensure that any data that my organization needs is going to be easily discovered, and can be shared with the right kind of individuals within the organization?
And I can establish the right kind of understanding for everybody who needs to really know about the data set, and it can also be managed effectively in terms of life cycle.
And this vision is very, very crucial.
Because once we are able to get this infrastructure built, in line with this specific mission, then you are certainly driving enterprise-wide data discoverability by providing all the necessary trustworthiness metrics, so that the entire organization can use the right kind of data to drive its strategic and operational decisions.
So we have spoken about all these aspects of trust, and how they really help organizations become data driven. In addition to this,
one of the other things that organizations can also realize by making this kind of investment is that you would be able to clearly understand what aspects of your data drive the greatest strategic and operational impact.
And you would also be able to determine which datasets have the highest risk potential, right?
No matter whether you are hosting your data on the cloud or on premise, when it comes to imposing controls on your data, it is your responsibility, not the cloud provider's responsibility.
Even if you have data centers run by cloud providers, you are responsible for ensuring that the data is rightly protected.
So we all know how important it is to make sure that we are coming up with the right kind of controls for our high-risk-potential data, in order to ensure that everything is safe.
So having these catalogs really helps you establish a good understanding of what data in your ecosystem has the highest risk potential, and determine what kind of controls you need to bring forward,
so that you are completely safe, not exposing yourself to any kind of threats, and thereby not risking your reputation in the marketplace.
And another thing that we will be able to deliver by having this catalog: we can deliver data as a utility.
Similar to how the human population is able to enjoy water and electricity by turning on a switch, we would be able to make data available to everybody, like any other utility.
And, of course, we have spoken about how data is becoming a growing asset, and it is definitely playing a large part in defining strategic and operational decisions.
And having a clear idea of the organization's data will really put the organization in a very good spot to drive and maximize all the opportunities that are going to be made available to it.
So, before I end my session, I also wanted to share with you how we have really brought this to life in Target. I spoke about this Google-like experience and all that.
I just wanted to spend some time talking through how it has been brought to life.
So as you can see, location is the search text that the user has entered, and you are seeing all the results, which match this location text string.
And we also spoke about discoverability in addition to search, which is where all these different categories come in. You can search based on a specific data platform, like Hadoop or Oracle, and so on and so forth.
And whether the data belongs to a specific business unit, like merchandising or marketing, and so on and so forth.
And you also have the ability and flexibility to search by domains, and you can also have search tags, and so on and so forth.
And the asset types: this is also very important. Do I only need to look at datasets, or am I also interested in data movement jobs, reports, Kafka topics, APIs, and so on and so forth?
I just wanted to show this. The second thing: I spoke a lot about metadata,
and I just wanted to show how this is being brought to life.
So, when we show the search results, we also make sure that we are addressing all the aspects that would really drive the trustworthiness of the data, like timeliness, lineage, metadata, security, and quality.
And if you look at the metadata aspects, there are a lot of details around: OK, what's the platform, when was it last updated, when was it created, and who updated it? What is the service level objective?
And for how much time will this data normally be retained?
So all those aspects around metadata are being provided as well.
Last, but not the least, lineage. The asset that we are showing in our search page is this, and we are making sure that we are showing the lineage in terms of where the data originated from, and where all it is getting used in our organization.
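Underneath a lineage view like this usually sits a directed graph of producer-to-consumer edges. A minimal sketch, with invented asset names:

```python
from collections import deque

# Hypothetical lineage edges: producer -> consumers.
EDGES = {
    "pos_system": ["sales_raw"],
    "sales_raw": ["sales_by_location"],
    "sales_by_location": ["exec_dashboard", "demand_forecast"],
}

def downstream(asset):
    """Breadth-first walk over everything fed (directly or not) by this asset."""
    seen, queue = set(), deque([asset])
    while queue:
        for nxt in EDGES.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def upstream(asset):
    """Everything this asset depends on, by walking the edges in reverse."""
    reverse = {}
    for src, dsts in EDGES.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    seen, queue = set(), deque([asset])
    while queue:
        for prev in reverse.get(queue.popleft(), []):
            if prev not in seen:
                seen.add(prev)
                queue.append(prev)
    return seen
```

With the graph in place, the same structure answers both questions the talk raises: where did this data originate, and where all is it being used across the organization.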
So, this gives a very good understanding for the end user to make informed decisions: OK, this data is getting used in these specific business functions, and it has all the qualities and characteristics that I need to answer my specific business question, so I can make an informed decision.
So, I just wanted to make sure I am providing some visibility into how we have really brought this metadata-based data management and search capability to life in our organization.
So, before I close the session, I just wanted to leave you with the following three things, which I believe you can take back with you.
The first one, like what we spoke about: data trust is an important aspect,
and it really helps organizations on the path to becoming data driven.
One of the first steps in building data trust is knowing the organization's data,
and ensuring that it is appropriately cataloged, that the catalog is exposed and easy to search, and that it is kept up to date: it is a live catalog.
And last, but not the least, the trustworthiness metrics that we spoke about should be appropriately established for all the data elements, in partnership with the data owners.
And they should be exposed through your catalog, so that everybody understands what they are using and how they can really bring that into their business process in terms of driving their business decisions.
With that, I have come to the end of my presentation.
Terrific, Krishna. Thank you so much for that, a true practitioner's view of these issues; we really appreciate it. You are driving at something that's so critical for so many organizations at a time when we are talking about the importance of data, and, you know, data is the new oil, if you will. And so, if that's the case, you know, how are we treating data in our organizations?
And you have provided a very practical perspective on that. So lots of questions have come up. I'm going to relay several of the main themes back to you here in the next few minutes.
The first one is about ownership.
You talk about data ownership, and that sounds good, but in large organizations like Target, it's actually quite difficult sometimes to pinpoint who is going to own this data, because lots of people think they own it.
But there's also this dynamic where nobody really wants to own it and have the full accountability and responsibility for it. So it's a bit of a paradox. So the question is about governance: how do you make decisions about who is going to own what type of data where? If you can talk a little bit more about that, how do you make those decisions?
Yep, that is a very good question.
So, for any big organization, there are roles like Chief Data Officer, or in most of the organizations, Chief Information Officer will also play this role, right?
So, first step is establishing the core data domains that are going to be considered within the organization.
Right, so, these core data domains are not grouped or categorized based on the source system or the actual implementation of the applications; this is more aligned with how the business runs. Like, for example, item.
I'm coming from a retailer, so I'm just giving some examples that come to the top of my head when it comes to core data domains. Item is a core data domain.
Location is a core data domain, right?
So, once we have the core data domains, we establish alignment for each core data domain with a specific business team, and have the business owner of that team identified as the owner for that data domain. That is what we have done in Target.
Wherein we would be able to clearly know: OK, item belongs to merchandising. Though it has a lot more prevalence in sales as well as in supply chain, its origination starts with merchandising; that is how the item comes into my organization's ecosystem.
So the merchandising business team should own that item core data.
Voila, ownership is established.
Then you would have product owners. So you have the data owner, and from that point of view, everything kind of falls into place in terms of an owner
who is responsible for defining the roadmap for that item data.
And an engineering team will also be stood up to really support the work that surrounds making this item data available for enterprise-wide use.
So, you have a product structure wherein you have a product owner from the business and the engineering team combining forces, but the ultimate owner of the data is going to be the business which produces the data.
And that starts with the process of identifying core data domains and establishing ownership at the business leader level.
That's very good; those are very good insights on how you go through that decision-making process of assigning the owner. So thanks for sharing that. I have a comment here, and a question from Karen ..., who asks: what software, if any, are you using to store the data? And, you know, are multiple platforms centralizing some of this data storage? Is this something that you developed internally, or is it a commercial platform that you may be using for your data storage?
So this is centrally managed. The cataloging of assets is centrally managed, and whatever screenshot I showed is of an in-house application, which was built from the ground up within the organization.
That's all I can tell.
Very good. Very good to develop that internally.
To a large extent, this next question, which comes from William Fuller, relates to the differences between the data catalog and the data dictionary, and who should be able to see, you know, read-only data.
So a couple of questions here. One is whether there's a difference in semantics between your data catalog and the data dictionary, and the other is: how did you decide on the access to different types of data in the organization?
So, data dictionary versus data catalog: data cataloging is talking about cataloging the datasets that your organization produces.
It is across the board; I'm talking about multiple datasets that need to be cataloged centrally. Whereas a data dictionary talks about the attributes of a specific dataset.
Right?
So, the screenshot that I showed not only tells you the platform information and where exactly the data is sourced from; it also explains what attributes this specific dataset contains. So that is also exposed through the product.
Let me move right to the second part of the question. When it comes to security and all, the data management office plays a crucial role.
And there are a lot of guidelines and governing principles that exist in our ecosystem. As a retailer, we have access to personally identifiable information.
And data that needs to meet CCPA and GDPR guidelines. And we have SOX. We have HIPAA.
And we have GLBA. So all those compliance-related guidelines are well established within the organization.
And the data management office clearly identifies which data elements need to comply with all these different guidelines.
I can give a very simple example around CCPA, right? So CCPA talks about personally identifiable information.
And it is definitely classified as secure-handling-required data, which means that only the folks who have the need to know will have the permission to access it, and all those security permissions for specific data elements will be decided, managed, and governed by the data owner.
That is why ownership is very important here.
The owner would certainly be able to clearly understand and articulate the different compliance rules that the specific data elements need to comply with.
Then, the engineering teams will be able to bring it to life, in terms of implementing all the required security rules.
And role-based security will take care of ensuring that all these principles are complied with, and that only folks who have the need to know will have the ability to see the data.
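A need-to-know filter of the kind described might look roughly like this. The roles and classification labels are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical mapping of data classification -> roles allowed to see it.
CLASSIFICATION_ACCESS = {
    "public": {"analyst", "engineer", "data_steward"},
    "secure_handling_required": {"data_steward"},  # e.g. PII under CCPA/GDPR
}

def visible_assets(assets, role):
    """Return only the catalog entries this role is permitted to see."""
    return [
        a for a in assets
        if role in CLASSIFICATION_ACCESS.get(a["classification"], set())
    ]

catalog = [
    {"name": "store_sales", "classification": "public"},
    {"name": "guest_profiles", "classification": "secure_handling_required"},
]

analyst_view = visible_assets(catalog, "analyst")
steward_view = visible_assets(catalog, "data_steward")
```

Applying the same filter at search time, as described earlier in the session, means restricted assets never even appear in results for users without the need to know.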
Very good. Very good. That's a great overview, Krishna. Thank you for that. A question that came up, especially when you were talking about building bridges early on in your presentation: how does one develop trust in an organization that does not have communication bridges between IT and the business? Is there a standard approach that you're using internally to build such bridges between IT and the business, to develop trust around the data, but also to develop trust around the organization as well?
So the product model has been a blessing in disguise for us. In the last two to three years, Target as an organization has been amping up in terms of bringing the product model to life.
In all the efforts and all the investments that we're making to have technology enable the business, every effort that happens within the organization will invariably have a product. And the product owner will certainly be the business voice for the product, and the product and the engineering organizations will not be under the same chain of command.
So there isn't a positive or negative influence either way. They are two sides of the same coin, but if both product and engineering are part of the same organization, then there is an opportunity to over-index on one aspect rather than looking at both business and technology through the same lens. There should always be creative tension between the product organization and the engineering organization.
So the product model, coupled with this principle of not having the product owner in the same chain of command as engineering, has greatly helped us in balancing what needs to be prioritized.
And building trust also has been pretty easy, because there is always a voice from the business sitting in on what we are trying to deliver.
And this is not related only to data; it applies to anything that we do.
Very good, very good. That context really helps the audience understand the different components in the system that make the system work, so I appreciate those insights, Krishna. For someone who is learning from you today and thinking about how they can apply this to their organization: there's an investment that you have made here in cataloging and search of all these data assets. What would be some suggestions you have for them on taking the first steps in this direction, and maybe even creating a business case for it? How do you measure the value creation that you get from ... for an activity like this?
What would be some of the steps you would suggest to guide them in this journey?
Yep. So, the first step is the value proposition, right? I spoke about the symptoms in the first slide, right? Where there is a lack of awareness of what data is available in my organization that I need to use to answer a specific question.
So like that, you need to quantify those symptoms, right?
So because of that lack of awareness, how much effort is being wasted, and what kind of opportunities have you missed as a business?
And where we lacked line of sight on data that is a high-risk element and did not put in appropriate controls, and because of which we had to endure a security incident, what kind of impact did it really result in?
So quantify that.
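The kind of quantification being suggested here can be as simple as a back-of-the-envelope calculation. The figures below are made-up placeholders purely to illustrate the shape of the argument, not numbers from Target.

```python
# Illustrative quantification of the "lack of awareness" symptom:
# estimate the annual cost of analysts hunting for the right data set.
# Every figure here is a hypothetical placeholder.

analysts = 40              # people who search for data each week
hours_lost_per_week = 3    # time spent finding the right data set
loaded_hourly_cost = 75    # fully loaded cost per analyst-hour (USD)
weeks_per_year = 48

annual_waste = analysts * hours_lost_per_week * loaded_hourly_cost * weeks_per_year
print(f"Estimated annual cost of poor data discoverability: ${annual_waste:,}")
# → Estimated annual cost of poor data discoverability: $432,000
```

A single number like this, even a rough one, is usually enough to open the stakeholder conversation about funding a catalog.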
That's the first step. Then it will be easy for us to really sell to our stakeholders that this is an important investment we have to make.
Then we have to start small, right?
So, with most organizations moving towards microservices-based architecture, it will be easy for anybody to see pipes of data flowing between different applications. You can just go and intercept the pipe to understand the metadata of that specific data element produced by the business process, and then bring that into the catalog, or establish that catalog centrally.
So, start slowly. And again, like I said, this catalog should be live, right? Which means that it is not a once-and-done kind of effort.
Start small, show value for your critical business area, and then continue to really invest in it.
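The pipe-interception idea described above could be sketched as follows. This is a hypothetical illustration, not Target's implementation: the catalog is modeled as an in-memory dict, and all dataset, field, and service names are invented; a real system would call a catalog service's API instead.

```python
# Hypothetical sketch: sit on a message pipe between microservices,
# harvest metadata from each payload, and upsert it into a central
# catalog so the catalog stays "live". All names are illustrative.
import json
from datetime import datetime, timezone

CATALOG: dict[str, dict] = {}  # dataset name -> metadata entry

def register_metadata(dataset: str, record: dict, source_app: str) -> None:
    """Infer attribute names/types from a sample record and upsert them
    into the central catalog, stamping when the data was last seen."""
    CATALOG[dataset] = {
        "source": source_app,
        "attributes": {k: type(v).__name__ for k, v in record.items()},
        "last_seen": datetime.now(timezone.utc).isoformat(),
    }

def intercept(dataset: str, payload: str, source_app: str) -> dict:
    """Intercept the pipe: catalog the metadata, then pass the record on."""
    record = json.loads(payload)
    register_metadata(dataset, record, source_app)
    return record

# A message produced by some upstream business process:
msg = '{"order_id": 101, "amount": 59.99, "store": "T-1024"}'
intercept("orders", msg, source_app="order-service")
print(CATALOG["orders"]["attributes"])
# → {'order_id': 'int', 'amount': 'float', 'store': 'str'}
```

Because every message refreshes the entry, the catalog reflects what is actually flowing today, which is what keeps it from becoming a once-and-done artifact.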
Krishna Rao, thank you so much for sharing your expertise today in such an excellent session on the practical applications and approaches for creating data trust, which is so critical intuitively, but you have also laid out an incredible business case for it. We really appreciate your being transparent and sharing your expertise, but also the mechanisms and approaches you're using to make this work. So, on behalf of our global audience, we are very appreciative and grateful for your leadership.
Thank you. Thank you for the opportunity to share, and I wish you all a very, very good rest of the day. I look forward to seeing you all, and thank you for being such a wonderful audience.
Thank you.
Ladies and gentlemen, that's Krishna Rao, Senior Director of Platform Engineering at Target in India, and what a wonderful, very practitioner-driven presentation. Did you notice that there are no pictures in his presentation? This was about what's really happening, what's going on. He wasn't here to sell you on anything or on any concepts; it's about how it's working.
How it's working for Target globally, how they are building this data structure and data trust as a foundational piece of their strategy. So I think that's a really great view of strategy execution and what it takes to do it well. I'm grateful for him sharing that with us. We're going to wrap up this session, and we're going to start back up at the top of the hour with a shift into healthcare.
We're going to be welcoming the research manager and system architect for the Precision Neurotherapeutics Program at the Mayo Clinic. Scott Whitmore is going to be with us talking about measuring business processes: learning how to link business outcomes to process performance, learning the five measures common to all processes, and learning a more useful form for a business change roadmap. So, looking forward to that, I'll meet you all back here at the top of the hour.
Krishna Seshagiri Rao,
Senior Director Platform Engineering,
Target Corp.
Krishna Prasad Seshagiri Rao is a senior director of platform engineering of Bengaluru-based Target in India, an extension of the U.S. retailer Target’s headquarters operations.
In this role, Krishna leads platform engineering, one of the pillars of the data sciences team. The team is responsible for managing Hadoop infrastructure, Teradata systems, the MicroStrategy reporting platform and in-house built platform products to support enterprise reporting, data collection, data search and A/B testing.
Under his leadership the platform engineering team completed the journey to modernize DataStore platforms that enabled the shift from traditional data warehouse platforms to open source data store platforms. The team also played a crucial role in conceiving and implementing product ideas to eliminate dependency on expensive third party reporting, data collection and A/B testing tools.
Krishna joined Target 12 years ago and has worked in many leadership roles with expertise in the merchandise space planning and supply chain domains. He excels in managing complex and large enterprise Business Intelligence platforms and building high performing teams with a strong agile culture, focused on product and design thinking in a matrixed global organization.
Krishna received a bachelor’s degree in mechanical engineering from P.S.G. College of Technology, Tamil Nadu. He is also a Certified Software Quality Analyst and a Sun Certified Java Developer. Krishna is married and lives in Bangalore with his wife and two children. When not at work, he enjoys long distance running and has completed seven full marathons including two world majors (Chicago 2016, New York 2017).