
Novo Nordisk: 10 Insights into Maturing a Data Mesh Platform

In the ever-evolving field of data management, adaptability is key. At the pharmaceutical company Novo Nordisk, rising consumer and producer demands led the company to transition from a traditional centralized data lake to a decentralized data mesh approach, profoundly changing the way data is harnessed for business growth. The lessons learned in the initial phase of the data mesh journey were previously shared in this Medium article.

After running the data mesh platform in production for over two years, with multiple data domains on board, it became essential to shift focus from being an innovator to achieving platform maturity. Throughout this journey, the team gained invaluable insights, shared in this talk as the top 10 learnings from the team's extensive experience, along with practical guidance to help others implement a successful data mesh platform in their own organizations.

Watch the Replay

Read the Transcript

Speaker 1: 00:00 I'm working as a data engineering specialist at Novo Nordisk, based here in Denmark. Novo Nordisk is a leading global healthcare company with a century-long history. Today I'm here to share some practical insights from our team's learnings, and some of my personal experiences as well. So before I jump into the presentation today, let me give you a quick background on our data mesh journey. Almost four years ago, we built our first centralized data lake to break data silos. We have more than 2,000 active internal users even now. As we grew and data volume kept increasing, along with data producers and consumers, we faced some technical and operational challenges. Then we came up with our next-generation data lake: the data mesh platform. We have been running this data mesh platform in production for over two years now, and multiple data domains have adopted it.

01:02  So it became essential for us to shift our focus from adoption to achieving platform maturity. During the early stages of our data mesh journey, our primary focus was on what we were providing to our users. However, as we continued to mature and evolve, we realized the importance of not only what we offer, but also who we are serving, how we can best meet their needs, and how to sustain the evolution of our data mesh platform. Throughout this journey, we gained invaluable insights that I would like to share with you today. So without further ado, let's get started.

01:45  The first and foremost one: be reliable and provide user support. To build trust and enable effective use of a data mesh platform, it is crucial to establish reliability and provide comprehensive user support for data domains. On our platform, we have implemented various approaches to achieve this goal. For instance, we use behavior-driven development, which has shown significant improvement in platform reliability. We also provide self-serve documentation and user guides that enable users to quickly access the information they need. Trust me, this was a game changer for us. Additionally, we offer biweekly developer drop-in support where users can get their issues resolved in real time. So by prioritizing reliability and user support, we believe that our data mesh platform can foster trust and encourage effective collaboration among teams.
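
As a minimal sketch of what behavior-driven development for platform reliability can look like, the following uses the Python behave library; the feature text, the health-check endpoint, and the base URL are hypothetical illustrations, not the platform's actual tests.

```python
# Hypothetical step definitions for a platform health-check scenario.
# The matching feature file (features/platform_health.feature) would read:
#
#   Feature: Platform reliability
#     Scenario: Dataset listing stays available
#       Given the data mesh platform API is reachable
#       When I list the datasets in the "sales" domain
#       Then the response arrives within 2 seconds
import time

import requests
from behave import given, when, then

BASE_URL = "https://datamesh.example.internal/api"  # placeholder endpoint


@given("the data mesh platform API is reachable")
def step_api_reachable(context):
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200


@when('I list the datasets in the "{domain}" domain')
def step_list_datasets(context, domain):
    start = time.monotonic()
    context.response = requests.get(f"{BASE_URL}/domains/{domain}/datasets",
                                    timeout=5)
    context.elapsed = time.monotonic() - start


@then("the response arrives within {seconds:d} seconds")
def step_response_is_fast(context, seconds):
    assert context.response.status_code == 200
    assert context.elapsed < seconds
```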

02:45  Moving on: embrace user feedback. Understanding users' needs and identifying their pain points led us to more effective solutions and better data management practices. On our platform, we initially focused on catering to data engineers, but after taking user feedback, we realized that we were neglecting key decision-making personas such as data stewards and data owners. To address this, we improved our user interface and user experience to better cater to these users' needs. Our product owners have been doing an excellent job of prioritizing common feature requests, which has helped us create a backlog that aligns with users' needs and pain points. By continuously incorporating user feedback and prioritizing features that address their needs, we believe that our data mesh platform can create a more engaging and collaborative environment for all users.

03:50  The next one is: know your customer. Okay, so this could be controversial, but I would like to bring it up here. One of the key lessons we have learned is that while a data mesh platform can provide significant benefits to many teams within an organization, it may not be suitable for those who do not work with data or who lack the resources or technical expertise to own their data product. While it is important to provide training and workshops to help users get the most out of the platform, sometimes supporting non-data teams can consume valuable time and resources. In such cases, it is important to meet customers where they are and kindly say no to stakeholders whose needs may not align with the platform's capabilities or goals. This allowed us to focus our efforts on creating the most value for our platform users by utilizing our time and resources effectively.

04:53  On a related note, we are actively working to raise awareness of data mesh across departments within Novo Nordisk. This helps data domains understand whether data mesh is relevant for them or not. So the next one is: focus on interoperability of data products. As a data mesh platform grows and evolves, it is essential to provide users with the flexibility to consume data using the tools and services they prefer. In our case, that was mostly Power BI, SageMaker, Databricks, and Snowflake. Recently, one power user shared her magical experience with me that I would like to share with you today. She was looking for a dataset while we were in a live session. So she went to our data mesh platform and discovered the dataset. Then she raised a data access request. Our platform sent a notification to the data steward of that dataset, and the request got approved within a few minutes, because he was aware of it.

06:14  Then she went to our analytical platform and opened her favorite analytical tool, where she writes R scripts for analysis. She could access the data immediately and get started with her R script, and she was like, oh wow, it's magic. I mean, we know it's more than magic, right? But the reason I'm sharing this story with you is that by focusing on interoperability of data products and hiding this low-level complexity, users can easily access and analyze data using the tools they are most comfortable with, improving productivity and efficiency. In addition to providing native integrations, we also offer an API and a CLI that let users interact with our platform programmatically. Let me tell you, developers just love it. This enables users to easily automate tasks and integrate the platform with other tools and services in their workflows. Another challenge that we saw among our users is how to securely transfer data from external vendors into Novo Nordisk, which may come in different sizes and formats.
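
A minimal sketch of what such programmatic interaction could look like, against a hypothetical REST API; all endpoint paths and field names here are invented for illustration, not the platform's real interface.

```python
# Sketch of an access-request flow over a hypothetical platform API.
import requests

BASE_URL = "https://datamesh.example.internal/api"  # placeholder


def request_dataset_access(dataset_id: str, requester: str, reason: str) -> str:
    """Raise an access request; the platform then notifies the data steward."""
    response = requests.post(
        f"{BASE_URL}/datasets/{dataset_id}/access-requests",
        json={"requester": requester, "reason": reason},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["request_id"]


def check_request_status(request_id: str) -> str:
    """Poll the request; returns e.g. 'pending' or 'approved'."""
    response = requests.get(f"{BASE_URL}/access-requests/{request_id}",
                            timeout=10)
    response.raise_for_status()
    return response.json()["status"]


if __name__ == "__main__":
    req = request_dataset_access("sales.orders.v1", "analyst@example.com",
                                 "Quarterly revenue analysis in R")
    print(check_request_status(req))
```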

07:33  To address this, we developed a tool called partner data transfer, which has been widely adopted by our users. This tool enables secure data transfer from external vendors into the user's own business account within the mesh, creating landing zones for each data domain to create their data products. By providing these features, a data mesh platform can improve its usability and accessibility, allowing users to focus solely on business logic and not be bogged down by low-level complexity. Ultimately, this can lead to increased productivity, efficiency, and user satisfaction. I hope you agree with me on that.
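
To give a feel for the landing-zone idea, here is a sketch of dropping a vendor file under a per-domain prefix in S3; the bucket name and prefix layout are assumptions for illustration, not the actual partner data transfer tool.

```python
# Upload a vendor file into a domain's landing zone (hypothetical layout).
import boto3

s3 = boto3.client("s3")


def transfer_to_landing_zone(local_path: str, domain: str, vendor: str,
                             bucket: str = "datamesh-landing-example") -> str:
    """Place a vendor file under the domain's landing-zone prefix."""
    key = f"landing/{domain}/{vendor}/{local_path.rsplit('/', 1)[-1]}"
    # Server-side encryption keeps the transferred data secure at rest.
    s3.upload_file(local_path, bucket, key,
                   ExtraArgs={"ServerSideEncryption": "aws:kms"})
    return f"s3://{bucket}/{key}"


# Example: transfer_to_landing_zone("./shipment.csv", domain="sales",
#                                   vendor="acme")
```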

08:23  Okay, so the next one is: implement modular data access. I believe that modular data access is a critical component of a successful data mesh implementation. By adopting this approach, data consumers can access only the data they need, creating a more efficient and streamlined process. This leads to improved performance, cost savings, and better data governance. Modular data access can be implemented in numerous ways, such as row-level security, column-level security, and table-level and database-level access. It can also involve sharing structured and unstructured data in different ways depending on the user's needs. For example, in our case, we had structured data that both a BI analyst and a data scientist needed to access. The BI analyst wanted the data in table format, while the data scientist wanted the same data as objects to build a machine learning model. To accommodate both needs, we stored the data in such a way that it is exposed both as a table and as a prefix, or folder structure. This approach eliminated the need to copy the data to multiple places, reducing data duplication and improving data governance. So by adopting a modular data access approach, we can ensure that our implementation is efficient, cost-effective, and meets the needs of all data consumers.
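
One plausible way to realize this dual exposure on AWS is to register a single S3 prefix as an external table, so the BI analyst queries it through Athena while the data scientist reads the same objects directly. The bucket, database, and output-location names below are hypothetical.

```python
# Sketch: one copy of the data, exposed as both a table and an S3 prefix.
import boto3

DATA_LOCATION = "s3://datamesh-sales-example/orders/"  # single copy

athena = boto3.client("athena")
s3 = boto3.client("s3")

# BI view: an external table over the prefix (no data is copied).
ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS sales.orders (
  order_id string,
  amount   double,
  ts       timestamp
)
STORED AS PARQUET
LOCATION '{DATA_LOCATION}'
"""
athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={
        "OutputLocation": "s3://datamesh-athena-results-example/"},
)

# Data science view: list and fetch the very same objects for training.
listing = s3.list_objects_v2(Bucket="datamesh-sales-example",
                             Prefix="orders/")
for obj in listing.get("Contents", []):
    print(obj["Key"])  # e.g. feed these Parquet files to a training job
```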

10:03  This one comes truly from developer experience: prioritizing paying down technical debt is essential for a scalable and sustainable data mesh platform. This involves paying off any accrued technical debt and avoiding shortcuts in platform development. I have to admit, neglecting technical debt can have serious consequences, especially when working with external consultants on set deadlines, as we do. Activities such as cleaning up obsolete features, addressing TODOs, and optimizing code improve code quality. Not only does paying down technical debt enhance the platform's scalability and sustainability, it also considerably improves developers' quality of life. By prioritizing this, we can increase the release frequency of the platform, enabling us to continuously deliver value to users. Ultimately, this approach ensures that our data mesh platform remains flexible, efficient, and effective, meeting the evolving needs of our users. I hope developers agree with me on this.

11:21  So moving on. The next one is: accelerate data product creation. As a data mesh platform, we aim to help data domains accelerate the creation of data products. To achieve this, we built reusable components like blueprints and templates. However, we realized that some teams struggle to see the value of building data products or contributing to the data mesh unless they see it in their own context. To address this, we took an MVP approach and co-developed pilot data products with their data engineers and subject matter experts. This approach helped them see the real business value and take their data products to the next level while fully owning the process. We set aside dedicated resources to support these data domains so that platform developers could fully focus on the platform. Additionally, we provided user guides, sample projects to follow, and built-in CI/CD pipelines to qualify and validate their systems. These tools and resources enabled data domains to accelerate the business value they could get out of their data products.
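
As a rough idea of the kind of check a CI/CD pipeline might run to qualify a new data product before it joins the mesh, here is a small validation sketch; the manifest schema and its field names are invented for illustration.

```python
# Validate a hypothetical data product manifest in a CI step.
REQUIRED_FIELDS = {"name", "domain", "owner", "data_steward", "output_format"}


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the product passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    if manifest.get("output_format") not in {"table", "s3_prefix", "both"}:
        problems.append("output_format must be table, s3_prefix, or both")
    return problems


if __name__ == "__main__":
    draft = {"name": "orders", "domain": "sales", "owner": "sales-team"}
    for issue in validate_manifest(draft):
        print(issue)  # CI would fail the build on any reported issue
```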

12:37  The next one is autonomy of data domains. As a data mesh platform, we operate on a shared responsibility model, which makes it challenging to strike a balance between what we should provide centrally and what we should give teams the freedom to build. We realized that by giving more autonomy to each team, they could innovate, use their preferred technology stack, and take ownership of their data products. This approach fosters a culture of innovation, ownership, and accountability, leading to better outcomes. However, we also recognize the importance of centralized federated governance, blueprints, and data sharing mechanisms. These enablers ensure that the data domains work cohesively towards the organization's goals while maintaining data consistency, quality, and security. This approach empowered our data domains to create data products more efficiently and effectively, delivering more value to our users.

13:51  This is the second to last, so hang on for a while. Okay. Staying up to date with the latest technology is crucial for a sustainable data mesh platform. The rate of technological change in the data platform space is staggering, and the latest technology brings new capabilities and features that can improve the platform's performance, scalability, security, and competitiveness. We are committed to staying up to date, and we have taken several steps to achieve this goal, including providing an integrated MLOps framework for our users, as machine learning and AI continue to be in high demand, and adopting readily available tools and services rather than building our own. Even though engineers tend to prefer building their own solutions, sometimes that may not be the right decision, and I hope engineers agree with me there. We are also assessing and striving to support multi-cloud in our data mesh platform, which would ensure that users don't have to worry about which cloud provider their data is stored in. This approach would abstract that complexity away from them, but it comes with its own challenges for the platform, right? So by giving users the freedom to use modern technology and adapting the platform to new technology, we believe that we can future-proof our platform, ensuring that it remains relevant and useful as technology evolves.
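
One way such multi-cloud abstraction could be sketched is with the fsspec library, which resolves s3://, gs://, and abfs:// URLs through a common interface; the catalog mapping and URLs below are placeholders, not the platform's actual design.

```python
# Hide the cloud provider behind a logical dataset name (sketch).
import fsspec


def read_dataset(url: str) -> bytes:
    """Read a dataset without caring which cloud it lives in."""
    # Requires the matching backend (s3fs, gcsfs, or adlfs) per scheme.
    with fsspec.open(url, "rb") as f:
        return f.read()


# The platform could map a logical name to whichever URL is current:
CATALOG = {
    "sales.orders": "s3://datamesh-sales-example/orders/part-000.parquet",
    # or: "gs://datamesh-sales-example/orders/part-000.parquet",
}

data = read_dataset(CATALOG["sales.orders"])
```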

15:35  So last but not least: based on my personal experience, I have found that refactoring an entire platform's code is much easier than changing mindsets, because the latter is much more challenging. This is true not just for the data mesh platform, but also for data domain teams. In a large organization, especially in the healthcare industry, we have learned that technology alone cannot solve all problems. Sometimes changing the process is necessary. Hear me out again: technology alone cannot solve all problems; sometimes changing the process is necessary. One way to change the process is by adopting an evolutionary mindset, which encourages continuous improvement, flexibility, innovation, and a user-centered focus. This mindset enables the platform to adapt to changing business needs and technological advancements. Making the decision to change can be tough, but by striking the right balance between user needs and the right technology to satisfy those requirements, the evolutionary mindset enables the platform to stay current and provide ongoing benefit and value to the organization. In our case, we would not be where we are today if we had not built a centralized data lake to break data silos and then evolved into a data mesh platform to solve more complex business requirements and bring innovation to the organization using the right technology. I'm very much looking forward to seeing where we go next, and this is how we are bringing our data to life. That brings me to the end of my presentation. Thank you for your time, everyone. Now back to you, Paul.

Speaker 2: 17:44  Alright, thank you, Jo Smith. Yeah, we have some questions from the audience, so I will ask them and we'll see what you have to say. We have two questions from a dt, and the first one is: what infrastructure and data requirements (data cleanup, for example) need to be in place for a successful implementation?

Speaker 1:  18:12  Sorry, can you... okay, let me see the chat.

Speaker 2:  18:16  Oh, sure. Yeah, it's towards the top there, at 8:07, and actually, if you want, I can put it at the bottom of the chat.

Speaker 1: 18:30  That’s okay, I found it.

Speaker 2: 18:32  Okay, great.

Speaker 1: 18:33  So: how much resourcing is needed for ongoing maintenance of the solution post-implementation, right? When you say resourcing, I believe you mean human resources, just to clarify the question. Okay. If it's human resources, then I would say... so, we had a different journey there. We started our team half with AWS ProServe consultants, who were building the platform, and half with Novo Nordisk employees. Then we added more data engineers. We grew so big that when we started solving problems for our stakeholders, we had to split our team into data engineering partners and platform developers, which left maybe around 10 people, including the Scrum Masters and product owners. So now we have almost 10 people altogether; they are mostly Novo Nordisk employees now, and we have a few AWS ProServe consultants as well. I would say somewhere around a Scrum-team size is good enough to maintain a data mesh platform. I hope that answers your question.

Speaker 2: 19:58  Great. Next we have a question from Ganesh, and it says: can you please give examples of data products in your case? Is it a dataset, table, dashboard, et cetera?

Speaker 1: 20:10  Well, it could be anything. For us, it depends on the data domain; each has its own data products. For example, for clinical trials it could be clinical trial data; for sales it could be multiple tables and also some unstructured data, let's say images, PDFs, and others. So depending on which data domain you are coming from, they define what their data products look like.

Speaker 2: 20:43  Great. Then we have a question from Mathias, apologies if I mispronounce that: how do you achieve data product discoverability? Are you using a data catalog tool for that? And if so, which one?

Speaker 1: 20:58  Yes, so far we have one built in. We have our administrative database, and we also use services like OpenSearch in order to serve that data to our users. But we are taking a bit of a different approach now by collaborating with our enterprise data catalog team, where we can share the data. We are also planning to build a semantic layer there so that it's easy for our users to understand the data and discover it with the right metadata and catalog. But right now, no, we are not using any dedicated catalog tool.
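
To illustrate how dataset metadata held in OpenSearch could back such a discovery page, here is a minimal search sketch using the opensearch-py client; the host, index name, and document fields are assumptions, not the platform's actual schema.

```python
# Free-text dataset discovery over a hypothetical metadata index.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search.example.internal", "port": 443}],
    use_ssl=True,
)


def discover_datasets(text: str) -> list[dict]:
    """Search dataset names, descriptions, and tags for a keyword."""
    result = client.search(
        index="datasets",
        body={"query": {"multi_match": {
            "query": text,
            "fields": ["name^2", "description", "tags"],  # boost name hits
        }}},
    )
    return [hit["_source"] for hit in result["hits"]["hits"]]


# Example: discover_datasets("clinical trial")
```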

Speaker 2: 21:43  Okay. Is that something you see down the road, or is that not in the plans?

Speaker 1: 21:48  Yes, we are working on it right now as we speak, so hopefully we'll have even better data discovery in our platform.

Speaker 2: 22:00  Yeah, that's right, because it's a journey, so you're always making changes. Yeah. Well, with that: as you've scaled, how has the structure of your team, or your teams, changed?

Speaker 1: 22:16  Did you mean the data platform or the product team?

Speaker 2: 22:19  The product team,

Speaker 1: 22:21  Okay. So in general, it depends again on the data domain, because some data domains are full-fledged, while some are just starting out with very few technical experts. But let's say I have to talk about the cross-functional team that wants to build and maintain a data product. I would say the basic roles are: a data product owner, who identifies and discusses customer needs, owns the vision, and prioritizes the work. Then we have the data engineers, who build the data pipelines and implement the business logic. Then, depending on the domain again, you can have a data source expert, more commonly known as a subject matter expert, who understands the source system, how the data is structured, and the meaning of the data. And if you take on more analytical team members, then you can have the data analysts and scientists, who build analyses and set requirements on the data for the data engineers.

23:26  And sometimes you might have ML engineers as well, if you have some ML use cases. Especially in the healthcare industry, it's quite common for us to also have a data governance representative; on a data mesh platform they are known as a data steward, who defines which data to share and with whom, sometimes defines the quality metrics, and ensures the interoperability of the data. So these are the basic ones, but as we scale, I would say you can also have site reliability engineers, Scrum Masters, and some supporting roles like cloud architects and others.

Speaker 2: 24:11  So in an ideal data product team structure, that's how you see it? Yes. Okay. Let's see, we have another question from Iion: could you elaborate on the semantic tech tooling you are using for data harmonization and the availability of internal reference data? If you see that at the bottom of the chat.

Speaker 1: 24:38  So, for data harmonization and the availability of reference data.

24:45  For data harmonization right now, again, we give the autonomy to our data domains, so they choose tools based on their needs. For example, because the platform is built on AWS, most people use tools like AWS Glue, where the subject matter experts help them work out what the transformations should look like. So they harmonize the data based on that combined data engineering and subject matter knowledge. And as for internal reference data, I would say we are currently working out how we can surface the common terminology that is used across Novo Nordisk and bring it into our data mesh platform. It is part of the data catalog work we are doing together with the enterprise team.
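
For readers unfamiliar with Glue, here is a skeletal harmonization job of the kind a data domain might run; the database, table, column mappings, and output path are made up for illustration, not Novo Nordisk's actual pipelines.

```python
# Skeletal AWS Glue job: rename and retype raw vendor columns into the
# domain's agreed terminology (hypothetical names throughout).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw vendor data from the landing zone (cataloged by a crawler).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_landing", table_name="vendor_orders")

# Apply the mapping agreed with the subject matter experts.
harmonized = ApplyMapping.apply(frame=raw, mappings=[
    ("ord_no", "string", "order_id", "string"),
    ("amt", "double", "amount", "double"),
])

# Write the harmonized data to the data product's storage location.
glue_context.write_dynamic_frame.from_options(
    frame=harmonized, connection_type="s3",
    connection_options={"path": "s3://datamesh-sales-example/orders/"},
    format="parquet")
job.commit()
```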

Speaker 2: 25:44  Okay. And then from Ganesh: have you implemented data contracts or data sharing agreements?

Speaker 1: 25:53  So far I would say no to that. For now, the data contracts mostly exist either at the table level or around unstructured data, and they are defined between the teams themselves. I think this is something we are also looking forward to maturing, so there is a standard way of doing it. But for now, as long as you have it documented somewhere, and as long as your consumer knows how to consume your data, that should be fine. That is mostly tables, which they query using Athena or any SQL client, or data shared simply as folders, let's say for images, PDFs, and other unstructured data.

Speaker 2: 26:42  Okay, thanks. And then let's see, we have another question from Mattia: how do data consumers obtain access to data products? Do they need to request access permission from the data product owner, and how do they do that? Workflow, Jira, email?

Speaker 1: 26:57  Okay, so today how it happens is that our data mesh platform has a kind of request-access button. Once you discover the data, you can see the metadata of the dataset. First of all, you can see who the owner is, which could be a business owner, and who the data stewards are. You can then request that dataset, and our platform sends a notification to those data owners and data stewards. When they accept or approve it in our platform, then internally we use AWS Lake Formation, bucket policies, and many other things to make the sharing mechanism work. For now we have not built it into a workflow tool; it's more that an email notification goes out with a link back to that share request in our platform. But so far I think people are happy with it. Yeah.
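
As a sketch of the kind of grant a platform might issue once a steward approves, here is a call to AWS Lake Formation (a service the speaker names); the principal ARN, database, and table identifiers are placeholders.

```python
# Grant read access to an approved consumer via Lake Formation (sketch).
import boto3

lakeformation = boto3.client("lakeformation")


def grant_table_access(principal_arn: str, database: str, table: str) -> None:
    """Give the approved principal read access to one table."""
    lakeformation.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": principal_arn},
        Resource={"Table": {"DatabaseName": database, "Name": table}},
        Permissions=["SELECT", "DESCRIBE"],
    )


# Example:
# grant_table_access("arn:aws:iam::123456789012:role/analyst-role",
#                    database="sales", table="orders")
```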

Speaker 2: 28:05  Ya, your question is pretty long. I wonder, maybe you want to unmute yourself and ask your question, feel free to do that. Or if you're having trouble, I can unmute you if you just want to ask that question. Well, okay, if not... actually, Jo, is there a way for people to contact you if they want to? Oh, there we go.

Speaker 3: 28:42  Nice. Hi. You were talking about data mesh as a data platform, and I want to know how you managed to get the whole company to share the same focus and the same objective of making the data accessible. I mean, there are efforts made by, for example, data engineers who are willing to clean the data, to pin down data owners, and so on, but there is still a lot of data that is not assigned a business owner, and this is too much work, or too long an effort, for a single team. How do you manage to make the whole company work on this data mesh objective?

Speaker 1: 30:10  Okay, so with the data mesh, as I mentioned, we have a shared responsibility model. We only provide certain things centrally, which is more like the data governance: how we do data sharing, data discovery, and so on. When it comes to the actual data, we don't own it. The data domain team owns that data, and that's why it's called a data product. They have a product mindset, and they maintain that data product. So if they have, let's say, junk data in there, that is something they will look into, not the platform team. To me, when I publish data, I need to make sure that it is interoperable. That means that when a user comes, they can consume that data by some means: if it's, let's say, a folder structure, then it can be accessed as S3 objects, or if it's structured data, then they can use Athena or our natively integrated analytical platform to consume it. So I think it's not just the data platform's responsibility. We take care of whatever we provide centrally, but the actual datasets are owned by the data domains themselves, which are multiple teams, let's say sales, production, clinical trials. There are multiple other teams within an organization.

Speaker 3: 31:49  Yeah, yes, yes. But how do you make that move from a data-centric approach? I think we all start from a data-centric approach before realizing the data mesh benefits, and the data-centric approach is made of big, monolithic ETL transformations. So it's hard to share the ownership. There are some kind of old, big tables, big data assets that answer a lot of use cases, and knowing how to split the ownership, or how to split the ETL, when going from centralized to mesh is not that evident.

Speaker 1: 32:52  I can relate to you, because we also initially had our centralized data lake, but it was not possible for us to scale. As you mentioned, we faced both technical and operational challenges maintaining it centrally, so we had to evolve into something else. We chose this data mesh platform, where we do not own the data. We maintain a few things centrally, as I mentioned, and we let the data domains, who own the data, own the ETL process and decide how they want to expose their data. People are also free to build a lakehouse: use our platform as just the storage and then have the consumption layer somewhere else, like Snowflake or Databricks, so that they can build their own data products. That distributes the responsibility from the central team to the particular team, especially the one with the subject matter experts, because as a central team you cannot have the knowledge of everything from sales to production. It became hard for us to scale. That's why we gave the very domain-related responsibilities to the data domain product teams.

Speaker 3: 34:11  Okay, so you're saying it is a long-term objective, and it is not easy to move. We cannot throw away the actual business, because the business has to run, and it is a long-term effort to split and distribute this big ETL as much as possible, to give as much ownership as we can to other teams?

Speaker 1: 34:42  Yes, and I think it became easy for us to move in this direction because the pain points of the centralized data lake were realized sooner rather than later. And when leadership is convinced, it's much easier to communicate across the multiple data domain teams. The actual problems are real engineering problems, but when the approach itself is not working, you have to take a different approach. It was beneficial for us: we have less responsibility for very domain-specific things, while we empower our data product teams, because they were also held back by many things. As you said, with one big table, query performance becomes slow, so it was not helping them either. This was a win-win situation for both the data product teams and our team to move forward with a data mesh platform.

Speaker 2: 35:46  We have another question. Ganesh, I think you wanted to ask another question, is that right?

Speaker 4: 35:54  Yeah, sorry, it's really hard to type every time. So my question is, again, back to the discoverability of the data products. You said you don't have catalogs at the moment, so how can a consumer discover the data products?

Speaker 1: 36:13  So we do have some metadata, and we do have an organization-level structure where you have your own environment, so you can see which data domain a dataset belongs to. When you search on our discovery page, you can filter based on tags, if you already know them, or you can search by keyword; if you know some of the very Novo-specific terms, it could be the dataset you're looking for. So you can do a text search there and browse through it. It's not that we have nothing; you can do it. But many times it is a very conscious choice: I'm looking for this dataset, and you are already talking to that data domain team to convince them of why you want access to it. For some of them, of course, you don't need that prior discussion, but for many, if it's critical data, then you do have to have that agreement before you request the dataset.

Speaker 2: 37:18  Okay. Yeah. So many questions, this is great. Thank you so much for answering all these questions. But yeah, another one from Ryan Waters. It says: hi Jna, it's great to hear from somebody who is also on the journey, and it sounds like you're slightly further ahead. It would be great to understand how many data products are currently active on the Novo Nordisk data platform, and how much of the targeted Novo Nordisk data, percentage-wise, is currently available via the Novo Nordisk data platform.

Speaker 1: 37:50  So right now we have more than 20 stakeholders representing the different data domains, and within those data domains there are multiple different teams that want ownership of their own data products. For your second question, I would say: Novo Nordisk has a century-long history, so if you look at the data part of it, it's huge, right? I don't think we have even managed to get 10% of it, but I think we are moving ahead. What we are also doing, with new stakeholders, is migrating from on-prem to the cloud as much as possible and starting the journey from there. And with the partner data transfer tool that I mentioned, we get data from many external vendors as well, and we are securely loading that data using our data mesh platform, which means it is available for other people to use. So yes, we are moving forward, and I think 20 is too few stakeholders at the scale of Novo Nordisk right now.

Speaker 2: 39:02  Great. Mattias has a question that I wanted to ask as well, which is: if you could start your data mesh journey from the beginning, what would you do differently? Or would you?

Speaker 1: 39:16  I think we have had quite a learning journey there, and today in my presentation I mentioned some of the failures as well. For example, in the beginning we didn't put much focus on our UI and user experience; maybe we could have done better there. But I would say that to gain maturity, you cannot get there on day one. You have to learn, you have to grow, you have to embrace users' feedback, because it depends on whom you are serving and how you are serving them, and on how technology advances as well. So we have to be adaptive in that approach. In short, I would say I wouldn't do anything differently, because we learned really good lessons and we are adapting to them. And I think that remains true for almost any robust system that you want to build, scale, or sustain.

Speaker 2: 40:16  Yeah, adaptability seems key. Yeah. Okay, great. Yeah, we've had a lot of great questions, a very, very active session. So where can people... oh, I had one more question, actually, before that: if people are interested to know more about the implementation of your data platform, where can they find that information?

Speaker 1: 40:44  So we have been very open about this data mesh platform. We have our AWS blog post series, written along with AWS ProServe, so you can actually Google it: if you search for the Novo Nordisk modern data architecture, you'll find the series there. I think the third one is out, and we are working on the fourth one right now, which will be out soon. I have also presented our data mesh platform, its architecture, how our data domains are using it, plus our vision of where we want to go with it, at AWS re:Invent 2022, so you can also find that on YouTube. Along with that, I have spoken about it on Data Mesh Radio, and Milia from this community has also written an article about what we learned building this data mesh platform. So yes, most of the information is available through Google itself.

Speaker 2: 41:47  And actually, you can find that article on Medium and also on the Data Mesh Learning blog. Well, yeah, this has been a great talk. We'll have to have you back in another two years, and you can give us even more information on how your data mesh has matured and more lessons that you've learned.

Speaker 1: 42:08  Yes, I think we have so many things we are actively working on, for example the catalog and discoverability, and more enhanced MLOps integration. So I think we will have more to share there. I hope to be here again, and I'm very happy to share my learnings with this community.

Speaker 2: 42:28  Okay, we'll have you back one year from today. How's that sound? Okay, great. Well, thank you so much. This was a really amazing talk. And I want to thank everybody else for taking the time to come here. So yeah, I think that's it.

Speaker 1: 42:50  And thank you everyone.

Speaker 2: 42:52  All right. Have a nice morning, afternoon, evening. Great. Bye. 

Data Mesh Learning Community Resources


Ways to Participate

Check out our Meetup page to catch an upcoming event. Let us know if you're interested in sharing a case study or use case with the community.