Frederik Gjørup Nielsen has gone from Lead Engineer to Engineering Manager to Head of Data & AI Platform at Pandora, the world's largest jewelry brand. His recent years have been focused on translating commercial use cases into real-world implementations in a Data Mesh-inspired architecture. In this presentation, he takes you from the initial discussions around Data Mesh through a two-year journey of an evolving architecture, data platform, and strategy.
Speakers:
- Frederik Gjørup Nielsen – Head of Data & AI Platform, Pandora Jewelry.
Watch the Replay
Read the Transcript
Speaker 1: 00:00 Thank you everybody for joining us today. My name's Paul. I am the community manager for the Data Mesh Learning community. We have a great talk here with, oh my gosh, sorry, Frederik Nielsen from Pandora. And yeah, he's going to give us a really great intro talk about his data mesh journey at Pandora. Before we get too far, I want to just do some community announcements. So we recently launched the Data Mesh Learning MVP program, and this is acknowledging some members of the community you've probably seen and probably already know, who have been helping push the data mesh community forward. You should reach out to them in our Slack channel if you want to talk with them, and you can also see them at our virtual meetups. Speaking of virtual meetups, we have our end user round table meetup coming up on April 16th. We just had one a couple of days ago, but that one will be covering domain architecture, and that's with Andrew Sharp, Amy Regatta Hawkinson, and Juan Rossier, who are also data mesh MVPs.
01:14 And then this is the third part of a series on the next generation data mesh operating model, and that's with Eric Broda as the moderator, with Jean-Georges Perrin and Tom de Wolf. That's going to be on May 2nd, so you should come check that out as well. And we do post all of our talks, if you miss them live, on YouTube at the Data Mesh Learning YouTube channel, so you can go check them out there as well. And then if you want to join the Data Mesh Learning Slack group, you can do that by scanning this QR code; that's an invitation to the group, and you can ask questions, answer other people's questions, and just start conversations about data mesh. So definitely join that group if you haven't already. And then lastly, you can find a lot of information about the Data Mesh Learning community by going to our website, and that's just datameshlearning.com. We have hundreds of resources listed, events like this one, and there's also a use case library so you can gain some insight on how other people are doing data mesh. And that's all I have right now, and I'm going to hand it over to Frederik, who's going to give us his presentation.
Speaker 2: 02:35 Thank you, Paul. Let me just see if I can share.
Speaker 1: 02:42 I'll say one more thing, and that's in terms of questions. If you have questions, go ahead and put them in the chat, and Frederik will answer those questions at the end of the presentation.
Speaker 2: 02:57 Good. Can you see my screen and my presentation? Good. Okay, so thanks for having me. My name is Frederik Gjørup Nielsen; we agreed with Paul not to try the pronunciation of the Danish middle name. I'm working at Pandora Jewelry in Copenhagen. Here's a little bit about me so you know what my journey has been before data mesh and why I've ended up here. So I originally studied economics and thought I was going to save the world with macroeconomics, but found out that building data and IT systems was much more fun. I worked a couple of years in consultancy and then entered Pandora as a lead engineer, and went on to become a manager in the data and analytics department. Right now I'm the head of our data and AI platform team, which is the team that builds our data platform for all of the product teams that build stuff for the business.
04:09 I've listed two technologies that excite me a lot. One is Apache Flink, which I think has some really interesting use cases together with Kafka and is relevant for someone who works in data engineering and wants to start using stream processing, which is important in our area and where Kafka is interesting as well. And then recently I've really started to work with and explore graph databases more. It's one of the things I think we're going to use in a data mesh context in Pandora this year. At the other end of the spectrum, there's Master Data Services in Excel, a master data plugin that you can use on virtual machines in Excel, which I really hate, but it's very tough to kill. I live in Copenhagen, where the head office is, and have two kids here. It's also why the talk is here at the end of the night, when they are up in their beds.
05:13 So what I'll cover today is: I'll walk you through what Pandora is and what kind of company it is. Then I'll take you through a funnel: what is our overall global strategy, underneath that a digital strategy, and then zoom in on some of the challenges we saw when it came to being really good with our digital products towards our consumers, which has been a big strategic priority. This has been paving the way for us saying we have some structural challenges here that we need to tackle, and data mesh is probably a good way to do that. Good. As a company, it's the world's largest jewelry brand, and no one comes close to us if you think of pieces of jewelry sold: 103 million pieces of jewelry, 28.1 billion in revenue in DKK, which is around $4 billion. And in 2024 we reached a hundred percent recycled gold and silver in our collection, which is one of the achievements that we've been particularly proud of. Right now we have around 34,000 employees around the world, and it's going really well.
06:32 I think if you know Pandora, what it's known for is the charm bracelet, where you can have a bracelet and add small charms to it, gradually building a collection. We also have exclusive partnerships with Disney and all of their underlying franchises like Marvel. So that's really where a lot of the money is being made. Good. Okay, so that was a little bit about Pandora overall. Now we start to focus in on how we've arrived at what we do in our data and analytics units, where we fit in the organization, and how we've ended up on a data mesh journey. Our overall corporate strategy, what the whole organization is working under, is called Phoenix. It's because Pandora a couple of years ago wasn't doing that well economically, but we've been on a growth journey. So Phoenix is sort of saying: rising from the ashes.
07:29 One of the important things to note there is that digitalization and personalization is a growth pillar and one of the foundational elements that we're building the next chapter of growth on. Beneath this, we have a digital strategy that says, on one side, we want to create a really good shopper experience for our consumers, whether they're going into a physical store or online, and provide a streamlined and efficient business. This is our digital strategy, and this is where data and analytics starts to appear. It goes throughout all of these pillars; it is required to succeed with this.
08:15 And this is also how we organize our teams. So in pillar two here we have all of our marketing and consumer-related efforts, and in pillar three, this is where we have all of our retail and e-commerce efforts. The reason why it's important is because that's where I was a manager until recently, and that's where you'll see some of the slides relate to use cases. It's also where we've said, when it comes to this customer experience, where we have direct touch points with the consumers, on, you can say, social media platforms, physically in the stores, or in online shopping, this is an area where we have an ambition to be world class. We're not as good as we want to be right now, but the bar here is really high.
09:12 And why is it important to be world class here, and why is it that personalization means a lot for a company like ours? Here are some numbers on it. When we talk about personalization here, and it'll be relevant in a data mesh context, it means that when you get communication from us, whether it's in emails or you see an ad from us on a website, it is relevant to what you've clicked on, what you've bought, and what we think you're interested in. Hopefully we've used machine learning to make it even more relevant. It's becoming increasingly important in retail that you can start to work with personalization. Right now we say that we are maybe able to do what you'd call segmentation, but we want to get to a point where you can do near-real-time dynamic serving of content. It's when you start to work with these kinds of use cases that you can say: hey, okay, we need to look at how this is actually executed in a data system at scale.
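To make the gap between batch segmentation and near-real-time dynamic serving concrete, here is a minimal sketch of the per-event flavor of personalization, assuming a simple in-memory profile updated as each click or purchase arrives. The names (ConsumerProfile, pick_content) and the interest-counter logic are invented for illustration; this is not Pandora's implementation.

```python
# Hypothetical sketch: per-event profile updates feeding content selection.
# ConsumerProfile and pick_content are illustrative names, not a real system.
from dataclasses import dataclass, field

@dataclass
class ConsumerProfile:
    consumer_id: str
    interests: dict = field(default_factory=dict)  # e.g. {"marvel": 3, "charms": 1}

def update_profile(profile: ConsumerProfile, event: dict) -> None:
    """Increment interest counters as click/purchase events arrive."""
    category = event.get("category")
    if category:
        profile.interests[category] = profile.interests.get(category, 0) + 1

def pick_content(profile: ConsumerProfile) -> str:
    """Serve content for the consumer's current top interest, else a default."""
    if not profile.interests:
        return "banner/default"
    top = max(profile.interests, key=profile.interests.get)
    return f"banner/{top}"

profile = ConsumerProfile("c-123")
update_profile(profile, {"category": "marvel", "type": "click"})
print(pick_content(profile))  # banner/marvel
```

Batch segmentation would recompute profiles nightly; the point of the near-real-time version is that the very next page view can already reflect the event above.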
10:31 So here come the issues. What were we looking at? If you went a couple of years back, we had an architecture that looked like this, and it's still the legacy system we're trying to solve. I'm not going to dive too much into it, but we're talking about a more classical left-to-right kind of architecture, where we had a lot of on-premise machines loading data primarily from production ERP systems and some sales systems, ending up in reporting out here on very big SQL servers running in the cloud. Then a couple of years back we said: okay, that needed modernization. So we did a big shift up to the cloud. As you can see here, we are on Microsoft's Azure platform, so those are the services that our data mesh is built on. But again, I would say this is an architecture that is modernized in the cloud but still focused primarily on moving data from source systems towards reporting.
11:32 Now the reporting happens in Power BI, embedded into a SharePoint portal. All of that is great. We can load data faster, it's in a cloud environment which is easier to handle, but still we are looking at a pretty monolithic setup. This has been the journey we've been on: from old-school BI to a transition where we modernized the data systems, but it was still primarily meant for reporting, though it could do a little bit more. You start to see Databricks, which of course has brought some AI and machine learning capabilities. But then we were faced with use cases like this. We had a lot of workshops with the business where we said: okay, this is the future ambition for what kind of journeys we want to be able to present to the customer. This is a person that goes into a store, they buy a bracelet, and they sign up for our loyalty program.
12:35 They go online, they unlock some points, and when they land they see personalized recommendations and shop some more jewelry from our Marvel collection. They can call our customer service, who know who they are. When they browse Instagram, they can also get personalized content, and they sign up for a newsletter where we can give them tailored content. All of this down here denotes whether the consumer is known to us, which means that we have identified them by email; semi-known, which is cookies; or an unknown consumer, which means that you're visiting the website or buying something in a physical store without giving us your email, so we don't know who you are. The exercise we then had was: okay, if we need to enable all of this, what are all the events that we need to have in the systems? What needs to happen?
13:25 And when you draw it up in a diagram like this: who currently owns these data blocks? Which systems need to be connected, and how do we link all of it? This is where, when we took an existing cloud architecture that, in the data and analytics department, consisted of monolithic platforms that everything needed to go through, you can say a centralized pipeline flow with all teams working on one platform, we started to say: hey, this is going to be tough to execute. Simultaneously, we could see that we did not have the speed and flexibility in several different ways, both in how fast these systems can load the data and in what flexibility we have in linking one data block with another. We hadn't built this in a very scalable way. We had domain knowledge that was centralized a lot with specific teams.
14:33 We couldn't really properly assign ownership of the data products, and the analytical platforms were the only place to get a lot of our core data, for example sales. So gradually we started to mature and say: okay, we need something that looks more like this, where you have a data product layer and in between a streaming platform. For us that's Kafka, hosted with Confluent, plus an API gateway that allows applications to curate data products that maybe hold transaction history, loyalty points, emails being sent and clicked on, all the interactions you've had on our website, and to distribute this seamlessly from data products to the application layer and back and forth as needed.
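As a rough illustration of what a data product in that layer could look like, here is a hedged sketch that materializes a loyalty-points view from a Kafka topic using the confluent-kafka Python client. The broker address, topic name, and message schema are placeholders, not Pandora's actual setup.

```python
# Hedged sketch: building a "loyalty points" data product by consuming a
# Kafka topic. Broker, topic name, and message schema are placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker.example.com:9092",  # placeholder broker
    "group.id": "loyalty-data-product",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["loyalty-point-events"])  # hypothetical topic name

points_by_consumer = {}  # the materialized data product: consumer_id -> points

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # Assumed message shape: {"consumer_id": "...", "points": 10}
        event = json.loads(msg.value())
        cid = event["consumer_id"]
        points_by_consumer[cid] = points_by_consumer.get(cid, 0) + event["points"]
except KeyboardInterrupt:
    pass
finally:
    consumer.close()
```

In the architecture described, a view like this would then be exposed through the API gateway rather than queried directly.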
15:27 So when we've looked at the next steps of the journey, we've said: okay, after this transition to a cloud architecture, we need to go towards a more federated setup where we can enable more near-real-time, where we can allow teams to get quicker access to data resources in a way that meets their needs, where they don't have to go through a centralized monolithic platform. These use cases have been the example of us saying we have a tough time executing this. And we've talked with others about this; Denmark is not that big of a country, so we have a tight network. I saw on the Data Mesh Learning community page that we also had a talk from Novo Nordisk, which is a big pharmaceutical company here in Denmark, and we've aligned on this same idea: what we essentially wanted to create for this to be able to scale was a data marketplace. A bit like Amazon, where producers of data products can come and put their data products on the market, and it's easy for shoppers to come and find them, and you have all these things like quality of the producer and maybe ratings of data products attached to it. We've also been inspired here in Denmark by Novo Nordisk, which has really been good at articulating this, and they are also very much building this on AWS, so the Amazon analogy is even better for them.
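A minimal sketch of the marketplace idea, assuming a simple in-memory catalog; the DataProduct fields, entries, and ratings are all invented for the example.

```python
# Illustrative sketch of a minimal data marketplace registry. Field names
# and catalog entries are invented, not Pandora's actual products.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    owner_domain: str
    description: str
    rating: float  # e.g. average shopper rating, 0-5

catalog = [
    DataProduct("sales-transactions", "retail", "Cleaned POS and e-com sales", 4.6),
    DataProduct("loyalty-points", "marketing", "Points balance per consumer", 4.2),
]

def search(term: str) -> list:
    """Find products whose name or description mentions the term, best-rated first."""
    hits = [p for p in catalog if term.lower() in (p.name + p.description).lower()]
    return sorted(hits, key=lambda p: p.rating, reverse=True)

for product in search("sales"):
    print(product.name, product.owner_domain, product.rating)
```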
17:08 So when we've drawn it up, we've said: okay, this new transition of the architecture, instead of looking left-to-right, looks more like, you can say, a connected spider web of domains, where you have walled-off domains that can still share data with each other. You have the more raw data products that sit on top of the source systems, and then the evolution of these different data products throughout the landscape to service a big set of use cases. Some of them might be analytical and still end up in dashboards; others might provide enriched data sets to downstream systems that need them, and other things might go to machine learning. When I've been looking at this, I've asked: what are the tough things about building this kind of setup from an architecture and an engineering point of view? One of the places where we've struggled the most is the platform side. What is in that? I transitioned to being the head of the platform team here by New Year, so this is what I'm spending a hundred percent of my time on. When I've looked at this, it's an area where there is, for example, not a very mature market when it comes to having services from the cloud vendors that allow you to easily do a data product API, a domain API, or a data catalog, all of these things that we want in our data mesh architecture.
18:54 I would say it's emerging. I know Zhamak is also building a company that's maybe aimed at providing a lot of these things, but still, this is not something you can go and grab off the shelf as of now. The second part that we found challenging is not the things that happen on the left side here, where we need to take data from a Kafka topic, persist it in our cloud systems, and stream it; that's fairly easy. It becomes more complicated when you ask: what happens when you need to join several data products to create enriched products? What do you set up for data contracts? What kinds of technologies do we think are most beneficial for that? When you're moving from a traditional batch setup to streaming, how does it all fit together? This is also something we've been spending a lot of time on, and it's still at a stage that I would call experimentation.
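To show why joining data products in a streaming world is harder than just persisting a topic, here is a framework-agnostic sketch of a stateful stream join that buffers events until the join key arrives. The event shapes (loyalty tiers, click events) are hypothetical.

```python
# Framework-agnostic sketch of the stateful stream join problem: enriching
# click events with loyalty status as both streams arrive out of order.
from collections import defaultdict

loyalty_state = {}                  # consumer_id -> tier (join-side state)
pending_clicks = defaultdict(list)  # clicks buffered until loyalty data arrives
enriched = []                       # the enriched, joined data product

def on_loyalty_event(event: dict) -> None:
    """Record the tier and flush any clicks that were waiting for it."""
    loyalty_state[event["consumer_id"]] = event["tier"]
    for click in pending_clicks.pop(event["consumer_id"], []):
        enriched.append({**click, "tier": event["tier"]})

def on_click_event(event: dict) -> None:
    """Enrich immediately if possible, otherwise buffer until the key arrives."""
    tier = loyalty_state.get(event["consumer_id"])
    if tier is None:
        pending_clicks[event["consumer_id"]].append(event)
    else:
        enriched.append({**event, "tier": tier})

on_click_event({"consumer_id": "c-1", "page": "/charms"})  # arrives first
on_loyalty_event({"consumer_id": "c-1", "tier": "gold"})   # join completes here
print(enriched)  # [{'consumer_id': 'c-1', 'page': '/charms', 'tier': 'gold'}]
```

Engines like Flink manage this buffering, state, and late-data handling for you, which is much of what distinguishes streaming joins from the batch equivalent.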
19:54 We also said: okay, this discussion was mainly had inside of our engineering teams, but for it to really succeed, it needs a broader discussion, because a data mesh implementation, if it is to be really pure, also needs to take in the organizational elements. In reality, you need to talk about what the end state of the organization should look like, where we really give domain ownership to the teams and really try to work with data products that are connected. This is still evolving, but this is also where we started to see some of the challenges.
20:46 What we've landed on is trying to create these smaller units in our infrastructure, where each team can have their own set of resources. So these were some of the sketches where we discussed what granularity you need to go to. How do we make sure we can track teams at a granular level without it becoming so scattered that the infrastructure is very tough to manage? Yeah. Then let me see where I jump now. We had some of these challenges two years ago, when I think we came up with this picture and said: we think these are some of the considerations; this is where we want to move; it can solve the challenges we saw in the use cases. I'm in Copenhagen now, where it's half past eight, but earlier today I was training my daughter's under-seven soccer team. I've been training soccer teams for a while, and there's an expression that says "good on paper, shit on grass," meaning that it looks nice in a plan when you draw it up, but when you need a bunch of seven-year-old girls to execute a soccer drill, sometimes what looked really good in your drawing beforehand doesn't become a good thing in real life.
22:24 So two years in, what are some of the conclusions that we have come to? We should have spent more time making some tough architecture choices at the beginning. I'd say that because some technologies, in my experience, are just a better fit for executing a more decentralized setup. I think it's tough to execute a data mesh on a centralized SQL server; you need to be able to split it into team-sized bits that still connect easily, and we faced some difficulties there. We've also had some parts of our organization that were really ready to come and work with us on these issues, for example when it came to owning the data products, while others weren't there yet. And during this journey, while we were doing a lot of this, we had a change in senior leadership, which meant that the momentum for data mesh was lost; the story really needed to be retold.
23:39 So what are some of the things that we are looking at right now? We're allowing some domains to go at a faster pace than others, saying: okay, they can lead the way on some of these topics, and with others we need to be pragmatic because the organizational buy-in is just not there. We've also said: okay, to build some of this stuff, like a data catalog, we are seeing that more and more of the vendors we use are starting to have native offerings that make that an easier journey, or where it becomes easier to share things between platforms. Especially with Databricks, which is really big in our organization, it's become easier to read data from other data sources and to have a catalog that scans all the data objects you have in different places. So these are some of the elements that we're working on now. Yeah, I'll return a little bit to this one thing about a moonshot, so I'll just let it be there.
24:40 So a year ago we got new leadership in our organization that didn't have any attachment to data mesh at all, and the reality was that we had moved strongly in that direction for a year. Actually, it's been cost tracking and cost cutting that has been one of the main drivers in creating this new decentralized setup, because we could see with the monolithic platforms that we had, and still have, so many challenges: teams are blocking each other, you don't have proper transparency into who's driving what cost, it's a challenge from a compliance perspective because you can't get very granular access to sensitive data objects, and the teams become very dependent on the platform team whenever they need to do something. So, you can say, it's been cost tracking that has been driving the decentralized setup a lot, but it allows us to push some of these other elements as well. The success over the last year or so has been that the new leadership could see themselves in this accountability on costs and the dedicated effort to actually lower costs by going in this direction, and we can strengthen a lot of these other elements as well. That's why we are seeing a lot of natural adoption in this direction.
26:17 One of the things I've struggled with a bit, and this was before we landed on cost tracking as driving most of our recent data mesh efforts, has been how to create a compelling storyline for selling something like this. The reason why it's important in a business like ours is that we have a senior leadership that is not very technical, and I don't think they ever will be. We have a CTO that sits in the top leadership of the company, but he doesn't come from an engineering background. What really caught his eye around data mesh at the beginning was that a consultancy showed them the idea of having real-time reporting on our consumers: how many consumers do we have, how many are signing up, where are they located in the world? Our CEO loved that idea as well. For them it became about this dashboard that they wanted; underneath, it was executed by data products and ownership and streaming, but they didn't really care about that. They wanted that dashboard, which was very tangible, around our consumers. So that's what they wanted.
27:36 But one of the things I've said is that if we want to build this out and transition to a data mesh architecture, we need to take one of two approaches. One is to go use case by use case: we are building a data catalog because we have teams that have trouble finding the high-value data products we offer; we are decentralizing the monolithic structure of our data platform because we want more cost transparency. Alternatively, you could go in a direction and say: we are going for one really big effort, doing something like Alibaba's Singles' Day in China. This is a picture of it. They have one day a year which, if you know it, is one of the biggest sales days in China, their Black Friday, where they have these giant screens and are able to show the sales around the country,
28:37 what products are really selling high, and make it a big event. In Pandora it could be some of the same: if we want to run global sales competitions where the latest inventory information goes to all the right systems, we can use AI to do personalized content on our website and we can report the sales numbers, with all of this interlinked. That could be something we do through a data mesh: build it with data mesh underneath but not mention it at all. In reality, we've probably landed more on the use-case-by-use-case setup. Okay, so last slide for me: what I've landed on after New Year. Before, I was the engineering manager of the teams that spent all of their time building stuff for the marketing and the retail and e-com organizations, and now I've moved over to being the head of the data platform, both on product and engineering. The vision that we've landed on, and you can see some of the slides match our digital strategy, is that what we want the platform to be is basically a seamless link between data producers on one side and data consumers on the other. That is the data store we want to be building for the company, where all of this exchange is made, and this has been our way of retelling it to our leadership, which is not that technical either.
30:25 How can we, as a platform team, start to build out these different capabilities? We can place our efforts in data privacy and security, building that throughout the platform. It can be in our DevOps initiative, which is automating our monitoring; our data quality could also fit in here. We can put more effort behind MLOps, having that as a standard part of the platform as well, put more emphasis on cost efficiency and transparency, or build out, you can say, the data marketplace. These are now the strategic pillars we have in the platform team when we ask where we are placing our efforts, taking that to leadership and saying: you decide; this is where the demand we have falls. Then they can go into their leadership sessions and say: okay, this is where we want to see you accelerate this journey. That was a lot of content. That's what I had, Paul. So I think I'll stop sharing my slides and then we can go to some questions.
Speaker 1: 31:39 Yeah. Awesome. Thanks so much, Frederik. Really appreciate your insight. For everybody out there, definitely don't be shy about asking your questions. I definitely have a couple that I want to ask, and I guess I'll start. So one thing: you talked a little bit about data contracts. Do you have a tool that you're using right now, or how are you implementing data contracts?
Speaker 2: 32:06 It's one of these things where it's actually coming a little bit bottom-up from the product teams right now; YAML files are the main way you specify it. It's one of the things where we probably have to figure out whether we want an aligned strategy for this, but mostly some teams have been doing it between each other. So I wouldn't say it's a big collective effort, but it's one of those things where, when this matures, we need to think about it more. It's been done mostly by setting some configurations in YAML files, or putting in, you can say, a standardized set of values that need to be added to all data products, but there's no enforcement mechanism. It's just not that strong as of now.
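Based on that description, a data contract in this style might be a YAML file checked for a standardized set of required values. Here is a hedged sketch of such a check; the field list and contract shape are assumptions, not Pandora's actual convention.

```python
# Hedged sketch: validating a YAML data contract for a standard set of
# required fields. The fields and contract shape below are assumptions.
import yaml  # PyYAML

contract_text = """
name: loyalty-points
owner: marketing-domain
version: 1.0.0
schema:
  consumer_id: string
  points: integer
sla:
  freshness_minutes: 60
"""

REQUIRED_FIELDS = ["name", "owner", "version", "schema", "sla"]

def validate_contract(text: str) -> list:
    """Return the list of missing required fields (empty means valid)."""
    contract = yaml.safe_load(text)
    return [f for f in REQUIRED_FIELDS if f not in contract]

missing = validate_contract(contract_text)
print("valid" if not missing else f"missing: {missing}")
```

A check like this could run in CI so a data product can't ship without the standard fields, which is the enforcement mechanism the speaker notes doesn't exist yet.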
Speaker 1: 32:59 Oh sure. Yeah, we were actually talking about that at another talk: data contracts and whether it's enforcement or notification, and what's the right way to go there. We have a question from Paul Gale. He said: you talked about a platform, however the architecture seems rigid. Do you try to automate? How long does it take to onboard a new data product?
Speaker 2: 33:23 Good question. So I would say our architecture right now is split between the three pictures I've shown you. The legacy platform is very much still alive; we're trying to kill it. Then there's, you can say, the cloud monolithic structure that is a weird in-between step after the on-premise platform, and then what we call the new decentralized setup. On that platform it's actually very fast. We are using ServiceNow as a global company; this is the engine we have for doing tickets and getting accesses. What my team is doing is working with the ServiceNow API, with Terraform scripts on the other end, which means that you can go to the ServiceNow webpage internally and say: I want a new data area for me and my team, this is the owner, we are doing this and this. Then all of the creation is automated and you can start to work on it, in Databricks primarily. I would say you can start to do data products within a couple of hours. Great.
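A heavily hedged sketch of the kind of glue this describes: turning an approved ServiceNow-style request into a Terraform run. The request payload, variable names, and module path are all hypothetical.

```python
# Heavily hedged sketch: glue between an approved request (e.g. from a
# ServiceNow-style ticket API) and a Terraform apply. Payload fields and
# the module path are invented for illustration.
import subprocess

def provision_data_area(request: dict) -> None:
    """Apply a Terraform configuration for a new team data area."""
    tf_vars = [
        f"-var=team_name={request['team_name']}",
        f"-var=owner_email={request['owner_email']}",
    ]
    # Hypothetical module directory holding the data-area Terraform config.
    subprocess.run(["terraform", "init"], cwd="infra/data-area", check=True)
    subprocess.run(
        ["terraform", "apply", "-auto-approve", *tf_vars],
        cwd="infra/data-area",
        check=True,
    )

provision_data_area({"team_name": "consumer-insights", "owner_email": "team@example.com"})
```

Keeping the provisioning behind an automated pipeline like this, rather than hand-run scripts, is what makes the "couple of hours" onboarding time plausible.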
Speaker 1: 34:32 Awesome. Thanks. So I had another question. In terms of challenges, what percentage would you say are technical challenges, and what percentage would be human challenges, like getting buy-in?
Speaker 2: 34:54 I would say, if I could be controversial, it's a hundred percent one or the other. No, I think it's actually 75% human challenges and 25% technical challenges. There are technical challenges, but I think the human ones are the biggest ones. It's around finding use cases where it hurts enough with the old setup that you need to do something else, and then being good at telling the story of why we don't just quick-fix it, but say: this is the solution to fixing it at scale over time, as we know it right now. But there are technical challenges we are struggling with. One of the things we're struggling with now is: okay, we're doing this decentralized infrastructure, and as a team we are spending a lot of time on Terraform. How do we empower the teams to control and own their own resources, while still allowing us to make constant improvements to the architecture without there being a misalignment between the Terraform scripts and the resources? We still need to deploy stuff on the infrastructure even though the teams own it, but how do you do that when we're not in a situation where there's the maturity in our product teams to start working with Terraform? So there are intersections where the challenges sort of collide.
Speaker 1: 36:27 Sure. Then also, in terms of data quality, are there feedback loops for that? Because you're also giving people some freedom, but how is that monitored at this point?
Speaker 2: 36:44 It is not done at the data product level, and it's probably one of those efforts that we're strengthening the most this year. Probably next quarter I'm building the main components for the data catalog, and then we'll have the discussions around how we implement data quality checks at scale in what we do. That probably goes together with data contracts as well, so all of these things are a live discussion. When we're looking at data quality, we're looking at Great Expectations, like a lot of other companies, to say: hey, we don't want to go out and shop and buy a platform to start with; we want to see if we can utilize what's there, open source, on the market, and try to build something on that. Then if we come to a point where we say this is not scaling or we are missing features, then we'll maybe go to the market. But for now we are approaching it piece by piece.
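For readers unfamiliar with Great Expectations, here is a minimal example in the spirit of that approach, using the classic pandas API (the exact API varies by library version); the specific checks are invented for illustration.

```python
# Minimal Great Expectations example using the classic pandas API
# (pre-1.0 style; the API has changed across versions). The checks and
# data are invented for illustration.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.DataFrame({
    "consumer_id": ["c-1", "c-2", None],
    "points": [120, 40, 9999],
}))

null_check = df.expect_column_values_to_not_be_null("consumer_id")
range_check = df.expect_column_values_to_be_between("points", 0, 5000)

print(null_check.success)   # False: one consumer_id is missing
print(range_check.success)  # False: 9999 exceeds the assumed max
```

Checks like these could run as a step in each data product's pipeline, which is roughly what "implementing data quality at scale" would mean here.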
Speaker 1: 37:40 Sure, sure. Yeah, that makes sense. I have another, actually two questions from Paul Gale; I'll start with the first one. So, what about lifecycle management and versioning? How do you manage chance?
Speaker 2: 37:55 So, change in, I'm guessing, data products, and maybe the data product resources. That would be my assumption; feel free to write in the chat if that's not what you meant. This is still...
Speaker 1: 38:13 He meant change. Sorry, how do you manage change? Change, but yeah.
Speaker 2: 38:17 Yeah. And I'm guessing here he means data product changes, or maybe the underlying resources. On the more global level, one of the things I will try to introduce is more decoupling of the architecture, meaning that I think it will become essential for us to remove the underlying connection to the specific resource that is serving you the data product. What I mean by that is that right now most of our end consumers go, for example, to Databricks Unity Catalog, or to our Azure SQL server, and that's where they get the data product, often from an explicit server address or endpoint. In reality, I think it's better if what they connect to is something like data.pandora.net/dataproducts, so that we have that layer in front of it, where you are not connected directly to the underlying resource and we can shift that around.
39:27 We can change it up and modernize it, but make sure the endpoints are stable so that downstream consumers can still rely upon the data products. When we then talk about data product management, that's also something that we are still maturing on. We don't own the Kafka platform internally, but we are talking a lot with that team about schema evolution; I think that is where you probably need to define it in your data contracts. So it is not an area where we are very advanced. It's still version 1.0 for a lot of these teams.
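The decoupling idea can be sketched as a routing table that maps a stable logical product name to whatever resource currently serves it, so a backend can be swapped without breaking consumers. All URLs and product names below are placeholders.

```python
# Sketch of the stable-endpoint idea: consumers address a logical data
# product name while a routing layer maps it to the current backend.
# All URLs and product names are placeholders.
ROUTES = {
    # logical data product -> current physical backend
    "sales-transactions": "https://unity-catalog.internal/sales_v3",
    "loyalty-points": "https://sql-server.internal/loyalty",
}

def resolve(product: str) -> str:
    """Resolve a stable product name to its current backing endpoint."""
    try:
        return ROUTES[product]
    except KeyError:
        raise LookupError(f"unknown data product: {product}") from None

# Swapping the backend is a one-line routing change; consumers keep calling
# resolve("loyalty-points") and never see the migration.
ROUTES["loyalty-points"] = "https://unity-catalog.internal/loyalty_v2"
print(resolve("loyalty-points"))
```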
Speaker 1: 40:04 Yeah. Yeah, I mean, what you've talked about seems like what we've heard from a lot of people, in that it's about making gradual changes. I think that also helps with the human element as well, right? Because you're not dumping all these changes on people all at once.
Speaker 2: 40:26 And it's also just to say, as of now, it is still at a level where people can handle that through dialogue and all that. I think it becomes a bigger issue when you're really scaled up and a lot of people are using the work; then you need to have it, otherwise it will be a pain in a certain place over and over again. But that's not where we are right now. So it's going from the biggest pain point to solving the next. The data catalog is the one that we are seeing as a big challenge right now.
Speaker 1: 41:03 Okay. I see another question from Paul. Do you follow DDD principles: bounded contexts, context maps, et cetera?
Speaker 2: 41:15 Yeah, so Pandora has spent a lot of time on our overall operating model, where we also work product-like; there are product managers in all teams, and a lot of training in domain-driven design. In reality, I think it could be sharpened even more. I think we're quite good at saying that people work in these pillars, product lines, where all the teams are told: this is your domain and your area, and you design for this part of the organization. But I think if we were really strong in domain-driven design, organizationally we would probably have an easier time just sliding into a data mesh setup. In a lot of places that's not where we are. Some areas, like all of our consumer data and marketing setup, are very ripe for this kind of setup, but when you're talking about finance in a retail organization, they are not there. It is a little bit more old school in the organizational structure.
Speaker 1: 42:29 Sure. Okay. Oh, Paul says: couldn't agree more. All right, well, yeah, I think that's all the questions I've seen so far. Does anybody else have any other questions? Well, I think that's it. Thank you so much, Frederik. Really appreciate your time. I know you're probably very busy and it's getting late, and I appreciate you working around our schedule. So thanks so much, and yeah, a great talk.
Speaker 2: 43:00 It's been a pleasure. And please, if you have any questions or things you want to discuss, I'm always open. Find me on LinkedIn; if you can spell my middle name, it's easy to find me, and I'm always up for a dialogue there.
Speaker 1: 43:14 Alright, great. Thanks so much. All right, have a good one. Bye everybody. Bye.
Data Mesh Learning Community Resources
- Engage with us on Slack
- Organize a local meetup
- Attend an upcoming event
- Join an end-user roundtable
- Help us showcase data mesh end-user journeys
- Sign up for our newsletter
- Become a community sponsor