Alexandra Ebert: Hello, and welcome to Episode 45 of the Data Democratization Podcast. I'm Alexandra Ebert, your host and MOSTLY AI's Chief Trust Officer. Today, I have the honor and the pleasure to introduce Mastercard's renowned and simply brilliant Chief Privacy Officer, Caroline Louveaux, as our guest, who is also Mastercard's newly appointed Chief Data Responsibility Officer. In today's episode, Caroline will share more about her new role and why it was needed, and give you a lot of actionable advice on how you can be successful in today's ever more complex data and AI ecosystem.
Particularly interesting if you're a privacy pro: she will also reveal her approach to creating a privacy and data governance program that not only becomes an enabling function for the business but that your business counterparts will love. Yes, you heard me right on that. We'll also chat about what's fondly called the tsunami, or alphabet soup, of the European Union's new data and AI regulation, and how you can set up your organization for success despite the increased legal complexity. Of course, we also touch on the topic of AI governance and how to enable AI innovation in a trustworthy manner, as well as how privacy-enhancing technologies like homomorphic encryption and synthetic data can contribute to this, and what is needed to bring these new technologies into your organization.
Lastly, Caroline shares her top priorities for the years ahead and why she's so passionate, particularly about data and AI for good initiatives. You can see there's much going on in just 35 minutes of recording, but it's definitely one of the episodes that you don't want to miss. With that said, let us dive right in.
[music]
Alexandra: Caroline, welcome to the Data Democratization Podcast. I've actually been looking forward to doing this episode with you for quite a while now, so I'm super happy that we made it today. Before we dive into all the topics that we want to cover, could you briefly introduce yourself to our listeners and maybe also share what makes you so passionate about the work that you do at Mastercard?
Caroline Louveaux: Thank you so much, Alexandra. Yes, we made it.
Alexandra: Absolutely.
Caroline: My name is Caroline Louveaux. I'm the Chief Privacy and Data Responsibility Officer at Mastercard. I joined Mastercard about 15 years ago, and I've seen the company transform from being a traditional payments company to becoming a true data and technology company. As you said, I'm super passionate about the work because we face some unprecedented challenges at this intersection of law, technology, and society. These are really, really important topics that are going to shape the future, our future, the future of our kids, and of the generations to come, so it's very exciting.
Alexandra: I can imagine. It's definitely a very exciting space to be in. We have so many topics where I want to tap into your wisdom, your experience, and your advice, but maybe to start out: what I really like whenever you present at one of the leading privacy, data, and AI conferences out there is your philosophy on how privacy should actually enable organizations. Could you tell us a little bit about how you approach privacy protection so that it actually enables organizations in their data and AI practices?
Caroline: Absolutely. Our recipe for success at Mastercard has been privacy by design. It has morphed over time into a data responsibility by design approach. What that means is that we embed strong privacy and data controls into everything that we do by default, into every single technology, service, and solution. It's both a top-down and a bottom-up approach. Top-down because it's endorsed by our CEO and by our board of directors. Actually, privacy and data responsibility are part of our corporate objectives, which is absolutely key to instill the culture across the whole company.
Then bottom-up, because it's embedded into our product development process. In practice, our privacy team works closely together with product development, technology, and security teams. For every single new technology or solution, we ask ourselves important questions together, like: can we collect less data and still achieve the same objective? Can we encrypt or de-identify the data? Can we provide people with more transparency and control over their data? If it's done well, privacy by design is much more than just a compliance exercise.
It's a true business enabler, and it can become a competitive differentiator. Actually, at Mastercard, the business wants privacy at the table when they negotiate with a customer or brainstorm about a new idea, because they realize that it's part of the success.
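[Editor's note: a minimal sketch of the privacy-by-design questions Caroline describes, applied to a fabricated transaction record. The field names, salt handling, and hashing choice are illustrative assumptions, not Mastercard's actual implementation.]

```python
import hashlib

# Fabricated raw record; field names are illustrative only.
raw_transaction = {
    "card_number": "5500005555555559",
    "cardholder_name": "Jane Doe",  # not needed for, e.g., fraud scoring
    "merchant_id": "M-88321",
    "amount_eur": 42.50,
    "timestamp": "2023-04-01T10:15:00Z",
}

def minimize_and_pseudonymize(tx: dict, salt: str) -> dict:
    """Keep only the fields the use case needs ("can we collect less
    data?") and replace the direct identifier with a salted one-way
    hash ("can we de-identify the data?")."""
    token = hashlib.sha256((salt + tx["card_number"]).encode()).hexdigest()
    return {
        "card_token": token,
        "merchant_id": tx["merchant_id"],
        "amount_eur": tx["amount_eur"],
        "timestamp": tx["timestamp"],
        # cardholder_name is deliberately dropped (data minimization)
    }

print(minimize_and_pseudonymize(raw_transaction, salt="per-deployment-secret"))
```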
Alexandra: I can imagine. I think it's so important, as you described, to not only have it as something where the C-suite or the board says, "Okay, we want to go in this direction," but to actually also empower those working on the solutions: "Okay, what does it mean to incorporate and implement privacy by design?" You also mentioned data responsibility, and if I remember correctly, your role as Chief Data Responsibility Officer is quite a new one. Can you expand a little bit on why this was added to your previous responsibilities? Is there a sharpened focus now with that role?
Caroline: That's a great one, and yes, I've changed my title very recently. The reason is that, first of all, we have data responsibility principles at Mastercard, and someone had to be accountable for making sure that all our products and solutions align with these principles. Very recently, I have been lucky to expand my role and my remit to data management, data governance, AI governance, and everything that actually touches data, so it made sense to reflect that expansion of remit in my title. I'm very excited about this new journey ahead of me.
Alexandra: I can imagine. Congratulations on that. It definitely sounds like a lot of work has now been added to your plate.
Caroline: For sure.
Alexandra: I think this also requires you to have the cool team that you always mention at conferences, which helps you make all of this happen. You already shared that if you want to achieve privacy by design, it's important to have both a top-down and a bottom-up approach. Any other important advice, particularly for the privacy professionals listening, on how to be successful in today's data and AI ecosystem?
Caroline: Yes, there are so many of them, but I would like to mention three, Alexandra. I think that one skill set that has become really mission-critical for navigating this complex AI and data ecosystem is risk management. If you just look at AI, AI is obviously creating huge potential, but it also has the potential to create risks for people and for society. AI raises novel challenges that are really, really hard to solve because it brings to the table different values and different interests that are not always aligned and sometimes conflict with each other.
Being able to assess these risks and come up with creative solutions to mitigate them becomes really important. The second tip I would like to share is that no one can solve any of these challenges on their own. It requires partnerships with multiple teams: product teams, innovation teams, marketing, security, technology, audit, you name it. The ability to build trusted relationships across a company is absolutely key as well. Then, if you want to be successful today, you have to have a growth mindset and be eager to learn again and again and again, and to adapt to stay relevant. Obviously, all of this is a journey, right? There is no final destination, so let's all enjoy the journey.
Alexandra: Absolutely, and just keep going. This actually also brings me to the next topic that I want to discuss with you, which is what is fondly called the alphabet soup or tsunami of new regulation that we see particularly in the European Union, with the Data Act, Data Governance Act, Digital Markets Act, and so on. Some of them are already in effect, some are just being drafted. What is your perspective on the current regulatory developments that we see in the European Union?
Caroline: Yes, as you said, Europe, and to be honest, a lot of other governments around the globe are issuing more and-
Alexandra: That's true.
Caroline: -more regulations to maximize the benefits of data and technology and minimize the risks. We know that Europe has the ambition to set the global standards in this space, exactly as they did with the GDPR for privacy. I think we support, and I do support, Europe's objective to promote responsible innovation. This is very much in line with my day-to-day job and Mastercard's commitments to privacy and data responsibility, and having clear rules of the game, clear guardrails as to how companies and governments can handle data and technology, is also key to building trust in the technology, in AI, for example.
Trust is obviously super important to company success and to the growth of the whole digital economy. This being said, there's also a risk that this tsunami of regulations, or alphabet soup, would not deliver the desired objectives, because if you think about it, they all impact data, and so they create a patchwork of complex and fragmented standards that are really, really difficult to understand and costly to comply with. If it's difficult for large organizations like ours to navigate and to understand, how would SMEs handle all of that? Would they even have the resources and the expertise in-house to be able to assess and understand what it takes to get into compliance? I think this gets amplified by the fact that a multitude of regulators are going to be competent in this space.
We have, obviously, data protection authorities, but you can also expect AI regulators, data governance agencies, antitrust agencies, and sectoral regulators, like the financial regulators in our case. All of them are going to have something to say about that. They may reach different conclusions and different interpretations, and that may create legal uncertainty and have a chilling effect on innovation that is going to harm consumers and SMEs in the first place. This tsunami of regulations holds a lot of promise, but it could also have unintended consequences.
Alexandra: Yes, I can absolutely imagine. We already saw this with the GDPR: it was much easier to comply for large organizations, which had both the technical and the legal resources, or the financial means to get support, but it already posed a challenge for smaller organizations. Now, with all the different regulations coming into effect, which, as you mentioned, overlap with each other, and sometimes with inconsistencies in what different terms mean in one law versus another, that's definitely something we hope will get sorted out over time, but let's see how this is going to go. Any other challenges that you foresee? This was the big, high-level perspective: okay, it's hard to comply with all of them at once and have the resources. Any other challenges that you foresee in this context?
Caroline: It's going to be costly, et cetera. Let me actually continue a little bit on what I just said, and then we can go into the specifics. I think because of all the potential for overlaps and inconsistencies, it's going to be really, really important for policymakers and regulators to talk to each other. We've seen how difficult it is, in the context of the GDPR, for national data protection authorities to align on the GDPR, which is just one piece of regulation. Having a strong cooperation mechanism between all these regulators is going to be key if we want to maximize the benefits of data sharing, AI, et cetera, while minimizing the risks.
This is going to be a delicate dance, and the devil is going to be in the details, so it's going to be really, really difficult. I think another point that is also worth mentioning is that Europe does not exist in a silo. We live in a highly global and interconnected world, so it's going to be really important for Europe to also align with other democracies around the globe on key definitions, on common principles, common regulatory approaches. Today we see that countries do not even agree on what AI means in practice, how it should be defined.
Then, at the end of the day, we also know that the law often lags behind the technology. It's going to be really important for governments to also open the dialogue with industry and civil society and include a variety and diversity of perspectives and views, including lawyers and engineers, but also ethicists, anthropologists, and all these different disciplines that can add different perspectives to the dialogue. I think that multi-dimensional conversation is going to be really, really important there.
Alexandra: I think so too, and maybe also a more iterative approach towards regulation, since technology evolves so quickly. Europe is one front, but you, of course, also operate on a global scale. If you had, I don't know, three wishes to grant on how all this new regulation should be approached, is there anything that you would advise regulators to consider adapting, maybe, or changing, or adding with everything that's ahead?
Caroline: Actually, I just mentioned what I would say. I think it's also about working with-- Let me give you an example of something that is really difficult. I think everybody agrees that algorithms have to be fair, that we want to make sure there is no bias, discrimination, et cetera. I think we can all align behind that idea, but the how is really, really difficult. Regulators and policymakers are not telling companies how to do that; they expect companies to figure that out. What does fairness actually mean? For example, at Mastercard, we have only a limited data set.
Alexandra, we have your card number, but we don't have your name, your address, or your phone number. How do we make sure that, for example, a fraud score that we compute for a transaction that you make is fair? Do we need to acquire additional data to be able to do so? If so, how do you reconcile that with the principle of data minimization under the GDPR? Again, this is just to say that I think it's going to be an iterative regulation process, if you want, and conversation and dialogue are going to be needed to reach the right outcome.
Alexandra: Agreed. This also reminds me of a survey that was done by Microsoft Research and some academics, I think back in 2021, where they surveyed leading AI fairness practitioners. They found that the number one challenge these practitioners face in their day-to-day work is to even know whether their algorithms exhibit any biases or not, because they lack the data, the sensitive attributes.
Caroline: Exactly.
Alexandra: This is also why some regulators, for example, are now considering providing diverse synthetic data to organizations, so that they at least have some test data to see, "Okay, are there some fairness improvements that we can make?" Of course, this can only account for one part of the puzzle, and there's so much else that goes into it, so it's definitely a challenge.
Caroline: Absolutely.
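[Editor's note: a minimal sketch of the measurement problem discussed above: checking a model's decisions for bias requires a sensitive attribute, which is exactly what production data often lacks and what synthetic test data could stand in for. The decisions and group labels below are fabricated, and demographic parity is just one of many possible fairness metrics.]

```python
import numpy as np

# Fabricated model decisions (1 = approved) and a sensitive attribute,
# e.g. coming from a synthetic test set rather than production data.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-decision rates across groups;
    0.0 would mean perfect demographic parity."""
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(decisions, group)
print(rates)  # {'A': 0.6, 'B': 0.4} -> group A is approved more often
print(gap)    # 0.2 -- without the group labels, this check is impossible
```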
Alexandra: Maybe one more thing on this alphabet soup. I remember that at one of the last IAPP conferences, you mentioned that you and your team put a lot of work into preparing for all the different acts and regulations ahead. Can you share, particularly with our listeners who also come from large organizations, how you did that, or what your lessons learned were on how to be set up for success in complying with all of these different laws?
Caroline: Yes. Thanks, Alexandra. Let me say upfront that if anyone says everything is under control, that they have it all done, they're probably lying, because I think we're all learning as we go along, right?
Alexandra: Sure.
Caroline: I'm happy to share a little bit of our strategy, which is articulated around three pillars: governance, technology, and people. In terms of governance, I already mentioned that we have a data responsibility by design approach. What that means in practice is that we have one single process, one tool, one impact assessment to assess new solutions and new technologies under all these different laws. That enables us to have a holistic approach to data, and it provides the business with a one-stop shop, so they just love it.
Then we have established a governance council to which high-risk issues are escalated for review with executive oversight. This is in terms of governance, because, at the end of the day, you cannot have a separate process to assess each of these different regulations. It's just not sustainable. In terms of technology, we often talk about the fact that technology creates issues for people and for society, but we also think it's part of the solution. We're assessing how technology can help automate our compliance efforts and how it can streamline how we support the business.
For example, we are working on technology to understand, in real time, what data we have, where it comes from, who has access to the data, and how it's being used and protected. These types of technologies and automations are absolutely key to making our compliance efforts scalable and auditable, so that the team can focus on things that are really novel and challenging.
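[Editor's note: a minimal sketch of the kind of record a real-time data inventory like the one Caroline describes might keep per data asset. The schema below is a hypothetical illustration, not Mastercard's actual tooling.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataAssetRecord:
    """One entry in a hypothetical data inventory, answering: what data
    do we have, where does it come from, who has access, and how is it
    used and protected?"""
    asset_name: str
    source_system: str            # where the data comes from
    data_categories: List[str]    # what kind of data it contains
    access_roles: List[str]       # who has access
    purposes: List[str]           # how it is being used
    protections: List[str] = field(default_factory=list)  # how it is protected

record = DataAssetRecord(
    asset_name="fraud_scoring_features",
    source_system="transaction_switch",
    data_categories=["transaction"],
    access_roles=["fraud-analytics-team"],
    purposes=["fraud_scoring"],
    protections=["tokenized_pan", "encryption_at_rest"],
)
print(record)
```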
Alexandra: Sure.
Caroline: Then, talking about people, at the end of the day, talent is our biggest asset, so we are remaining laser-focused on our talent, trying to reskill and upskill our people to make sure that they are in a position to address the challenges of today and of tomorrow. Over the last couple of years, we've invested in and trained people in AI, of course, but also in biometrics, digital identity, privacy engineering, cyber, and many other areas. All along, it has also been important that we've kept our CEO and our board of directors fully in the loop, so that all of this is part of the strategic planning. It has worked so far.
Alexandra: I can imagine. It sounds like a very comprehensive approach, and, of course, it makes sense to approach it holistically as opposed to having a very fragmented landscape. Talking about a holistic approach in terms of data: when we think about AI governance, many organizations I talk with mention that this brings about new challenges. We already mentioned fairness earlier. Can you share a little bit, specifically in the context of AI? AI is nothing particularly new to an organization like Mastercard, but can you share a little bit about the elements that have to be added to your overall processes and governance when you think about enabling AI innovation in a trustworthy manner?
Caroline: Yes. Thanks, Alexandra. We've actually been investing in AI governance for as long as we have been investing in AI, because we knew that it was raising new types of challenges. What we've been doing is we have developed an AI governance framework, which, again, we are updating on a regular basis, that is a practical guide for data scientists to help them understand what types of bias they can look for and how to mitigate that bias. Now, I make it seem simple, but it's really hard. We already mentioned the fact that AI can bring new types of challenges, new tensions between, for example, bias testing and data minimization.
Another tension that we face is, for example, between transparency and security. If we want to provide a lot of transparency to people about how our AI-based fraud algorithms work, we may give ammunition to the fraudsters to circumvent the fraud tools and game the system. These are all elements and tensions that sometimes require companies to make tradeoffs. This is the reason why, and I think that was new, at least at Mastercard, we have established an AI governance council with executives from across the company to be able to review these high-risk AI use cases and make sure that they comply with the law but also align with our data responsibility principles.
In some cases, it's also important to get external views. We look for input from academics, from experts, even from consumer panels. Hearing what people think about your AI innovation is really interesting.
Alexandra: I can imagine.
Caroline: It also helps you guard against your blind spots. Sometimes opening up the conversation a little bit and hearing different perspectives is absolutely mission-critical. Obviously, this is iterative, and I think we're all learning and continue to learn.
Alexandra: Sure. I think that's the way to go, because nobody has all the answers, particularly for things like bias, fairness, and so on. Since you mentioned that you have this oversight council for AI, which assesses high-risk use cases: who decides whether a use case is high-risk or not? Is this a decision made by data scientists, by specific, I don't know, ethics specialists in the AI space, or is it, again, a collaborative effort together with legal teams, privacy pros, and ethicists?
Caroline: That's a great question. Actually, we have a framework that we put together so that we have a consistent way of assessing these use cases and qualify them as high-risk the same way. We have actually defined all the criteria. There's a whole methodology behind it, and this is decided by lawyers and data scientists together, and we review it on a regular basis. We also audit this, just to make sure that our methodology works. It's not a one-off like, "Oh, is it high-risk? Yes or no?" We have a framework behind it and a methodology to do that well.
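[Editor's note: a minimal sketch of a weighted rubric for consistently classifying AI use cases as high-risk, in the spirit of the framework Caroline describes. The criteria, weights, and threshold are illustrative assumptions; the actual methodology is not public in this detail.]

```python
# Illustrative criteria and weights -- assumptions, not Mastercard's.
CRITERIA = {
    "affects_individuals_rights": 3,  # e.g. fraud or credit decisions
    "uses_sensitive_data": 2,
    "fully_automated_decision": 2,
    "novel_or_unproven_model": 1,
}
HIGH_RISK_THRESHOLD = 4

def classify_use_case(answers: dict) -> str:
    """Sum the weights of the criteria a use case meets; cases at or
    above the threshold are escalated to the governance council."""
    score = sum(w for name, w in CRITERIA.items() if answers.get(name))
    if score >= HIGH_RISK_THRESHOLD:
        return f"high-risk (score {score}): escalate to AI governance council"
    return f"standard review (score {score})"

print(classify_use_case({
    "affects_individuals_rights": True,
    "fully_automated_decision": True,
}))  # -> high-risk (score 5): escalate to AI governance council
```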
Alexandra: Makes sense. Of course, when we talk about AI governance, this again brings up regulation, namely the AI Act, which is arguably one of the most discussed pieces of AI regulation. Of course, we don't have the final text yet, but are there any areas that you foresee being a challenge, particularly for larger organizations to comply with?
Caroline: I think there might be a couple, Alexandra. We already discussed a little bit everything that relates to bias testing and fairness. Another one may be transparency and explainability: the fact that we need to explain to people why AI reaches certain decisions. I think that makes perfect sense. Transparency is absolutely key to gaining people's trust in the technology, but we also know that AI can sometimes create the black box effect, where it's not always possible to understand in detail why AI reached certain conclusions. I think this is an area that is going to require more research, more investment, and more dialogue about how to do this right. Does that require, for example, that AI should not be used if it cannot be explained? These are important questions that still need to be discussed.
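[Editor's note: one common, model-agnostic way to peer into the "black box" Caroline mentions is permutation importance: shuffle one input at a time and watch the score drop. The sketch below uses scikit-learn on fabricated data; it illustrates the general technique, not how any Mastercard model is actually explained.]

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Fabricated data standing in for, e.g., fraud-scoring features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling an important feature degrades the score a lot; shuffling an
# irrelevant one barely moves it -- a coarse, global explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```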
Alexandra: That's true. Maybe one other question, particularly, again, for the privacy pros listening. You described that AI governance is something you looked into right when looking into AI itself, but many organizations I talk with are right at the beginning of their AI governance journey. Are there any recommendations you would give the privacy leaders out there to ensure that they're more successful in the AI space? Maybe also in terms of how to upskill their teams: what's important for privacy pros who come from more traditional organizations to become competent in the space of artificial intelligence?
Caroline: Yes, absolutely. Let me give maybe a couple of tips, but I'm sure there are more than that. First, I think it's really important to get senior management on board, to get the support, the right level of attention, and potentially resources if needed. This is always a non-negotiable. You have to have senior management fully on board. Second, I think there's no need to start from scratch. Usually, companies have existing processes and existing tools. It's always good to build on what you already have, so maybe start by making an inventory of what processes and review procedures you have, and then build on that.
Then, yes, people, definitely, and there are more and more tools and certifications out there. The IAPP, for example, has launched its AI governance materials and certifications. I think there are tools out there to get educated and trained on AI, which is absolutely a must. The last piece that I want to mention is that, again, AI governance is not something that can be done by one individual or one team. Effective AI governance requires input from a multitude of different teams, and so establishing that multi-stakeholder governance forum is [unintelligible 00:25:48] important to get it right.
Alexandra: Definitely. Maybe also talking about finding the balance between regulatory compliance and facilitating AI and data innovation: we also talked about PETs, privacy-enhancing technologies, in the past. Can you share a little bit about your perspective on privacy-enhancing technologies, particularly in the context of facilitating AI innovation?
Caroline: Absolutely. PETs have huge potential, of course, to enable data to be shared and analyzed so that we can extract all the value from the data, but in a way that is fully privacy-preserving. At Mastercard, we're looking at many of these technologies, including homomorphic encryption, synthetic data, multiparty computation, et cetera. Synthetic data obviously has a lot of potential. A lot is being said right now about synthetic data and how it can be used to train AI models so that you don't use real data and you don't risk breaching people's privacy rights.
To be honest, if we want to make the most of these PETs, I think we need a couple of different things. First, we may need some clarity from regulators about how PETs can be used in a way that helps with privacy compliance. It's still unclear to many privacy professionals how PETs can really help, and because it's costly to leverage PETs, you need an incentive for companies to invest in this space. Knowing how PETs can help with compliance is, I think, going to be really important for them to be successful. The second area is that we may also need more research and more education in this space.
It's still a very nascent area. We know that each of these PETs has pros and cons, and making sure that everybody understands the benefits, and how they can be used for different types of use cases, is going to be really needed if we want PETs to be adopted more widely.
Alexandra: It makes sense.
Caroline: I do hope it's going to be the case, because I think there's huge potential there.
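[Editor's note: to make "PETs" a bit more concrete, here is a minimal sketch of one of the simplest members of the family: a differentially private count using the Laplace mechanism. The epsilon values and data are illustrative; the other PETs mentioned, like homomorphic encryption and multiparty computation, require far more machinery.]

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def dp_count(records, epsilon: float) -> float:
    """Epsilon-differentially private count. A counting query has
    sensitivity 1 (adding or removing one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon suffices."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

flagged = list(range(1032))  # fabricated: 1032 flagged transactions
# Smaller epsilon -> more noise -> stronger privacy, less accuracy.
print(dp_count(flagged, epsilon=0.5))
print(dp_count(flagged, epsilon=5.0))
```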
Alexandra: Definitely. Since you mentioned it needs incentives, and you elaborated on understanding how PETs can help with compliance, can you also elaborate on what else would be beneficial to facilitate the adoption of these new technologies, and for which audiences? Let's say for the C-level audience or even board members, versus for the privacy professionals and the data teams. I'm just curious to tap into your experience working at such a large organization as Mastercard.
Caroline: I think you need different things for different types of audiences here. Definitely, at Mastercard, we're lucky that our senior management is really behind PETs, investing in them, and a true believer, but I think that's unusual, unique. I think that in many organizations, CEOs and board members don't even know what a PET is, and hey, homomorphic encryption, who can even remember the name of it, right?
Alexandra: Yes.
Caroline: I think we are going to have to do a translation exercise here about what PETs can bring; that's one thing. Then, for the people implementing them, I think it would be useful to have a little bit more guidance and research about what each type of PET can bring, and what the limitations and benefits are, because we can see that some PETs are a better fit for some use cases, and others for others. I think having a clear framework around that may really help the people doing the work: the data scientists, for example, and the AI scientists.
Alexandra: Sure, this makes sense. This is also something that we experience quite often, because synthetic data, in particular, is a technology that can help with facilitating privacy-preserving data sharing, but it can also help with AI governance, fairness, and so many other things, which, of course, if you're new to the technology, makes it a little bit challenging to understand where to place it and in which context you can actually use it. I actually want to ask you, since you mentioned that you're in this unique situation where your C-suite is behind PETs and promoting adoption and research in these areas: do you know why that's the case?
Was there some specific education initiative that came from middle management or bottom-up, or did they simply come into the organization with the understanding that PETs would be a very important piece of data-driven innovation? How did this come into existence?
Caroline: This is a great question, Alexandra. I don't have a perfect formula or magic solution here. I think Mastercard has been investing in privacy and data for a long, long time. If you think about it, back in 2018, when the GDPR came into force, we launched Trūata, a company that we co-founded with IBM. The whole purpose of Trūata was to anonymize data and to conduct data analytics on behalf of clients in a way that would be fully compliant with the GDPR. In a sense, we had already been investing in this space, the space of combining innovation with privacy and finding solutions that actually achieve both.
I think it has all been part of that journey, and we also have a lot of in-house expertise in data analytics. These people get super excited about PETs, and so I think it's also the culture of the company, very tech-driven and data- and privacy-focused, that has helped us.
Alexandra: I can imagine. Since you mentioned that you have this vast expertise in-house, not only from the legal perspective but also from the technical perspective, just out of curiosity: I know some organizations are more on the route of building everything from scratch and doing their own solutions for secure multiparty computation, differential privacy, synthetic data, and the like, while others turn more towards already-built systems. How is this approached, from your perspective, in large organizations? When are decisions made to build from scratch versus to go with solutions that are already developed and can be purchased?
Caroline: I think it's a great question as well. At Mastercard, we always compare both options, build or buy. It always comes with a whole research and investment plan about the pros and cons of each. Sometimes we buy, sometimes we build. It really depends on the situation, on the availability of the teams, on our expertise, on a lot of different things, but we're always looking externally at vendors and suppliers to see what they offer and whether we can partner with them or invest in them. We're not only inward-looking but also outward-looking.
Alexandra: Interesting. Since we have to come to an end, maybe two last questions, and I'll start with the first one. You shared at the beginning of our episode that you now have this expansion of your responsibilities in terms of data governance and data responsibility. When you look two or three years ahead, what are the areas that you are personally most excited to work on?
Caroline: I'm going to say two things. One may not be as sexy, but it's equally important: I want to be able to automate as much as possible, because the issues that we face are, as I said, unprecedented, and I want the team to have the headspace and time to think, to explore, and to partner on the difficult and new issues, so that they don't have to handle the day-to-day, business-as-usual stuff. Automation is going to be really mission-critical to our success, I think. The second area, which may sound more appealing, is everything that relates to data for good, or data for social impact.
I think that Mastercard has been in this space of financial inclusion and data for social impact for a while, but I think we can do even more, not only Mastercard but together with other public and private partners. How can we leverage data in a way that actually contributes to finding solutions to societal issues? This is something that is really close to my heart, and I hope we're going to see more of that.
Alexandra: I fully agree. That's definitely also one of the areas I'm most passionate about, and why I'm so happy that we're collaborating not only with private sector organizations but also public sector organizations to see how we can make synthetic data openly available and actually democratize access to data. Hence also the name of the podcast, because there's so much potential in both data and AI. I think if we don't open it up in certain areas, then we're definitely limiting society's progress in-
Caroline: Fully agree.
Alexandra: -effectively using data for good and AI for good. Maybe the last question: you've already given so many valuable pieces of advice to our listeners, but coming back to this emerging field of AI and data privacy laws, do you have any final piece of advice, one for the privacy pros listening, and maybe also one for all those interacting with privacy professionals: data scientists, AI and analytics leaders? Because sometimes, of course, there are communication challenges and the like. What's your piece of advice for both of these target audiences?
Caroline: I think I would actually give them the same advice: they need each other. They need each other's perspectives in order to be successful here. They should embrace the diversity of thought, and the fact that there may be disagreements is a good thing. They'd better start making friends if they're not friends just yet.
Alexandra: [laughs] That's definitely a good piece of advice. Caroline, thank you so much for coming to the Data Democratization Podcast. I think there were many things for listeners to take away, and I hope we can continue this conversation and maybe invite you back in the future to get some updates on what you've learned in the meantime.
Caroline: Absolutely. Thanks so much for the invitation, Alexandra.
Alexandra: Thank you.
[music]
Alexandra: See? I think I didn't promise too much when I said this is an episode that you don't want to miss. I'm so grateful for everything that Caroline shared, and I feel there are a lot of actionable takeaways, regardless of whether you're a privacy professional, focusing on AI governance, or generally bringing data and AI capabilities to your organization. As always, if you have any comments, questions, or remarks, we look forward to hearing from you. Either write us a short email at podcast@mostly.ai or get in contact with us on LinkedIn. Until then, I'm looking forward to having you tune in for our next episode.