Episode 38

Building trusted ML products with Karin Schöfegger

Hosted by
Alexandra Ebert
Karin Schöfegger is a seasoned ML product manager who knows how to create successful AI/ML products and mitigate the risks involved. In this episode, she shares her insights into the challenges of building AI products. Tune in to learn about:
  • How to align the business side and the data science side of product development
  • How traditional software development differs from machine learning development
  • How to bring customer understanding into data science and engineering
  • The most common traps in data science
  • Why it's essential to work with realistic data instead of picture-perfect datasets
  • How to get buy-in from stakeholders for ethical AI

Transcript

[00:00:09] Alexandra Ebert: Some AI products are a huge success, but most of them are not. What are the success factors when it comes to AI product development? How can you build AI products that support your business objectives, entice your customers, are trustworthy, and that actually get deployed? Which, as you're well aware, isn't the case for the majority of AI products. According to Gartner, a staggering 85% of AI projects fail.

While it would be somewhat overselling to tell you that you just have to listen to today's Data Democratization Podcast episode to fix all of your AI product challenges, I'm sure you will find today's episode worthwhile. You can learn about the key obstacles to avoid in AI and ML-based product development, the concrete actions that get you closer to successfully deploying AI products, and how, as an AI product manager, you can effectively collaborate both with your business functions and with data scientists. Last but not least, you will find valuable insights on trustworthy ML and AI product development.

With that said, it's my pleasure to welcome you to Episode 38 of the Data Democratization Podcast. I'm Alexandra Ebert, your host and Mostly AI's Chief Trust Officer. My wonderful guest for today is Karin Schoefegger. Karin is a true expert when it comes to developing and deploying trustworthy ML and AI-based products. She has over 15 years of experience in this space, working for companies like Google (for YouTube, for example), IBM, and even the FinTech unicorn N26. There's a lot to learn from Karin. Let us dive in.

[music]

Welcome to the Data Democratization Podcast, Karin. I'm so excited for today's episode; I was very much anticipating our conversation. Before we dive into ML-based products and how to make them trustworthy, could you briefly introduce yourself to our listeners and also share what makes you so passionate about building machine learning-based products?

[00:02:30] Karin Schoefegger: Thanks for having me here. I was very much looking forward to joining you after our first get-to-know session, learning about your podcast, and having the opportunity to share some of my learnings.

A little bit about myself: for the last five years, I worked at Google as a product manager, mostly on AI-powered products. Since September, I'm a freelancer. I embarked on a consulting and coaching path and I'm excited to see where it brings me. Originally, I started my career as a mathematician. I studied mathematics with a specialization in artificial intelligence, which back then was not a hype at all.

[00:03:05] Alexandra: Yes, I'm actually surprised that there were courses about that.

[00:03:12] Karin: I did my master's thesis on artificial intelligence and I coded a reinforcement learning algorithm back then myself. We even worked on a distributed network of computers already back then. That was like 2009, 2008. Then I worked a couple of years in research, but I left research to work as a consultant for IBM and as a business analyst for Global Blue. Then suddenly, deep learning was everywhere and I'm like, "What is that?" I researched it and learned that, well, actually, it's artificial neural networks, just with a nicer name.

[00:03:54] Alexandra: Very interesting. You basically worked in AI before it was cool. How exactly did you transition then from consulting roles and research roles into product management?

[00:04:06] Karin: That is a very good question. It was more by coincidence. I was working at Global Blue back in the day, which was a FinTech company; they invented tax-free shopping. They went through a huge digital transformation. They used waterfall models to build their products, but a new investor wanted more efficiency in the company, so they brought in a new CTO and a new CEO, and we completely changed the way we worked.

Naturally, a business analyst is in a good position to become a product manager in that environment, and I was always the kind of person who would challenge things: if a client comes in and says, "Oh, I need a blue button here," I would always ask, "So why do you need a blue button here?" Usually, we came up with a different solution. The solution wasn't the blue button but a different process or a completely different way of working, for example. I think it was already in my nature to take that role.

What I also like about it, versus just being a scientist or an engineer, is that you can take on so many different hats. Sometimes you need to be the lawyer, sometimes you need to be the user's advocate, sometimes you need to be the engineer, depending on who you talk to. In a way, you're a jack of all trades: you know a little bit about a lot of things, but not much in detail.

I also learned that there's a difference in how I tell my story, because when people look at my CV, they all think, "Oh, you started out in artificial intelligence and you ended up at Google, there's such a common thread through your CV." I don't feel that there is a common thread; a lot of things happened by coincidence, like me becoming a PM. I think one of the key messages is also that you own your story and there are always different ways to tell it.

[00:05:55] Alexandra: That's definitely a good point to emphasize. Maybe to set the scene for what's to follow: in one of our earlier conversations, you pointed out that it's still surprising to you how many business leaders don't know how more traditional software product development differs from AI-based product development. Just to set the scene, could you walk us through the key differences here?

[00:06:18] Karin: I think there are two topics that I would mention here. One is that, while nowadays if you open the news there's AI everywhere as a buzzword, people don't really know what it means. AI is a super-broad field, and quite often when people talk about AI, they mainly mean machine learning, and building machine learning-based products comes with a lot of differences in how the systems work and how they're built.

For example, in traditional software development, engineers code the rules and how the machine makes decisions. With machine learning, the system learns from data, so it learns its own rules, and quite often, depending on the models that you use, these rules are a black box. The way you build these products is more like how you work with research teams than with, say, UI engineers.

It's much harder to estimate how long it will take to build and when it will be ready, but a lot of business leaders are still in the mindset of, "Oh, we need to be able to manage that, we need to know when we are ready to launch," which is much, much harder to do and to predict when you build these kinds of products. That is one point. The second point is around expectation gaps.

I think there is still a lot of misunderstanding about what AI can actually do for you and for your business, given, for example, the data that you have and your context. This also needs to be managed well.

[00:07:52] Alexandra: That makes sense. For the first point you mentioned, is there a way to marry this uniqueness of ML-based product development with the expectations of the old way of doing things? Or is more of a mindset change necessary, and maybe a more continuous way of working?

[00:08:11] Karin: I think it's both. One is the mindset change: you need to become much more data-driven in your decision-making, and you need to build up an experimentation culture, for example. The other thing is to rethink how you work as an organization; certain frameworks and methodologies work better than others. For example, in Europe, or Germany particularly, I learned that Scrum is still quite prevalent, but in Scrum you're supposed to be able to predict what you can deliver within two weeks, which is much, much harder to do when you build machine learning-based models.

There are frameworks you may want to look into that are better suited for research and development teams building these kinds of things. I'm a huge fan of Kanban or Scrumban, a mix of both worlds, for example, and with most of my teams I ended up working with some such variant. Every team then picks up their own practices and modifies them to fit their needs. That's what I would recommend.

[00:09:12] Alexandra: Understood. On the second point you mentioned, about expectation management and what ML can actually deliver: do you see tendencies where organizations just want to play around with AI and ML to be cool, without identifying the business cases? Or are there also challenges where the AI engineers work on something cool and then don't necessarily figure out how it can provide value to the business? Do you see an emphasis on one or the other problem, or what's most common out there?

[00:09:42] Karin: I totally see both. On the one hand, obviously, engineers want to work on cool tech. They want to find cases where they can use it, experiment with it, and learn it for themselves, because even for their own career it is very important that they understand this new technology and know how to build with it to advance. On the other hand, I definitely also see it on the business side: because AI is so hyped right now, everyone wants to jump on the hype train, so to say. For example, there was a study a couple of years ago that found that most European AI startups don't even use AI.

[00:10:17] Alexandra: I think I remember that.

[00:10:19] Karin: [laughs] I think this is often totally fair; especially when you want to go fast to market, a rule-based system may be sufficient for the current user base that you have. Then over time, you grow into actually building out your own AI models, which is totally fine. Sometimes you don't even need ML because it's too risky for the domain that you're in. Using a rule-based system may also deliver better results and satisfy regulatory requirements much better, for example.

[00:10:49] Alexandra: If you say rule-based systems are sometimes sufficient, are there some areas where you say, well, ML definitely is the way to go? Or what are the key benefits that products incorporating this new approach can bring to organizations?

[00:11:03] Karin: I think there are certain areas, for example if you're in the social media space, where there is no way to do it without AI. If you work with massive amounts of data, there is a good chance that you can deliver unique value with ML. There are certain environments where your users are already used to getting very personalized experiences, so they just expect that from you as well, for example.

Then there are certain areas where it's much harder and riskier to start using AI, like the legal domain, the financial domain, or the medical domain. Quite often you may start using AI but you actually have a human in the loop. You don't let the AI make the final decision, but you use it as one input for the human decision-maker to come to his or her conclusions, for example.

[00:11:54] Alexandra: Also, thinking of it, you worked at Google, YouTube, and N26, so I would assume organizations of that size, or that are digitally native, have a different approach to data management: having the data in order and an awareness of what they actually have. What would you say are the prerequisites for the organizations that you've encountered in your consulting business, or earlier, maybe at conferences? What are the prerequisites they need to have before they look into ML-based product development?

[00:12:25] Karin: A couple of years ago, I would've said that you need to become data first before you become AI first. You really need to make sure that you build the data repositories, have good data, and make it ready for serving as well. A lot of people underestimate this and think just having data is enough, but there's actually a lot of effort that goes into making sure that the data is good data, can be easily consumed, is properly cleaned, and so on. Nowadays, I think my answer is a little bit different, because now there is also a rise of AI-as-a-service companies. If you just want to power a certain part of your product with AI, you can just plug in an API and you get AI out of the box.

There, obviously, you don't need to train your own models, so you don't have such a need for vast amounts of data. Obviously, this comes with other risks again, like what if this provider goes out of business or suddenly raises their prices, et cetera.

[00:13:27] Alexandra: Or even some ethical considerations-

[00:13:29] Karin: Exactly.

[00:13:29] Alexandra: -where you don't necessarily know how it was developed. Can you give us a more practical example? For instance, for those organizations who are not yet data first, what type of AI-powered capability might be a lower-hanging fruit for them to incorporate into how they currently do business?

[00:13:46] Karin: I think if you want to start out as a company in using AI and you haven't done it before, it's good to choose problems or use cases that others have already solved in a similar environment. For example, we know personalization or classification is feasible with AI. Then you have a much easier path to prove to your business stakeholders and the executives that these AI-powered features make sense for the business and provide value, and only then move to the harder problems.

What you often also see is that companies have this idea of using AI and then choose a problem that's way too hard to solve. Then it's very hard to prove within a short amount of time that AI actually adds value to your business.

[00:14:35] Alexandra: You already mentioned two very interesting and, I think, important aspects: proving to the executives that AI makes sense, and the time duration. What would you say are the success factors, both for building ML-based products and for first starting out with it, how to ramp it up, how to get in this direction?

[00:14:54] Karin: One thing I see is that quite often companies still tend to focus more on the technical challenges than on what else is needed to make AI succeed. The best companies, who do succeed with AI, are also very good at managing this sphere of change. Obviously, with AI there also comes a change in how we work, but also in how our users interact with our product. Just to give you one example, and I think the majority of our listeners will know this: machine learning will always fail at some point. There won't be a hundred percent precision and recall for a classification system, for example.

As a company, you need to think about how your product may potentially fail in the future and how you deal with that. These companies design for failure: they think in advance about what may happen and what the user should then be able to do, or they think about how to communicate to the user that they use AI and that it may fail. Different companies may make different decisions here, but I think the important part is that you think through these points. Another aspect, for example, is that because AI is such a new technology, there are also different ways that bad actors can potentially threaten your product or your company.

Adversarial attacks are just one example. I think the more users your product has, and the more risk this poses to your company, the more you definitely need to consider these points, discuss them with your teams, and understand the risks.
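To make the designing-for-failure point concrete, here is a minimal sketch in Python of a classifier wrapper that plans for its own mistakes. The confidence threshold, the score dictionary, and the deferral behavior are all illustrative assumptions, not a description of any specific production system.

```python
from typing import Optional, Tuple

CONFIDENCE_THRESHOLD = 0.8  # illustrative: below this, don't act on the model

def classify_with_fallback(scores: dict) -> Tuple[Optional[str], str]:
    """Pick the top-scoring label, but defer when the model is not confident."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Design for failure: surface the uncertainty instead of guessing,
        # e.g. route the item to human review or show a neutral default.
        return None, "deferred_to_human_review"
    return label, "decided_by_model"

def precision_recall(tp: int, fp: int, fn: int) -> Tuple[float, float]:
    """No classifier reaches 100% on both; track the trade-off explicitly."""
    return tp / (tp + fp), tp / (tp + fn)

print(classify_with_fallback({"news": 0.55, "entertainment": 0.45}))
# -> (None, 'deferred_to_human_review')
print(precision_recall(tp=80, fp=20, fn=40))  # -> (0.8, 0.666...)
```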

[00:16:36] Alexandra: I completely agree that's so important also, again, from a responsible AI perspective. One thing that just came to my mind, you mentioned that as a product manager in the space, you wear plenty of different hats. One of them is also being the user advocate and of course, I would assume developing products at some point or the other could also benefit from actually talking to the customers, to the users.

How do you navigate these conversations when a broad chunk of the public basically sees AI as something that's either terrifying, or hugely overestimates the capabilities that AI has today? How do you navigate this when you do user interviews or interact with users in the development stage?

[00:17:19] Karin: A very interesting question. Honestly, I think users shouldn't worry about what technology powers their product. They should have a seamless and valuable experience, which in turn reinforces my point that as a company you need to think about how these new products may fail. How do I communicate to users in these failure cases? What can they communicate back to me? For example, at YouTube, when you have a news feed of videos and then suddenly there's a video that you don't perceive as a news video, what does that do to the user?

Does it limit their trust in the system, or does the user not care because they assume, "It's a mix of interesting videos I may want to consume"? It really depends on the context in which the AI system is exposed to the user and how the environment around it enables them to use it effectively. Another example is how they can communicate these failure cases back to the company, and what my team does with the failure cases that are reported to us, for example.

[00:18:30] Alexandra: That makes sense. We also talked about the success factors just a moment ago. To bring it back to this topic, what would you say are next to the things you already mentioned, other important factors to consider when developing products that are ML-based?

[00:18:44] Karin: I think one core aspect is to start with your own team, to build products that work for everyone. The easiest step is to hire a diverse team so that you bring in multiple perspectives as you develop these products, which means that you have people with different backgrounds and different experiences on the team contributing to it. I think the second aspect is very closely related to company culture.

You also need an environment that offers psychological safety to its employees, so that if they realize, "Oh, this AI system may fail in these new cases," or "I'm not sure we actually understood all the bias aspects we have in our data," these people feel safe to speak up. Then the engineering team can actually look into understanding the data in certain aspects more deeply or come up with ideas on how to mitigate these biases, for example.

[00:19:49] Alexandra: Understood. One other thing, since we talked about convincing the executives: I assume that at Google this potentially wasn't that big of a challenge, since Google is a company that relies heavily on artificial intelligence, but across all the different companies you worked with, did you ever encounter skepticism towards AI? Do you have some tips for our listeners on how they could go about this when they want to convince their executives about AI?

[00:20:15] Karin: I think you brought up a very good point: there is a huge difference in maturity across different types of companies. Google is obviously on the very extreme end; it's AI first, and AI is in the company's nature, or DNA, so to say. A lot of smaller companies still need to go through a certain maturity process to actually grow. Not everyone needs to be AI first. I think the key is to figure out: do I even have problems that are best solved with ML? Because ML comes with higher risk.

The question is, do you really need to use AI, or can you solve the problem in a first step with different means? I usually start all my talks with: well, there are cases where ML is best suited, but there are also cases where ML is actually not that well suited. I think the skepticism also often comes from not understanding how these products work, and from the fear that it's a waste of resources, that it takes way too long to build.

I think another aspect is that when we talk about AI, quite often we have this dystopian view in our minds that AI will take over the world and [crosstalk] exactly, which is more the science fiction perspective of AI, but this is also a fear that is often brought up by news media, for example. I think we need to have a more realistic view, talking about the cases where AI really adds value, and separate that from the science fiction world and use cases.

[00:21:57] Alexandra: You can't believe how often I try to get this message across when talking both with politicians as well as with journalists. That's definitely a point.

[00:22:04] Karin: Yes. I think there is also this aspect that quite often, when we talk about ML products, we think about user-facing products, but there are also a lot of use cases where AI can be helpful in internal processes, for example speeding up certain procedures that are in place. At least when you look on LinkedIn and try to inform yourself about what's going on in the scene, these are the cases you read about much less often, because they're company-internal. B2B use cases you also read about less often than B2C use cases.

[00:22:41] Alexandra: Understood. Are there any favorite internal or B2B use cases that you've worked on, and can talk about, which you really enjoyed and thought made a big difference for the organization?

[00:22:52] Karin: Now I need to think a little bit more.

[00:22:55] Alexandra: Oh, good.

[00:22:57] Karin: I think any insights into data. When you have massive amounts of data that you, as a human, cannot understand anymore just by looking at it yourself, AI can deliver huge value, giving you new insights that you wouldn't be able to gain otherwise.

[00:23:16] Alexandra: Yes, agreed. This is also what we try to bring forward with synthetic data. Many organizations are still in this phase where only a privileged small group of individuals ever gets to see customer data, which, of course, impedes what type of innovation is possible. It's so beautiful to see, with those organizations who are pioneering when it comes to synthetic data, what type of innovation comes out once they internally open up the customer database in a synthetic, privacy-safe format and many more individuals, creative minds, and departments can have a look at the customer base.

I think this is also highly important when we think of fairness and inclusion, and not only designing for the average customer. For example, one of our customers, in financial services, had this experience when they first got to see their synthetic customer data. They found some pattern in the data where they went, "This looks strange. There are five different income streams and spending behaviors no real human being can have. There must be a bug in your software." This was back in the early days of Mostly AI.

When we looked into our software, we didn't find any issue with it, so we urged the client to figure out what was going on in the real data. After weeks and weeks of bureaucratic internal processes, finally somebody could look at the customer data. Indeed, there was a small fraction of customers who exhibited a certain behavior that nobody was aware of. I think this is one of the benefits that data democratization can actually bring: to see what is going on and how your customers actually behave, and not only work based on your assumptions of what you think will be of value to your customer base.

[00:24:53] Karin: Yes, that is a very good point that you bring up. It [unintelligible 00:24:57] a point that I brought up before: before becoming AI-driven, you need to become data-driven. I think making data accessible throughout your company in a privacy-preserving way is really important, so that you can, A, uncover these use cases, but also understand the patterns in your data and how they impact your ability to build products.

Just to give you an example: at N26, when I joined, one of the features that we wanted to build was forecasting your spending behavior. We wanted to give users a monthly report where they see, for example, "Oh, if you keep spending like this, this is what your account will look like in one or two months. Maybe you want to review that and change your behavior."

[00:25:47] Alexandra: Basically, a behavioral tool to not get a coffee at Starbucks [laughs] every day because you'll be broke at the end of the month or something.

[00:25:53] Karin: Exactly. We did two things wrong, I think, when we built this in the first iteration. One is that when we made the designs for the product, we always designed with ideal data. Obviously, we had really good graphs that looked perfect: you have regular transactions coming in, and the predictions are really, really good. When we put out the dogfood version to our own internal employees, it still looked good, but then we brought it out into the real world and we saw two things.

A is that the design didn't work for users who didn't have many transactions or who had very exceptional transactions. For example, if you buy a car, suddenly your whole spending behavior is out of range and the forecasting doesn't work as well anymore. The second is that we learned that all the internal users we tested with used N26 as their main account, so obviously they had a lot of transactions going on. Out in the real world, and it was still a very early startup at that stage, not every one of our customers used it as their main account.

Obviously, they only had a few incoming transactions; they didn't even receive their salary on the account. The forecasting was also skewed. These are two learnings that I had at the time: really understand the data that you're feeding into your system and think about potential edge cases, and second, design with real data and not with ideal data. I think companies like yours would also massively help in being able to do that.
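Those two lessons translate directly into defensive code. Here is a minimal sketch, assuming a hypothetical daily-spend forecaster: it refuses to forecast on thin history (secondary accounts with few transactions) and filters out exceptional one-off transactions, like a car purchase, before extrapolating. All thresholds are illustrative assumptions.

```python
import statistics

MIN_DATA_POINTS = 10  # illustrative: too little history -> no forecast at all

def forecast_end_of_month(balance, daily_spends, days_left):
    """Extrapolate the balance, degrading gracefully on thin or unusual data."""
    if len(daily_spends) < MIN_DATA_POINTS:
        return None  # better to show nothing than a bogus forecast
    median = statistics.median(daily_spends)
    mad = statistics.median(abs(x - median) for x in daily_spends)
    # Exclude exceptional one-off transactions that would dominate the trend.
    typical = [x for x in daily_spends if abs(x - median) <= 5 * mad] if mad else daily_spends
    return balance - statistics.mean(typical) * days_left

# A one-off purchase (850) on day 7 no longer wrecks the forecast:
print(forecast_end_of_month(500.0, [12, 15, 9, 14, 11, 13, 850, 10, 12, 14, 11], days_left=10))
print(forecast_end_of_month(500.0, [12, 15], days_left=10))  # -> None (too little data)
```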

[00:27:39] Alexandra: Absolutely. Where did you get this picture-perfect data from initially? Was it from real employees or was it made-up data?

[00:27:46] Karin: We used employees internally who were comfortable sharing their financial data, to make sure that we had enough, because the financial industry is a heavily regulated environment, so you can't just ask your customers, "Oh, can we have a look at your data for our experiments," and things like that. We needed to be more cautious about which data we were allowed to use and which not. We worked with very limited data stemming from employees who were okay with it.

[00:28:15] Alexandra: Understood. Do you have any other lessons learned from your previous roles that you think would be beneficial to our audience?

[00:28:22] Karin: Some of the lessons that I learned during my career, I think there are a few more and a few less obvious ones. In the last years, with AI becoming more accessible, more people talk about how important good data is and the steps it takes to build these models. I think one aspect that's still heavily underestimated is the effort needed to operate the models in production.

You can't just launch a model, leave it there, and move on to the next project. These models will degrade over time, and you will gain a new user base that will use your product in new ways. The model may not have seen certain data points and may react in weird ways. It's very important to keep an eye on your machine learning models after you have launched a product and make sure it keeps working well for your users.
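What "keeping an eye on" a model can mean in practice: one common heuristic, sketched below under the assumption that you log prediction distributions, is to compare the current output distribution against a reference window and alert on shift. The population stability index and the 0.2 threshold are widespread rules of thumb, not something prescribed in this conversation.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two distributions given as lists of bucket proportions."""
    eps = 1e-6  # avoid log(0) for empty buckets
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

reference = [0.60, 0.30, 0.10]  # class shares observed at launch
this_week = [0.30, 0.30, 0.40]  # class shares observed in production now

psi = population_stability_index(reference, this_week)
if psi > 0.2:  # common rule of thumb for a significant shift
    print(f"PSI={psi:.2f}: prediction drift detected, review the model")
```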

Some other maybe interesting thing that I learned over time is around data collection: where you get the data labels from, for example, also matters a lot. To give you an example from my time at YouTube: I worked in a team in Paris that built machine learning infrastructure for YouTube, and we would help internal teams to train, launch, and operate machine learning models. We also helped internal teams to build models. At one point, the gaming team approached us and said, "Oh, we need an eSports classifier, because we want to understand what volume of eSports videos we have on the platform and potentially also build features on top of this specific content."

We used our usual pipelines to get these special gaming videos labeled. The labeling team that we used at the time would, in a standard way, get a video and determine: is it music, is it gaming? If it was gaming, we would ask them, with a few specific questions, whether it was eSports, and they would rate it. We had trouble building this model, and at some point we decided it wasn't worth the effort and paused it.

Then one partner team wasn't really happy that we couldn't prioritize this; for their team, it was super important to get it done. What that person then did was build a new dataset to train the model, using internal experts: people who could really distinguish, for example, an eSports video from a cartoon video, because quite often they look very similar, or who could differentiate eSports from other types of gaming competitions and things like that.

That person was then able to launch the model within a super short amount of time, just because the data quality was so much better. This is an important learning: quite often what you need is not complex models but much better data. Data trumps models quite often.

[00:31:17] Alexandra: That's a good point. This also reminds me of one of the recent TED talks from AI pioneer Andrew Ng, I never know how to correctly pronounce his last name, where he emphasized that we're actually moving from this model-centric world of AI to a data-centric world, data-centric AI.

That this will also enable small and medium-sized business owners to label their own data with their, of course, huge domain expertise in their specific niche, to then get value out of, let's say, I don't know what the example was, some classifier to figure out if there are any production flaws or something like that.

[00:31:58] Karin: I think the other thing that I also learned, which is something that companies will experience as they scale, is that multiple teams start building models and, usually, as it happens, time is tight, so you don't document them well. What happens is that another team potentially finds, "Oh, this team actually built this news classifier."

"I want to build a news feature, so I'll just reuse the output of that model and build my feature on top." But then, obviously, it fails, because the context in which the output is used is different. Just to give you an example: there are quite different assumptions about what news means. For some people, news means hot news, but for others, news also means weather news or entertainment news.

Whether or not you include those makes a difference, but also the purpose for which you built it differs. For example, if you just want to measure the amount of news videos on YouTube, you need to optimize for recall, because you want to understand the breadth of it. But when you actually want to build features on top of news videos, you may want to focus on precision, because it's very important that the videos you base the feature on are actually news videos, and any error could reduce user trust. Which metric you optimize for is very important.
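One way to picture that recall-versus-precision choice: the same model scores can serve both purposes with different decision thresholds. The scores and thresholds below are illustrative assumptions, not YouTube's actual system.

```python
def is_news(score: float, threshold: float) -> bool:
    return score >= threshold

video_scores = [0.15, 0.45, 0.55, 0.80, 0.95]  # model's "is news" scores

# Measuring the breadth of news content: favour recall with a low threshold,
# accepting that some non-news videos get counted.
estimated_news_volume = sum(is_news(s, threshold=0.3) for s in video_scores)  # 4

# Building a user-facing news feature: favour precision with a high threshold,
# so that nearly everything surfaced really is news.
feature_candidates = [s for s in video_scores if is_news(s, threshold=0.9)]  # [0.95]
```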

[00:33:23] Alexandra: Understood. Definitely document your models well, particularly if you want to scale out with AI-based product development. I also want to briefly come back to what you mentioned earlier about the importance of monitoring models once they're deployed and keeping them from degrading, or at least taking care of them once they do. Can you give us a little bit of your insight into how this should happen in practice, at which intervals, whether there are any monitoring tools, or what specifically to look out for to know when it's time to revisit the model?

[00:33:58] Karin: I think it depends on the use case and how you're using it. Some models need to be retrained quite often. Think about YouTube trust and safety: there are continuously new types of videos exposed on the platform that shouldn't be there. For these kinds of cases, you need to retrain models, or even build new models, on a continuous basis. If you just want a very broad classifier that distinguishes whether this is a music video or a gaming video, you likely don't need to retrain as often, but you should still keep a regular eye on whether the output is still what you expect it to be, et cetera.

[00:34:36] Alexandra: Would this basically be then also the responsibility of the team that developed the model or do you see in large AI-first organizations rather a separate governance functionality looking into that?

[00:34:48] Karin: I think that heavily depends on the type of organization, not just the size. Google is a [inaudible 00:34:55] you operate with empowered product teams. Quite often, individual teams can decide to build AI models, and they decide to do most of the things within their team; then there are other teams that operationalize things much more to reduce the workload needed.

Then we would have in-house tools that help you do that as well, but Google also has a culture where you are not even aware of all the tools that exist, so maybe at first you start building it yourself and then you figure out, oh, there's this other team that already has something I can reuse. Other types of organizations are much more structured in how individual teams organize and what the infrastructure looks like, and there they may have standardized approaches for how engineers are supposed to do things.

[00:35:44] Alexandra: Understood. Maybe thinking back to your extensive experience as a product manager: what would you say, how does being a product manager in the AI space differ depending on the size of the organization you work with?

[00:35:59] Karin: I think you now have two questions within that question. One is, what's an AI product manager and what are the challenges, and the second question is, how does that role differ across different sizes of organizations?

[00:36:10] Alexandra: Thank you for specifying that. Maybe let's tackle first what the job specification of an AI product manager is.

[00:36:20] Karin: I think I would define an AI product manager as someone who actually builds AI models in-house, someone who works very closely with data scientists and machine learning engineers, because a lot of the challenges in building these products lie in making sure that you educate the stakeholders well about how long it will take, in working with the team to timebox certain attempts and trials, and in putting that experimentation thinking in place.

It's much easier, for example, if you use AI out of the box, because then you don't have all the effort of training these models, you don't necessarily need the data, and you only need part of the skillset. You still need to be able to think about what errors your users may experience and how you design for those. Whether you build the models in-house or not makes a huge difference, for example.

I think as an AI PM, and this is a question I get quite often: if a PM wants to move into becoming an AI PM, do they need to have, for example, a computer science degree? To be very frank, I think it helps. I have an engineering background and it helps me understand the lingo, for example, that engineers use. If new research papers come out, I have an easier time reading and understanding them.

But I do believe that for the basics you need to build these products, you don't need to have studied computer science. You need to have an analytical mindset, but you don't need to have studied computer science, and you also don't necessarily need to understand the math behind all these models. You do need to understand what kinds of errors they produce and, for example, how we measure their quality. I think this is learnable by a much broader-

[00:38:07] Alexandra: Group of people.

[00:38:08] Karin: -group of people. Exactly.

[00:38:09] Alexandra: Perfect. You mentioned analytical skills. Any other soft skills that you think will be highly important to succeed in a role like that?

[00:38:18] Karin: I think being able to pull all kinds of different stakeholders into your team will make you succeed much more easily. A lot of the problems that arise from using machine learning are not just technical problems, they're socio-economic problems. Being able to interact with and bring, for example, ethicists or sociologists into certain discussions will help you succeed. I think communication still trumps a lot of the other skills and capabilities you need to have.

[00:38:54] Alexandra: Perfect. Now coming to the second part of my question: what is the difference in an AI PM role depending on the size of the organization?

[00:39:05] Karin: I think as I left Google, I realized that I had been a little bit in an ivory tower, because we had so many resources available. If I needed a copywriter, or a certain person with a certain skill, I would just find that person somewhere. We also had vast amounts of data available, and a lot of internal tooling that helped us automate many of these processes and steps, for example.

If you go to a small startup, none of these things exist. A, you likely don't have the amount of data that big corporations have, but you also don't have the skill inside your own company, and potentially not the funding to go and hire that skill on a contract basis. You need to operate with a lot less resources and also a lot less time to market. A team at Google can easily take six months to experiment and see whether it's feasible to build such a model, while in a small startup, with limited funding and limited time to market, you may be much more focused on how much time you actually invest in trying things out.

[00:40:13] Alexandra: Understood. In the time you were at N26, did you also, I don't know, encounter projects that were stopped after a certain period of time because resources didn't allow you to investigate further and see if there would be any beneficial result at some point?

[00:40:30] Karin: Yes, we did.

[00:40:31] Alexandra: Was there a usual timeframe? Or is it something you see with your consulting clients: how long do they dare to experiment until they say, "Well, maybe let's back off, maybe let's try to find another way"?

[00:40:44] Karin: I think it's very important to timebox, but it's also very hard to estimate, if you are new to building AI products, how long you may need even to have the first model version and understand what's feasible and what's not. One of the things that we also underestimated was that the first thing is not building the model and having the data, but making sure you actually have the right tooling in place and access to the data in a feasible way, so that engineers can actually work with it.

These are things that are often overlooked. As you plan out, teams forget to account for the data cleaning or for setting up this tooling, and then there's this expectation that, "Oh, building a model: we have the data, we have the engineers, we can just have the first version in a month." Actually, it takes two or three to get there. That's also very hard as you start out. I think-

[00:41:39] Alexandra: Sure.

[00:41:40] Karin: -the recommendation that I would give to every team that is starting out and hasn't done it before is to start talking to and finding other companies or teams that have done it before, to potentially learn from them.

[00:41:53] Alexandra: That's always helpful, and I can imagine it is in this context too. One other specific question about the unique role of an AI product manager versus a non-AI product manager: of course, you're much more in contact with data scientists. Are there any lessons learned or best practices regarding how to effectively collaborate with data scientists as a product manager?

[00:42:14] Karin: Obviously, the first step is to try to understand the practices that data scientists use and what their life cycle is, to understand their lingo, for example, but also to figure out how you want to work with them. How do you set up the project processes, for example? I think the key is also to partner with them and help them actually make a project succeed.

That is not only by looking at the technical aspects and how they work, but also by really making sure that you understand the problem you're trying to solve. You give them, for example, input on how the data their models produce will be used. That helps them decide whether to optimize in one direction or the other; like I mentioned before, the question of whether to optimize for recall or precision is really important. I think this is where PMs can add a lot of value, and designers can add a lot of value to the game as well.

[00:43:11] Alexandra: That makes sense. This also brings to mind the example you gave when we first talked, I'm not sure in which context it was, about the data scientists and engineers being really fixated on making a cent-exact prediction of, I don't know, your upcoming income or spending. Then you said, well, for the users it's much more valuable if it's not the exact amount, down to the cent; they need something different. Can you maybe elaborate on what exactly this [crosstalk]

[00:43:39] Karin: Yes, I think this is something that I didn't mention in our current conversation, but I mentioned in our get-to-know session. One of the things that we learned when building the financial predictions is that the model, for example, may have an output of, say, your account will have 352.21 at the end of the month. Obviously, this is the output of the model, because this is what it learned to predict.

Then for a user, getting that exact amount may actually diminish trust, and they may start asking, "How does the model know, to the cent, how much money I will have in my account at the end of the month?" For them, what's much more relevant than the exact amount is whether the trend is neutral, going up, or going down, and how severe the trend is. Is it a sudden trend or a slow trend, for example? Finding ways to communicate not the exact prediction of the model but rather buckets or trends, for example, is much more helpful for the user, A, in terms of trust, but also in taking action.
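A minimal sketch of that bucketing idea, with illustrative boundary values (the actual product logic is not spelled out in this conversation):

```python
def describe_forecast(current_balance: float, predicted_balance: float) -> str:
    """Translate a cent-exact model output into a trend message."""
    change = (predicted_balance - current_balance) / max(abs(current_balance), 1.0)
    if change < -0.25:
        return "Your balance is trending sharply down"
    if change < -0.05:
        return "Your balance is trending slightly down"
    if change <= 0.05:
        return "Your balance looks stable"
    return "Your balance is trending up"

# Instead of "you will have 352.21 at the end of the month":
print(describe_forecast(current_balance=500.00, predicted_balance=352.21))
# -> "Your balance is trending sharply down"
```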

[00:44:44] Alexandra: That's a great example because I think it shows how important it is for all of us to be aligned with our customers and actually develop something that's of value to them. Then also, of course, of value to the organization.

[00:44:56] Karin: I think one other thing that I wanted to mention is that quite often we are so fixated on an idea of how a certain solution should look that we are not open anymore. That can happen on both sides: the business wanting to use AI in one way or another, but also the engineers wanting to build fancy models. My role as a PM, as I often see it, is also to challenge the solution that the teams came up with. Is it the best one?

What would a baseline look like that doesn't use AI? Is it feasible? Really work very closely with your engineering counterpart to do the due diligence, not only on the business side but also on the technical feasibility side. What is a simple solution that we can compare to? Do we really need a fancy neural network, or does a regression model solve the problem equally well? These are things you learn over time. This is also one of the reasons why I'm passionate about talking about these topics, so that others don't need to go through a few years of learning by mistake and can get better quicker.
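That due diligence can be a few lines of evaluation code. A minimal sketch, assuming scikit-learn is available: cross-validate a constant predictor, a plain linear regression, and the proposed model on the same data, and only ship the complex model if it clearly earns its extra complexity.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor  # stand-in "fancy" model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def compare_to_baselines(X, y, proposed_model):
    """Print cross-validated error for trivial, simple, and proposed models."""
    for name, model in [
        ("predict the mean", DummyRegressor(strategy="mean")),
        ("linear regression", LinearRegression()),
        ("proposed model", proposed_model),
    ]:
        mae = -cross_val_score(model, X, y, scoring="neg_mean_absolute_error").mean()
        print(f"{name}: MAE = {mae:.3f}")

# Tiny synthetic check on a linear problem, where the simple baseline should win:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
compare_to_baselines(X, y, RandomForestRegressor(random_state=0))
```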

[00:46:01] Alexandra: [unintelligible 00:46:01]

[00:46:03] Karin: Exactly.

[00:46:03] Alexandra: That's definitely a good ambition. One other thing that I really admire about you is that you are also a strong advocate for building AI products that are actually trustworthy. I think this is going to be even more important once we have regulations like the AI Act in place. Could you maybe walk us through what's of particular importance when it comes to building AI products that are trustworthy?

[00:46:26] Karin: I think quite often what still happens is that no one really thinks about it until the model is close to launching. Then people are like, "Oh, so are we fair? Are we inclusive? Are we secure?" I think the best results are achieved if you build these practices into the end-to-end life cycle and processes, and you already start thinking about which user groups may be impacted and how they may experience the product differently when you are in the phase of evaluating whether it is a good business case and when you design the solution, not just when you have the models, for example--

[00:47:01] Alexandra: Basically, similar to privacy by design, AI ethics, fairness, and so on by design.

[00:47:07] Karin: By design. Exactly.

[00:47:07] Alexandra: I assume this also makes it significantly easier if you start to think about these aspects right from the beginning as opposed to having to go through massive changes if you're so close to launching and have [unintelligible 00:47:19]

[00:47:19] Karin: Exactly. One funny thing that I also learned in that context, when I started as a consultant: within Google, there is now a culture where everyone's aware that these things are important. We started to have fairness reviews before launching these products, et cetera. Outside, this is not yet standard, especially if teams are just starting out with AI. They may know this is important, but they may not yet know the best practices. When I started talking to teams about their needs, for example, in getting education around AI ethics, AI fairness, or trustworthy AI, people were like, "Ooh, that is a lot of cost to our company.

We can't invest in it yet. We'll think about that later." For example, I learned that if I don't call it AI ethics or AI fairness, but talk much more generally about managing your risks, people are much more willing to think very early on about what they need to do. To be fair, AI fairness is just one aspect of trustworthy AI. There are more aspects, like making sure the models are robust and secure, for example.

[00:48:33] Alexandra: Sure. There are many things that are important. Since you said that at Google there's now a more general awareness of these topics: can you remember, was there something about how they communicated this, how they tried to make this culture shift, that you think worked particularly well?

[00:48:49] Karin: I think making sure that not only engineers think about these problems, but training everyone in an organization to be aware of the risks in building AI products and of their individual role's contribution to making sure that these products work well. And making sure that truly interdisciplinary teams are thinking about this immediately, before you even start building these things, when you are still in the idea phase.

For example, I would often bring in our PR team very early on, so that they also understand: this is the product that we plan to build, and these are the things that can go wrong. Maybe we can work together on what fears we expect from our users or what may be taken negatively in the press, et cetera, and then carry that through the design process, making sure that we build the right thing.

[00:49:48] Alexandra: Interesting. Before we come to an end, Karin, I would also be curious to learn how you actually continue to learn in such a fast-moving space as artificial intelligence. Do you have any favorite sites, resources, or courses that you can recommend?

[00:50:03] Karin: I think now there is so much that it's really hard to stay on top of everything that's happening. But the challenge is still, especially when trying to understand how things really work and to get a little deeper into the field instead of just reading news articles about which apps fail from a fairness perspective, that a lot of courses are still mostly targeted at engineers. Then there are a few very high-level courses that target business leaders and what executives need to know. There's still a lot of empty space for anyone in between.

What I try to do is find people who do interesting work and then follow them on LinkedIn, for example. I join newsletters to stay on top, and I have a few favorite newsletters that I always read when they have a new article. I'm also excited that the whole space is growing and that I discover more and more people who talk openly on LinkedIn, and then I start interacting with them. From them, I also learn new things that they read and follow. It's still very bottom-up, but it also enables good conversations with these people.

[00:51:14] Alexandra: I can imagine. Can you maybe reveal one or two of the individuals who you think everybody should follow to learn more?

[00:51:20] Karin: For example, Google has a chief decision scientist, Cassie Kozyrkov. I literally read every article that she brings out.

[00:51:29] Alexandra: Yes, she's great.

[00:51:30] Karin: I think it's also good to advocate for women in this space, because still [crosstalk] 99% of the engineers that I interact with in this field are male. I think it's important to follow female voices in the field too.

[00:51:44] Alexandra: Perfect. One of your favorite newsletters that you can leave our audience with?

[00:51:48] Karin: Newsletters on AI, not so much. To be fair, I follow more general PM newsletters. I really like Lenny's Newsletter, for example, because an AI PM is still a PM at its base, and you succeed by using PM best practices as well.

[00:52:07] Alexandra: Sure, that makes sense. Since we're talking about learning and courses: I know that you have run, and are currently preparing, courses, particularly on trustworthy AI and product management. Can you maybe walk us through who could benefit from these courses and, for our listeners interested in joining, where they can find them?

[00:52:24] Karin: Yes, I take a very product-lean development approach to my courses, not just to building products. I ran live cohorts on building trusted machine learning-based products, and there I didn't assume any prerequisites; I assumed people don't know anything about AI or ML. What I also learned is that my audience is very diverse.

I think in the future I will launch a basics course that I make much more accessible to anyone from a price-point perspective, to really just convey the basic concepts of AI, and then have more special-focus courses on what it means to build trustworthy AI. I'm also experimenting with the format. Please navigate to my website and sign up for the waitlist, and I'll keep you posted on what's next to come.

[00:53:17] Alexandra: Then we will put that in the show notes. To end: you already shared so much advice with us today, but is there any last piece of advice that maybe you wish you had known 15 years ago? Or, since you also mentioned empowering women, is there any piece of advice for women to succeed in this field?

[00:53:34] Karin: I think there are two things. One is, let us women help each other [crosstalk] to grow and be exposed. The second one is, be more bold, dare to speak up, and ask for more.

[00:53:51] Alexandra: That's a good point. Well, Karin, thank you so much. I learned a ton, and I hope our listeners did as well. Thank you very much for taking the time and coming on the show.

[00:54:00] Karin: Thank you for having me.
