Episode 46

How to (finally) move your AI products to production

Hosted by
Alexandra Ebert
In the 46th episode of the Data Democratization Podcast, host Alexandra Ebert talks with Wolfgang Weidinger. Wolfgang is the AI, Data Science, and Analytics Coordinator at Generali Insurance Austria and Chairman of the Board of the Vienna Data Science Group. He is a seasoned data scientist with a wealth of experience in overseeing AI, data science, and analytics projects. In this episode, he shares his expertise and experience, offering valuable insights into the challenges of implementing artificial intelligence solutions in real-world business settings. Throughout the episode, Wolfgang covers a wide array of topics and provides practical advice for those seeking to deploy AI models within large organizations, including:
  • Selecting the appropriate tools for AI projects.
  • Examining the adoption of AI across various industries.
  • Highlighting the significance of soft skills, collaboration, and domain knowledge in AI initiatives.
  • Discussing the organizational role of data scientists.
  • Offering strategies to bridge the gap between business and technology.
If you're interested in delving further into the practical aspects of data science and AI, we recommend checking out "The Handbook of Data Science and AI - Generate Value from Data with Machine Learning and Data Analytics," which Wolfgang co-authored. Additionally, for those in Vienna, Austria, consider following the Vienna Data Science Group for exciting meetups and opportunities to connect with the local data science community!

Transcript

Alexandra Ebert: Hello, and welcome to this episode of The Data Democratization Podcast. I'm Alexandra Ebert, your host and MOSTLY AI's Chief Trust Officer. Today I have Wolfgang Weidinger with me, the AI, Data Science, and Analytics Coordinator, as well as data strategist, at Generali Insurance. We're going to discuss one of the most challenging machine-learning topics. No, we're not going to talk about complex modeling architectures, LLMs, Transformers, ChatGPT, or any other area of AI that currently gets loads of media attention. We're going to focus plain and simple on how to move AI from development to production. Let's dive in.

[music]

Welcome to The Data Democratization Podcast, Wolfgang. It's so great that we finally made it and that we're here in our virtual studio. Before we dive into our topic of AI in insurance and moving it from development to production, could you briefly introduce yourself and also share what makes you passionate about the work that you do both with your day job at Generali, as well as of course, also all the work that goes into Vienna Data Science Group?

Wolfgang Weidinger: Thanks, Alexandra, for having me. My name is Wolfgang Weidinger. As you rightly said, at the moment I'm heading more or less two functions. On the one hand, I'm a Senior Data Scientist and AI and Data Coordinator for Generali Austria, one of the biggest insurance companies here in the country. On the other hand, I'm the chairperson of the Vienna Data Science Group. This is a nonprofit association that cares about educating people about AI and raising awareness about ethics and trustworthiness; we were already talking about diversity there years ago. It was founded, oh, in 2014. Back then we still had to explain what data science actually is; it wasn't yet, "Oh, yes. Okay, that's cool," more or less. At the moment, these are my two functions.

Alexandra: Very interesting. This means next year there will be a big anniversary for Vienna Data Science Group.

Wolfgang: Hopefully, but yes, we are thriving at the moment. Your question also was what makes me so passionate about it. Well, I think one of the easiest answers here is the people. For the Vienna Data Science Group that's easy. There are a lot of people in the network; for example, our meetup group has nearly 4,000 people now. We hold regular meetups with up to 100 people, one or two times a month. That's quite a lot.

This motivates me because there are so many questions, so many use cases, so many stories. That is quite motivating for me. The second is my day job here. Insurance companies, I think there are some stereotypes about them; you can pick the one you think fits best. In my experience, there are passionate people there, knowledgeable, supportive. And for sure, lots of data, lots of use cases.

Alexandra: That's true. That's true.

Wolfgang: Really a lot. Yes, for sure, this is something I like. To do that in Vienna, my hometown, that's quite nice.

Alexandra: That's an added bonus. Today, I really want to pick your brain about the problem that so many organizations face when working with artificial intelligence, which is not only playing around with it but actually moving it into production. In one of our earlier conversations, you shared that most AI projects actually fail near the end. Can you explain why that is the case and what is happening there?

Wolfgang: Well, thanks for the question. Quite an interesting question. Yes, and we actually talked about it. At the Vienna Data Science Group I also talk to other people, like, "Do you see it like that?" and so on. There is this quite well-known survey by Gartner, I think it's 2001. Back then they said, more or less, that 80% to 90% of use cases or projects don't deliver full value, or deliver no value at all, and 53% don't make it into production. Then I talked to people, and basically we agreed that these are still maybe a bit optimistic numbers.

Alexandra: It's not getting worse.

Wolfgang: Maybe it's even the reverse. It really depends on what you count. This is the point. Why? I think it's pretty straightforward: from a business perspective, it's about expectations.

Alexandra: Overinflated expectations around AI, or hype.

Wolfgang: More or less hype. I actually spent an entire lecture at PyData last year talking about it. Why? Because I think hype, or the expectation that AI or data science can make a positive impact, is not bad. That's good, more or less. But if the expectations are too high, it leads to various things. One thing is often oversimplification. There are very good pictures about it, very good diagrams. If you delve more into the details here, I would say I've been doing this for over 15 years now, and back then a lot of the questions were still about infrastructure. Like, do you really have data at all?

I'm not talking about quality, I'm just talking about whether you have data, and so on and so forth. I'm exaggerating a bit here, but these are still fundamental questions. There still has to be data to learn from. In a lot of organizations you do get some data, let's put it like that. Then you have a notebook, a physical one or maybe a Jupyter Notebook, and you have your scikit-learn or whatever, and you build a first model and can show some results. Maybe these results are quite promising, because whatever.

This then happens in maybe a few weeks. Before that, something often happened like the organization hiring a data scientist, or ML engineer, or however you want to call it at the moment, and he or she produces that. Then it's like, "Okay, we have super results, so fire away." Maybe it's not the end of the project, but you can at least show that it's worth it. Then, if you want to implement it, really do the whole thing like DevOps: you want it to scale, you want it to be secure, you want all the data privacy to work, and so on. You have to solve a lot of things, a lot.
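To make the "notebook plus scikit-learn" prototype he describes concrete, here is a minimal sketch; the file name, column names, and model choice are purely illustrative assumptions, not anything from Generali.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical extract handed over by a business unit; columns are made up.
df = pd.read_csv("claims_sample.csv")
X = df.drop(columns=["is_fraud"])   # assumed numeric feature columns
y = df["is_fraud"]                  # assumed binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# A single headline number that looks great on a slide.
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Note that none of the production concerns he lists next, scaling, security, data privacy, monitoring, appear anywhere in such a prototype; that is exactly the gap he is describing.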

I think this is heavily underestimated. That's why a whole discipline is now emerging called machine learning operations, MLOps, or sometimes AIOps. The point is that you can also see it from a positive side: at least you can often say, "Okay, this would really make sense." There is this famous paper from Google, I think from 2015, Hidden Technical Debt in Machine Learning Systems, something like that. There is a picture in it of what it takes to bring a machine learning model into production. Graphics are always important.

Alexandra: Hard for a podcast, so I hope you can give us a very good description of this picture.

Wolfgang: Let's put it like that. There are a lot of different cubes in it, maybe 20 in the whole picture, and one of the cubes is the machine learning model. The whole picture is this big, and the machine learning model is this big.

Alexandra: Just a tiny piece, just a tiny cube.

Wolfgang: It's a tiny piece on the road to bringing AI into production. That's really hard. It's really a hard problem. It's engineering, it's legal, it's operations. Often this is not taught at university; it's more learned by experience. I think one of the biggest hurdles there is expectations, because it's a very hard message to say, "Okay, we can show you that we have very good results now, but sorry, for the next weeks or months we have to work on getting this into production." This takes time, and it depends on the rest of the environment. Think about large organizations.

Alexandra: This takes time.

Wolfgang: They have IT infrastructure in place which, let's say, has had time to mature. Let's put it like that.

Alexandra: [laughs] Which is not as beneficial as with a good old wine.

Wolfgang: Unfortunately not. Answering your question, I would say it's a bit of expectations management, expectations, hype. On the other hand, it's the underestimation of the resources needed to do that. I could talk about it for quite a long time, because this is, in my opinion, one of the hurdles of building AI systems end to end. There is a second one, but I think we will talk about that.

Alexandra: Perfect. Makes sense. Since this is such an important point, maybe as a follow-up question. You said it's expectation management, and that many organizations presumably don't come prepared with all the infrastructure in place, particularly on the technical side, to be quicker when they want to move models into production. Assuming that an organization did its homework prior to hiring the data scientists and has a beautiful technical infrastructure in place, what would you say are the most important milestones to hit for the operations teams, engineering teams, and legal teams to be in good shape to collaborate with data scientists?

Wolfgang: I think it's quite important, in my opinion, to see this as a whole because it's one process. I think that at the moment we are in a state where there's a differentiation happening in the field. When I talk about data science, I'm still talking about everything. I'm also talking about data engineering, which is an older thing, or ML engineering. Building AI systems means doing all of that. The first thing is always awareness, and then you need certain skills. Where the skills are really located doesn't matter that much. The organization needs to have them.

Organizations are often, let's say, very diverse. If you're talking about data strategy, it should fit the company. What does this mean exactly? Often you have people, maybe with more experience, who can do data engineering, a bit of modeling, maybe even ML engineering because they have a computer science background, for example. I think being a good data scientist, or doing data science, means being this bridge between business and IT. There is often this distinction, not everywhere, but often.

In my opinion, a good data scientist is the bridge, because he or she should really listen to: okay, what is the problem? What do I want to solve here? What is this all about? What is the question my customer wants to have answered? Often the engineering capabilities are more concentrated in IT. Very practically speaking, you need to be this translator. From a milestone perspective, I think: first, for sure, awareness; second, a classical gap analysis.

Do I have these engineering capabilities? Simply speaking, developing an AI solution is different from a typical IT solution. Why? Because when you train a model, it's non-deterministic, it's stochastic, so it can all go wrong, very simply speaking. In the end, it might just not work, in the sense of: no, I can't answer this question to a certain accuracy or degree. This is quite hard to digest if you have very strict guidelines to develop something.
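As a small aside, the non-determinism he mentions is easy to demonstrate with a toy example (not from the episode): the same model class and the same data, trained with different random seeds, can land at slightly different test scores.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Fixed synthetic dataset for every run.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Same architecture, same data, different random initialization per seed.
for seed in (1, 2, 3):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    clf.fit(X_train, y_train)
    print(f"seed={seed}  test accuracy={clf.score(X_test, y_test):.3f}")
```

In a traditional IT project, rebuilding the same code yields the same artifact; here even a "repeat" training run can land somewhere slightly different, which is why fixed accuracy targets and rigid timelines are hard to promise up front.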

Alexandra: Even timelines to develop something.

Wolfgang: Timelines, milestones, you name it. Having this difference clearly in mind is, I think, very important. The second thing is, okay, what does this mean from a tool side? I don't want to underestimate the tool side. There are good tools coming up now, but a simple example: MLOps is similar to DevOps, you build it, you run it. This way of developing code, in my experience, is something that works quite well for AI systems.

Developing in an agile way, really iteratively, works well because often in the beginning you are in this experimentation phase where maybe it'll work. In more traditional settings that's often not there. It really depends on that. So really: awareness, having the capabilities, making a gap analysis, this is the typical thing. In the end, I think it's acknowledging that it's not enough to hire one data scientist with a notebook. Very simply speaking, one data scientist with a notebook who will then earn you a lot of money, because a lot of the business cases are about that.

It might work sometimes, but often there is quite some added effort on top of that. I think there's also no way around it, because I've also seen solutions developed that are more standalone, and the adoption rate is often not really good.

Alexandra: I can imagine.

Wolfgang: You want it to be used.

Alexandra: Used to solve a problem.

Wolfgang: You need this integration. Not an easy answer here, unfortunately, to be honest.

Alexandra: I didn't expect an easy answer, but thank you nonetheless for sharing your thoughts on this. Since you mentioned adoption, maybe let's take a step back or a high-level perspective. The insurance industry at large, globally speaking, what are your thoughts on how much AI is used there in production? What's the rate of AI adoption? Maybe even in other industries, given your exposure to so many of them via the Vienna Data Science Group.

Wolfgang: Not only via the Vienna Data Science Group, but yes. Well, speaking globally, that's quite a big thing to do. I think what might be some indicator is that I spent some time in banking and also did a bit of retail, wholesale, and a startup. My point is that for sure there are differences between the industries, but there are always disruptors, like startups entering an industry. For example, in insurtech, insurance tech, there are some startups I know of, and it's gaining some traction, I think.

Some time ago I read that this is not one of the liveliest sectors, I don't know, this is just something I read. The point I want to make here is that there are really a lot of use cases there, but still, I have the feeling that in a lot of countries, for a lot of products, and you talked about products, people, let's say, enjoy talking with a person, for example, and have very complex demands for the product, really complex ones. This is okay, no problem with that. Now, if you talk about what AI is actually used for?

It's often used for optimization, for example, or maybe automation or augmentation, where you support people in doing their jobs. The more complex the environment is, the more complex the solutions for that are. My point here is that I have the feeling there's a lot of effort there. Even years ago, a lot of effort was put into production for automation. In the end, from a customer perspective, it doesn't matter if it's a rule-based system or something simpler. For example, you and I as customers are interested in our claim being processed efficiently-

Alexandra: Quickly.

Wolfgang: -and that's it, more or less, speaking from a customer perspective. Now, when I talk about what AI is here, I just mentioned it: vast amounts of text are really one of the biggest data sources in insurance, everything really. You have a lot: the whole underwriting process, the whole claims process. I think we've all had some experience with that at some point. To do that efficiently, a lot of NLP techniques are used, for sure. A lot of effort has gone in there, for decades sometimes. Now with AI, or ML, or NLP techniques, and so on, you are just doing the next step.

It's really something you can build on. This is my experience. It's similar, very simply put, in other industries: 20 years ago you talked about structured data and all your effort was put into that. Now you're talking about unstructured data. The questions, the business cases, are still similar, and there's a lot of experience there, which I obviously like. Talking about other industries, I have the feeling that in banking there was some surge during the last years. This is the feeling I have there.

Again, as a customer, what do you want? Do you want to go to a branch or not? What are people used to? How do you want to pay, and so on? One simple thing for retailers: if you go to your supermarket and there are these electronic price tags, you can do a lot with that, just for this one use case. I would say that bigger corporations, or the banking and insurance industries, have been thinking about it for a long time-

Alexandra: Understood.

Wolfgang: -and really investing a lot. At the moment, it's the next step.

Alexandra: Makes sense. Makes sense. Just briefly, because you mentioned that it's this natural evolution, it's the next step, and that customers don't necessarily care whether it's a simple rule-based system or some fancy machine learning algorithm behind it. Is there one area where you think there's the biggest promise or the biggest potential when applying artificial intelligence or machine learning approaches to the insurance sector? Something that you are most excited about? Or is it again in line with your message of no over-inflated expectations: use it where it's useful, but don't expect things that are simply unrealistic in the next, let's say, five years.

Wolfgang: I think it's quite obvious, because I've just mentioned the topic. At the moment we are seeing a revolution, which for you, from your position working for MOSTLY AI, is maybe not so much a revolution, because talking about generative things is something you've been doing-

Alexandra: I've done for a few years.

Wolfgang: -for quite some time now. It might sound strange to you, and I agree with that a bit, but I also have the experience that there are some tipping points where the adoption rate really goes up. In the end, you don't really know exactly what happened. Did it get a bit better, did it get a bit faster, or something like that? What I'm obviously speaking about is the advent of large language models, transformer models, foundation models.

I think at the moment it seems we are seeing a jump again in, I would say very generally speaking, usefulness. I'm not talking about the accuracy or whether they're hallucinating. What works in the background, transformer models for example, is very interesting. I'm very excited about that because, obviously, as I said, there's a lot of unstructured data here, and I think about the possibilities I have with these types of tools.

I'm not speaking about ChatGPT, because that is a nice application and it's very impressive, it's really skilled. It's really like, "Okay, wow." I'm talking about: what does this mean, how can you use it? This is what I'm passionate about here, because I think it just gives more possibilities. In the end, I think that's very exciting because it has so many implications, what you can do with it.

Alexandra: Indeed, indeed, which actually also brings me to a question I absolutely want to ask you to gain your experience with. One thing that I often hear when talking with organizations who want to become more data-driven and use artificial intelligence is that AI oftentimes only resides in the innovation department but doesn't make it into the business units.

Because it's not necessarily well connected at the start of a project with a business problem or a business case that should be solved. Do you have the five best tips for finding a good business case? Is there a department where they're always lying around and people can pick them up? How do you come to a business case, and how do you collaborate as a data scientist with the business units to do something that's really meaningful?

Wolfgang: Top five. That's hard. Now, very, very easy. First one is really easy, go out there.

Alexandra: That makes sense.

Wolfgang: Meaning, if you are in an innovation lab, don't stay there. Go out to the business, to the people doing the thing you're trying to solve. Talk to them. Really follow them. Look at what they are doing every day. Really build interest, build connections there. My point is you can also tackle that on an organizational level, and we can talk about that, but in the end, an innovation lab is often a bit of a silo.

Alexandra: Indeed.

Wolfgang: The problem is not the innovation lab. The problem is the silo. This is a problem for all organizations, especially for big ones. AI and data science are horizontal, not vertical. That's why I emphasize so much that people doing this need to have extensive soft skills; that's often vastly underestimated. Now some functions are appearing for that, like AI translator, data translator, or whatever. If you need people for that, okay, but I think nearly everyone needs to have these skills because they're so important for what we are doing. It's really indispensable.

Alexandra: To make it concrete, which soft skills are we talking about? I would assume communication skills come in quite handy, but what other skills are so important to have if you want to collaborate successfully?

Wolfgang: Well, sometimes I feel a bit strange even mentioning it, but it's quite simple stuff. You said communication, for sure, and humbleness, respect. We talked about expectations. The expectations are often also quite high from a data person's perspective. People come from university, or also from other jobs, with very high expectations of what you can do, what you want to do, and are often a bit underwhelmed, like, okay, that's it now?

It's just not true, because in the end what we are doing is giving meaningful answers to questions with data. Often we build AI products for that; that's okay, and you need that. You talked about soft skills, so yes: communication, listening to people, and really having the respect that what they are doing is meaningful. What does this mean? I'll give you an example in a second. I'm exaggerating here a bit.

Alexandra: That's always helpful.

Wolfgang: A lot of people react quite, I wouldn't say hostile, but not so friendly. What they hear is, "Okay, we are doing this and this now, and you will be replaced." Nobody will say it exactly like that. Over the years I've had this reaction a lot. What is it all about? It's about fear. People are afraid, and being able to deal with that is, I think, very helpful. Very, very helpful. You can also put that into the organization. I'll give you a simple example. If you are a quite diverse or a bit more decentralized organization, it might make sense to put your data people or scientists, at least in some capacity, into a business unit.

For example, because then they can be approached. They are not sitting in the ivory tower doing whatever stuff nobody understands. They have to deal with very concrete and very hands-on problems. I think the upside here is that people are often very, very happy if you solve small problems. "Really, you can do that?" "Yes. Come on. I can do that in one afternoon." "Really? Oh my god." This is very good for both sides.

Alexandra: Makes sense. This actually reminds me of a guest I once had on the show, who also shared how helpful it has been, when starting out with AI and data science teams, to mingle more with the business units and actually start from their most pressing problems. Things that annoy them and take up lots of their time. And we'll come back to this: AI shouldn't replace folks, but actually help them to be more effective and focus on the things they actually want to focus on-

Wolfgang: Absolutely.

Alexandra: -and you can see that this helps.

Wolfgang: Well, people are then very, very thankful. I would say humbleness is then like, "Yes, I can help you there. It's maybe not very fancy, but it really helps you and you're happy about it. I can do it, not in a minute, but with minimal effort." I think this is one of the bigger troubles there, I would say. On the other hand, there is the other extreme. Some organizations and also people are like, how do you call this, the machine will solve everything.

That was fear; the complete opposite is super hype. Do something and all these things will be solved; think about any use case you can imagine. This is the second quite big danger, and there you have to be very transparent from the beginning and very clear. We are just humans doing human stuff. We are not Harry Potter and we can't just-

Alexandra: Unfortunately not.

Wolfgang: Unfortunately. I think it helps to be quite straight from the beginning, "this is tricky" and so on, and also transparent. Again, be transparent, communicate clearly. When I say it, it sounds a bit boring, to be honest, but that is often the biggest hurdle.

Alexandra: I can imagine. I think it's super valuable that you emphasize this importance of also considering how you communicate new AI projects and not instilling fear within people. Maybe since our listeners always love to have tangible real-life stories, are there some AI projects that you've worked on that you found particularly exciting that you can share with us? Or maybe also general learnings of projects that didn't go as intended beyond what you just said of how to communicate the project, how to interact with the business partners?

Wolfgang: Thanks for the question. Well, actually, prolonging and continuing what I just said: I talked a lot about automation, and automation often concerns tasks people don't even want to do. They're like, "We are really happy that this is now automated, because we can do much more meaningful work which helps our customers, or it saves me some time," or whatever.

Alexandra: Sorry, you just said people don't want to do that. Do you mean data scientists are sometimes not interested in automation tasks?

Wolfgang: Thanks for the question. When I talk here, it's about customers. Internal customers, often people working in some business unit, for example. They're doing some task, and often they don't have the fear that they'll be replaced; it's just a task where it's like, "Do I really have to do that?" I'll give you a simple example. I talked about how we get a lot of text, emails, whatever, and do a lot of NLP stuff, generally speaking. A simple use case, or it sounds simple, is a customer, an external customer, wanting to change their address. This actually happens really often; people are moving and so on.

Let's say this is something where, if you automate it, a lot of people would be happy about it. Again, I talked about claims processing, for example, and we did that. All the address changes are now nearly semi-automated, as far as I remember sometimes with a human in the loop, but that was huge in terms of, "Okay, fine." Also your collection details, like your bank details. "I want to change my IBAN": is that something you really want to process manually? It's just six digits, come on. That's simple stuff. And we are talking about unstructured text here, so people are often not writing it in a form.

Alexandra: With email.

Wolfgang: Yes. This sounds a bit standard, and it is now. The point is, it is super valuable and it helps customers get their claims processed faster, for example, and people like that, me included.
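As a very rough sketch of the semi-automated, human-in-the-loop handling he describes for bank-detail changes, the snippet below uses a simple regular expression as a stand-in for the real NLP pipeline. The function names, the pattern, and the routing rule are hypothetical; a production system would add IBAN checksum validation, proper entity extraction, and confidence scoring.

```python
import re

# Matches an IBAN-looking token, allowing the spaces people type in emails.
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]){10,30}\b")

def extract_iban(email_text: str) -> str | None:
    """Return the first IBAN-looking token (spaces removed) from free-form text, if any."""
    match = IBAN_PATTERN.search(email_text)
    return match.group(0).replace(" ", "") if match else None

def handle_bank_detail_change(email_text: str) -> str:
    iban = extract_iban(email_text)
    if iban is None:
        # No confident extraction: keep the human in the loop, as described above.
        return "route to human review: no IBAN found"
    return f"auto-update bank details to {iban} (pending confirmation)"

print(handle_bank_detail_change(
    "Hello, I moved and also switched banks. Please change my IBAN to AT61 1904 3002 3457 3201. Thanks!"
))
```

The essential design choice is the fallback: anything the extractor is not confident about goes to a person, which is what makes the automation acceptable to the business units handling these requests.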

Alexandra: Sure.

Wolfgang: This is something you can then build on, for example, if you have further NLP capabilities. Sometimes you can automate, sometimes you can augment; it depends on the use case. In the end, our internal customers, the business units, are very thankful for that, for sure. You talked about learnings, or you could also say things which didn't go very well. Let's put it like that.

Alexandra: Learnings.

Wolfgang: There's too much to say now. On learnings, I can tell you something, but in the end it's a lot about what I told you before, because a lot of the things I told you I might have experienced in the past, let's put it like that, and maybe even have done myself. That is the valuable thing about experience. On the one hand, there's what we talked about with going into production, which really means not only the infrastructure level but the organizational level. To be more concrete here, in my experience, a lot of these things don't even get the chance to not deliver value, because they never become a project, because you don't find the people.

After the first contact everybody agrees, "Ah, that's not a project now. That's not useful," or, "We don't have the data," or whatever. The learning I can really see there is: really, really work on this interface between, let's say, business and the data team, however you're organized. What really helps there is domain knowledge. This is super helpful. Really learn the language of the industry, and every industry has its own, with terms you've never heard of, and you are sitting in maybe your first or second meeting thinking-

Alexandra: Where am I? Which language is it?

Wolfgang: -which language are they talking? Ask, it's okay. It's really completely okay, ask. Once you have that, try to make yourself useful. We talked about this, let's call it quick wins. It's a very nice thing to say, but it's actually about that. Make yourself useful and do something where you're 100% sure you can make an impact, maybe with quite small effort. If you have understood your domain well, you can even be more proactive there; you can approach people and say, "Well, I think that could work."

Alexandra: It's basically: first, not only wanting to work on the latest and nicest AI tools out there, but actually taking on the more humble use cases which really save time or deliver value to your business partners. Then building domain knowledge in the process and learning to better understand what you're talking about, so that you can eventually become more proactive in suggesting things the business partners haven't even thought about.

Wolfgang: Because very, very often it's about trust, and having this trusted relationship is something that evolves. It's very helpful then if you meet people eye to eye. I talked about respect and humbleness. Yes, you can use everything you want, really super cool AI tools, everything. That's fine, that's super. But never forget that working outside research and development is different. This is okay; it often has to have some reason, let's put it like that.

This is why, from an organizational level, it often makes sense to do something like an internal data science academy or AI academy, where you pool your data people, engineers, scientists, whatever. These people then go out and try to educate the rest of your organization, because then they have to make the effort of translating what they're doing for a broader audience. This is actually quite helpful.

This is one learning on an organizational level: it really makes sense that the people doing the work are at least involved in that. You don't have to make every YouTube video yourself, you can use existing videos for support, but then the people are really seen. In the end, it's about this connection, and all these things help. Hopefully, in the end, everybody's happy, you have a really cool AI system, and a lot of people say, "Ah, that really makes sense." That is the optimal outcome.

Alexandra: Perfect. I would say that's already a perfect note to end on. Thank you so much for everything you shared with us, Wolfgang. Maybe as one last question, if people want to interact with you, where can they find you?

Wolfgang: Well, on the internet. No, just kidding.

Alexandra: What's that?

Wolfgang: I don't know. I would say that you can best meet me physically at one of our meetups in Vienna. In the end, this is also what I enjoy, so please, I would be happy about that. Also, talking about the Vienna Data Science Group, we have a Slack channel. It's quite easy to join, or you will find the details on our website vdsg.at. Thank you.

Alexandra: We can put that in the show notes for everyone.

Wolfgang: No problem at all. For sure, we also tried to answer some of these questions, so shameless product placement here now.

Alexandra: Oh my God.

Wolfgang: One of the projects we did for the Vienna Data Science Group is actually a book. Oh my God. Don't be afraid. It's really a book for people who are not the super deep learning experts; the target group is people who know a bit about it and want to know more. They can also have a non-CS or non-math background, and so on. It's called The Handbook of Data Science and AI.

Alexandra: We will also put that in the show notes.

Wolfgang: We tried to put a lot of what I just told you in there, because we wanted it to be really hands-on. We hadn't found a lot of resources about that out there.

Alexandra: You cover really extensive ground with that book. I think it was a collaborative effort.

Wolfgang: Well, it gets bigger and bigger. We are working on the third edition-

Alexandra: Oh, wow.

Wolfgang: -and we have now 13 authors. We try to cover-

Alexandra: You can not only use it to upskill in data science but soon also for your workout, doing some lifting with this book if it gets that big.

Wolfgang: Yes, yes, exactly. I think there was also an eBook there.

Alexandra: Where would be the fitness benefit of that?

Wolfgang: I don't know, but a lot of people have asked us, "A book? Can't you do that with ChatGPT?" We're like, "Yes, let's think about that."

Alexandra: That would also be enough to think about.

Wolfgang: I'm thinking maybe the next edition then.

Alexandra: Perfect, perfect. Well, thank you so much, Wolfgang, for being with us today. And listeners: buy the book, follow the Vienna Data Science Group, join one of the meetups. It's definitely an event to be part of. Thank you very much for being with us today.

Wolfgang: Thank you, Alexandra. Thank you for having me. Curious about next steps of MOSTLY. Come on, you should drive now. Generative is--

Alexandra: It's blowing up.

Wolfgang: In all heads.

Alexandra: That's true, definitely. Well, we will see what the future will bring.

Wolfgang: Hallelujah.

[music]

After having this conversation with Wolfgang, I think it became obvious why it is so challenging to not only develop AI models but actually successfully deploy them in production. I hope that with the tips he shared and also the lessons learned, you could take something away for your own journey towards developing AI that actually shows a business impact. If you have any comments, questions, or remarks, as always you can either write us a short email at podcast@mostly.ai, or you can also comment on LinkedIn. With that said, I'm very much looking forward to having you tune in again in a week's time. See you then.

[END OF AUDIO]
