Episode 22

The future of insurance with David Marock, insurtech and fintech expert

Hosted by
Alexandra Ebert
What does the future of insurance look like? Alexandra Ebert, MOSTLY AI's Chief Trust Officer, talked to David Marock, a seasoned insurtech, fintech and proptech expert from London. David shared his insights on the state of AI in the insurance industry and the current trends in the insurtech space. In this episode you will hear about:
  • AI opportunities in insurance,
  • why traditional insurance companies are tech hesitant,
  • how to overcome the tech inertia in insurance organizations,
  • how regulatory pressure is changing minds about digital transformation,
  • why fairness is critical for insurance companies,
  • how synthetic data solves data access and privacy issues,
  • what to do about less tech-savvy business leaders, and
  • how the future of insurance will be driven by new technologies.
If you would like to learn more about how synthetic data is used in insurance and the financial sector, listen to our episode entitled Synthetic data engineering in insurance and banking!

Transcript

Alexandra Ebert: Welcome to the 22nd episode of The Data Democratization Podcast. I'm Alexandra Ebert, MOSTLY AI's Chief Trust Officer. My guest for today is the seasoned insurance expert, David Marock. David is originally from South Africa, and now he's working in the UK. For over a decade, he successfully ran and scaled an insurance services and InsurTech business before transitioning to senior advisory and board roles for insurance companies, wealth management organizations, as well as players on the InsurTech, FinTech, and PropTech scene.

His experience really spans from being on the frontline and being a CEO, to being the trusted advisor who gets insights into many different insurance organizations, and what works for them and what doesn't. Today, David and I talked about the state of the insurance industry, and also the opportunities that lie ahead regarding InsurTech innovation, especially when it comes to AI adoption. I think we all can agree that InsurTech is a fascinating area, and David is someone who knows this area amazingly well. Let's dive into the future of insurance, and let's hear which insights David will share with us.

Welcome, David. It's really great to have you on the show today. I was very much looking forward to having this chat about AI in the insurance industry with you. Before we dive in, could you maybe introduce yourself to our listeners? What has your career been so far? How did you end up in insurance? How did you land where you are now?

David Marock: Hi. I'm David Marock. I'm delighted to be on the show. Thank you very much for having me. In terms of a bit of my background, I'm currently a non-exec on a number of different insurance and wealth management organizations. I also do senior advisor work for tech businesses, from startups to actually quite large businesses, covering InsurTech, PropTech, FinTech, and also, in fact, pure tech as well. In terms of my journey to this point, pretty much from the start of my working career I have been in the insurance space.

I studied and qualified to be an actuary, growing up in South Africa, and then worked in a variety of different insurance and asset management organizations. I spent some time, in fact, at McKinsey, and then went on to work at a London market insurer called Beazley, where I did everything from claims, underwriting, operations, and suchlike. Then, most recently, I was the Group Chief Executive Officer of a company called Charles Taylor, which did both insurance services and InsurTech across the globe.

Alexandra: Wow, that's exciting. I think it's safe to say that you're a true expert on the insurance industry. One thing that I'd be curious about: you ran and successfully built an insurance and InsurTech business for over a decade, and now you've transitioned to more of an advisory role. What's most exciting for you about this transition, from being on the frontlines to now being more in the backseat and helping other people to succeed?

David: Well, along my journey, as you will have heard, there have been times where I've been principal and times where I've been advisory. From my point of view, both those roles have their joys. In terms of now, the pleasure, if you like, of being able to contribute to others and help other businesses and other individuals succeed and solve complex problems on such a diverse basis, that is just brilliant.

On the other hand, when I was running a business and building that up, that was also super exciting. That was about working with a team, with great people. Building up something, in one case from scratch, but overall really growing the business dramatically and serving clients in numerous ways across the globe was also great. To be honest, both roles are fantastic. This one particularly suits me right now; that one suited me well then.

Alexandra: I can imagine there are definitely pros to both of these roles. When you say growing dramatically, how large did the organization get?

David: The organization I was running grew from about £100 million per annum revenue to about £300 million per annum. We simultaneously tripled the bottom line. We built an InsurTech business alongside that, which grew essentially from scratch to doing £24 million per annum of revenue, and in fact was profitable too. I could go on. If headcount is how you measure it, we went from 800 people across the globe to 3,000.

In a more meaningful way, we took a business where people hadn't been particularly enthralled about working there, and grew it over time to a point where 90% of our staff rated it a great place to work that they could recommend to family and friends. On so many levels, a very satisfying experience in terms of growing a business and serving clients.

Alexandra: That really sounds like a holistic success. Could you give us your top three success criteria for what made this possible?

David: Whoa. Well, first, it's always, I guess, going to come down to people. That was both the people in the direct team, but actually in the wider organization, just lots and lots of wonderful people who together were able and willing to make it happen. The second comes down to the environment, if you like, the types of services, the types of tech. We were in the right spaces, if you like, to make that happen. And the third, to be honest, and I think this is true for almost anyone who grows a business, is a certain amount of luck coming at the right time at various points, which enabled us ultimately to succeed.

Alexandra: I can imagine there is definitely a portion of luck, but for sure you have to create the conditions to really make use of that luck when it comes your way. One other thing I wanted to discuss with you, now that you're in an advisory role and have so many great insights into different organizations across the globe: where do you see the biggest potential for artificial intelligence, especially when we think about the insurance industry?

David: When we talk insurance industry, it's worth noting that I look at it quite widely. It's both property and casualty, or general insurance, and it includes life insurance. I would also broaden it into the wealth management space, which often goes alongside that, and then all the associated services and tech around that. In terms of the areas of greatest opportunity in insurance, I think there's an opportunity for AI to play a meaningful role right across that value chain, from the start to the end, and right across all the key processes.

If we're talking about general insurance, for example, in terms of market segmentation, client identification, underwriting, claims management, general operational efficiencies, there's a role for AI to play across all of that. On the life insurance side, much of what I've just described applies equally well. Even on the wealth management side, where I think there's a trend towards robo-advice, portfolio optimization, client engagement, et cetera, all of that can also be done, and there's a role for AI to play.

Alexandra: Yes. I can imagine that in the future we will definitely see AI being involved in all the different areas, departments, and steps of the process. What's the status quo? Do you see a lowest-hanging fruit where insurance businesses tend to start when they begin their AI journey?

David: Well, I think the starting point for most is really just experimentation. Wherever on the chain they play, they're looking at where they could implement it, and that is actually about developing use cases. Almost anything can be a use case. I've seen use cases within claims, particularly on fraud. I've seen it on underwriting, particularly on pricing, on segmentation, and suchlike. That's probably where people are at this stage. There are an awful lot of companies at the experimentation stage. They've done a certain amount of work to build up some level of data with which, if you like, to teach the AI models.

They've done a certain amount of work to test out: if they used it, how would it work? Would it work on a standalone basis, or, what I think is probably more likely, as an enabler with human intervention to make sure that it ultimately leads to the best results? I think people are experimenting right across the chain. I don't think there's only one area that people are trying. We'll come onto it, but I think the question is more: when does it move from being something that's talked about, a whole series of wonderful use cases, into something that's actually used, if you like, in day-to-day operations?

Alexandra: Yes. Makes sense. Since you mentioned that the majority of businesses are still in this early experimentation phase, did you have the chance to get a closer look at those organizations that managed to successfully scale AI initiatives, and if so, what were the secrets? How did they manage to pull it off?

David: The surprise, in a way, is that there aren't as many as you might expect that are genuinely using it, and even where they're using it, what we're seeing for many is that they're still using it alongside, if you like, with the training wheels attached. They're able to see that it works, but not necessarily leaving it to just do its job. Clearly, there are what I would regard as the more mundane type areas, be it basic operations or to the extent it's built into some data optimization or search engines and suchlike, but that's not, if you like, what I would regard as the core insurance and wealth management aspects; that's more in the supporting type angles.

Alexandra: What's the reason for that? Is it because it's still such an early stage for them, or is there also some fear and uncertainty involved?

David: As an industry, the industry has generally been tech hesitant, I would say. That's partly a function of, if you like, the complexity associated with building technology to serve this space. It's highly regulated, it generally has to keep enormous amounts of data on policies that could have been written years or decades ago, there's the complexity of moving from old systems to new systems, the costs of those projects, and, bluntly, the high proportion of them that have been unsuccessful. All of that makes the industry, and the senior management you've got, quite tech hesitant, I would say. There's definitely that. Then there's an enormous amount of data that's potentially out there, but often it is not in a form that is easily accessible.

It hasn't been captured in a way that can be mined and analyzed. The quality can often be quite mixed as well. Then, to the point of it being a regulated space, there are enormous amounts of sensitivity in terms of personal information, typically information that you have to be particularly careful about in terms of how you use it. Then, even if you get to that point and you say, "I've now got tech that can work with the AI models. I've got the data that I can actually use," it's then having the confidence that the AI model is going to produce a result that you can rely on, that you can trust.

Alexandra: Yes, many interesting points that you mentioned. Of course, we work with various insurers, both in the European Union as well as in the United States, and there we know that they actually share the top spots with the banking industry on how long it takes to access data, because it's just so sensitive, and this is where our synthetic data comes in and helps to access it faster. I actually want to come back to a point that you mentioned at the beginning, about the insurance industry historically being rather tech hesitant.

Provocative question: is this something they can afford moving forward? I'm just thinking about the competition that we see in the banking industry, where more and more big tech players, both in the United States as well as in China, start offering more and more financial products and, therefore, are definitely competing for market share with the big banks at the moment. Do you think that this is also something that lies ahead of the insurance industry, and what should they do about it?

David: For a whole variety of reasons, I completely agree with you: it's becoming incredibly hard for the insurance industry not to, at some level, embrace the new tech that is out there and available. One, as you've highlighted, there are players coming from other spaces, particularly around banking, wealth management, and such, who are bringing in some of that tech.

Two, the number of InsurTechs that are out there building functionality, some of it enabling, but some of it effectively out there to attack the same markets, means that there's unquestionably a threat. Then there's something that's both forcing that change and also going to enable it: the tech itself has moved an incredibly long way in the last 5 to 10 years.

If I think of the technology that was available 5 to 10 years ago, and you, as an insurer, were looking at the nature and the scale of the project, whether to build the tech and make sure it actually worked, or buy the tech and then adapt it, and then do what was a massive data migration piece of work, et cetera, some of those projects were to the point of being daunting or overwhelming.

The tech now has moved on so far that what you can now find available to buy and adapt is unbelievably better than what was available before. The tools now available for data migration have improved. All of that facilitates the change. It has both created the InsurTechs as a threat, but it has also made it easier for the insurers to open their minds, if you like, to doing that.

There are a couple of other forces that I think will drive that change. The one is regulatory pressure. There is no doubt that the regulatory pressure on everything, from how solvency is determined to how clients are treated, is requiring insurers to behave in a different way, to capture data differently, to engage differently, to measure things that they couldn't measure before, and all of that is forcing them to move onto new technological solutions that will enable them to do that.

Then, lastly, and arguably, you could say, it could be first, is the client pressure as well. If you've got a client who's used to using apps, a whole variety of apps that allow them to manage their lives in all the different ways, they don't expect that when they get onto insurance and wealth management-type products it should be any different. Therefore, those insurers and wealth managers that can't serve those clients in the same way are going to be left behind, particularly if both the InsurTechs, but also actually some of the more progressive insurers and wealth managers, start to adapt their propositions; by definition, those that don't will be left behind. I think no one wants to be left behind.

Alexandra: Absolutely not.

David: No one can afford to be left behind, actually.

Alexandra: Yes. I think that's a good point you made here, that customers are demanding the personalization they got used to from all the other services they're interacting with. On the second point you mentioned, the regulatory pressure: can you give a specific example of how regulation requires insurers to measure something that they weren't able to measure before? What were you thinking of there?

David: I mean, the list, to be honest, could be just about endless, but look at the whole Solvency II framework and the requirement, on a fairly regular basis and particularly if there are any material changes, to be able to say: what impact will that have? How will that flow through our portfolio? What does that mean for us? To be able to do that, you need to have the data for your client base and everything else in a very accessible form, and you need to be able to determine exactly what policies your clients have got, with what terms, what conditions, what nature, and then be able to analyze it.

When you get an event, just to illustrate the point, going from high-level Solvency II down to, let's say, an event around a cyberattack, the regulator comes out and says, "Tell me what exposures you have." You need to be able to interrogate your book, and that can't be done if your tech isn't up to it, if you can't answer those questions. But you're also getting down to the level of service expectations from the regulators.

So there's a great deal of pressure around looking at how people deal with clients, let's say, during the COVID period and how do you know that you're treating your clients fairly? How do you know that you're giving them an appropriate service? If you're not able to measure the time it takes to deal with things, to be able to understand the nature of your clients, et cetera, you just can't do it, and all of that requires new technology. To be honest, it requires a lot more than new technology. You often have to change the way you're doing business as well.

There are operational changes that are required, but that is being driven through regulatory requirements, and then you get to the GDPR requirements and everything else that comes along with that, which again requires you to be confident that your data is safe and secure, that the processes are run right, that you're keeping only the data you should keep. Again, all of that keeps putting more and more pressure on insurers, wealth managers, and others who are, as I said, holding an awful lot of highly sensitive data. It requires them to put new solutions in place that they can be confident in.

Alexandra: Absolutely. Makes sense. Since you mentioned fair treatment, this reminds me of one example I came across a few weeks ago from, I think, a professor at Imperial College London, David Hand. He just authored a book called Dark Data. I don't remember the tagline exactly, but basically it was about why it's a problem if there's something that you don't know. In one of his talks he illustrated a very nice example of insurers in the United Kingdom, because, if I remember correctly, around 2012 there were some regulatory changes that forbade including sensitive attributes in your pricing models and, therefore, pricing differently for, for example, women versus men.

The example that he then showed was that this well-intended regulatory decision actually had the adverse effect of leading to higher prices for women and other people who have lower risk, and therefore it was not necessarily the ideal scenario for society, as well as for the insurance companies. What's your take on fairness in the insurance industry? How well is this understood? What's currently being undertaken? How will this change with AI moving more and more into the different processes of the insurance business?

David: Certainly, my experience across the insurance market has been that insurers and suchlike want to treat their customers fairly, want to do right by their customers. I don't think that's in question. I think the reality is that if you go back in time, while the intent perhaps was there, I'm not sure that one could say that the full resources of the organization were focused on making sure that it was happening. What has clearly shifted, certainly in the UK under the treating customers fairly badge, is that there has been a whole variety of requirements to make sure, from the point you design your product or service or solution to its ongoing implementation, that at every stage you can be confident that your customers have been treated fairly.

That can be your marketing. That can be, as you've talked about there, your pricing or your underwriting. That can be the ongoing operations or administration or services. It could be how claims are handled or how investments are managed, depending on what we're talking about, and all of that, if you like, that regulatory pressure, has made sure that not only do you have to be thinking you're doing the right thing, you need to be able to demonstrate that you've done the right thing for customers.

You mentioned pricing, for example, and the regulatory change and its impact some while back. In the UK, there's been another set of regulation that's come out around that space, around pricing as it happens, very recently, earlier this year. That's forced the insurers, again, to go back to the table and make sure that everything they're doing is absolutely clear and treating customers fairly as defined in terms of the rules.

That puts, again, a great deal of pressure on those insurers. It links a little bit to the AI topic, because one of the challenges, if an insurer is using AI in their pricing models, is that historically, and today for many of them, they've struggled to be able to say: if I don't understand what's going on in the black box, how can I be confident that the result it's generated actually did treat customers fairly? I think that's a very significant issue as well when we talk about AI's role in the industry: being able to be confident that the model you've got is generating the outcomes you expect, and that you can demonstrate that those outcomes are fair to whatever group you might be dealing with.

Alexandra: Definitely, that's the whole explainability challenge. The regulation you mentioned, what's the name of it, and how did it raise the bar on what has to be done to demonstrate fair treatment?

David: It upped the bar in both senses. It covers a whole variety of different topics, particularly around pricing. It upped the bar in its requirements, but it also upped the burden, the consequences of non-compliance, in terms of the requirement to be able to sign off very explicitly that you've analyzed all your processes and demonstrated that they are treating customers fairly.

The burden, if you like, has gone up another level, and it has definitely caused across the insurance markets a real sense of: can we get ready in time? It's not just insurers, either. It also impacts the brokers, because they need to demonstrate that they've treated customers fairly. This is not just once you get to the insurer; this affects everyone right from the consumer onwards. It's obviously very retail-focused in nature, this particular regulation.

Alexandra: Yes, but it's nice to see that there's this ripple effect, because I think regulations in particular have an important role to play in really bringing topics like responsible AI to board and C-level attention. What's your opinion on the other emerging privacy and AI legislations? How will they impact the insurance industry, both on a European level as well as in the UK?

David: I think you could broaden it out even beyond that. I think privacy as an issue, if you like, is now genuinely a global issue. Certainly, you've talked about the UK and continental Europe, but equally the US, Singapore, China. All of them are bringing out different forms, but broadly in the same direction, looking at how information is used: what can you capture? How long can you keep it? What can you use it for? When you use it, how do you make sure that you're using it on a safe and sound basis? Then, to the extent you no longer need it or are no longer meant to keep it, how do you make sure that you destroy it in a safe way?

Then, while you have it, how do you protect it? Obviously, with everything going on right now on the cybercrime side, with ransomware, I was going to say a boom, but it's, if you like, just an absolute epidemic of ransomware, even that stage of the process has become fraught with risk. Then, assuming you've got all of that and you've inputted it into your models, whatever type of predictive models, but particularly if you're using it for AI or machine learning type models, how do you make sure that you can trust the outcomes it generates?

That can be trust from the point of view we talked about earlier in terms of bias, but there can also be issues around, when you then apply that model in real life, how do you make sure that the model is actually appropriate for the group it's being applied to? How do you make sure that the model remains appropriate? I think that's a whole other topic: keeping models fresh and valid is again a whole other set of challenges.

Alexandra: What is actually to be considered there? For our listeners, what would be your tips to prevent model degradation and to really monitor your models to ensure nothing unexpected is happening?

David: If you rewind back not too long ago, I probably had this naive belief that if you can get fantastic data and you can build your models and you can be confident that the model you've built is trustworthy in the way we are describing, then great, you're off and ready and you can go with that. It goes a little bit to your earlier question on industry maturity, and that is, the ones that are more mature are now at the point where they're realizing, "Actually, I've got to keep monitoring that model and I've got to make sure that it's still generating output as I intended, and making the decisions that I can be confident in, or providing directional steers that are appropriate."

That can be just through time, it could be because your market is changing, it could be because an event occurs like COVID. Any of those things can make your model no longer appropriate. So it's actually building the monitoring solutions, if you like, into your process so that you are regularly reviewing your models. I know some organizations literally put almost a use-by date, effectively, on every model. At those times they certainly do a full refresh, but during the time in between, they need to have a solution in place.

One of the companies for which I'm a Senior Advisor, TruEra, has actually built a solution that does exactly that: it both tests the models upfront and then does the ongoing monitoring to check for exactly these things, to be able to identify model degradation. Technological solutions are a key part, but it's not just the technology. Like everything in life, you've got to have both. You've got to have the tech, but you've also then got to build--

Alexandra: The organizational capacity around it and an iterative process.

David: Exactly, right. Then you have to build in the people, the cost, and the follow-through implications, because when you find your model no longer doing what you intended, what do you do in that interim period? At what point do you have to switch it over, or at what point do you have to make sure that you don't get to that point, so that your model continues to remain appropriate? You need to be able to pick it up early, tackle it quickly, and then get your model back out and operational again as soon as possible.

Alexandra: 100% agreed. From the examples that you've seen in the wild, what are common expiration dates that you've experienced? Are we talking days, weeks, months?

David: Barring dramatic change events, I think they're more in the three-to-six-month range, but the monitoring needs to be done all the time. To my point about when you know organizations are living it: I was in conversation with a company recently, a large insurer, and they were talking about having a couple of hundred models. Then they were suddenly realizing what it means if you've got a couple of hundred models and you've got to monitor them all.

Actually, in addition, you need both the technological tools that I was talking about, and then you need the human oversight, if you like, on top of that, because you'll identify that something has changed, but then you've got to identify whether that change is a valid change or an invalid change, and whether it's made your model go from trustworthy to unreliable, if you like. Your data team has to have that capability. The organizations that have really got going on this are the ones that you will have seen have at least started addressing those issues and recognizing how large a challenge it is. This is not built and done, exactly.
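To make the "use-by date" and drift-monitoring idea above a bit more concrete, here is a minimal sketch in Python. It is illustrative only: the six-month validity window, the population-stability-index threshold, and the helper functions are assumptions for the example, not TruEra's product or any insurer's actual monitoring setup.

```python
from datetime import date, timedelta
import numpy as np

MODEL_TRAINED_ON = date(2021, 1, 15)        # hypothetical training date
MODEL_VALID_FOR = timedelta(days=180)       # "three to six months" use-by window (assumed)
PSI_ALERT_THRESHOLD = 0.2                   # common rule-of-thumb drift threshold (assumed)

def population_stability_index(reference, live, bins=10):
    """Compare the distribution of a score or feature at validation time vs. in production."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)    # avoid log(0) and division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

def model_needs_review(reference_scores, live_scores, today=None):
    """Flag the model if it is past its use-by date or its score distribution has drifted."""
    today = today or date.today()
    expired = today > MODEL_TRAINED_ON + MODEL_VALID_FOR
    drifted = population_stability_index(reference_scores, live_scores) > PSI_ALERT_THRESHOLD
    return expired or drifted

# Example: scores logged at validation time vs. scores seen in production this week.
rng = np.random.default_rng(0)
reference = rng.beta(2, 5, size=5_000)       # e.g. pricing or claims scores at sign-off
live = rng.beta(2.6, 5, size=1_000)          # slightly shifted production scores
print(model_needs_review(reference, live))
```

A flag raised by a check like this is only the starting point: as David notes, someone still has to judge whether the change is valid or invalid before the model is refreshed or taken out of service.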

Alexandra: Definitely not. It would be interesting to dive deeper into that. We've covered plenty of challenges that lie ahead of the insurance industry. Besides building this monitoring capacity and these procedures, what else do organizations in the insurance industry need to put in place to be fit for this AI-driven future?

David: I probably hinted at it early on. The one is, obviously, you need to have the tech. It could be your core operating systems, your platforms, et cetera. That tech needs to be able to integrate your AI models into it. That might sound trivial if you're an InsurTech, but if you're an insurer that's still operating on a mainframe, that's almost impossible. And there are: people don't realize it, but there are plenty of insurers still with very old kit, green screens and suchlike, so when they start talking about how to integrate AI into all of the key processes I was describing before, that's an impossible task. So, certainly, improving your tech.

Then it's thinking through, and you implied it in one of your questions earlier, where that AI is going to make the biggest difference. I think what a lot of companies are realizing is that in some cases, clearly in high-frequency environments, it could be about standalone AI, but in a lot of situations, certainly as you move into the more complex end, it's about facilitation and enablement. Then there's the AI-human engagement, if you like, and making sure that that's done in a way that's productive, constructive, helpful, and useful, to be able to get better outcomes for clients ultimately.

You've then got the rest of your tech, you've got your wider organization. It goes well beyond just your data team. There's a certain amount of societal and client-type acceptance required. People have to be comfortable that it isn't "computer says no"; it's not just that you are treated fairly, they've got to actually believe they were treated fairly as well. People have to believe that this works for them too. There are certain marketing aspects that need to also be in place.

Then, I perhaps implied it, but you have to actually have the senior management, the senior leadership, the boards of the companies comfortable that their business is now running in a different way. Some risks will have changed, some risks will have gone down, and new risks will have come in. Then, finally, again, I mentioned it earlier, but clearly it's part of this: it's all around the data, both the data gathering and the data protection; all those aspects are going to have to be in place.

Alexandra: Yes, definitely. Speaking about data, what's your opinion on synthetic data and also how it can be used in the insurance industry?

David: Going back to what I mentioned earlier, there are clearly challenges around testing the models, challenges around data privacy, and suchlike. To the extent that synthetic data helps address and remove some of those issues, that's great. Then, related to that in terms of the testing, there's obviously just being able to get enough data. If you're in high-volume personal lines, you're probably fairly data-rich.

As you move up the value chain into the more commercial spaces, the data availability is generally poorer, much more limited. Therefore, if you're then trying to train your models, you do need some way of finding that data. As far as I know, synthetic data, again, helps; maybe solve is too strong a word, but it certainly helps in improving that situation and enables you to build models where you can be more confident about how they work.

Alexandra: Definitely. We're also engaging more and more with regulators, because many of our clients in the financial services industry now have these discussions with regulators about how to demonstrate how their models are behaving. This is something where you can't just look at the code to understand what's happening in a black-box AI algorithm; you actually need realistic data to test the system on different types of use cases and different types of customers, to build your own opinion and understanding of whether there are incidents of bias and discrimination, or whether it is behaving as intended. I think there are different areas where synthetic data can really help, especially with the sensitive data that we have in the insurance industry.
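As a rough illustration of the kind of test described here, the sketch below compares a model's decisions across customer groups on a synthetic test set. The column names, the made-up records, and the 0.8 disparate-impact rule of thumb are assumptions for the example, not a regulator's prescribed method or any vendor's tooling.

```python
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of favourable decisions per group, e.g. policy approved or offered the lower premium."""
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the least-favoured group's rate to the most-favoured group's rate."""
    return float(rates.min() / rates.max())

# Hypothetical synthetic customers that have already been scored by the model under review.
synthetic = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   1],   # model decisions on synthetic records
})

rates = approval_rate_by_group(synthetic, "gender", "approved")
print(rates)
print("disparate impact ratio:", disparate_impact_ratio(rates))  # below ~0.8 would warrant a closer look
```

Because the test set is synthetic rather than real customer data, it can be shared with an external reviewer or regulator without the privacy concerns discussed earlier.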

David: This is a number of years ago, but I was working in a situation where we were looking at doing an AI-related piece of work, and with the data we had, we were very reluctant, in that particular case, for the AI firm to be able to take that data offsite. This was in a pre-cloud world, but in that context, just to be able to test whether the AI model made sense, working with that supplier, I think we spent at least three months getting through all the privacy issues, regulatory issues, et cetera.

Again, that would be an example of your point and of the regulatory use case you just described. We experienced that. Clearly, if we had had a beautiful set of synthetic data that we could have handed over without the anxiety associated with "what happens if", that would have been very helpful. I think the synthetic data side, and the tools I was talking about, which, as I said, TruEra have got, in terms of both model design and model monitoring, and others that are out there: all of that is, I think, key to building comfort among the insurers themselves, their senior management, the regulators, and clients, and all of those help in the adoption of AI.

Alexandra: I can see that. I also have a question about this confidence from senior management, but before that, since you said "in the pre-cloud era": I'm afraid the problem you described is something that we see playing out in exactly the same way in the cloud era. So many organizations have this need to collaborate with FinTechs, InsurTechs, startups.

Also, to validate solutions from vendors, be it AI vendors or other vendors, and nearly all of them need access to data, and it's taking just months and months.

David: I think what's changed, if you like, is that over time the ability to access the data has got easier from a pure technological perspective, i.e. from on-premise type setups to cloud-based. What's got harder, which is your point, is that the regulatory requirements and the regulatory burdens have got greater, if you like. Then the need for it has also got greater, in the way you were describing. So I can certainly imagine how, on balance, this has got more difficult because of that.

Alexandra: Definitely. Then we also have all these cross-border data sharing use cases for global enterprises, where, for example, data is collected somewhere in the States, but the data science teams are located in Europe or in the UK. Then, of course, there are so many regulatory challenges involved that you can only get around if you use something that's anonymous, like synthetic data.

One other point in that direction, since you mentioned that technology became increasingly complex over the past few years: do you see a tendency within the insurance world to rather prefer to buy something as opposed to building it themselves, or is it still something that's evaluated on a case-by-case basis?

David: I've no doubt it's evaluated on a case-by-case basis. If I talk trend, I would say the trend has moved heavily towards buy. Although I always think, unfortunately, "buy" somewhat understates the story. I think most insurers are buying the core platforms and buying a lot of the core kit that goes around it, but there's an awful lot of work required to actually knit it together.

Alexandra: Absolutely, especially with the legacy infrastructure that's already in place.

David: There's definitely a best-of-breed approach; it could be in customer relationship management. Then you might find someone has another technology they use for the policy administration system and another technology for the finance system. Even if you're putting aside legacy, you've still got quite a complex knitting together required. You've then got, as you've mentioned, the legacy system type issue as well, to the extent that you're running legacy systems alongside.

It's a complex environment for people. Particularly if they are operating, as you were talking about before, both in terms of data but also systems, across multiple jurisdictions, you've got further complexity coming from that. Now that that's more possible, that complexity is more visible.

Alexandra: Yes, definitely.

David: The other thing as well is that even the off-the-shelf platforms, certainly if you're talking about well-established insurers or wealth managers, et cetera, typically can't just be used as is, even once they've been knitted together. It does require a fair amount of work to make sure that it is able to cater for the quirks of the particular policies that that insurer or wealth manager has, if you like, with the contract terms, with the arrangements that have been in place for years, if not decades.

There's always an element of build on top of every buy that I've seen. Pure buys, I think, are pretty unusual today.

Alexandra: Is there also a need to adapt to certain internal policies? One example from an earlier conversation comes to my mind, where somebody, I think it was either in the insurance industry or the banking industry, shared an example of an innovation project that was not possible because an internal policy said that signatures from customers had to be obtained in written form on paper.

Due to this policy, of course, a purely digital process that was much more customer friendly was in noncompliance, so they had to start really early and actually discuss whether they were willing to change this policy. Do you see examples like that happening often in insurance businesses?

David: Let's leave aside internal policies for a moment. We talked earlier about regulation. If you're planning to roll out a solution across multiple geographies, and I take your example there of signatures, there are some countries in the world where you have to get a wet signature by regulation, let alone whether you have an internal policy. Then there are other countries where taking a digital signature is perfectly acceptable.

Now, if you're rolling out a solution and you're rolling it out to clients who are going to be in different geographies, you actually run the risk that, if you allow your client to do that, you may be creating a situation where you're no longer compliant. What you're doing is perfectly acceptable in one country and not in another, so you're actually having to check.

Certainly, as I said, if you're multi-geographical, or if your solution is operating across commercial, large industrial, and personal retail, the rules are not the same, so what's acceptable in one might not be in another. Then, yes, you overlay onto that the internal requirements, where generally, historically, the attitude has been, "Well, if I can be safe, I'd rather be safe," so it takes quite a lot to change.

On the particular issue you mentioned there, I think we probably leapfrogged as a consequence of COVID, in that people's internal policies did get changed as a consequence of that particular situation, because I think there was a recognition that there really was no choice, both internally and also in that they'd be able to go to the regulator and explain why there was no choice. Let alone where a wet signature had to be witnessed.

I think the reality is that everything sounds very simple when you start off, and then when you actually implement it, you realize the complexity is enormous, the variations are enormous, so it's not straightforward at all.

Alexandra: Absolutely not. Definitely a hard task to tackle, but as you pointed out, this is at least one of the benefits that the pandemic brought us: that now governmental organizations are also digitizing themselves and have really created conditions that will help the globe in its digital transformation efforts, because we saw over these past two years that there was not really another choice than having digital processes in place.

One point I want to come back to, you've mentioned it already several times during our conversation: how important it is to really have confidence also at the C-level, leadership, and board member level. How do you instill this confidence? I can imagine one part is, of course, a technological challenge, that you can make AI systems that are more explainable and less of a black box, but I would assume it's also an AI and data literacy issue for senior leaders. I'm curious to get your take on that.

David: I would perhaps distinguish between the senior leadership and the board, potentially, as well. In general, many of those individuals have not come from a tech background. They're not going to be familiar with a lot of the technology; to the extent that they were close to technological projects, that may have been decades before, so they will have little appreciation of it.

They'll be aware of changes just by being consumers and being in the world, but they will not necessarily have lived in tech, with both its opportunities but also its risks. They will be very conscious of the headlines there have been on a variety of different topics where tech has come up and has not delivered in the way that was hoped, and has left senior management and boards in very awkward positions.

There's no doubt that that group, in general, and there will always be exceptions to this rule, is not typically highly tech-savvy. As I said, to some extent they arguably have good reason for being cautious, because they've seen enough tech projects fail in a variety of different ways to know that things don't always work out quite the way you expected, and they're very conscious, to our earlier discussion, of regulatory risk. They're very conscious that a failure isn't going to be treated with an, "Oh, don't worry, we will address it in our next version." Actually, the regulator is going to say, "Now you need to make good all of these customers who have in some way been let down." The costs of that can be extraordinary. They're very conscious that the risks and the potential damages, both financial and reputational, are very substantial.

When you've got that, if you think of that as your starting point, and now you're coming to them and saying, "Let's talk about machine learning, AI, you name it, you've got it," there's a fair amount of education required around what it is, how it works, what are the benefits, what risks does it introduce, and how are those risks going to be mitigated?

How do you get confident that the people they've got are the right people to be able to do that? All of that has to get addressed before, I think, you'll see boards and senior management saying, "Yes, let's use this in full implementation and get the benefits and accept the risks, if you like."

Alexandra: When you talk about education, is it self-education? Is it relying on internal champions to prepare all this information for you as a senior leader of the company? What would be the best medium for this education?

David: I think, in reality, most senior leadership and board members, et cetera, lead pretty busy lives. If it's going to rely on them taking the initiative and seeking it out and finding the right sources, I think that's a big ask. Realistically, every board, every senior leadership team will have, or should typically allocate, a certain amount of time around learning and development, and clearly one of those topics should be the new technologies and all the types of things we've talked about up to now.

I think that unquestionably plays a role. On your point around champions, I'm a big believer that all change occurs pretty much because there's someone who's championing it enough to overcome, if you like, the natural inertia in the organization and make it happen. Whether that's a head of a business unit, and I've definitely seen those, or it could be a head of data, or, in the insurance world, it could be a group actuary, or it could be anyone in the organization who's got a role where they actually say, "I think this could make a real difference", your CIOs, of course, as well.

Any of those individuals can turn around and say, "I really want this. I believe in this." That's probably why, when I talked earlier about use cases, I would argue part of that is testing it for the organization, but part of that is education within the organization, including the senior leadership and the board, to say, "Okay, we're trialing this. We're seeing how it works.

Here are the benefits we could get. Here's how we're going to mitigate the risks. Are you ready for us to move to the next stage, if you like?" I think that is all part of that change process. These things always take time, and they do require advocates who are enthusiastic enough about it to make it happen.

Alexandra: Yes, absolutely. It's definitely not an easy playing field to introduce that big of a change, like AI technology, to an insurer. One other point that came to my mind when you highlighted that part of the reluctance to jump on the AI train comes from the risks they fear, be it regulatory risk and regulatory fines, but also reputational damage with bad press, and having to pay compensation to customers whose data, for example, was leaked.

I once had a conversation with the chief data officer of another organization, a leading global bank. He told me that there's one other aspect to why they're so careful when it comes to complying with regulations and doing the right thing. They see they have the need to move forward, to innovate, but having a public scandal not only leads to the risks I just mentioned, but also leads to the regulators being particularly aware of your company.

He told me that in the history of his organization, they had one incident where, for three years after a privacy breach, they had to go through regulatory oversight over and over again. Basically, half of the company, he said, was occupied with preparing documentation and satisfying the requests of the regulator, which of course significantly slowed them down in the global race of being the first mover, having the better customer service, and so on and so forth.

He really was pro-technology, but very aware of which technologies he could introduce to make it as safe as possible and to mitigate the risks. I think this is also an important point to consider, which makes it easier to understand why there is this initial reluctance.

David: Now, this is probably a little bit to your point: I was chatting to someone recently, and this was in terms of cyber risk, as it happens, rather than specifically on the AI side. They deal with very large cyber claims, if you like, and they were commenting that they were finding with organizations that, in the 6 to 12 months following a cyber breach, whatever the organization had been planning for those 6 to 12 months pretty much got wiped out.

All the other initiatives that might have been on the go got put on the back burner while the organization was dealing with the aftermath of the cyber breach. Because, as you've implied, it's not just the regulator's interest; it's all the stakeholders that suddenly have a heightened interest. At that point in time, the propensity, if you like, to gold-plate the solution becomes very high.

Every aspect of the organization probably gets a layer of additional protection, arguably beyond what's even required, to make sure that all stakeholders are satisfied that the organization is now safe, or, if you like, as safe as one can be in the cyber world. I think that would be an example, again, of just how distracting, how completely overwhelming a breach is, but I think the same would hold true for any type of tech-related scandal. Think of an AI model that bluntly discriminates against a group--

Alexandra: It's definitely something that you don't want to see in the press with your organization name attached to it.

David: Well, partly press, but in addition to press: I mentioned along the way that I used to manage claims and whatnot, and I dealt with a lot of large, complex claims, particularly with a US focus. On that side, you just have to imagine the lawsuits that come in the event of it becoming evident that you've discriminated, and then they find out that the model that was being used had a bias in it that discriminated against a particular group. You can just imagine how a jury will respond when they say, "A, it's clear you did discriminate, and B, it wasn't just occasional or a few bad apples in your organization. It was actually-"

Alexandra: Systemic.

David: -"the core model was designed as such to discriminate against that particular group," I think the juries will be unforgiving in that consequence. I think the financial and organizational and reputational, you just keep. It just gets bigger and bigger of how that plays out, particularly in environments where the legislation that's in place around anti-discrimination behaviors, making sure that people are not acting in a discriminatory fashion, that's particularly the case if you look at the US and Europe. There's not going to be an easy ride if that occurs.

Alexandra: I think so, too. It's definitely not going to be easy, but very important, therefore, to do everything in your power to prevent this from happening. Also, since you mentioned cybersecurity, especially with this boom, or pandemic, of ransomware that we've seen recently: one of my earlier guests on this podcast was TD Bank's Claudette McGowan. I think her job title is Global Executive Officer for Cyber Security.

I can't remember whether we talked about this on the show or before we recorded, but we also talked about synthetic data, and that simply by using synthetic instead of sensitive real customer data in every application where you don't absolutely need production data, you so drastically reduce the attack surface that, from a risk management perspective, you have already made major progress toward preventing something like that from happening. So there really are different aspects where I think synthetic data can help. But talking about synthetic data and other technologies, my last question to you, since we're approaching the end of our recording time: I'd be curious to hear, David, which trends in the insurance industry, specifically technological trends, but potentially also new and additional revenue streams, are you most excited about, and which do you think will have the biggest impact in the coming years?

David: I think the replacement of the end-to-end core policy administration systems, the digital front ends, and everything else, and the ability for data to be transferred across those organizations, all the way from individuals or corporations to the ultimate insurer and along the chain in a straight-through-processing way, that is unquestionably happening. It's a trend that I think enables some of the other trends that I'm very excited about as well. That then links to all the data analytics, both the data pools, which I think now become far greater, and the level of information.

Then, if you're able to put the AI solutions and machine learning solutions and suchlike on top of that, to be able to make use of all that rich data, both the structured data and the unstructured data, and use it to design better solutions, target clients better, underwrite them better, price them better, manage their claims better, you name it, the data and analytics that then come with that are super exciting. Implied in that, and you could argue it's plumbing, but I think it's critical plumbing, is clearly the shift towards the cloud, again allowing you to have a level of computing power and access, if you like, and richness that is super exciting.

Then, I don't know, you could argue it's a technological trend or you could argue it's a business trend, but it's probably a bit of both: I think the InsurTechs are leading the way on this, but it's being able to offer solutions for clients in a way that was just never possible in a very manual world of policies, paper forms, and everything else, renewing once a year or sending in your form. The relationships and the way people can be served with the new technology, the way insurance can now be embedded into other solutions, and that could be, you talked earlier about open banking, but it equally applies to other products and solutions.

Again, I think that just makes it more accessible and makes it so much easier for people to get what they need, when they need it, how they need it, et cetera. That applies whether we're talking about insurance or wealth management, and obviously it applies more generally to financial services as well. I think that it opens up the box, if you like, of potential opportunities.

When I look now at what's coming out, the enormous variety of different solutions that address so many different aspects of the space, I think it's unquestionable that, as an industry, the industry ultimately will be serving its clients so much better in time with the support of all this technology. I think there might have been a time, if you go back some while, when technology strategy was somehow separate from business strategy.

I think now they are so intertwined that in a way, some of my trends, as you can hear, are pure tech trends, but actually, I think what makes them most exciting is how they allow the business model to change.

Alexandra: To serve insurance to people at the point in time when they actually need it most. What types of embedded insurance were you envisioning, or what types of scenarios do you think would be particularly exciting?

David: Someone mentioned to me recently, which I thought was an interesting point, that arguably one of the largest insurers in the world is Apple, which you might not have thought of; I certainly wouldn't have come up with them. It's partly because with their AppleCare product, which you tick when you're purchasing an Apple product, if you happen to want the one-to-three-year product care or service package, you're effectively buying what is a very simple insurance product. When people think embedded, it can range from things like that; I think recently you had Amazon talking about adding it into certain of their propositions.

You've got players coming from well outside the financial services space. I also wouldn't underestimate, and you mentioned it earlier, the open banking type of environments, where you're now seeing companies incorporate insurance as part of that. If you look at even some of the companies where I'm a senior advisor, there's a company called Goodlord, which is a proptech company serving estate agents who serve landlords on rentals, insurance has become a very important component of their business, because they've incorporated it into their solution; they've effectively embedded insurance into their solution.

I think the interesting thing is that it can pop up all over the place. From financial services all the way to, name your business, there's the potential to incorporate an insurance element into it, so that the client gets the whole thing, and then the challenge is making sure it all works together.

Alexandra: Absolutely.

David: I think that will create a whole fascinating set of opportunities, ranging from the micro, clearly, all the way to the large, more complex end, where I've also spent quite a bit of time. There it's different; it may not be about embedding per se, but about constructing solutions that give clients what they want, where the insurance is more than just insurance. What you're seeing now, and you're seeing it in cyber as an example, we've talked a bit about ransomware and such, is that providers of cyber insurance are also now increasingly providing some form of cyber service as well.

They're providing incident management, education, training, and a whole variety of associated services. So there's embedded insurance, where the insurance ends up in someone else's product, but I actually think we're seeing the trend in the other direction too, as other solutions become more accessible: people are bringing non-insurance services, products, and solutions and embedding them into the insurance solution as well. I think that's super exciting. It creates a much more valuable solution from a client's perspective when it all joins up.

Alexandra: Definitely. I think we see many examples where insurers become more proactive and also help with prevention. What comes to mind here is Humana's Chief Analytics Officer Slava Keener, who is a big advocate of proactive health. There was a podcast interview he did with Forrester a few weeks ago, where he described how Humana proactively delivered healthy food to its members during the pandemic to keep them from getting sick, and of course to avoid the costs associated with treatment.

There are so many opportunities that I'm already very excited to see how this will play out in the years to come. As mentioned, unfortunately, we are approaching the end of our hour. I think we could have talked much longer. Thank you so much for being on the show, David. It was a true pleasure. Thank you for everything that you shared with the listeners and with me today.

David: Thank you very much. It's been an absolute pleasure. Thank you for having me.

Alexandra: Thank you, David.

Alexandra: Yet again, a great guest with really great insights for us. After listening to David, I think we all realized how critical innovation will be for insurance companies, especially if they are to remain relevant in the coming years. Let's pull together the main takeaways. Takeaway number one, AI opportunities are everywhere in the insurance industry.

In the insurance industry, AI can be leveraged across the whole value chain, and it really involves all key processes. David shared that from market segmentation, client identification, underwriting, and claims management to general efficiency, there are opportunities to leverage AI around every corner. He also mentioned that on the wealth management side, there's opportunity to disrupt, for example, with robo-advisory services, portfolio optimization, and client engagement.

Takeaway number two, the time to innovate is, you guessed it, now. Although there are countless opportunities for AI in insurance, as we just summarized, many insurance companies are rather tech hesitant. Now there's huge pressure to embrace new technologies, mainly due to increasing competition from new, disruptive insurtech players and, of course, customer demands. The good news, according to David, is that technology has come a long way in the meantime, and it's now possible to use data and AI at scale, and at the level insurance companies need and require.

One example he shared here is data migration tools, which are now readily available. The third takeaway, regulatory pressure is changing minds. Regulations on how solvency is determined and how clients are treated require insurance companies to rethink how they interact with and treat their clients. They also have to measure things differently than before. A more data-centric approach is coming, where data access is a mission-critical feature.

You can only prove what you measure, and GDPR requirements also put pressure on insurance companies to get their data in the right shape and order. According to David, we can see that minds are changing due to these regulations. Takeaway number four, fairness is critical in the insurance industry.

Fairness is really something that affects everything in insurance, from product design to implementation, marketing, pricing, underwriting, various services, and also how investments are managed. Fairness can also be considered a regulatory pressure, where insurance companies need to not only do the right thing but also be able to demonstrate it in order to comply with regulations.

For example, fair pricing is regulated in the UK, which is forcing insurers to go back to the drawing board and redesign everything with fairness in mind. One of the challenges when it comes to using AI is that insurers oftentimes don't understand the models they use well enough, so their confidence in the fairness of those models is limited.

The consequences of non-compliance, as we all know, are pretty dire, and according to David the whole insurance sector is scrambling to comply. Discriminatory AI decisions are not only dangerous from a compliance and PR perspective but can also bring costly lawsuits down on a company, so definitely something that should be avoided. A biased AI model has the capacity to create systemic discrimination against a group, which would be very hard to defend in front of a jury.

The financial damage, on top of the reputational damage, David emphasized, can be huge, so definitely watch out and ensure that the models you use are as fair as you can make them. Takeaway number five, privacy is nowadays a global issue. There is a pandemic of cyber and ransomware attacks. Especially in the wake of this cybercrime boom, using synthetic data instead of production data wherever possible significantly reduces the attack surface and mitigates a lot of the risks.

Models, especially AI models, need to be trusted and need to be privacy-preserving at the time of implementation, at the time of training, and also afterwards, when you monitor the models for performance degradation. Model testing has a data challenge, and synthetic data removes this barrier because it can be freely shared and accessed. Training models also requires an abundance of data, and synthetic data helps solve that situation quite well.

Third-party AI developers need access to meaningful data, and handing that over is a major issue for insurance companies. Synthetic data can take the anxiety out of such situations. Tools like synthetic data are really something that can help with AI adoption in the insurance industry and help companies be more confident in the AI technologies they deploy during this journey. Takeaway number six, C-level and board members need to be educated. David highlighted that C-level confidence in AI is a crucial ingredient of success. A lot of leaders don't have the tech background that is required nowadays.

Education around machine learning and AI would be beneficial for C-level management and for boards before they say yes to certain projects. David recommended that boards allocate time to learn, or more precisely, that in the time they allocate for learning, they should focus on these new technologies and bring themselves up to speed. Internal champions are also helpful and can help organizations overcome the inertia they have had in the past when it comes to tech adoption and innovation.

Takeaway number seven, the future of insurance is tech-driven. Data pools are becoming larger and are creating new revenue streams. Putting machine learning and AI solutions on top of that data to target the right clients better, price better, and manage claims better is something that's definitely coming. The shift to the cloud also brings a level of, and access to, computing power that the insurance industry hasn't seen before, which can be transformative as well.

David also highlighted that insurtechs are leading the way in offering personalized solutions to clients, and insurance products are oftentimes embedded in other solutions to make things more accessible. It's not necessarily that a single-purpose insurance platform is built, but that you can access insurance at the right point in time and in the context where it's relevant for you. Think of access to car insurance when you are renting a car for your trip home, or something like that. Players coming from outside the insurance space are revolutionizing the industry.

According to David, for example, one of the largest insurance companies in the world today is, in fact, Apple, since they have insurance services embedded into their products. Amazon has also talked about entering the insurance space, so insurers can expect that competition is fierce and will keep increasing. One other aspect that I found memorable is that David also mentioned that insurance is becoming an important part of proptech solutions.

Cyber insurance is also a growing field. On the other hand, it's not only about embedding insurance services into other platforms and contexts; the other way around is also key, so that non-insurance products and services are embedded in existing insurance products, like providing training alongside cyber insurance or proactive health services from a health insurance company.

To summarize, we can really conclude that the possibilities are fascinating and more or less endless. I really want to thank David again for the time he took and for taking us on this journey through what's next for the insurance industry. I also hope that you, as our listeners, found this episode worthwhile. If you have any questions, as always, write to us at podcast@mostly.ai. Until then, see you in two weeks with our next episode.

Ready to start?

Sign up for free or contact our sales team to schedule a demo.