Episode 48

48. Driving Impact with (Gen) AI for Financial Services with NVIDIA's Malcolm deMayo

Hosted by
Alexandra Ebert
Welcome back to Season 4 of the Data Democratization Podcast! In the first episode of the new season, Alexandra Ebert sat down with Malcolm deMayo, NVIDIA's VP of Global Financial Services, live at Money20/20 USA to dive into what it takes for financial services organizations to succeed with data and AI at scale. Malcolm shares insights on using data as a true differentiator, why Responsible AI practices are more important than ever, and how modern privacy-enhancing technologies - like federated learning and synthetic data - help tackle common data challenges. He also sheds light on how Gen AI is impacting talent and the surprising ways AI assistants might help organizations counteract the negative effects of employee churn. And, of course, Alexandra asks Malcolm for his vision for the future of AI in financial services.

Transcript

(0:09 - 0:24)

Welcome back to the Data Democratization Podcast. I'm Alexandra Ebert, your host and MOSTLY AI's Chief Trust Officer. And who would have thought that this very episode marks the beginning of what is already Season 4 of our podcast.

 

(0:25 - 0:57)

I'm beyond excited to kick off this new season with something truly special, because this very first episode was actually recorded last week at Money20/20 in Las Vegas, which is probably the biggest stage and gathering for banking and technology. And joining me is none other than Malcolm deMayo, NVIDIA's VP of Global Financial Services. And in today's episode, we will dive deep into what it takes for financial services organizations to truly succeed with data and AI at scale.

 

(0:58 - 1:28)

Spoiler alert, your tech stack is important, but there's so much more to it. We also talk about the role of data as a differentiator, and Malcolm shares how to overcome common data challenges and why responsible AI matters more than ever. He also sheds light on AI's impact on talent and connects the dots between the widespread use of AI assistants and how this will impact an organization's ability to actually counteract the effects of employee churn.

 

(1:28 - 1:51)

And of course, he leaves us with his vision for AI's future in financial services. So I promise you, there's a lot to take away from today's conversation, regardless of whether you're in financial services or in another industry, and regardless of where you and your organization are on your journey towards embracing data and AI at scale. So with that, let's dive right in.

 

(2:00 - 2:32)

Malcolm, it's such a pleasure to have you on the show. This is actually the very first episode of the brand new season four, and we're recording live at Money20/20, so I'm very much looking forward to having a conversation with you today. Before we dive in, could you briefly introduce yourself to our listeners and maybe also briefly share what makes you so passionate about the work that you do at NVIDIA? Sure.

 

Thank you, Alexandra. It's a pleasure to be here with you and your podcast. I look after financial services for NVIDIA as an industry.

 

(2:33 - 2:50)

We are very focused on developing our strategy, our go-to-market partnerships, building ecosystem relationships. And obviously, one of our key themes is helping organizations in financial services use AI for good. Passion.

 

(2:53 - 3:04)

Technology has been an amazing ride for my career. If you can think back, technology just makes our lives better. It's technology for good.

 

(3:04 - 3:18)

If you think back to when I first started, we had to carry rolls of quarters around to make phone calls. And so we're just so much more productive. Our daily lives are just so much more frictionless as a result of technology.

 

(3:19 - 3:39)

And it's just so much fun to see how these practitioners in large financial institutions, how creative they are in taking the technology and using it to improve the services they provide. Absolutely. And it's definitely a stark contrast when you think back to the times when we had cables and coin-operated phones and everything, and now Gen AI being the hype and everybody discussing it.

 

(3:39 - 4:09)

What do you think is behind this huge interest that we see in Gen AI? Yesterday, it was one of the key topics that we heard over and over again throughout Money20/20. So what's behind this interest in the technology, and how is NVIDIA helping organizations to actually adopt and advance this technology in their production settings? So technology has been relegated to less than 2% of the world's population. You had to have a computer science degree or an engineering degree in order to understand how to work with it.

 

(4:10 - 4:30)

And banking is a business where you're dealing with people all the time. Generative AI is essentially the intersection of technology and people. Basically, it is capable of understanding not just our language, but our images.

 

(4:30 - 4:40)

So it understands the world as we define it. And this is the language of banking, our language. So it becomes a very powerful capability.

 

(4:41 - 4:52)

Every banker is now a programmer. Every executive in a financial firm is now capable of using this technology. So it's an exciting time.

 

(4:53 - 5:01)

Absolutely. I think it's really fueling this democratization of AI. And NVIDIA's mission, Alexandra, is real simple.

 

(5:01 - 5:15)

We want to make AI available to everyone. We want to make sure that AI is used for good. Our founder's first observation, 30-some-odd years ago, was that CPU scaling was ending.

 

(5:16 - 5:50)

CPU performance scaling was ending. And every industry, not just financial services, had become dependent on CPU performance advances to advance software, which allows us to make people more productive, drive cost efficiencies through automation, and generate new revenue streams. And so the opportunity to leverage AI to accelerate these kinds of capabilities, improving productivity, reducing costs, et cetera, is enormous.

 

(5:51 - 5:56)

That's where we're very focused on helping them and teaching them how to use this technology. Makes sense. Makes sense.

 

(5:56 - 6:13)

You definitely need the foundation and the computing side to make all of these benefits a reality. What I liked about one of the many talks you gave yesterday is that you started by dispelling some myths about AI and generative AI. Would you be so kind as to reiterate and dispel them again for our listeners who couldn't attend in person? Thank you for attending.

 

(6:15 - 6:43)

So the first myth I talked about yesterday, and there are many, but the first is that firms have to wait for regulation before they start using AI. And it is just so not true. AI has been in financial services for decades, and it's being used in work streams and workflows that improve the way we work with customers, improve their experience, improve our ability to detect fraud.

 

(6:44 - 6:59)

In fact, the leaders that have leaned into AI have hundreds of AIs in production. And these same leaders have plans for hundreds more. And I always like to end with, it's the leaders that are winning the race right now.

 

(7:00 - 7:17)

Definitely. And I think also particularly the financial services industry has always been highly regulated. So I would say they're uniquely positioned to, of course, extend their risk mitigation frameworks, but it's not that they would need something completely new, something that has never been seen before, to make sure that they use AI in a compliant manner.

 

(7:18 - 7:20)

And then there were two more myths that you tackled. The second myth. Yeah.

 

(7:20 - 7:22)

I'm taking too long. Sorry. All good.

 

(7:22 - 7:29)

All good. We have enough time today. The second myth that I talked about yesterday was AI doesn't deliver value.

 

(7:29 - 7:39)

There's a lot of concern that this has been overhyped. Gartner has published their, you know, and are we... 70% of all AI projects fail. Nobody gets revenue out of it.

 

(7:40 - 8:06)

Exactly. And then you have JP Morgan coming out and saying that AI in their commercial loan process saved hundreds of thousands of hours of lawyer time. And in addition to that, their chief operating officer, Daniel Pinto, has already stated that they are on target to deliver one and a half to two billion in new value creation this year using AI across the firm.

 

(8:07 - 8:27)

And there's so many examples. That said, the third myth that I talked about yesterday was that AI, building AIs, is only available to large tech companies or large financial firms. And nothing could be further from the truth.

 

(8:28 - 9:03)

With the advent of APIs like OpenAI's ChatGPT or AWS's Bedrock or Google's Gemini (you fill in the API), and the advent of open-source models that are pre-trained, Meta with Llama, Mistral with their Mixtral models, and so many more... And the fact that NVIDIA is putting our platform in every cloud and it's available through every server vendor makes AI available to any sized firm. That makes sense.

 

(9:03 - 9:39)

Coming back to the second myth that you dispelled, you mentioned JP Morgan as one of these stellar examples. From your experience, getting a peek behind the curtain at so many financial services institutions, what's the percentage of organizations that are really getting value out of it versus those who are just starting out with AI, or, what we've also heard over the past years, organizations that maybe focus too much on AI as a technology, as this shiny new toy, but fail to connect it to business problems and outcomes? So of course, no proper statistics, no survey, but from your gut feeling, where are we currently in terms of organizations really deriving value from AI? Great question.

 

(9:39 - 10:00)

I mean, it is very regional, Alexandra. So if you look at Evident's AI maturity index, of the top five banks, not one of them is from Europe. There are actually four North American banks and an Asian bank.

 

(10:01 - 10:36)

And so by region, it matters. But one of the big surprises from our survey, our State of AI in FSI survey, for me was just how broadly financial firms are now moving resources and allocating budget for AI projects. And how intense the competition has become for talent. A little sneak preview of our latest survey, which isn't out yet, is that talent has actually trumped data.

 

(10:36 - 10:45)

We will actually come to that later on because I found it quite fascinating. As the number one challenge. And so it's basically becoming very pervasive.

 

(10:45 - 11:00)

More and more institutions are realizing that and it's a board level focus. So more and more institutions are focused on how are we going to adopt AI in the short term and in the long term as well. Yeah, it makes sense.

 

(11:01 - 11:17)

That's also what I'm hearing over and over again with my podcasting guests. If the C-suite supports it, if the CEO is a leader in that space and really pioneering it within the organization, then that's a good success factor to make sure that the organization actually manages to transform and benefit. But maybe coming to the broader picture.

 

(11:18 - 11:42)

In your experience, what does it take for a financial services enterprise to truly succeed with AI at scale? Well, it starts back at the top with support from the executive team and they need to create an AI strategy. And that strategy isn't just about technology. It's also about the partnerships they're going to form.

 

(11:42 - 12:03)

And it's also about how they're going to culturally evolve using AI. So strategy is number one. Then it's really about prioritizing based on what's important to them from an overall strategy, aligning to the overall growth and cost efficiency strategy of the firm.

 

(12:04 - 12:17)

And then finally, it's really about communicating, and communicating internally is very important, communicating to your employees: This is important to us. This is how we're going to win.

 

(12:17 - 12:37)

And making sure employees understand they need to learn how to use AI. So those are some high-level points; the investments have to follow. Obviously, if you don't have the investments in technology, it's going to be very difficult to try to implement AI.

 

(12:37 - 13:02)

Definitely. But also the people component that you mentioned, being in the AI ethics space, what I see over and over again is that organizations really also need to address the concerns, the fears and this AI literacy gap that most employees have. Understandably so, given that in the media, in the news, in Hollywood, you either see AI being slightly overhyped, to put it nicely, or creating doomsday scenarios about AI taking all of our jobs.

 

(13:02 - 14:07)

I think if you really want this change management process on the route toward success, it's crucially important to not only address the technology, which of course you can't do without, but also the fears and people's conception of what AI is and how they can actually use it, benefit from it, and partner with it. Since you mentioned that the top five banks benefiting from AI don't come from Europe, and I'm based in Europe, my question to you would also be: do you have any advice on what you think Europe needs to do differently to also set their financial enterprises up for more success in the AI space? Well, in the last several months, we've seen just about all the major brands out in Santa Clara, so Europe's behind, but not asleep. I think it's just the things we've just talked about: they need to set the AI strategy, they need to allocate resources, they need to communicate out to their teams, and not wait for regulation. Essentially, get started and really start moving quickly.

 

(14:07 - 14:49)

Yeah, we heard that also yesterday from one of the panelists you had on stage. I think it was mistake number one being moving much too quickly, mistake number two being not moving fast enough at all, so having to strike the right balance here. Another concept that was introduced yesterday during one of the talks was the concept of an AI factory. What's your take on that, again with your opportunity to see behind the curtains of so many financial services institutions? Does it make sense? Should everybody build up their own AI factory? Yeah, so think of NVIDIA: NVIDIA is the engine of AI, and a word and its meaning are as inseparable as NVIDIA is from AI at this point in time.

 

(14:50 - 15:31)

If you want to develop AI applications, you have to have the right infrastructure. Why? Because if you don't, you can certainly try to run AIs on CPU servers, but you will use more power, more energy, and you will spend more CapEx. I'll give you an example from one of the talks Jensen did: 1,000 CPU servers, at a CapEx cost of $10 million and drawing 10 megawatts of energy, replaced by 16 GPU servers at a fraction of the cost and a fraction of the power consumption.

 

(15:32 - 15:54)

When you think of AI for good, there's a big concern in the world, rightly so, around energy consumption. AI is part of the solution, it's not part of the problem, it's part of the solution. We can absolutely make technology usage more energy efficient because of the extreme acceleration we deliver.

 

(15:55 - 16:30)

As Jensen likes to say, you can do more with less. That's the goal, I would say. I'm also curious, since we discussed Gen AI being the buzzword of the hour ever since ChatGPT launched two years ago: in your experience, what's different when an organization wants to make sure AI is used at scale versus specifically Gen AI? Are there differences that are worth mentioning, or, if you have a proper AI factory and a proper framework set up, is it not really relevant which type of AI you pursue? Gen AI isn't necessarily going to replace previous generations of AI.

 

(16:31 - 17:09)

Machine learning is still used in a very large-scale way and will continue to be used in a large-scale way. When you look at what Gen AI's superpowers are, it understands language, it understands images, and so you start to think about all of the uses it can deliver that complement that. Firms will have a large AI model inventory in the future, and they need to focus less on what model they're using and more on how they're life-cycling intelligence throughout the firm, and doing that in a responsible way.

 

(17:09 - 17:26)

Banks have very well-defined lines of defense, model validation, and governance today. You probably heard yesterday that some have already recalibrated their governance and their lines of defense for generative AI. Makes sense.

 

(17:26 - 18:20)

I think with Gen AI, since we see this extreme democratization and Gen AI oftentimes being a tool that nearly every employee within an organization can use, this maybe also needs to be reflected in the educational efforts and in the governance efforts, to make sure that each and every employee's use of Gen AI is properly guarded and steered. I'm also curious, since data was of course mentioned over and over again. You already highlighted that, surprisingly, the survey showed that data has lost its top spot among executives' concerns and been replaced by talent. But what do you think is important when we talk about data, both in terms of its role as a differentiator when it comes to using AI at scale, and with all the challenges that we know come with the use of data: privacy, security, quality, accessibility, moving it to the cloud, et cetera? What's your take on that? Well, none of this is new.

 

(18:22 - 19:06)

Data, and making sure that bias isn't in the data, making sure that the data is high quality, has been a journey that banks and financial firms have been on for a very long time. But an AI model is like a smart university student. It knows a lot about the world and very little about your business, and so, just like a new hire has to be trained on your business, so does the AI model. And so your data is very important. It is the differentiator, and so being able to create high-quality data sets to train AIs with is a big, important objective of financial firms.

 

(19:06 - 19:43)

At NVIDIA, we've built into our platform, our accelerated compute platform, an entire framework of microservices, and they're composable, so you can pick and choose the ones you want to use. One is a data curator that really simplifies the work of bringing data from various sources together and deduping it, compressing out the white space, extracting out all the nasty, messy noise, so that you have a high-quality data set. We're constantly making this better, so data is important.
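For readers who want a concrete picture of what that kind of curation involves, here is a minimal, hypothetical Python sketch of the steps Malcolm describes: merging documents from several sources, compressing whitespace and noise, and dropping exact duplicates. It is only an illustration of the idea, not NVIDIA's data curator, and the function names and example records are assumptions.

```python
import hashlib
import re


def normalize(text: str) -> str:
    """Collapse runs of whitespace and strip control characters ("noise")."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f]", "", text)  # drop control characters
    return re.sub(r"\s+", " ", text).strip()          # compress whitespace


def curate(sources: list[list[str]]) -> list[str]:
    """Merge documents from several sources and drop exact duplicates."""
    seen, curated = set(), []
    for source in sources:                  # e.g. one list per upstream system
        for doc in source:
            clean = normalize(doc)
            if not clean:
                continue                    # skip empty records
            digest = hashlib.sha256(clean.encode("utf-8")).hexdigest()
            if digest not in seen:          # exact-duplicate check
                seen.add(digest)
                curated.append(clean)
    return curated


# Tiny usage example with made-up records from two hypothetical sources.
docs = curate([["Loan  agreement \n v1", "Loan agreement v1"], ["KYC   notes"]])
print(docs)  # ['Loan agreement v1', 'KYC notes']
```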

 

(19:44 - 20:41)

It is the new oil. Exactly, and I think it's also fascinating to see that, on the one hand, you have this challenge of needing more and more data, and the right data, to train AI, but on the other hand, similar to what you pointed out with the sustainability and energy requirements, AI can also help us address this with synthetic data, where we're also partnering with NVIDIA. You can actually use Gen AI technology to anonymize data with AI and make sure that more data can be unlocked for AI training, for moving it into the cloud, and also for bias or fairness considerations. So it's really fascinating what we see here. I'm glad you brought that up, Alexandra, because some of the other things we're doing to help regulated businesses are to create what we call federated learning in our platform, so that you can train a model in Germany and you can train another model in Austria and share the knowledge without exposing the sensitive or regulated data.
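To make the federated learning idea a bit more tangible, here is a minimal sketch of FedAvg-style aggregation between two locally trained models, assuming the models reduce to simple weight vectors. The site names, weights, and sample counts are invented for illustration; this is not the API of NVIDIA's platform or of any real federated learning framework.

```python
import numpy as np

# Locally trained model parameters stay in each jurisdiction; only the
# parameters (or parameter updates) are shared, never the raw customer data.
local_weights = {
    "bank_de": np.array([0.42, -1.10, 0.07]),  # trained on German data
    "bank_at": np.array([0.38, -0.95, 0.12]),  # trained on Austrian data
}
# Number of training samples per site, used to weight the average.
sample_counts = {"bank_de": 80_000, "bank_at": 20_000}


def federated_average(weights, counts):
    """FedAvg-style aggregation: sample-weighted mean of local parameters."""
    total = sum(counts.values())
    return sum(counts[site] / total * w for site, w in weights.items())


global_weights = federated_average(local_weights, sample_counts)
print(global_weights)  # combined model; no raw records ever leave a site
```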

 

(20:41 - 21:16)

Very, very helpful when you're thinking about how we can compete with bad guys who don't have these regulations, and also protecting your customers, protecting your customers' identities, et cetera. So we're building all of that into our platform. Exactly, so financial services are, of course, a prime contender for federated learning, as is healthcare. The sensitivity of the data here obviously is also something that requires extra protection, so it's fascinating to see what all these privacy-enhancing technologies nowadays allow us to do to make sure that AI can be used in a privacy-preserving manner.

 

(21:17 - 22:02)

Now that we have already talked about privacy and also mentioned fairness and explainability a bit, what I really liked hearing yesterday was all the data executive leaders that we had on stage from the different banks emphasizing how important it is to build responsible AI elements into their AI platforms. From your experience, what is the right way to go about it? How do you ensure that all the models that are developed, deployed, or used somewhere within these large organizations adhere to those principles? Yeah, so if you think back again, AI for good, it starts with transparency. That means if we're going to create a foundation model, which we do at NVIDIA (you just mentioned that we've created models that are really good at synthetic data generation).

 

(22:02 - 22:44)

We start with making sure we have the right to use the data that we train the foundation model with. And not only that, we create a model card, and we're explicit in sharing with anyone who uses the model how we trained it and what data we used to train it, so that there's no question around whether there's an issue from a legal perspective or from an ethical perspective in using this model. There's no regulation requiring that, but we think that's really important. When we built that framework I discussed of microservices, composable microservices that are cloud native, it was really important.

 

(22:44 - 23:19)

Right after ChatGPT came out, we were the first to create guardrails. So that you have a guardrail for safety, making sure that the model doesn't answer questions mom would never want you to answer. Making sure that the model only connects to secure data sites that are approved by the organization. And making sure that the model doesn't hallucinate, that we're fact-checking and that we're not allowing a wrong answer to go back out to either the employee or the customer.
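As a rough illustration of what such guardrails do in practice, here is a hypothetical Python sketch of a wrapper that applies a safety check, an approved-source check, and a fact-check around a model call. It is not NVIDIA's guardrails product or API; every name and list in it is an assumption.

```python
# Hypothetical guardrail wrapper: a safety check on the question, a check that
# only organization-approved data sources are consulted, and a fact-check on
# the answer before anything is returned to an employee or customer.

BLOCKED_TOPICS = {"weapons", "self-harm"}          # illustrative safety list
APPROVED_SOURCES = {"internal_kb", "policy_docs"}  # approved data sources


def guarded_answer(question: str, source: str, llm, fact_checker) -> str:
    # 1. Safety rail: refuse questions touching blocked topics.
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    # 2. Security rail: only consult approved, organization-controlled sources.
    if source not in APPROVED_SOURCES:
        return "That data source isn't approved."
    # 3. Generate an answer, then fact-check it before it reaches anyone.
    answer = llm(question, source)
    if not fact_checker(question, answer):
        return "I'm not confident enough in the answer to share it."
    return answer


# Tiny usage example with stand-in callables in place of a real model.
print(guarded_answer("What is our refund policy?", "policy_docs",
                     llm=lambda q, s: "Refunds are processed within 14 days.",
                     fact_checker=lambda q, a: True))
```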

 

(23:19 - 24:02)

These are examples of how, as a tech builder, we're helping the industry by promoting ethical and responsible technology. Yeah, there's also another example from NVIDIA that I really admire, I think it was in collaboration with Hewlett Packard Enterprise, where it was also about these responsible AI elements as a service, explainability as a service, etc. Because when we look at where we're going as an industry, as an economy, as a society, we're moving away from just the big tech organizations, just the largest of the largest FS providers, developing and deploying AI, towards a society where everybody, small and medium enterprises, startups, researchers, can make use of these building blocks of AI components.

 

(24:03 - 24:29)

But obviously, we already have a bit of a shortage of responsible AI talent, and hence there is this need for scalable services and ecosystems that really ensure that all this AI being used, and all these building blocks that will be implemented into new AI systems, adhere to the responsible AI standards that we have as a society. And I think NVIDIA is really doing its part to make sure that we are going in this direction and that this is going to happen. So kudos to that.

 

(24:30 - 24:46)

One other element that I was quite interested to discuss with you, we have 15 more minutes left, is obviously the use cases. We heard it again yesterday on stage, many organizations when they're just starting out with AI obviously wonder what's the right use case to tackle. And there are different ways to go about it.

 

(24:46 - 25:24)

So some say it should be the revenue-building use cases, others say cost-saving internal use cases, maybe use cases with less sensitive data. What's your take on that, and from your experience, what are the best use cases to start out with, and what should the trajectory look like after a few months and years with AI? Yeah, this is a great question, by the way. The first observation is that the firms that integrate their data science teams with their lines of business, so that they're not waiting until the pilot or the proof of concept is ready to be tested.

 

(25:24 - 25:36)

They're actually engaging right up front in the ideation. They have more success in bringing a project to production, all the way to production. And remember, these are experiments.

 

(25:36 - 25:45)

So they don't always succeed. But you always learn. And so number one, it's bringing the practitioners together.

 

(25:46 - 26:12)

I'll give you an example. It's an old example, but I think it's really good. When JP Morgan first started using sentiment analysis in looking at how to inform their trading strategies, they were trying to score breaking news and social media based on a positive, a neutral, or a negative score.

 

(26:14 - 26:46)

When that got to the traders, the folks, the quantitative folks building their trading strategies, they're like, this really isn't all that helpful. We really need to know, in addition to the sentiment, we need to know, is this influencer, this person that put this blast out or this news, is it something that the world will pay attention to? So at any rate, it's really important that you bring the practitioners together when you're thinking about use cases. The second thing is, tackle a problem worth tackling.

 

(26:46 - 27:10)

If you're going to allocate resources, you want to get a benefit. I think the value drivers are improving productivity, driving cost efficiency, and growing revenue. And some of the use cases can drive all three of those value drivers. But at the end of the day, pick something worth doing and then allocate the resources.

 

(27:11 - 27:56)

So when you think about what are some of the use cases that we're seeing really catch on right now, a big one, and this is amazing, is document analysis. We just recently published a PDF NIM, an NVIDIA Inference Microservice, another one of those composable microservices, NIMs, that allow our customers to run AI inference faster and more performantly, sometimes five times faster, which means you spend five times less money. And this allows organizations to extract information from PDFs, which is incredibly difficult to do.

 

(27:56 - 28:23)

And they have so much information stored in PDFs, and you have images, you have graphs, so it's multimodal. And so we've built a capability, we call it our PDF Blueprint NIM, and that's available at AI.NVIDIA.com. That use case, that opportunity to analyze documents, is taking off in financial services. Think about it.

 

(28:24 - 28:34)

Every bank has a gymnasium-sized room filled with contracts. I would go so far as to argue that it would not only take off in financial services, but in pretty much every organization. You're 100% right.

 

(28:35 - 28:46)

And so that's one, document analysis. And once you've built that capability, you can apply that to hundreds of workflows. So it's a great starting point.
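For a sense of why document analysis is such an approachable starting point, here is a bare-bones sketch of extracting the plain-text layer of a contract with the open-source pypdf library. This is not NVIDIA's multimodal PDF blueprint, which also handles images and charts, and the file name is an assumption.

```python
from pypdf import PdfReader  # pip install pypdf

# Pull the plain-text layer out of a contract, page by page.
reader = PdfReader("contract.pdf")  # hypothetical input file
pages = [page.extract_text() or "" for page in reader.pages]
full_text = "\n".join(pages)

print(f"{len(reader.pages)} pages, {len(full_text)} characters extracted")
```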

 

(28:47 - 29:01)

Customer service, or customer experience. We've built another NIM, a customer avatar NIM, and actually we're demonstrating it, our workflow, here on the show floor. You can go see James.

 

(29:02 - 29:21)

James is an avatar. We've built a Blueprint to build avatars, and we're seeing a lot of interest now in using avatars to improve the customer experience. And again, once you've built that capability, it can be applied to so many different workflows, virtual assistant workflows, contact center workflows, et cetera.

 

(29:21 - 29:45)

And a third is fraud. Credit card fraud is expected to top $40 billion annually in the next few years. It really took off, unfortunately, as a byproduct of COVID, where we had an explosion in e-commerce and digital transactions.

 

(29:46 - 29:59)

So card-not-present transactions, which are fertile ground for fraudsters. And so today at Money20/20, we have announced a new credit card fraud workflow. Again, in our theme of do not [...]

 
