Episode 21

Responsible AI by design with Maria Axente, PwC UK

Hosted by
Alexandra Ebert
Are you ready to take an organizational deep-dive into responsible AI? In the 21st episode of the Data Democratization Podcast, Maria Axente, PwC UK's Responsible AI Lead and ethical AI thought leader, talks us through the challenges and solutions of AI adoption. Tune in to learn more about:
  • the responsible AI skillset
  • the new mantra of computer science: don't go too fast and do not break things
  • why data scientists are like doctors
  • how to implement AI in practice
  • how to do risk management for AI
  • what is AI ethics by design, and how to make it happen
If you would like to learn more about ethically aligned design for AI, listen to our previous Data Democratization episode with IEEE's Dr. Clara Neppel! 

Transcript

Alexandra: Hi, Maria. It's so great to have you on the show. I was very much looking forward to this conversation because I've been following your work for quite some time. I think it's simply inspiring, all the work that you do, both helping organizations become more responsible when using AI and all the initiatives you run with regulators to help them better govern this space. Before we jump into all the questions I have about Responsible AI, could you share a little bit about your background and how you became the Responsible AI and AI for Good Lead at PwC UK? Also, if you could share a little bit about what drives you or personally motivates you to do this incredible work, I think that would be an awesome start to our episode.

Maria Axente: Thank you for having me, Alexandra. A pleasure to be here. How did I start? I've had an interesting career, to say the least, in that throughout the last, probably close to, 20 years I've continued to reinvent myself in various capacities. I studied computer science very early on, believe it or not. I felt I wasn't really prepared to go towards a more technical career, although in my home country, Romania, such a thing was seen as normal. In my class, for example, half of us were girls, and quite a good chunk of our peer group ended up becoming computer scientists or engineers. Therefore, it was absolutely normal for us to embrace a technology career if we wished to.

I chose differently, so I went to study business and management, because I always knew that I liked to deliver things. I liked to organize, make things better, and inspire others to do that. I ended up running businesses and transforming businesses, which was extraordinary if I look back, because Romania was just out of communism and there were lots of opportunities there. There were not a lot of specialists, so whoever had a vision would hire a brave youngster with little experience and say, "Hey, I bought a business. Go and run it," as my former boss used to do.

Then one thing led to another. I reached a point when I said, "Hey, I need to take on a different type of challenge." I came to the UK for an MBA, and I was pretty convinced that now I wanted to go towards technology. I wanted to make sure that I could combine my business acumen with my passion for technology. I ended up starting my career in consulting, around digital transformation and strategy.

When I joined PwC, I continued doing this in various domains, mainly focused on digital and emerging technology. PwC set up our Center of Excellence five years ago, which was a new venture, part of a wider business unit around harnessing various technologies: blockchain, VR, AR, drones. Part of this new setup around emerging technology was asking: how can we, as a professional services company, explore what AI means in the context of the organizations we support, and advise our clients? What do we need to account for beyond the actual engineering challenges that technologies under the AI umbrella will bring?

With my colleagues here and in the US, we started exploring; we started talking with the AI ecosystem that was forming around that time here in the UK. Gradually, we realized there is this layer of: how do you do it responsibly? How do you make sure that you account for all the implications? It's something we knew we needed to be focusing on. I was at the right time in the right place, to such an extent that I can say I'm a pioneer in my company, because it's something I have focused on gradually since I arrived in the AI team five years ago, and I've developed an expertise by doing it, exploring it, studying it, engaging with experts. We ended up building a Responsible AI toolkit that captures the expertise we've built with clients around the world.

Here we are, 2021, and I'm still riding high and still very passionate about it. You asked me what motivates me. I think it's the same thing that motivates so many around the world. How can we go beyond our personal interests? How can we make a difference around us? It just feels like I hit a bit of a jackpot, because I've been able to match not just what drives me as an individual but also what I'm good at. I found this sweet spot where I can bring my former self, with all my past experiences, my acumen around business and business ethics and governance, which is what I focused on during my MBA, together with my passion and energy, and create a package that so far seems to be quite sufficient. It's been a tiring five years. A lot to catch up on. A lot is happening in the field.

Alexandra: Absolutely.

Maria: In order to do a good job, you need to be on top of your game. You need to be able to follow what's happening, make sense of it, and be able to translate that for your colleagues.

Alexandra: I think that's crucially important and, of course, challenging given the fast pace of developments that are currently, and luckily, happening in this space. You mentioned that you found your personal sweet spot because you can reconcile what you're good at with what you're passionate about. What would you say is needed to be successful in a role like yours? Is it the diverse background you just mentioned, or are there other tips you can give to people who want to move into this space?

Maria: Oh, that's such a great question. I would say that each of us is unique, and we have a set of skills, but most importantly, we have a set of natural talents. When I found this notion of natural talent, I did a test to discover those talents. I think it was Gallup StrengthsFinder. I was amazed to understand what I'm naturally good at. Once I became aware, I said, "Hey, in order to be successful, you have to be honest with yourself. You have to discover who you are." I know that's way easier said than done, but once you understand that, you start to align who you are with what you can do. That's when magic happens. That's what has happened in my case. I found this alignment.

Sometimes, you might discover that actually, this world of Responsible AI might not necessarily be your cup of tea, but it's okay. We know that this is attractive because it's the best-paid job at the moment in the AI space. That's a domain where you can leave a mark on the future generation, but is it really for you? If it's for you, what can you bring from who you are as an individual to actually forge a career?

The reality is that it's still very, very early days. We will need people with diverse backgrounds. We need people with out-of-the-box thinking, with resilience, with the capacity to think critically, because ultimately, when we start exploring the moral implications of AI, it's all about being able to critically assess what will happen in various scenarios where we build artifacts that are very different from the artifacts we've built in the past. I would say, be honest with yourself. Be sure that you know what you're good at, and bring that to whatever field you choose. That's when you'll be extremely successful and you'll make a difference around yourself, whether it's the ethics of AI, or machine learning engineering, or whatever you choose to do. I think that advice actually applies beyond the AI domain.

Alexandra: Very wise words. I completely agree that diversity is definitely something that will positively influence the direction we're heading in with Responsible AI. I can also imagine that not only specialists but also generalists with a deeper understanding of various disciplines could contribute valuable perspectives and insights to the discussions we're currently having, within companies, on the regulatory side, and in all the different parts of organizations and parts of the globe where we're looking into how to make AI more responsible and ethical. Which actually brings me to my next question, before we jump into the more in-depth ones.

I think for a constructive discussion it's always quite good to talk about definitions. Both the terms Responsible AI and Ethical AI mean a lot of different things to different people. What's your definition of those terms?

Maria: We did spend a lot of time thinking about those different terms. We realized that what we don't do well in business is correctly framing a lot of these terms. When we started the AI Center of Excellence, we understood that defining what AI is, is really important for drawing some boundaries. What we have done is to say, "Hey, we're not going to go into the space of saying what AI is and is not from a conceptual perspective. We're going to define the boundaries of the technologies we classify as AI, so that we know what sort of internal organizational capabilities we're building, what type of people we're hiring, and ultimately, what products we are building using AI technology."

We've done this. It helped us a lot to clarify, for our colleagues, what we mean when we say AI. We also gradually understood that this type of approach is needed in discussions with our clients whenever we have projects. I think, at least for the last few years, there was quite a bit of a lack of clarity in our clients' organizations about where advanced analytics stops and AI starts. Because of that, there was a lack of clarity about who's doing what, and not having this leads to a bit of paralysis.

We gradually started helping our clients define where the AI is and define the use cases. With that, obviously, the big question is: what is Responsible AI? What is Responsible AI for us? We launched Responsible AI five years ago, and we were definitely the pioneers, the ones that came up with it first, but in the last few years we've seen a wide range of organizations using the term or positioning themselves in this space.

Again, following the same principle of framing terms correctly, we said, "Hey, let's make a separation." Let's think of AI ethics as a domain of knowledge concerned with creating a vision of a good life with AI. How do we use AI in a way that helps humans flourish and protects the planet? That's a vision, that's exploratory, and that's where we need to be working with philosophers, sociologists, anthropologists, all sorts of social sciences, to understand AI in context. Then, on a more practical level, which concerns those of us in the trenches: how do we translate this vision? That vision is sometimes more or less formed in areas where we have had the benefit of knowing the use cases, and where a lot of research has been done to explore those use cases and create domain expertise that allows us to quickly implement that vision and bring it to life. That's what we consider Responsible AI to be.

Once you understand the key ethical principles: how do you bring them to life? How do you create a robust governance structure, which is very much operational? How do you understand the key associated risks, and look at the design and use of AI in a way that accounts for the moral implications, with a new way of governing and risk management embedded throughout the development life cycle? That's a totally new way of building technology compared with the past. It requires expanding the boundaries of responsibility of the engineers, bringing into the fold different skills and different domains' expertise, to be able to reach the vision you need to set for each of those.

Having this clarity has, at least from our perspective, almost set aside the big debates about AI ethics. Those will continue, but they're more abstract, more about how we can understand what will be coming. It allows us to focus: if we are working on developing those use cases in those fields, this is what we need to be doing and this is how we need to start exploring, to create robustness in how we operate.

Alexandra: Makes sense. Would you say that Ethical AI is the more abstract part of it and Responsible AI is then how to actually put it in practice, or how would you draw a line between those two terms?

Maria: That's how I put it. Ethics is, yes, not necessarily more abstract; I would say more exploratory, more visionary. Responsible AI is taking this vision, once it's clarified, and acting upon it. How do we create rules that will allow us to do this right? Also, while creating those rules, how do we start changing the way we operate, the power structures, the mental models, to allow us to create behaviors that are sustainable?

Alexandra: Makes sense. What would you say are the biggest challenges of making AI responsible, and how can businesses overcome them?

Maria: There are a lot of challenges. I'm trying to slice and dice and see where we're coming from. Probably the biggest one, I would say, is that we need a new philosophy of computer science as a field. I think for many years, and continuing now, the way we teach our computer scientists and the way most of the community behaves follows the philosophy of "Can we do it?" It's something that gave us Silicon Valley.

There were no boundaries. There was pioneering and exploratory work. We ended up building technology with little consideration of the impact. We got to a point where now we need to stop as a society, as a global society, and say, "We need to rethink how we build technology." That's why I'm saying these challenges are probably much more profound: we need to move from a "Can we build it?" philosophy to a "Should we build it?" philosophy.

We've been going through this process for the last few years, since we started to acknowledge the dark side of AI, and we've seen the misuses, the under-uses, the abuses of AI. I think this type of philosophy now needs to be translated very quickly at different levels. On one hand, at the university level, we should start teaching the ethics of technology not as a module that you study somewhere along the way, like an optional one, but as part of the mainstream subjects, embedded throughout. All the way down to how we work with our data scientists: how do we speak in a language they comprehend, so they see that the impact of what they're building is much more profound? While you can't shovel all this responsibility onto their shoulders, because ultimately they are individuals, rethinking those boundaries of responsibility is important. That brings me to the second challenge. The fact that in the long term-

Alexandra: Before you continue, I just wanted to point out that I so enjoyed the analogy you made in one of your earlier interviews this year, where you said data scientists should really have the same duty of care that doctors do, and the same awareness that what they're doing impacts human lives. Of course, I also agree that it's not the sole responsibility of data scientists, and that truly making AI responsible and fair is a conversation that has to be led at a much higher and more inclusive level, not only within an organization but also with society as a whole. I really like the analogy you draw between doctors and data scientists.

Maria: Again, if you continue this analogy, if you think about it, yes, you almost have a Hippocratic oath for data scientists, for them to be really held responsible, especially when they are processing personal data, because that data is not just numbers, it's people's lives, and you have a profound impact on people's lives. Similar to what doctors do, in some cases it might be a lifelong type of impact.

Secondly, we don't just leave the doctors by themselves. We create a system around practicing medicine that supports the doctors, that assists not just the doctors but the patients. We have to do the same for data scientists. That's what brings me to the second challenge. This is transformative; transformative for the organization.

The fact is that we are not even acknowledging in those conversations that doing Responsible AI is profoundly disruptive. You need to train people to think differently. You need to work and build processes differently. Also, when you are using AI, you have to have a different mindset. A lot of AI is entering organizations via procurement, and we are absolutely unprepared to scrutinize those systems. From recruitment systems all the way down to chatbots, how can we work with AI in a way that allows us to understand the risks we are signing up for, but also set up a dual relationship between humans and machines?

I think the third biggest challenge is the pace of adoption. Over the last few years, and especially the last year or so, we've seen companies investing, and they will continue to do so, in building and using AI. It's not as if we have a lot of time to think about it. While many might say that an AI winter is coming, and that might be possible on the research side, when it comes to actual adoption we have a lot to work with in the upcoming years. Things will go even faster than in the past, because once companies have embarked on their digital transformation, are able to really treat data as their main asset, have more data across the organization, and create the right infrastructure, suddenly you'll see that we actually have a solid foundation for applications of AI to be deployed.

How can we turn all this around as quickly as we can, and make sure that when it's needed, we actually pause and press the brake, so that we are not moving too fast ahead and breaking things? We've seen what has happened with Silicon Valley; maybe it's time to move ahead as quickly, but also as safely, as possible.

Alexandra: Yes, absolutely. I agree with that, and also with the second and third challenges you mentioned: not being prepared, and the fast pace. What can be done about it? Especially with your background in change management and digital transformation, you know how hard it is to change procedures and change people. What can businesses actually do to make sure that there is enough time to pause, reflect, and not move too fast?

Maria: That's the biggest challenge. It's in the way you ask the question; pause and reflect. That's exactly what's needed, and businesses think that this "pause and reflect" is against the modus operandi of the capitalist society. You operate in highly competitive markets, and you don't have time to actually pause and reflect, and that's exactly what's needed in the world of AI.

While we don't have the benefit of only pausing and reflecting, as academia has, they actually have a lot for us to work with coming from their side. How do we incorporate it? The research has settled and has now started to feed into regulation and public policy, and we can actually start using it to make a difference day-to-day.

Yes, transforming an organization is a long-term journey. We don't have to do it waterfall-style. That's why we have our agile philosophy: start doing it in sprints, focus on identifying the areas where the biggest difference can be made, and start applying the interventions in those areas, the ones we've identified using the impact assessment. Impact assessments are probably the craftiest tools we've seen out there. Everyone talks about ethical frameworks, but they are fuzzy. When you say framework, you think of something that has multiple dimensions and different structures, and is created with one organization in mind, but sometimes they're difficult to implement.

An impact assessment, meanwhile, is a questionnaire set up to elicit the impacts and implications of certain decisions and actions. In using it, we also have a bit of history, because we have the data protection impact assessments that are a legal requirement under the GDPR. We have other types of assessments, like human rights and environmental ones. So how can we learn from how those impact assessments have been carried out in other fields to actually make a difference?

What we have observed is that there is a certain willingness from the data scientists and the technical teams to say, "Yes, we know that this needs to be done, but can you support us by giving us a list of questions or issues that we need to consider, and help us translate those into design and governance requirements?" That's exactly why impact assessments are so valuable.

If anyone is looking to make a start and move beyond having boards and ethical principles, for example, which we've learned a lot of organizations have done, and move into something more practical: start implementing impact assessments, and experiment with them first. You'll see how running a continuous impact assessment, having that impact assessment mindset, will do wonders. Not only do you start seeing all those issues translated into proper design requirements, including the risk side, you will also start working with a different mindset. Yes, we need to train more people to understand the ethical implications of AI, but we also need to do it by empowering people to explore by asking questions: "If I do this, who's going to be impacted and what will that impact mean?" or "What can I do to limit the possible negative impact?" It will empower people to think differently, and I think that's something really important that we sometimes miss in the ethics field.

It's not just a push exercise. It's about pull as well. It's about empowering people to take action and to feel included in this conversation. When you do that, that's when magic happens, because people will end up doing the right things, and there are a lot of people out there who are motivated to do the right thing, so why not tap into that?
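
To make the impact-assessment approach Maria describes more concrete, the sketch below shows one hypothetical way to encode such a questionnaire in Python, so that yes/no answers translate directly into design and governance requirements. The questions, requirement wording, and structure are illustrative assumptions only, not PwC's actual toolkit.

```python
# Hypothetical sketch: an AI impact assessment encoded as data, where each
# "yes" answer raises concrete design/governance requirements.
# Questions and requirements are illustrative, not a real toolkit.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Question:
    key: str
    text: str
    requirements_if_yes: List[str] = field(default_factory=list)


IMPACT_ASSESSMENT: List[Question] = [
    Question(
        key="personal_data",
        text="Does the system process personal data?",
        requirements_if_yes=[
            "Complete a data protection impact assessment (DPIA).",
            "Document lawful basis and data minimisation measures.",
        ],
    ),
    Question(
        key="group_impact",
        text="Could outcomes differ across demographic groups?",
        requirements_if_yes=[
            "Define fairness metrics and acceptance thresholds.",
            "Schedule bias testing before and after deployment.",
        ],
    ),
    Question(
        key="significant_decisions",
        text="Will outputs significantly affect individuals (e.g. hiring, credit)?",
        requirements_if_yes=[
            "Provide a human review and appeal route.",
            "Assign a named risk owner and escalation path.",
        ],
    ),
]


def derive_requirements(answers: Dict[str, bool]) -> List[str]:
    """Turn yes/no answers into a list of design/governance actions."""
    actions: List[str] = []
    for q in IMPACT_ASSESSMENT:
        if answers.get(q.key, False):
            actions.extend(q.requirements_if_yes)
    return actions


if __name__ == "__main__":
    # Example: a recruitment-screening model that uses personal data.
    answers = {"personal_data": True, "group_impact": True, "significant_decisions": True}
    for action in derive_requirements(answers):
        print("-", action)
```

The point of the sketch is the mindset Maria describes: the questionnaire is run continuously, and every answer pulls a concrete requirement into the development backlog rather than leaving ethics as an abstract framework.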

Alexandra: Yes, absolutely. I think that's very good and very actionable advice you just shared with our listeners. I also like the statement you once made about AI ethics being a team sport, and I therefore think it's crucially important that everybody at the table is empowered and has the right tools to actually contribute to making AI more responsible.

Talking about tools, in your interviews you also mentioned the toolkit that the Center of Excellence developed for Responsible AI. What does this toolkit consist of? Is it everything that organizations need to make AI responsible, or are there limitations to these tools?

Maria: I was thinking, reflecting on what came first: is it the toolkit or the philosophy? I would say it was chicken and egg at the same time. As I said earlier, we wanted to see how we could bring the experience of PwC into building and managing AI in a way that accounts for benefits and risks. We created a governance framework that looks at a longer life cycle for AI, one that starts with corporate strategy. It's not just about focusing on the application development; you have to go deep into the organization to have a comprehensive life cycle of AI, and consider not just the upstream part of the life cycle but downstream as well: what happens after you deploy, when you're required to do 24/7 monitoring of the solution.

Then, further down the line, we started having various conversations and client engagements around the world around explainability and fairness, about supporting our clients to make sense of ethics. We came together as a global team and started iterating on how we could bring it all together, knowing that a framework on its own is very hard to work with. We should know that, because that's what we do; we create frameworks for a living. So we said, "Let's create something that is flexible, like a Lego toolbox where you can go and pick your own bricks. Whatever issue you have around the benefits and risks of AI, you can start from whatever matters to you most. Then you can build a roadmap, a journey that will lead you to that vision of ethical AI."

We created a series of assets; we have probably close to 30 now. They're both consulting, or non-code-based, and code-based. The code-based ones cover the usual suspects: fairness, explainability, robustness and security, data privacy. It's very much what we've seen other companies doing in this space. How do you test for fairness? How do you test and ensure that there's a certain level of explainability? Those modules have not been as successful, and we now know why; I think it's basically because they enter the space of auditing. When you say "test for fairness," it's some sort of auditing or quality assurance against a specific set of indicators, and companies are not there yet.
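
As an illustration of the "test for fairness" style of code-based asset Maria mentions, here is a minimal, hypothetical sketch that computes a demographic parity gap (the difference in positive-outcome rates between groups) over a toy set of model decisions. The metric choice, threshold, and data are illustrative assumptions, not part of PwC's toolkit.

```python
# Hypothetical sketch of a simple fairness check: compute the demographic
# parity gap across groups and flag it if it exceeds an illustrative threshold.

from collections import defaultdict
from typing import Iterable, Tuple


def demographic_parity_gap(records: Iterable[Tuple[str, int]]) -> float:
    """records: (group_label, predicted_outcome in {0, 1}) pairs.
    Returns the max difference in positive-outcome rates between groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values() if total > 0]
    if not rates:
        return 0.0
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Toy screening decisions: (applicant group, shortlisted?)
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap = demographic_parity_gap(decisions)
    THRESHOLD = 0.2  # illustrative acceptance threshold
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > THRESHOLD:
        print("Flag for review: outcome rates differ materially across groups.")
```

A check like this is the easy part; as Maria notes next, using it as genuine quality assurance presupposes the governance clarity (owners, processes, thresholds) that many organizations do not yet have.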

I think it's fair to say that auditing is something that will become more and more important, but in the world of AI, companies are not yet ready to go there, because the whole world of governance and risk, the processes around development, are not stable enough yet. You can't really go and assess until you have clarity: clarity about organizational structures, about processes, about roles and responsibilities.

That's why we have the second set of assets, which supports organizations in building those capabilities: being able to define roles and responsibilities, operating models that include AI as a Center of Excellence, or AI as a technology that's being developed and used across the organization. Also the critical questions of ethics. What are the principles an organization needs to sign up to? How do you translate and contextualize them, and get different groups in your organization to sign up? How do you start creating operational rules that allow those values to be translated into design and governance requirements?

In this category, we have a data and AI ethical framework. We have a regulatory and policy observatory. We have a national AI strategy observatory that keeps track of various national strategies, so we can observe the continuum of policy, standards, and regulation for different industries in different geographies. We have risk management tools: a risk control matrix, risk steering, a risk taxonomy. We've also developed a procurement framework. All of these were developed as we went: we started with the toolkit, we took it to clients, we helped them solve some of their problems, and then we realized there were certain gaps.

It's almost like an exploration that continues. Take AI procurement, for example. We realized that most AI applications are not developed in-house; they come in via procurement. We started thinking, "What do our clients, and we ourselves, need to be thinking about when we acquire third-party AI solutions, and what happens when AI features are added to existing platforms? What do we need to consider? How do we scrutinize this? How do we train our people when they themselves, in procurement or in the lines of service in the business units, are acquiring those tools?"

This is the toolkit. It's a collection of different assets, and we put it under the umbrella of Responsible AI. It's a toolkit but also a philosophy. A philosophy because it allows us to keep iterating on how we should embrace it and how we should be thinking about it. The reason I said it was chicken and egg is that the philosophy allows us to go far and explore with our partners in academia and think tanks, and also to translate what we find in this exploration into something that is really meaningful and hands-on for us.

Alexandra: Absolutely. That sounds like an incredibly valuable resource. I think it also shows how holistically you need to tackle this problem of making AI responsible. I just recalled that in one of our earlier conversations, you pointed out that most of the AI governance frameworks out there tend to focus only on the technical outcome. Can you quickly elaborate on why that's an issue and why it's not sufficient?

Maria: It's important to understand AI a little bit differently. We have two types of conversations. AI is a set of technologies, and when we say a set of technologies, we think machine learning, deep learning, NLP, simulation, optimization, what have you. On the other hand, we also have AI as a narrative: a narrative about how those types of technologies, which have agency, which are autonomous, adapt to the external environment, and are stochastic in nature, will transform our lives, ourselves as individuals, and our social interactions. That's where we started getting more visibility from research: what happens when you use this type of application? We also started acknowledging what happens when you have AI at scale, courtesy of Silicon Valley and the big tech companies.

As a result, we started seeing firsthand what it means to have those self-learning algorithms in social media, and in many other parts of the internet, even though we don't yet have enough scientific evidence to allow policymakers to intervene and say, "These are the unintended consequences we have." Bringing back this example of how self-learning algorithms impact individuals and society around the world in the use case of social media: how do we start thinking about what would happen in a case that might not be as global in reach, or have the scale, that social media algorithms have?

While we still need to keep an eye on the engineering-type challenges around AI, it's about acknowledging that AI consumes huge volumes of data, and that data doesn't fall out of the sky. It's not a natural resource simply being tapped. Big data is about people, and it's collected, processed, and stored; it has a lot of hidden costs we don't acknowledge, a lot of hidden labor we don't account for.

How do we make sure that we understand that AI is neither artificial nor intelligent? It's not artificial because a lot of physical resources contribute to creating AI: the natural resources that give us the infrastructure, the storage, the connectivity, our devices, all the way to the data, the energy to process that data, and also the hidden labor.

At the same time, this data is a reflection of the structures that exist in our society, the power dynamics, the power structures. That alone is probably the biggest indicator: what we end up having in a data set is a reflection of a political context, and we end up working with certain structures, or systems of oppression. We end up encoding inequality, as many scholars have pointed out. How do we make sure we become aware and start reflecting: "Do we actually really want to do this? Do we need to wait until we wrongfully send people to prison and destroy lives before we pause and understand that this will happen?" When you use a data set, it has some politics behind it, and when you apply an algorithm on top of it, it will not only codify an existing inequality but automate it and scale it, so that we end up trapping people, not just today or tomorrow, but for the rest of their lives.

That's why I think it matters that a lot of people develop an understanding of how data is collected, go beyond the narrative that's been served to us by the big AI organizations, and start looking through the looking glass, beyond the public narrative. How exactly do we build what we call AI? How much computational power and storage is needed, how much data do we need, what are all the ingredients that go into building AI, and what are all the costs and implications?

Alexandra: Absolutely. I think that's so important: to really ask the question, as you pointed out, should it be built in the first place? And if you answer yes to that question, to thoroughly look into all the ingredients you mentioned, especially the data, to see if there are historic biases in there that risk ending up in the AI system and perpetuating discriminatory treatment of certain groups. This is also one of the reasons why we as an organization are so passionate about fairness in machine learning and work a lot on fair synthetic data, which has the ambition of producing more balanced and fairer data sets, to stop this perpetuation of bias and make ethical and fair machine learning easier to do.

Another component you mentioned was risk management. I recall that in the conversation we had in preparation for this podcast, you mentioned that it would be crucial for organizations to get better at risk management when it comes to AI, and that this is potentially an area where financial services and insurance organizations will be at an advantage, because they have a long history of risk management. What would be your advice for other industries on how to catch up and really be in a position to manage the risks of AI well?

Maria: This will probably require the most heavy lifting in terms of building organizational competency for most organizations, because the ethics side, which mainly happens in the early stages of design, is something that can be done without a lot of investment. When it comes to compliance and 24/7 monitoring of the solutions, I think that's where organizations will need to invest.

I think, as we discussed in our previous conversation, the European Commission's AI Act provides for an ecosystem of compliance that needs to be built. Many organizations will very quickly understand that complying with the future regulation means organizational cost. Again, going back to what we said about changing the mindset of how we develop technology, it means putting operating it on the same level as building it, because it's equally important, ultimately because of the stochastic, probabilistic nature of AI.

That's where we need to start looking at what sort of compliance function we have, who's doing it, and how we can train professionals or rethink how we do compliance in a way that allows us not only to respond to regulatory requirements but to be ready to be scrutinized, audited, and checked. There's also a voluntary need; I've seen a lot of organizations like yours saying, "Hey, we guarantee a certain level of transparency in how we build our products in order to showcase and demonstrate how well we align ourselves with a set of key ethical principles, but at the same time we are able to operate in a robust way." At the moment, there is still a lot of trial and error for many organizations that are not AI-first or are not technology companies. Ultimately, where we would see a lot of progress is not with small organizations, but in how you make it work in the context of public services or large organizations that deliver value where AI will be used to optimize the delivery of that value. I think looking at compliance allows us to start looking at who should own the risk.

I think, as with ethics, because risk management is in fact one of the main disciplines of ethical management in organizations, someone has to own it, and that's very much linked to the nature of the business, say financial services or professional services. You'll have a chief risk officer who looks at all types of risk across the organization. When it comes to owning risk, especially technology risk, it's really important to assign an owner, and from there to work out how risk manifests itself in various parts of the life cycle.

How do we make sure that we not only have a good understanding of the types of risk and how they might manifest, but also start designing controls that prevent those risks from materializing? I keep saying to many of my colleagues that, in fact, the most efficient risk-mitigating technique is to think about ethical principles from the beginning, from the start. Operationalizing ethical principles is the best mitigation of AI risk.

If you look at the risks of AI, the vast majority are ethical risks: the risk of bias, opacity, lack of control, lack of governance, instability, security and hacking, and the risk of automating all sorts of wrongs, biases, and inequality, amplifying the harms that already exist out there. By thinking proactively about how you actually implement it, you significantly reduce those risks. You said, "You guys are thinking about how we create fair data sets, how we make sure that we consider fairness in the design early on." That's the best way to mitigate risk.

By the time you get to the other end and say, in the AI risk nomenclature, we have the risk of bias and discrimination, you will most likely have put in place so many different types of controls, seamlessly embedded in the development life cycle, that you don't need to do the heavy lifting of designing controls specifically for it and going back to change the whole development life cycle. While many organizations have not really made this connection between risks and ethics, that's where I think there's another hidden gem we've discovered: do ethics right, and you'll have less risk to manage.

Alexandra: That's a very valid point, and actually my next question follows from what you just said, that many organizations seem not to have made this connection yet. I'm curious, because you already pointed this out in one of your earlier podcast interviews this year: although many organizations are super eager to apply AI and to play around with it, even if for many it might only be at the POC stage at the moment, they seem to treat the ethical component and the risk management component somewhat like an afterthought.

When you shared this point, it reminded me of privacy and what we saw with GDPR. Before GDPR, privacy was often treated as an afterthought in the product development life cycle: when product development was nearly done, legal was involved and asked whether everything was right with it or whether something needed to change. With the onset of GDPR, the concept of privacy by design became more widely adopted.

I was curious to get your perspective on whether the upcoming AI Act in the European Union and other AI legislation can lead to a similar shift in regard to ethics, so that we will have ethics by design: that how to design an AI system responsibly will be one of the first thoughts organizations have when they start thinking about deploying AI, rather than concentrating only on the technological components.

Maria: Bringing in ethics by design as a strategic initiative is actually the only way of doing AI. As I've said so many times, responsible AI is the only way to build AI. If it's not responsible, don't touch it. It's been our mission, the mission of the AI ethics community, for a while now to make the case for why we need to do it. On one hand, we've received a lot of support from the GDPR: the fact that we have had this experience, and that many organizations understood that in order to remain competitive, and not just comply with regulation, they need to be more agile.

They need to be thinking of doing this as BAU, and to find the space to be more innovative. By doing ethics or privacy by design, you almost say, "We completely change how we think about privacy. Because we create robustness in our processes and we operate in an organized fashion, we have enough room to actually be creative. We have space to think: what else can we do if we've already dealt with privacy? Not only do we comply with regulation, we also have an advantage, because we do it in a way that goes beyond compliance, into something that few are thinking about."

I think this first experience helps us tremendously. We've seen a lot of our clients, rather than jumping straight into responsible AI, because for many of them it's a big stretch, look at something a bit more paced in terms of journey and say, "I have experience around data protection and I will expand it to data ethics." What we tell them is, "That's fantastic, and we've done it ourselves. By doing data ethics right, you start preparing for AI ethics." A lot of the principles, and the way you implement data ethics, are in fact similar to the principles of applying AI ethics.

You are looking to expand those ethical principles to all the uses of data, not just the ones that fall under the AI umbrella. The second big aid we've had comes from the misuses and abuses of AI. We've had quite a few of them. Certain use cases went viral, and we use them as what-not-to-do examples in our presentations, from the Amazon discriminatory recruitment tool to the Apple Goldman Sachs card, and a few other examples.

I think the reality is that the biggest boost we got for why we need to do this is actually social media. We have had so much exposure and so much interest around this topic because suddenly a lot of pennies have dropped about the cataclysmic implications of doing this as an afterthought. I'm sure that many of the engineers at the YouTubes, Googles, and Facebooks of the planet have thought about it. But they thought about it at the end, as they were taught to. That's how ethics was taught in computer science schools, as an afterthought, and they went and applied it that way.

This is what we've got. We have this example of social media, and turning it around is really difficult and will take us years. But in the background, it is the perfect example, or motivator, of why ethics needs to be considered up front. I think to a certain extent it's more powerful than regulation. Regulation is treated as a bit of a stick. People will do what they have done with GDPR and say, "Then we are not going to do this. Pause all the initiatives, because we don't know what to do with GDPR, it is too stifling, stop altogether."

The case of the use of algorithms in search, and mainly in social media, has actually shown the implications, and not only to a small number of people; the news has now traveled to almost everyone. How many people have watched the famous, or infamous, Social Dilemma on Netflix? Regardless of the fact that it portrays the same usual suspects, successful white guys from Silicon Valley, to the detriment of the amazing women and people of color who do this work, that documentary has done a huge favor to all of us in this field by saying, "This is a social problem, a global problem. We all need to take a stand."

It's really funny because at the end, all of them were asked, "What can anyone do to actually not be caught up in this black hole?" The answer was as hilarious as, I would say, it was probably intended to be: "Turn off your notifications." I'm sure a lot of people, especially the younger generation, got a good understanding of what happens in the digital world and said, "Yes, sure. Just turn off your notifications and you'll be safe from the march of the algorithms."

I think this example, if you turn it around, and set aside whatever actions we need to take to change social media altogether and prevent the online harms, gives us the ammunition to say, "They treated it as an afterthought. You see what has happened. If we don't want to go there with everything else we're building, let's do it differently. Let's think about it."

Alexandra: Yes, that's a good point.

Maria: I think it's working. It's working. Look at facial recognition.

Alexandra: Absolutely. We have definitely seen some positive developments. I was just curious, since you said you consider regulation more as a kind of stick that organizations, of course, want to avoid. Aren't all the negative examples we've seen in the social media space also more of a negative incentive to start with responsible AI, because, of course, you also want to avoid that reputational damage? Which brings me to my next question: can you also think of some positive incentives for doing AI ethically and responsibly?

I know that you've done some work with the World Economic Forum on the business case for AI ethics, for which I believe the C-suite was the target audience. I think that's an incredibly interesting initiative I'd be curious to learn more about. But also, are there other positive incentives, and do you think responsible AI can even become a competitive advantage for organizations?

Maria: It is a competitive advantage, but only in the short term. I think this has to go beyond competitive advantage. This is about BAU, business as usual. As always, when change needs to happen, there's a dynamic duo of carrots and sticks. That's how we function.

Think about when we were children and our mom wanted to convince us to eat carrots or whatever vegetables. She would say, "You have to eat the carrots and the peas before you get dessert." The stick in that was eating the carrots and peas, which is disgusting. You would do it because afterwards you would get the carrot, which in this case was the chocolate cake. In the end, the biggest motivator is that you'll become a better person because you've listened to your parents. It was a win-win situation.

I think we need to think about how to drive change by using those instruments, both carrot and stick, in parallel rather than at the same time, and being able to alternate between the two. First, acknowledge that doing the right thing has always been difficult to put into numbers. When I was studying business ethics during my MBA, one of the challenges was: how do you demonstrate the business case for business ethics? It's difficult. How do you put on the balance sheet the fact that you don't want to do business in certain countries, or that you don't want to increase your carbon footprint? But we are living in a time when doing the right thing, more widely than just in technology, is becoming the norm.

COP is happening in Glasgow these days. The whole of ESG is now becoming one of the main themes in business, where we start aligning business operations and strategy not just with a discrete set of objectives that support creating shareholder value, but going beyond that, into people and planet. As a result, it almost signals that we don't have to be as precise in defining those benefits. We can think about those benefits in different terms.

That's a huge opportunity for us in the ethics field: if this happens at the higher levels, at society level, with the realignment of businesses to include profit, people, and planet as key indicators, that means those are the types of carrots we should be aiming for. If you start seeing AI aligned to deliver towards those types of KPIs, or AI as the main driver delivering a positive impact on the planet or on people, then it's going to be easier to demonstrate.

When it comes to the sticks, they are necessary for multiple reasons. It's not just the fear of incurring some risk as an organization, but again, being able to see beyond the carrot and the stick, having vision. I appeal to a lot of visionary business leaders to understand that once you put this mechanism of change, with carrot and stick, into action, there's a vision you're going towards.

You change because you want to achieve a vision, and going towards that vision is bigger than the carrot and the stick. If you start thinking about that vision: what sort of business do you want to be in 5 or 10 years' time? What sort of impact do you want to have in society? Do you want to be more of a B Corp type of company, not only focused on your industry, but actually looking at the impact you can have in society? Then it's easier to start asking, how do I get there? If I want to get there, I need to change. And when I say I, I mean myself as an organization: we need to change.

Therefore, how do we change? How do we accelerate? If we want to go faster, maybe we use the stick, because that's probably the most efficient one: regulation is coming, policy is coming, and you're going to be harmed because customers are going to demand it. If you want a more sustainable approach and want to bring people along, let's think of more carrots: the fact that we'll develop products that have a positive impact on society. With that, we end up motivating our employees. We have seen concrete examples of computer scientists graduating these days saying, "I don't care about the money or being paid by the big tech companies."

Alexandra: I want to have purpose.

Maria: "I want to have meaning in my life. Yes, I want purpose." I think that's a huge opportunity for many organizations if want to look to attract talent, valuable talent, scarce talent. Doing the right thing and not just saying it. People are smart enough to read beyond PR statement. Actually, finding a way to transparently, consistently demonstrate that they are doing the right thing and everyone wins.

Alexandra: Absolutely. I would say that's definitely a big positive incentive for doing the right things with AI. Coming back to the initiative you work on with the World Economic Forum, can you share a little bit about that? Are there more concrete examples of the business case for responsible AI in there, and what's the stage of this work at the moment?

Maria: The World Economic Forum does a fantastic job of creating guidance for business leaders around the world on different topics. Two years ago, they drafted a toolkit for board members, boards of directors, around the deployment of AI. Over the last year or so, we've been focusing on creating something for the C-suite. How do you create guidance for the C-suite to understand the full complexity of AI, where AI is coming from, and how to manage it from an executive position? As part of that, we originally had a chapter on ethics, which we wanted to be the main chapter of the work.

Then we realized that we needed to bring it down a little bit for people who might be intimidated by ethics. We ended up bringing the chapter on risk and governance together with ethics and created the responsible AI chapter. When I say chapter, we are creating guidance that will be published probably by the end of the year, hopefully. What we have done in that chapter is, on one hand, frame various terms, including what risk is versus ethics and what AI is versus responsible AI, and so on, in order to ask the biggest question: why do companies have to do this?

We went down this path of explaining that it's about carrot and stick. When it comes to the stick, it's not just about regulation, which will come more slowly, but about public policy that creates or signals the direction regulation is going in certain areas. It's about how you create this sustainable change mindset using carrots and sticks. We described very much what I've been talking about for the last five minutes; I think the extra thing we said is that it's really important to acknowledge the cost of change and treat it as a change. If you do that, then you have more chance of actually navigating towards that vision, as I said earlier.

If you treat it as a side effect or a side exercise, or something that comes naturally, and you just give free rein to a few pioneers in the organization to run it, you're not going to get far. Think of the examples we have seen from both IBM and Microsoft. The World Economic Forum published some in-depth use cases of how those two organizations are implementing responsible AI principles. You will see that they have actually approached it as a profound and systemic change exercise: the way they set up different parts of the organization, the way they work, the processes, the people behind it. The guide and the toolkit, combined with those use cases, are very useful resources for anyone thinking of nurturing this transformation and making it their competitive advantage.

As I said, responsible AI as a competitive advantage will only work for a limited period of time. I'm saying this because I'm hoping it will become the norm. We won't be competing over whose AI is more responsible; all AI is going to be responsible. We'll have other indicators to optimize other than doing the right thing.

Alexandra: Yes, I also hope that we will live in a future where that is the case. You already mentioned Microsoft and IBM. One of my questions regarding responsible AI is, of course, also about the status quo in enterprises. With your background in professional services, you're in the privileged position of getting quite good insight into the current state. Do any examples come to mind of companies that are currently implementing ethical AI principles, or do you have some best practices to share with our listeners?

Maria: Definitely look at those two companies. Microsoft has been doing a fantastic job of thinking about how to transform the way they operate, aligned with responsible AI principles. They've probably been thinking about it for a few years now, and now they are ready to share this transformation because they have something to share. It's not just a set of principles or policies; now they have feedback on how it has worked and can say, "We might be on the road towards developing best practices; this is how we have done it, and these were the challenges we encountered."

The World Economic Forum has, as I said, developed some in-depth use cases, so there's plenty of material there to look deeper into those two examples. I'm sure this is part of a series around responsible technologies; they will probably bring to life more companies, with their best practices and how responsible technology is being developed worldwide. The biggest caveat is that those companies are tech companies. There are a lot of open questions about how you deal with companies that are non-tech. They already have a certain structure, a certain corporate governance approach, a way of doing business ethics.

I think my key learning here is: try to build on what you have and adapt it, rather than create a different structure or start from scratch. Rely heavily on the structures you have, and augment and prepare them to respond to the challenges of AI. Take, for example, the ethics committees; many organizations will have those for their day-to-day business. How can you make sure those committees are prepared to assess or discuss AI applications and their consequences? How do you equip your boards of directors and committees with the right knowledge and expertise so they can make the right decisions?

Alexandra: Yes, I think that's a very good point, and also a more sustainable one, as opposed to trying to change by starting right from the beginning when you may have decades' worth of history as an organization. Thank you so much for everything you shared regarding responsible AI. Now, I would love to get your take on auditing AI systems.

Especially since you just published the second in a series of papers on ethics-based auditing of automated decision-making systems, together with Luciano Floridi from the University of Oxford. Can you elaborate a little bit on what these papers are about, why audits are important to ensure that AI is done responsibly, and maybe also what the limitations of audits are?

Maria: That's a question that probably will take us a whole new podcast.

Alexandra: Okay, then.

Maria: I will try to summarize it and say that it's important now for us to go and check the quality of what we have done. A lot of the conversation has been focused on creating the right structures and putting in place the right processes and tools to allow responsible AI to be built. Now it's time for us to start asking, "How do we measure what we have done, when measuring is not that obvious?" That's where the whole audit of AI, or assurance of AI, comes into play. How can we go and measure the type of impact we had, and be able to rectify it if that impact does not accord with whatever standards have been defined, either internally or externally?

Well, the world of auditing of AI, or assurance of AI, is at a very, very early stage, and there are not that many groups looking into it yet. That's partly because audit requires you to be extremely precise. That's why you start by describing what AI actually is. Are we talking about auditing or quality-assuring data across its supply chain, or are we auditing algorithms? What exactly are we talking about when we measure, compare, or assess? I think getting more precise is something that those working in the field of auditing of AI are looking into: making sure we understand what the term audit means in the context of AI.

Can we actually use the term audit as long as we do not have legal frameworks that will translate those findings into compulsory activities? How do we make sure that we create the right foundation for those applications to be monitored in various forms and modalities, both internally and externally, formally or informally? The work we've been doing with Luciano at the Oxford Internet Institute is actually to look beyond the various instruments that could be deployed to monitor or to quality-assess applications.

There are the engineering tools to test for bias, for example, or for explainability, or to understand the whole range of decisions being made across the different stages of the application, and then we ask, "How do we bring it all together? How do we make sure we understand that when it comes to implementing ethical principles, the interventions need to be bigger than the technical ones?" We need to go and start assessing how well we have done in implementing, and aligning the whole of operations towards, a set of principles and moral north stars.

That's why we came up with a new concept. It's a governance mechanism called ethics-based audit, which allows us to have a holistic view of all the different interventions and how well those interventions have delivered responsible AI. It's almost a bit of a reverse engineering of operationalizing AI ethics. If operationalizing AI ethics is about having principles and frameworks and policies and training and risk management, ethics-based audit goes back and asks, "If you have had a series of those interventions, what gap remains so that you have a better chance of achieving your responsible AI vision in the long term?"

With this holistic approach, it's not only about understanding all the different types of intervention that an organization needs to have in place; it's about how those interventions connect with each other. What's the correlation between having a board, ethically aligned training, and ethically aligned product development? What's the correlation between internal audit and the board, for example? You create all those different feedback loops that are still required for us to understand the actual impact of AI. While we are still at an early stage with our research, at its core it is about bringing it all together, making sure that organizations start looking at the whole set of issues in responsible AI in a holistic way.

What we have seen, and I think that's the reason we are motivated to do this work, is that there are a lot of linear, point-by-point approaches in this space. A systemic approach, a systemic mindset, is needed for us to really be able to deliver when it comes to audit of AI. That, as I said, is the core of what motivates us: how do we bring it all together? How do we create a new mindset in approaching responsible AI that sees all the moving parts coming together and understands the systemic correlations, rather than just separate actions that happen in a piecemeal way?

Alexandra: Did I understand you correctly that, since as a society and as organizations we are still so early in moving towards AI at scale and responsible AI, and we don't yet have a very clear and precise understanding of exactly which steps need to be taken to reach this vision of responsible AI, it's more about creating the conditions and the system that play together and will quite likely put us in the best direction to actually achieve this vision of responsible AI?

Maria: Exactly. You're absolutely right. You've summarized it better than I did. It's about conditions. It's about creating the right context where AI can be developed. That's the trickiest part, because beforehand you don't know what the intervention points will be. All the intervention points must exist, even if you might say, for various reasons, that some of them are not important for you as an organization. The reality is that all those intervention points need to be in place, either formally or informally, for these conditions to actually deliver the outcome. Yes, it's about creating the right context that will allow AI to manifest itself in a positive way.

Alexandra: That sounds very interesting. I'm already very much looking forward to reading the second paper, and the rest of the series, in detail. I have the feeling, Maria, that we could continue this conversation for a few hours, but we are coming to an end and need to close. Therefore, my two last questions for you. The first is just a very short recommendation for our listeners: who, in your opinion, are the most influential responsible AI leaders that our listeners should follow on social media? They should definitely follow you on LinkedIn and on Twitter because the resources you share are incredibly valuable, but who else can you recommend?

Maria: Oh, how much time do we have?

Alexandra: Not much, not much.

Maria: Not much.

Alexandra: Maybe the first three that come to your mind. Of course, there's so many people.

Maria: There are a lot of wonderful people doing work out there. I've been really fortunate to meet a lot of them and call them my friends. While some of them are closer to my heart, others have equally valid points. I think it's more about exploring who speaks best to you. I am a big fan of Kate Crawford, and her Atlas of AI is an absolute must-read book. She's one of those researchers who has been able to bring together the various research themes that have emerged around AI in the last few years and translate them into language for laypeople. She writes beautifully.

I like Stuart Russell a lot. He's one of the big pioneers of AI; he's been in this for 35 years, believe it or not.

Alexandra: Hard to believe.

Maria: Yes. He's an engineer at heart, and he comes with this in-depth technical understanding of AI, so he's able to look at things a bit differently, from an engineering perspective. I will highly recommend Stuart Russell and his book Human Compatible. Who else? I would say not necessarily people, but organizations that I think do fantastic work: the Ada Lovelace Institute here in the UK, and the Institute for Ethics in AI at the University of Oxford. Probably my favorite group of researchers out there is the CFI, the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

They are looking at the implications of AI for our intelligence, for ourselves as humans, and for the dynamics of society. Not only do they have exciting strands of research like AI narratives, the history of AI, and gender and AI, I think they also have this gift of expanding people's perception and mind about what else we need to be thinking about. The last thing I wanted to say about people to follow is: go towards people who will challenge the status quo and allow you to see things differently. You might even think that this is not necessary, why should I be concerned about the history of AI? It's not something that I--

Believe me, you want to concern yourself with it, because in the history of AI, like in the history of everything, like the history of humanity, there are a lot of cues about where we're going next. On the history of AI, follow a young academic in Cambridge who is doing a lot of thinking in this area, Jonnie Penn. How he surveys the history of AI gives us a flavor, a little bit of an indication, of what's going to happen next based on where we're coming from.

Alexandra: Oh, wow, I think you just gave me a hint for a potential next podcast guest, if he's up for that. Looking into the history of AI from that perspective sounds super interesting. Thank you for all these incredible resources. My last question to you would actually be: do you have any final remarks for our listeners, or some piece of advice for everybody who would like to make a difference and contribute to making AI more responsible?

Maria: Be curious and curiouser. I think this whole field requires people who embark on it as an adventure. Many will say, "Oh, but this is my field of work and I don't want to approach it that way." The reality is, when I look at people like Stuart Russell or Kate Crawford, you can see their eyes sparkling with passion. These are people who actually believe in what they are doing. It's not only that what they are doing can make a massive contribution to humanity; it's their lifelong passion.

The only way to do this is to have a curious and open mind, and then allow yourself to be disrupted. That's why I recommended my friends in Cambridge and Oxford: no matter how much you know about a subject, consider yourself a humble apprentice of everything. That gives you the chance to explore new perspectives, to acquire new sets of knowledge, and to make contributions that you otherwise wouldn't if you remained stuck in your box.

Be curious, be open-minded, and apply humility, because I think that will get you a long way. If there's anything I've learned from those absolute masters and experts in AI, it's that on top of everything, their knowledge, their expertise, their passion, they continue to be humble and continue to embrace everyone who approaches them with the same joy and passion.

Alexandra: Be humble.

Maria: Yes, be humble.

Alexandra: I think there are no better last words than that. Thank you so much, Maria. It was incredibly insightful to chat with you today and I really enjoyed the episode. I hope our listeners will as well. Thank you very much for taking the time.

Maria: Thank you for having me. It's been a pleasure.

Ready to try synthetic data generation?

The best way to learn about synthetic data is to experiment with synthetic data generation. Try it for free or get in touch with our sales team for a demo.