Episode 34

AI ethics is science, not just philosophy - a practical guide with Reid Blackman

Hosted by
Alexandra Ebert
AI ethics is a misunderstood topic. Business leaders often think of it as fuzzy and subjective. Reid Blackman is on a mission to change this and to educate business decision makers about the scientific and very practical realities of ethical machines. Operationalizing AI ethics is not only possible, but imperative to mitigate the risks embedded in systems running on the power of artificial intelligence. Listen to the episode to learn:
  • why businesses should hire AI ethicists and why lawyers can't do the job instead,
  • how to build an AI risk mitigation program,
  • the pitfalls and best practices for implementing AI ethics,
  • the difference between responsible AI, ethical AI, and trustworthy AI,
  • how to write an effective AI ethics statement,
  • the difference between global explainable AI (XAI) and local XAI.
Reid Blackman has recently published a book entitled Ethical Machines and is a regularly published author at Harvard Business Review. If you would like to read a short summary of the most frequently overlooked ingredients of a successful AI ethics program, read his article entitled Everyone in Your Organization Needs to Understand AI Ethics.  

Transcript

Alexandra Ebert: Welcome to the Data Democratization Podcast. This is episode number 34 and I'm Alexandra Ebert, your host, and MOSTLY AI's Chief Trust Officer. Today's guest is Reid Blackman, a former ethics professor who's now the founder and CEO of Virtue, a consultancy for implementing AI ethics. Reid is also a regular contributor to Harvard Business Review and writes about ethical aspects of artificial intelligence, blockchain, and other emerging technologies.

Just in July this year, he published his highly anticipated book, Ethical Machines, which has already climbed onto its first bestseller list. Today Reid and I will walk you through how to think about AI ethics, particularly as a business leader. As Reid points out, for many business leaders, artificial intelligence and its ethical aspects are still a squishy, hard-to-grasp topic. This, of course, prevents them from building effective AI ethical risk mitigation programs. Stay tuned to this episode to learn not only how to think about ethics in the AI context and why it's not a subjective topic, but also how you can operationalize AI ethics within your organization. Let's dive right in.

Alexandra: Welcome to the Data Democratization Podcast, Reid. It's great to have you on the show today and I'm very much looking forward to our conversation, but before we kick off, could you briefly introduce yourself to our listeners and maybe also share what makes you so passionate about AI ethics and the work that you do?

Reid Blackman: Sure. As you said, my name is Reid Blackman. I'm the Founder and CEO of Virtue, which is a digital ethics consultancy where we particularly focus on AI ethics. I'm also author of the forthcoming book published by Harvard Business Review, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI. I'm also the chief ethics officer for a nonprofit organization called the Government Blockchain Association. I advise and sit on the advisory boards of a number of startups.

Alexandra: Quite an impressive resume. Talking about the passion component, why do you like and enjoy what you're doing?

Reid: I've been doing ethics for the entirety of my adult life. Since I was 18 and took an undergraduate course in philosophy, I became obsessed with philosophy and ethics in particular. I got a PhD in philosophy. I was a professor of philosophy for 10 years. During that time when I was a professor, I was, number one, a mentor to startups, and the reason I was a mentor to startups is because I also started a completely separate business that has nothing to do with philosophy or AI. It's actually a fireworks wholesaling business.

When I became an advisor to startups as a professor, I got jealous and I thought - Ooh, it'd be really fun and interesting to have my own new thing built around ethics. What would that even look like though? Presumably, it'd be something like an ethics consultancy. What would that look like? There was nothing at the time. When, years later, I started hearing the alarm bells that engineers were ringing about the societal impacts of AI, I thought, oh, there's the application.

There's a potential market for it. My passion or my interest for AI ethics, in particular, or blockchain ethics, which is also something I'm focused on quite a bit lately, is really just an outgrowth of my long-standing borderline obsession with philosophy and ethics, in particular.

Alexandra: That's fascinating. Then you're also a flying trapeze instructor, which I find particularly great because I'm also an aerial enthusiast myself. Really cool.

Reid: That was not part of the plan [chuckles]. Being a flying trapeze instructor wasn't part of the plan, but someone bought me a gift certificate. I took a class, I really liked it. I took another class, and I was pretty decent at it, because then they asked if I'd be interested in training to work there and, of course, I had to say yes. Who says no to that?

Alexandra: Awesome. Does this balancing work translate? I know that's far-fetched, but does the balancing work of flying trapeze somehow translate into AI ethics and how to balance it?

Reid: I don't think so. I think [chuckles] they're totally separate. That said, at one point when I was a professor, my ideal day was when I did a bunch of philosophy and a little bit of trapeze and a little bit of the fireworks business. Those were my most fun days.

Alexandra: Indeed. Sounds like a good mix for one day to have, but let us dive in. You actually already took away my first question. You already mentioned your book, which I'm very much anticipating. It will be published in July 2020, if I remember correctly.

Reid: July 2022, yes.

Alexandra: 2022, yes. It's Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI. What is this book about, more specifically, and why is it needed today?

Reid: Why is it needed? People, I think, basically understand, I hope, that there are lots of ethical and reputational risks with AI. Lots of people have heard about things like bias or discriminatory AI. They might have heard about black box models. Surely they've heard about privacy violations, whether it's in relation to just data privacy or AI as well. Companies are getting wise to this, they're really understanding that there are serious risks.

There's lots of opportunity and they want to be able to take advantage of that opportunity, but they want to do it in a nonnegligent manner. I should say, it's not just about regulatory compliance because the regulations are really far behind. When you take the real risks that are on the table and combine that with the lack of regulatory oversight around it, then if companies care about their reputations, their brand, their relationship with their customers and clients, they've got to take what is now called by many AI ethics really seriously. A problem for them is that, frankly, they don't understand much about ethics.

To a lot of senior leaders, it's subjective, it's squishy, it's messy. They're just not quite sure what to do about it. I wrote this book to help those leaders understand what the issues are, what the many sources of risks are, and what actions they need to take in order to put something like an AI ethics risk program into operation.

Alexandra: That's great. It's definitely something that's highly needed across industries. When you mention an opportunity, do you refer to AI, in general, being an opportunity, or particularly also to doing AI ethics being an opportunity for businesses?

Reid: I was referring to AI in general. There are, of course, companies like my own, who think that there's a market opportunity for delivering AI ethics solutions, whatever that looks like, whether it's consulting and advising or whether it's a software solution, there's certainly opportunity there.

Alexandra: Definitely. I was just curious about that because I recently had this discussion with somebody from the privacy field, where many companies also use privacy to their advantage as a differentiating factor in their products. That led to this debate about what happens if it's a differentiating factor that you charge a premium for: what would the impact be for society if only the iPhones of the world have supreme privacy protection, which not everybody can afford?

Reid: I think, for now anyway, some people want to be at the cutting edge of the AI ethics space. They want to build it into their brand. They want to brag about it, rightfully so. Apple likes to brag about their respect for privacy. Of course, it helps that it's not part and parcel of their business model to gather [chuckles] people's data. I think Microsoft-- Salesforce talks a lot about this, about their AI ethics efforts, it's part of their brand, at least from my perspective, Microsoft talks about it a lot. It's part of their brand as well. I think they don't speak as loudly as they justifiably could, but they're out there talking about it.

For the most part, I think while it is a competitive advantage, it is a brand advantage, it's becoming increasingly just a straight-up necessity. It's not like, oh, wouldn't it be nice if this were part of our brand that we take AI ethics really seriously? That's not what's motivating most of the senior leaders that I work with. That's in the mix, but for most, it's what I've said before, it's risk mitigation. It's: we want to create really cool, amazing, profit-generating AI applications, or we want to increase internal operational efficiency.

We can do that with AI, but we really need to make sure that we don't do this in a way that gets us on the front page all the time. We need to make sure that we can operationalize systematically identifying and mitigating those risks. It's really, I think, not about brand for most organizations, at least not the ones that I work with. It's really: you don't want to mess up here, both for its own sake, because you don't want to harm people, and also because you don't want to harm your brand.

Alexandra: Makes sense. With all the governance practices organizations have in place, it just makes sense that they extend this towards artificial intelligence and make sure that they also consider the ethical aspects of it. Talking about AI ethics, there are these terms like responsible AI, trustworthy AI, and ethical AI floating around, and I oftentimes have conversations about how people use them, whether they use them synonymously or with different meanings for each of them. What are your thoughts on these terms? Do you use them interchangeably or do you define them differently?

Reid: I've a fairly strong view about this. This is actually something I talk about in the book. I really don't like the language of responsible AI. I think AI ethics is where the action is. That's where the risks are. I think that a lot of organizations are, frankly, just scared of the word ethics. They just don't know what to do with it. They know about regulatory compliance, but ethics is, it gets described to me by executives as being too subjective, too squishy, and so they run from the language, and then they talk about responsible AI, but then, when they start talking about responsible AI, then they start throwing a bunch of stuff into the bucket.

Not only do you get things like bias, explainability, and privacy, you get cyber security, you get things like model robustness, worrying about data drift, and then they pay too much attention to those efficacy things. They pay a lot of attention to the things that they understand, like building a robust model and cyber security. They sort of, as it were, score themselves on how well they're doing on responsible AI. They're getting decent scores because they're scoring themselves really well on making sure that the models are sufficiently robust and that cybersecurity practices are up to their standards.

I think it's a way that companies accidentally take their attention away from AI ethics and put it towards the things that they really understand because the ethics stuff makes them feel uncomfortable. It's not just a new name, it's a name that, I think, my hypothesis anyways, that it's a name that they're more comfortable with literally, because they're uncomfortable talking about ethics, and it plays the function of drawing their attention away from ethics and towards the things with which they're already familiar and don't make them uncomfortable, which leave the risk on the table, of course, the ethical risk on the table.

The last thing I'll say is I think it makes things functionally difficult because then you've got in your responsible AI program, a whole bunch of stuff that it's not possible for any single person to own. It's not possible that someone's going to own all the AI ethical risk programs and cyber security and model quality. It's too much stuff. You've got this program in name, but really there's not a coherent thing there, there's just a bunch of stuff that's owned by different people. Then you might not pay very much attention to how all those people interact with each other.

I think that you need someone to own the ethical risk program, like this, there's AI ethical risk. Someone needs to own that, own policies, articulate the KPIs, track progress, and if you just call it responsible AI, then who's going to own it? Who's going to own all those things? Nobody.

Alexandra: Interesting thoughts. Many things come to my mind. Completely with you that you want to have a good balance between the different aspects of responsible AI. I'm just thinking, also from an ethical standpoint, going away from AI and to a bridge and to the topic of robustness: is it ethical to build a bridge that's not built to hold up the weight of everybody, or to have a bridge that works perfectly well for people that are normal weight, but not for people that are overweight or something like that? I see a connection with robustness, that you want to have a robust system in general, but also for all the different customer groups, and therefore, I see the connection.

Also, the second point: again, I also agree that there should be ownership and there should be experts, but even if you got rid of cybersecurity and robustness and just left it with privacy, explainability, fairness, and so on and so forth, all of these are already such huge topics that I'm doubtful there's one expert who can truly be an in-depth subject matter expert in all of them. Ownership, anyway, needs to be tied to an executive. What would be the challenge of having an executive that oversees seven people with all the different subcategories and not only the three or four you mentioned earlier?

Reid: There are a couple of things to say here. Number one, yes, there's a Venn diagram. Take the Venn diagram where the circles include ethics, cybersecurity, model robustness or model quality, whatever you want to call it. You might even throw a circle in there for privacy, throw in privacy there as well, [chuckles] or regulatory compliance. Yes, there's going to be overlap. If you don't do your job well as a data scientist or as a model developer, yes, there could be some ethical fallout because you've built something that doesn't function very well.

To use your example, if you build a bridge that can't hold more than one ton and there are lots of greater than one-ton trucks going across that bridge, you've realized certain ethical risks because you haven't even made the thing properly functional, which was, it was supposed to be strong enough to be a good bridge. [chuckles]

Yes, there's overlap, but it's not as though the engineer's going to be like, "I've done all my safety tests to make sure they can hold-- Oh, also I should do some ethical check." They do their ethical work. They do, as it were, their ethical due diligence by virtue of doing their engineering work well. There's some ethical work that needs to be done well that's not a function of doing some other work well.

Alexandra: Not arguing that, but still. I remember a conversation quite vividly that I had in, I think 2019 at an AI conference in Tokyo, where the AI lead of a huge company that I'm not going to name bluntly told me, "We know that our models don't work well for Black-skinned individuals, but we just don't have enough data for them. It would be too expensive to collect them, and so we are happy if it just works for White people."

When I'm speaking of the robustness of a system, and also want to make sure that it is robust and performs well for each user group, then this is definitely something where the engineers and data scientists have to be involved and need to have this as a high priority, because if the ethics guy or woman is coming over afterward, or even at the beginning of the process, and talking to them about fairness, but they have built it in a way that doesn't allow that, then I'm a little bit doubtful that it will end up offering equal quality of service to every user group and equal robustness.

Reid: They decided that wasn't their goal. They decided that their goal was just to serve White people. Did they make a model that's really good for their goal of serving White people? Yes, they did. Let's just assume. Now, it's disgusting that they let that be their goal [chuckles], but that was their goal. Their technical specifications were met. They should have had different specifications.

If you like, ethical specifications: this has to work for everybody. There may or may not have been a good business reason as well. We need those customers, blah, blah, blah, blah, or we don't want to get sued or whatever. Then they'd have to make sure that it meets these technical metrics for, say, recognizing Black people.

Alexandra: In the end, we can agree on that, that it comes down to the direction executive leadership wants to take and that they really own it and say, "We want to do AI the right or the ethical way." But nonetheless, quite an interesting thought.

One other thing, since you mentioned that you think responsible AI is maybe an umbrella term but one that's distracting and shifting the balance away from the true ethical risks. Sometimes when I'm talking with lawyers, they tell me that ethics is something that's hard to put into law, because one thing that's ethical in Western Europe is maybe not ethical somewhere in Southeast Asia or something like that. What are your thoughts on that as an ethics expert?

Reid: One, that sounds like a silly argument, since things that are legal here aren't legal over there. [chuckles] It can't be the case that because, let's just say, ethical norms vary across countries, it's too squishy or subjective or something like that, because laws vary [chuckles] all over the place. The mere fact of variance can't be what distinguishes the utility of law from the supposed disutility of ethics. That just strikes me as a silly argument.

The other thing to say is, in my general view, the issue is not necessarily about getting it ethically right, it's about doing your best to not get it wrong, doing your due diligence, doing it in a way that showcases that your organization has taken serious steps towards identifying and mitigating what a not unreasonable person would think is ethically wrong.

Alexandra: [chuckles] That's a good point. I also liked what you said in one of your articles from HBR, I think, about not being afraid to start and not having the goal to do it right from the very beginning, because this is just an unreasonable expectation, and it's such a bigger win if you just start thinking about it and trying to do your best, as opposed to being paralyzed by the mushiness and squishiness of AI ethics and not even starting to tackle the problems.

Reid: Although I will say, I actually spend a very early chapter in the book saying it's not subjective and it's not squishy. Here's probably why you think it's subjective and squishy, and they're really bad reasons. I'll explain to you why. One of my first goals in the book is to say stop thinking about ethics as squishy or subjective. I want you to think about ethics as objective. Here's why you should do that and here's why there are good reasons to take it to be objective, not subjective. I was a philosophy professor for 10 years, plus as a grad student, I taught courses.

One of the things that always stymied conversation about ethical issues is that you'd have students who would say, "Yes, but this is all subjective. It just all varies by culture." Once that happens, people sort of shrug their shoulders, and they're like, "Yes, it's all subjective. It's all squishy. What do you do about it?" One thing that I would continuously have to do is to say, "Why do you think it's subjective?" They would invariably give me the same kinds of reasons. These reasons are horrible.

[chuckling]

Reid: That's just not my view. Honestly, anyone who has real ethics training, for instance, a PhD in philosophy, will say, "Yes, those are bad reasons." There might be other reasons for thinking it's subjective that are good, but these reasons are really bad. For instance, people say people disagree about ethics. Some people think X is right or permissible and some people think it's impermissible, and so there's no truth of the matter.

I say, here's the general principle: if people disagree about X, then there's no truth of the matter about X. Is that principle true? Of course not. People disagree about all sorts of things. They disagree about whether evolution has occurred. They even disagree about the shape of the Earth, but we don't think there's no truth of the matter about the shape of the Earth. There's Bob, who thinks it's flat, so I guess there's no truth of the matter because there's disagreement? We don't do that.

Alexandra: Good point.

Reid: There are people who disagree about what's at the center of a black hole. Very smart physicists disagree with each other about lots of cases, lots of things. Is the multiverse interpretation the right interpretation of quantum physics? There's a lot of disagreement about that. No one says, "There's disagreement, so there's no truth of the matter."

The mere fact that there's disagreement about X doesn't show you that there's no truth to the matter about X.

That's true whether we're talking about something being flat or round, or having traits inherited from previous ape-like creatures, or whether X is morally permissible or impermissible. You can keep going through these things, and people will say, "Oh, well, but in those cases, they're scientifically verifiable.

Ethics isn't scientifically verifiable, and so the disagreement does matter." You say, "Here's your claim: that you should only believe claims if they're scientifically verifiable. Is that what you think?" "Yes." "Okay. I would like for you to scientifically verify the claim that you should only believe claims that are scientifically verifiable." "Oh, well, right."

One thing that I want to pull people out of is thinking about ethics as this squishy, subjective thing, it varies by culture, I prefer vanilla, you prefer chocolate, whatever. That's not the way to think about ethics. I think that organizations actually need to overcome that frame of mind when thinking about ethics, both for themselves, if they want to create the right kinds of ethical risk programs, and for the people that they're educating, because if those people don't understand that there are real issues here, and it's not just I prefer chocolate, you prefer vanilla, then they're not going to take it seriously. They're going to think you're just doing some PR stuff.

Alexandra: Makes sense. You're just increasing my anticipation for your book. Really looking forward to reading it. You mentioned something quite interesting, because you said a person with a PhD in philosophy might be the one with proper training on AI ethics, or on ethics in the first place. Does this mean that we should employ more philosophers at companies moving forward, so that they are the right people to make sure AI ethics is going in the right direction? Because lawyers also don't have that thorough ethics training, to my knowledge.

Reid: I actually think the answer is yes. I do think philosophers can play, should play a role. More specifically, I think they would do really well working with product teams and/or working on ethics committees. The reason is this. As you said, when I'm talking about ethicists, I'm mostly talking about people with PhDs in philosophy who specialized in ethics.

It's not because they have some special kind of wisdom. It's not because they're akin to priests or ethical oracles or something that. It's because ethicists of the sort that I'm talking about are trained to think through really tough ethical issues and to help others think through them. That's where I think our value is. One, we're really good at spotting ethical risks because we've been thinking about ethical risks for our entire careers.

Most of what a philosopher does: we imagine scenarios where things have gone ethically wrong, or some claim is made and we imagine a situation in which it's false. But what you do, less as a researcher and more as a professor, is you help others understand the issues and help them to navigate the thorniness of the issues. That's where the expertise of ethicists is really relevant, because, as I said earlier, a lot of people find it, "It's really squishy. It's subjective. I don't know what to do about it."

They don't understand the foundations, first of all, but also, they don't know all the concepts. There are lots of concepts and distinctions and certain kinds of moves that they just don't know. The analogy here actually is with lawyers. There's the world of law, and you might want to involve a lawyer because they're really good at spotting legal risks.

They are faster at spotting legal risks than non-lawyers and a good lawyer can help you understand, "Here's what we're dealing with legally speaking. Here's the grey areas, here's the not-so-grey areas.

Here's how making this kind of decision can affect you legally in these three ways, but if you make this other decision, it won't affect you in those ways, though it will affect you in these other two ways." A good lawyer can help you see that so that, in light of the legal risks, you can make a wise decision. Similarly, a good ethicist, in this context, is one who says, "You've got a certain kind of ethical issue, here are the major things ethically involved here. You can make this kind of decision if you think about it this way, but here's the kind of ethical risks you'll face that way.

On the other hand, you could think things are this way, but then there's-" Again, what you're trying to do, what a good ethicist is good at, at least in part, is helping others to see the issue more clearly so that they can make a more informed, wiser decision.

Alexandra: You got me convinced we need more philosophers for ethical training in our companies. Thank you for that.

Reid: I wouldn't recommend just going and hiring an ethicist and thinking you're good. That's a recipe for disaster, because, just like any tool, I mean, think about an ethicist as a kind of tool at your disposal. You can't just drop them in and expect them to fix the problems. They have to be embedded in the right kind of governance structure.

Alexandra: One other thing: you mentioned that regulation is far behind, with the EU currently working on their AI Act and the parliament now also strengthening the AI ethics requirements. What are your thoughts on the draft? Did you take a look at it? Do you think that this will change the regulatory landscape, or is it still rather weak?

Reid: Yes, I've taken a look at it. Obviously, there are some good things about it. Yes, there are some bad things about it. There are a lot of vague things. You don't even have to go to those regulations, just go to GDPR and what GDPR says around issues pertaining to explainability, to explaining how decisions that impact people were arrived at.

What does explainability look like for AI systems according to GDPR? It's not even settled yet. It's also years away.

I didn't pay super close attention to it, frankly, because I don't know what things are going to look like three, four years from now, when something or other gets passed that bears some resemblance to what's been put out. I'm not convinced it's going to do that much, and it's not going to do much, I don't think, in the US. I could be wrong about that.

Alexandra: We will see, I think. In the European Union, I definitely saw that it became a board-level topic, and also with some of the US brands I'm talking with, larger companies, they definitely are worrying about putting AI ethics and responsible AI into the product, because they're just afraid they won't be able to sell in the European Union anymore once it's out. But looking at the draft, I think there's still quite some room for improvement to really make sure that the outcome is something that's beneficial to humanity.

Reid: I should say, it's really hard to regulate for the following kinds of reasons. There are some kinds of regulations that are outcome-oriented. Let's say you have regulations around gas consumption: a car has to reach X amount of miles per gallon in order for it to be a legally sound car, in order for it to be compliant with regulations around gas consumption.

It's outcome-oriented in that it says something like, "All right, we want it to be the case that, by virtue of being compliant with regulations, all the cars that are out there are going to look like this," whatever that looks like. It's a lot harder to do that with AI ethics. The alternative is something like being process driven. You can have regulations on what the process needs to look like as opposed to what exactly the outcomes are going to be. Because AI spans literally every industry, you can't be outcome specific. I mean, not in any way that's specific. You can be outcome specific in that you can say something like, "We don't want any bias or discriminatory outputs."

That's a bit vague. What counts as discriminatory? Discriminatory impacts are a subset of differential impacts. Not all differential impacts are discriminatory. That's going to vary not just by industry, but also by use case. How you get fine-grained regulations around outcomes is really hard. It's going to have to be process driven.

Then the question's going to be, what's the quality of the process? There are so many ways to fudge process. There are so many ways to be vague about what that process needs to look like. For instance, take discriminatory or biased AI. You say, "You've got to vet for bias." What does that vetting look like? Who does the vetting? Do you have to apply quantitative metrics of fairness to the model outputs? Is that what it looks like? Who decides, or what makes it the case that choosing one metric is justified over another?

Can I just choose any metric and say, "I've done my due diligence," or do I have to, for instance, have a certain set of people deliberate about what the appropriate fairness metric is? Do we have to be transparent about who is involved in that decision? Do we have to document the decision? Are the deliberations behind that auditable? Are they public? Are they transparent to the public in general? It has to be about process, not outcome, as far as I can see. It's very hard to create robust regulations around a process where you need judgment calls made by well-informed experts.
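To make the "quantitative metrics of fairness" Reid mentions a bit more concrete, here is a minimal, hypothetical sketch in Python, not taken from the episode, of two common checks applied to a binary classifier's outputs: the demographic parity difference and the disparate impact ratio. The predictions and group labels are invented purely for illustration.

# Hypothetical example: two simple quantitative fairness checks on model outputs.
from collections import defaultdict

def selection_rates(predictions, groups):
    # predictions: 0/1 model decisions; groups: the protected-group label per person
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_difference(rates):
    # 0.0 means every group is selected at the same rate
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    # values below 0.8 are often flagged under the "four-fifths rule"
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]           # invented model decisions
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = selection_rates(preds, groups)
print(rates)                                      # {'A': 0.8, 'B': 0.4}
print(demographic_parity_difference(rates))       # 0.4
print(disparate_impact_ratio(rates))              # 0.5

Which metric to use, and what threshold counts as acceptable, is exactly the judgment call Reid is pointing at; no single number settles it on its own.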

Alexandra: Just thinking about it, because one point of critique was that the draft should focus more on the outcomes. If it focuses too much on the process, regulators tend to become quite prescriptive rather quickly, which is a problem, particularly because we don't know if the way we approach mitigating AI bias today will be the same process we will have 5 years or 10 years from now.

The second risk I'm seeing with process-based regulation, and, in general, with just having processes in place and not caring that much about the outcome, I've seen in the privacy world, and I've also had this discussion with AI ethicists when they work with organizations: they focus so much on, "We are documenting all the considerations we went through and that we're trying to do the right thing, but whether it really ends up being discriminatory or not, we can't tell, we don't know.

We just did that," which I think in the end, for the customers, is not really satisfying. I'm more inside of outcomes-based, but of course, on the high abstraction level so that you don't become too prescriptive or specific. Then again, [crosstalk] the guidelines.

Reid: That'd be great. The problem is, are they too vague to actually constitute a robust test for any given AI model? If you say, "The outcomes need to be non-discriminatory," that's not by itself going to settle anything, because whether some particular impact is discriminatory will potentially be contentious.

Alexandra: Fully with you on that. Definitely, I think the regulation itself should be more outcomes-based, because it needs to be in place for several years, several decades, but then extensive guidance is needed from something like the European Data Protection Board we currently have, which is supposed to issue guidelines on applying privacy technology and so on and so forth.

There it's really a necessity for organizations to have some official authority tell them, "If you talk about non-discrimination, what does it look like in that scenario? Which fairness metric are we considering? What should the approach be that you take?" Because otherwise, I'm with you, it's just too wishy-washy and not going to work.

Reid: Is this what you're imagining, the regulations look something like this? Regulations look like, at least in part, here are a number of cases in which these metrics would be inappropriate and these metrics would be appropriate, and here are the reasons why. You do that for a handful of cases, then you say, "Make sure that you're living up to what these examples demonstrate," is that what you're imagining?

Alexandra: When I talk about regulation, I'm really thinking of the legislative text in something like the upcoming AI Act. With the guidelines, I think there's more flexibility, because we could issue guidelines on bias elimination in credit scoring applications and so on and so forth, and break it down to where we are at with the industry. Something like that, I think, would be more helpful.

Reid: That sounds great. I'm relatively skeptical. Take an analogy with medical ethics. I think medical ethics is the field where the best job has been done of operationalizing ethical risk mitigation. At least in the US, we had the Tuskegee experiments from, I think, the 1920s to the 1940s, I think that's right. The Tuskegee experiments, as you may know, and your listeners may or may not know, I'm the guest, not the listener.

Doctors withheld penicillin from syphilitic Black patients so that they could observe the unmitigated effects of the unfolding of the disease. This eventually became public news. There was an outcry, blah, blah, blah, blah. Now we have regulations around how you have to treat patients or test subjects, but you still have things like IRBs. In the States, those are Institutional Review Boards. They're called different things in different countries, but, essentially, before any medical researcher can do research on a human subject, they have to get IRB approval.

I was just talking to somebody in Canada. I think they call them, I forget, something Ethics Board, something about an Ethics Board, something like that. Anyway, the point is that they didn't just say, "Here's the law, here are the regulations. Medical researchers, now just comply with that stuff." They saw that an essential component of it is that the risks will vary by use case, and we're talking about within the same industry, not across all industries. Within the industry of medical research, there's going to be lots of variance. There are going to be lots of things that vary on a case-by-case basis. We need IRBs to look at each case to give the green light that this experiment is on the ethical up and up.

The idea is that in medicine, in healthcare, that's what we had to do, but in AI, we can just figure out the regulations across all industries in a way that doesn't require this case-by-case assessment? I'm skeptical. I'm phenomenally skeptical that that's possible. If we can't do it in medicine, then I don't see why we can do it in AI, in medicine and financial services and space exploration and retail and marketing. I'm just skeptical. [chuckles]

Alexandra: Very good point that you're making, definitely. I'm convinced now that maybe outcomes-based alone wouldn't be the solution, but a mix of them, also requiring an organization not only to run it by their internal AI ethics board but, for certain high-risk applications, to go to an approval body or something like that, similar to what you described from the medical space, that should have a say on the specifics of a given case. But definitely a good point. When we go a little bit away from the-- Oh, yes, please.

Reid: I'm going to say one more thing that's, I don't want to say it's controversial, but when I think about the conversation on regulation around AI, I could be dissuaded from this view, but I think that it's too narrow. The EU has got GDPR and now you've got potentially these AI regulations that take at least a few years to pass and we'll see how robust they are, but we've also got blockchain. We have quantum computing, we have IoT, some people think we've got this Metaverse thing.

My view is, I see this as a piecemeal approach, where it's like, "Let's have regulations for data and we're going to spend 5 to 10 years working on that, and then 5 to 10 years working on AI regulations, and then 5 to 10 years on blockchain, and then 5 to 10 years on quantum." It takes forever, the regulations are way behind, and it fails to appreciate the commonalities among these various emerging technologies.

When I hear people talking about regulation of AI, my own view is that I think we need-- what's the expression-- I don't know, things are already rolling with the AI regulations of the EU, but when I think about at least the US, I think we'd be much better off thinking about something like emerging technology regulations or emerging digital technology regulations, because there are common features among lots of these technologies, such that we should just say we need digital regulations, or regulations around digital technologies, and think about, at least in part, what blockchain, quantum, AI or machine learning, just big data generally, have in common.

What are the things that they have in common, or sufficiently in common, such that we can create regulations that would apply to all of them? Let's think about other digital technologies that we don't know yet that are coming down the line. Can we specify that digital technologies, whatever they look like, have to comply with these regulations? You're not going to get everything. You're going to need AI-specific things, and you're going to need blockchain-specific things. I think that we need a more general regulatory framework around digital technologies and then dive into each one in particular.

Alexandra: That's a very good point, absolutely with you on that, that we need a more holistic approach. I'm personally a fan of the AI regulation, but with all the upcoming acts that we see in the European Union, and I haven't heard of a metaverse regulation in the pipe yet, it's definitely creating a fractured regulatory environment, and it's really increasing complexity to a level which is putting a burden on small and medium enterprises and even on large organizations.

Not the best approach. Also, on the enforcement side, I think we definitely need an enforcement body where different perspectives are balanced, not only privacy and data protection but also competition, societal aspects, and so on and so forth, because those technologies are not niche topics anymore. They have, particularly with AI, a wide-spanning effect on so many elements of society.

One thing I'm wondering though, not being a legal expert on that: in general, we already have regulation in place for AI ethic-- Sorry, not for AI ethics, but for ethics in general, or at least regulation requiring that products and services should not discriminate or be offered in a discriminatory way.

The question really then would be, do you need a whole new set of regulations for everything tech, or can you hope to cover a big part of all technologies, including AI, with existing regulations, and then only pick out the peculiarities of a given technology to put additional regulation on top of that? One thing that you made me curious about, since you mentioned it now twice: what are the particular aspects that interest you, when it comes to ethics, in the context of blockchain and also quantum computing?

Reid: There are a lot of really interesting things. With blockchain, actually, I don't know when this will air, but I have an article coming out on blockchain ethics with HBR, I don't know, sometime in May. There are a number of problems, or risks, I should say. My favorite one, [chuckles] if I have a favorite one, is that people think that blockchain is this anarchist Wonderland. It's decentralized, there's no third party. What people I think fail to appreciate is just how much governance the blockchain, or a blockchain, actually needs. There are very significant ethical, reputational, and economic decisions that are made about how to govern a blockchain.

I don't think that the proper investigation is being done, either by, let's say, developers of the blockchain or by, say, a financial services institution that is using, say, apps on the blockchain or advising clients on those apps. To take a somewhat famous or infamous example: when the Ethereum blockchain was hacked in 2016, there was a massive debate about what to do with this by the people who, as it were, had control over the Ethereum blockchain. As people who are familiar with this event know, a rather significant decision was made to fork the Ethereum blockchain into two, which is known as the hard fork.

Leading up to that decision, which was highly contentious, there were two kinds of people. There was one camp who was saying the person who drained, who started draining Ether from the DAO-- I don't know, I could tell a little bit more of the story, but the general point is that someone hacked what was called one of the DAOs on the Ethereum blockchain. They took out a tremendous amount of money, millions, tens of millions, I forget what the figure is. There was a debate about what they should do about it, and the person who had done it took advantage of a bug in the software.

The bug in the software essentially said, how much money would you like to withdraw? Then the person would withdraw the money. Then it would say, would you like to withdraw any more money? But it would ask that question before updating that it had already given you the money that you had asked for. If you've got $100 in your checking account, you take out $99, then it says, would you like to take out more money? You say, yes, I'll take out $99, and because it hasn't updated that you already did, you can just keep drawing out money until it's all drained or until you're stopped, which is what they did.
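For readers who want to see the shape of the flaw Reid is describing, here is a rough, hypothetical sketch in Python, not the actual DAO contract (which was written in Solidity), of the "pay out first, update the balance later" pattern. All names and numbers are invented for illustration.

# Hypothetical sketch of a "send first, update later" withdrawal, the pattern
# behind reentrancy-style exploits like the one Reid describes. Not real DAO code.
class NaiveVault:
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, account, amount, receive_callback):
        if self.balances[account] >= amount:       # check the balance...
            receive_callback(amount)               # ...send the money first...
            self.balances[account] -= amount       # ...and only update it afterwards

vault = NaiveVault({"attacker": 100})
stolen = []

def drain(amount):
    stolen.append(amount)
    if len(stolen) < 5:                            # stop after a few rounds for the demo
        # Re-enter before the balance has been updated: the check above still passes.
        vault.withdraw("attacker", amount, drain)

vault.withdraw("attacker", 99, drain)
print(sum(stolen))          # 495 withdrawn from an account that only held 100
print(vault.balances)       # {'attacker': -395}

Updating the balance before sending the money would close the hole; the contested question in the episode is not the fix itself, but who had the authority to decide what to do once the flaw had already been exploited.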

Some of the people who ran the DAO said, this is an outrage. They're taking advantage of the bug. It's a hack, and we need to stop it and we need to try to reset this as best we can. Other people said, no, the code is law. The blockchain is immutable. This is a moral imperative. They didn't do anything ethically questionable, they merely obeyed the rules of the blockchain as they existed when they started draining that money.

There was an ethical conflict, an ideological conflict, about how the Ethereum blockchain should be governed, and that led to this massive decision to essentially create two different Ethereum blockchains, the hard fork, to try to restore certain kinds of money, to create a kind of reset button: let's go back to what it was before this person drained that money. Anyway, this is just a long rambling way of saying that there are serious things that go on with blockchain. Who gets to decide, and with what authority, what you should do when, say, catastrophe strikes? Just because it's decentralized, that doesn't mean that everybody gets a vote.

You might think, oh, it's decentralized, everyone gets one vote. No. Certain people had power over the Ethereum blockchain and they got to decide, not every node in the network or something like that, and not everyone who had any money or had any Ether. Other blockchains operate differently, where you could be a token holder and being a token holder gives you voting power.

Ethereum does not run its governance that way. Ethereum is not token-based. It's something else. This is, again, my long-winded way of saying that one of the biggest ethical risks around blockchain is that there's very little understanding of what governance of a blockchain, or of a particular app that runs on the blockchain, looks like or ought to be, which makes it extremely unpredictable and ethically, legally, and economically risky. There are other risks as well, but that's one of the main things that I find really interesting.

Alexandra: Interesting. And also hard to regulate, I would assume. But going away from regulations and back to AI ethics, I'd be interested in hearing from you what you see as the current challenges of operationalizing AI ethics when we look at businesses.

Reid: When you say challenges, do you mean challenges that are because of what AI ethics is, or do you mean because businesses are what businesses are?

Alexandra: The latter.

Reid: It's just a lack of will. As for how you actually put AI ethics or ethical risk mitigation into practice, my general view is that if you really understand the issues and the sources of the risks, it's not that hard to solve for, at least in principle.

We know the kinds of things that we need to do, at least at a somewhat high level. Then it has to get customized to organizations, so I spend a lot of time with my clients helping them to see how to customize general ethical risk mitigation strategies to their particular organization: how to roll it out, what their standards should be, how it should coalesce with their existing corporate values. The only thing that really stands in the way of operationalizing it is corporate will, that's it.

Alexandra: That's actually quite an interesting answer, because those organizations that are willing to have good AI ethics in place should then not find any hurdles in front of them. Or is there anything that you encounter working with organizations where they are still challenged, or where we still need to do more research and develop more tools to help really achieve explainable AI, practice fairness, and so on and so forth?

Reid: Tools are good. Tools don't mean anything if not embedded in a larger structure where those tools are taken up and used. They're not going to get taken up and used unless certain things are already in place. Something like a governance structure is in place, policies are in place. Financial incentives are not misaligned with use of those tools.

I think when you're doing AI ethical risk mitigation, you're talking about organizational change. You're trying to change an organization such that it now takes a new thing seriously, from the C-suite to the frontline developers and data scientists and everyone in between, and any organizational change requires lots of buy-in. It's a political problem. How do we get the right people to buy in in the right way, such that they devote resources, time and money, to implementing or to bringing about this kind of change? The difficulty is the same difficulty as any other kind of organizational change. I think a lot of people think that the real difficulty is, oh, ethics is subjective. It's squishy, it's all a gray area. We don't know what to do with it. How do you operationalize ethical principles? They think that's the problem, but that's not the problem.

Alexandra: Changing people is one of the challenges.

Reid: Yes, I think that's right. If they want to know how to operationalize those ethical principles, call me, and I'll tell you how to do it.

Alexandra: Or read your book.

Reid: Or read the book, yes.

Alexandra: One thing I'd be super interested to hear, Reid, because the things you highlighted, change management and changing people's minds, remind me of a great conversation I once had with a good friend of mine who is leading data governance at Swisscom, a telco operator. He made this wonderful analogy of why it's so challenging to change how employees act with data and how much they care about it. That they haven't yet understood it, or we as a society haven't yet understood data as the economic asset and the sensitive, valuable good it is nowadays, and we don't yet have this feeling for it.

Versus with money: he made this example that if I were to give a person €10,000 or $10,000, they would immediately know what to do with it and what not to do with it. Not to leave it openly on your, I don't know, fence and hope that it is still there when you return from work the next day, and so on and so forth. With data, they don't yet have this feeling for it, why it needs to be protected, and why you have to handle it carefully. I'd now be curious to get your take on how long this will take with people adapting to taking AI ethics seriously.

Are you positive that this will happen faster than the two generations' worth of time that needs to pass, as the data governance lead said with data, or do you think it will be a similarly long journey?

Reid: Obviously, I don't know the answer. This is a bit crystal ball gazing. Look, I can tell you that if the near future resembles the near past, we'll see a massive increase of attention to AI ethics. When I started doing this, I don't know, let's call it the end of 2018, there was talk about AI ethics but there wasn't a lot of action. Microsoft was doing some things, Salesforce was doing some things back then, but there wasn't a tremendous amount. If you then think about the last year, there's been a tremendous amount of talk about it, a tremendous amount of attention.

Just this morning, The Wall Street Journal has a newsletter that goes out to the CIO, chief information officer, crowd. This morning's was referencing an interview with Arvind Krishna, the CEO of IBM. The quote was something like, "Organizations are not nearly taking advantage of AI's full potential. Maybe 10% of organizations are doing that at most." When asked why, he said the biggest obstacles are the ethical challenges. They're not sure how to tackle them, which, one, I find totally fascinating, but two, to your point, it means that's a person who's probably in the know about these things saying, "Hey, this is one of the problems."

If it's the case that AI is not being adopted at nearly the rate that it could be adopted to drive the kind of ROI that companies could get from it, and what's holding them back is lack of an AI ethical risk program, then I think that there's a lot of reason to think, "Oh, they're going to start taking it seriously because AI ethics is an enabler in that case." It's an enabler to actually doing AI in the first place for a lot of organizations.

Alexandra: That's an interesting thought and something that puts me in a positive mood that we will get to AI ethics, or fair AI, much faster.

Reid: Look, first of all, it's a journey, it's not a destination, blah, blah, right? It always has to be done. I think, from what I've seen, it looks to me to be a reasonable prediction that it's going to increase at an increasing rate. It's still tiny. My clients, I would say, some of them are at the cutting edge for sure, some not quite at the cutting edge. Some of them are just starting it now, Fortune 500 companies or Global 1,000 companies. They're just starting it now, but they're starting it. Microsoft and Salesforce started it, like I said, years ago. The other interesting thing that I see is, my clients are from all over the place.

One client is in the mining industry, another was in financial services, another was in healthcare. It's interesting to see that it's not industry-specific, so it's not like I think, "Oh, it is happening, but it's only in this little pocket." It's all over. It's not saturated across the verticals, but it's across all the verticals.

Alexandra: That makes sense. One thing I found quite interesting when I read one of your HBR articles is that you pointed out that one risk that's oftentimes overlooked when it comes to AI ethics actually involves procurement officers. Can you elaborate on why that is the case?

Reid: Yes. I've got this view, I haven't done research on this, but here's my sense of how things are going. Lots of enterprises outsource their innovation to the massive startup community. You have startups doing all these awesome innovative things, and then companies can sit back and not risk dumping resources into innovation, because they can sit back and see, "Okay, what are the startups that are going to be successful out there? Then we can acquire them, we can procure from them. Let's outsource the risk of failed innovation." Then they're importing all the ethical and reputational risks of AI, because they're buying from companies whose dream is just to sell to enterprise.

To make a lot of money selling to enterprise. Startups are not properly incentivized to take ethical risks into consideration. Some of them do, but they're few and far between. Then, let's say, most famously, they're selling into HR. They've got an AI HR solution. The HR people and procurement officers generally don't know anything about discriminatory models, or explainability as a potential problem, or the various ways AI might violate people's privacy. They just don't have the training for that sort of thing. They don't even know to ask certain kinds of questions of the vendors from whom they're procuring.

When you're talking about AI ethics, it usually is a conversation about designing and deploying AI responsibly, but people very rarely talk about having the processes in place so that ethical due diligence is being done for procurement as well.

Alexandra: That's a very good point.

Reid: Yes. I'll give you an example. One of my clients, they've got offices around the world, and they have a pretty impressively sized in-house data science team. When they came to me, they had a draft policy, and they said, "Look, the model owners are going to own the ethical risks for the models that they own," which makes sense. They're the first line of defense. Not the last one, but the first one. They did have a line, to their credit, about, "Anything that comes in that is being procured, someone has got to own that model."

They had nothing about, what are the processes by which procurement sees that they're procuring some kind of machine learning solution, and then who does it go to? It's not being developed in-house. It doesn't naturally belong to this data scientist because they have nothing to do with that model. First, you've got to recognize, of course, that, "Oh, we're procuring machine learning solutions. We better do our ethical risk due diligence."

Then it's a matter of having the processes in place to make sure that it actually gets done by the relevant experts, which is first going to presuppose that you've educated and trained your people in the right kind of way, the non-technical people, to ask the right kinds of questions when they're procuring.

Alexandra: That makes sense. When you say that it would make sense to have these ethical questions in procurement questionnaires and processes, which department should they come from? Is there a best practice that is emerging? Is it AI ethics centers of excellence? Is it the privacy team or some other function? Who should own that?

Reid: There's not an answer that's going to fit every organization. That's what I've seen so far. Some organizations want a highly centralized strategy and some want a highly federated solution. They have their analytics people in HR, and the analytics or data people in HR will do the ethical risk due diligence for AI HR solutions. Then the marketing department will have their analytics people, and their people will do blah, blah, blah. That's distributing it, as opposed to it all going up to this center of excellence, or it all going up to this team or something like that.

Either strategy is fine. It's just going to depend upon how your organization generally operates and making sure that it coalesces with those existing procedures.

Alexandra: That makes sense. Maybe to give our listeners a more tangible example, when we first talked about this episode in our pre-production call, we covered explainable AI, and also how you treat this topic in your book. Can you share a little bit about that? What can future readers expect from the book, and how can explainable AI be made actionable?

Reid: Yes. Should I say a little bit about where the problem comes from, or do you think your listeners know that?

Alexandra: A quick recap, I think is definitely something we should include.

Reid: Okay. Quick recap. Machine learning, it's supposed to be really fancy. At the end of the day, it's just software that learns by example. That's what I like to tell everyone. It's software that learns by example. How it learns by example, it gets more complicated, but at a high level, it just learns by example. You want your AI software to recognize your dog, Pepe. How do you do that? You give it a bunch of photos of your dog and label them, Pepe. The AI software will learn, "Oh, from all these examples, this is what Pepe looks like."

When you upload a new picture of Pepe, it says, "Yes, that's Pepe."

If it's not a picture of Pepe, then it says, "Not Pepe." Something like that. That's very basic. It learns by example. Now, the fancy word for "examples" is "data." It doesn't just learn by example, it learns from a bunch of data. In this case, for the dog, the data has to do with, you and I look at the eyes, the nose, the mouth, the color of the skin, the size of the ears. We're looking at, if you like, the macro features of the dog. Your AI software is not looking at the macro features. It's looking at the thousands of pixels that make up each picture and the mathematical relations among those pixels.

It's looking at each picture in a way that you and I would never look at it. We don't look at pictures at the pixel level, and we don't care about the mathematical relations among those pixels, but your AI does. What that AI software is doing is it's recognizing the Pepe pattern. When pictures are arranged in such and such a way, blah, blah, blah, then it's Pepe. That's roughly what it's learning. Those mathematical relations, like the set of variables and the mathematical relations among those variables, is phenomenally complex, and it's way too complex for us mere mortals to understand.

I can't keep that equation in my head for half a second, let alone really comprehend it. What we get is, let's just say in this example, we get a pretty good Pepe recognizer. You upload a new picture of your dog, Pepe, and it says, "Yes, that's Pepe." You upload a picture of a cat and it says, "Not Pepe." That's great, but exactly what the pattern is, we don't really understand what pattern it's recognizing. It's too complex for us to comprehend. In other words, how it comes to the conclusion, as it were, to put it metaphorically, that this is Pepe, is opaque to us because we don't have the intellect for it, frankly.
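[Editor's note: a minimal sketch of "software that learns by example," using scikit-learn's bundled digits dataset as a stand-in for labeled photos of Pepe. The dataset, model choice, and variable names are illustrative assumptions, not anything Reid references; the point is only that the learned "pattern" is thousands of numbers no person can read as an explanation.]

```python
# Illustrative only: labeled digit images stand in for labeled photos of "Pepe".
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # each row: 64 pixel intensities, the "examples"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning by example": fit a small neural network on the labeled pixels.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("accuracy on new pictures:", clf.score(X_test, y_test))   # it works...
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print("learned parameters:", n_params)     # ...but the learned pattern is thousands of weights
```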

Another way of saying it's opaque: it's unexplainable. We have an unexplainable set of outputs, or we can't explain how it came to produce those outputs. That is explainability in a nutshell. You do the same thing with more contentious stuff: we upload these resumes, and it says these ones are worthy of hire and these ones are not. Why this one and not that one? We can't explain why it recommended this resume and not that one. Then you start to get a little bit worried that this is a real problem. There's a couple of questions to ask here.

One important question is, why is explainability ethically important? There's a couple of things to say here. I don't want to lecture, but one reason is that, in some cases, we're owed explanations. If you treat me really poorly, all else equal, you owe me an explanation for why you did that to me. You denied me a loan, you denied me a mortgage, you denied me a job interview, you denied me admission to the school. You gave me a crazy insurance premium that I can't afford, and so on and so forth. Or let's take a non-AI example: if I walk up to you and I push you, you might say, "Why did you push me?"

I say, "I'm not going to tell you." You'd be insulted. You'd be forced to be angry because I pushed you, and then you'd be afraid because I refused to explain to you why I pushed you. One reason why explainability is important is because, in some cases, it's just required as a matter of respecting people, that you give them certain kinds of explanations for when you harm them. The other thing is, really explainability, we're talking about at least one of two things. One thing that we're talking about is the rules of the game that make it the case that certain inputs are transformed to certain kinds of outputs.

This is called global explainability. What are the rules of the model? What are the rules of the game? As opposed to local explainability, which is, why did this particular input lead to this particular output? Global explainability is, what are the rules according to which someone gets approved or denied for a mortgage, for instance, as opposed to local, which is, why did Reid get denied a mortgage? Now, think about global explainability. What we're talking about are the rules of the game.

One reason why we might really want to know the rules of the game is that, in certain cases, not with Pepe the dog, but with, say, getting a mortgage or being given a high-risk rating as a criminal defendant, we want to know: are the rules of the game fair? Are they just? Are they reasonable? Are they good? These are ethical, reputational, and business decisions about whether those rules are any good, but if the rules are utterly opaque, then you can't even begin to assess them. One reason why explainability is crucial is that if we don't have explainability in certain kinds of contexts, then we can't even assess for justifiability, which is what we really care about.

Alexandra: Let alone intervene and complain about some decisions.

Reid: Yes, exactly. Explainability is also important for things like debugging, but ethically speaking, one of the things that we care about is that the rules of the game are fair, are just, reasonable, or at least not unjust, not unreasonable, et cetera.
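[Editor's note: a minimal sketch of the global/local distinction Reid describes, using a deliberately transparent logistic regression so both views are easy to print. The "loan" feature names and synthetic data are hypothetical assumptions for illustration; with the complex models Reid is talking about, neither view is this easy to read off.]

```python
# Illustrative only: global vs. local explainability on a transparent model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]   # hypothetical mortgage features
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.8]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Global explainability: the "rules of the game" that apply to everyone.
for name, coef in zip(features, model.coef_[0]):
    print(f"global weight on {name}: {coef:+.2f}")

# Local explainability: why did *this* applicant get this particular outcome?
applicant = X[0]
for name, contribution in zip(features, model.coef_[0] * applicant):
    print(f"{name} contributed {contribution:+.2f} to applicant 0's score")
```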

Alexandra: Are there still any kind of challenges to having explainable AI? Whenever I talk with the data science folks, with more technical people, they tell me about the [unintelligible 01:06:33] values they have in place, and so on and so forth, but oftentimes, that's just not sufficient, or it does not necessarily produce an explanation that's easily digestible by somebody who has never worked with data before, somebody from the general public.

Reid: Yes, that's exactly right. There's two things to say. One thing to say is that sometimes explainability is really important and sometimes it's not. One reason, with Pepe, we might not care. We just want it to work. With mortgage denials, both for regulatory reasons and for ethical reasons, we might really want explainability. Your organization needs to think about when do we need it because it takes resources to get explainability. More interestingly, in a way, traditionally, at least, it's conceived that there's a trade-off between explainability and accuracy of your AI.

The idea being that, as you turn up the volume on explainability, you've got to make your AI a little bit dumber so we mere mortals can catch up to it, intellectually speaking, and so it's going to get less accurate. You've turned explainability up, you turn complexity down, and then it gets worse at doing its job. If you turn the complexity up, the accuracy gets better, but because it's getting more complex, it makes it harder and harder for us to understand. Complexity is the main issue. Because it's so complex, you can't understand it, but it's by virtue of being able to recognize such complex patterns that it gets to be so impressively accurate.
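[Editor's note: a minimal sketch of the traditional explainability/accuracy trade-off Reid describes, comparing a depth-2 decision tree, whose rules fit on a few lines, with a gradient-boosted ensemble of trees on the same data. The dataset and models are illustrative assumptions; the size of the accuracy gap, and whether there is one at all, varies case by case.]

```python
# Illustrative only: a legible model vs. a more complex, usually more accurate one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Turn explainability up": a shallow tree whose rules a human can read directly.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(simple))                                    # the rules of the game, in plain text
print("simple model accuracy:", simple.score(X_test, y_test))

# "Turn complexity up": hundreds of trees, typically more accurate, far less legible.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("complex model accuracy:", complex_model.score(X_test, y_test))
```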

Alexandra: Definitely a challenge there, but one thing I'm wondering, I'm not on top of all the tools that we have for explainability for complex and huge models, but I'm just wondering whether, even if you reduce model complexity, it's still something that's rather complex, at least for somebody who doesn't have anything to do with AI.

Reid: Right, so there's a-- Sorry.

Alexandra: I'm optimistic that we will find ways where we can have more complex models, because the abstraction level of "why did this result, this prediction, happen" has to be something that's digestible and comprehensible by a human. It should not necessarily correlate with the number of layers I have in my network and the many steps that are taken to arrive at this decision.

Reid: Yes, 100%. There's one issue about whether your organization should devote resources to explainability for a particular model. Then you might think it's not actually an issue. You might think there are some technical fixes such that we can get certain kinds of explanations, things like LIME and SHAP, but then, to the point that you made, those explanations are only intelligible to data scientists. If the explanation has to be given to a customer, a client, a regulator, a non-data-scientist executive who has to green-light deployment of the AI, then it's not an explanation for them at all, and so it's useless.
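[Editor's note: a minimal sketch of the kind of output a tool like SHAP produces, under the assumption that the shap package is installed; the exact API and return shapes vary by version, and the model and dataset here are illustrative. The per-feature attribution numbers it prints are the sort of artifact a data scientist can interpret but a customer or regulator typically cannot, which is the point being made here.]

```python
# Illustrative only: SHAP attributions are numbers, not reasons a layperson would recognize.
import shap
from sklearn.ensemble import RandomForestClassifier

X, y = shap.datasets.adult()                     # census income data bundled with shap
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:100])            # per-feature attributions for 100 rows

# A data scientist can read these; an applicant still gets a vector of numbers.
print(X.columns.tolist())
print(explanation.values[0])                     # attributions for the first row
```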

Alexandra: Definitely.

Reid: What constitutes a good explanation is going to vary by the stakeholder to whom you're trying to give an explanation. Then the last thing to say is, even if you can get explainability with LIME and SHAP, and data scientists can do that, aside from the it's-not-intelligible-to-everybody issue, you still get the issue that data scientists are not the people you want assessing the fairness, or reasonableness, or quality of the rules of the game. That's just not their training. That's not their thing. They're not going to know whether or not those rules are discriminatory. That'd be something for a lawyer, or an ethicist, or a civil rights activist, or some such.

Alexandra: Or ideally a combination of all of those.

Reid: Exactly.

Alexandra: From different backgrounds, different cultures, and so on and so forth, but yes, definitely, that's also one of the things that I criticized back then. I wrote my Master's thesis on GDPR and its impact on AI. Some people said it's going to make a difference to the discrimination of people because it has Article 22 in there, which requires you to let people know whether there's an automated decision or an AI decision being made about them.

Just naming that there is AI involved, just giving individuals the information that they were subject to a decision by artificial intelligence, never equips them with the knowledge to fully understand how this decision came about, let alone to understand whether there are any mechanisms in place that systematically discriminate against certain user groups, because under GDPR, I can only learn more about my own decision and not about other people with similar characteristics. We see some improvement in the AI Act, but still, it's a long way to go to have explainability that really does something for people.

Besides having explainability, I think it's crucial that we also have easy mechanisms for end users, for customers, to intervene and notify organizations if they have the feeling that something is going in the wrong direction, because for businesses, but also for regulators and, lastly, customers, it's a learning curve. I think a closed feedback loop here, where people have the possibility to intervene, is something that would be highly beneficial. Just to give an example, I talked with a fairness practitioner once, sorry, not a fairness practitioner, but an academic.

He highlighted that this is one of the challenges for the LGBTQ community. For example, with Amazon Alexa and other voice systems, they oftentimes get misgendered but can't really notify anybody about that, or change it in the system, and so on and so forth. I think this is something that would be quite easy to put into place and consider in the process, and it would already be a big win for customers.

Reid: Yes, there's a general issue about means of redress. When the AI goes wrong for some stakeholder, say a customer, what are the means by which they can say, "Look, the AI did this," or, "The machine did this," or, "The software did this," or, "I got this kind of decision," or, "I was talked to in this kind of way, and I need to report that to the appropriate--" Having that kind of feedback mechanism is important. It's lacking, I've yet to see an organization that does it well.

Alexandra: [chuckles] Definitely another area where we need more resources and more research going on. One question I want to ask you before we have to close off: when we talked earlier before this recording today, you mentioned the title of one of your chapters, which sounded promising, but I'm still quite skeptical. Its title, if I remember correctly, is Writing an AI Ethics Statement That Actually Does Something. Whenever I think about AI ethics statements, AI ethics principles, it's just on such a high level that it oftentimes goes more in the direction of ethics washing than towards implementing AI ethics.

What's different about your chapter, and what is the content of that if you want to let us know already in advance?

Reid: Yes, sure. The first half of the chapter is tearing apart most ethics statements as they exist, because they slap some high-level words on there: fairness, and sustainable, and beneficent, and for the common good, whatever. It doesn't mean anything. It can't be operationalized because it's so high level, exactly as you said. There are other problems with it as well, which is, organizations throw everything they can into the statement. It's supposed to be an AI ethics statement, and then there's something in there about cybersecurity practices. That's great. Cybersecurity is great, but it's a different thing.

It's related, it's important, it should be integrated, but it's a different thing. Or model robustness. Yes, model robustness is great, but it's not primarily an ethical thing. There's a conflation of ethical values with non-ethical values, and that's one kind of issue. Let me get to the point. The point is that when I work with clients, I say, "Look," and this is what's reflected in the book, "You're not allowed to say that you value X without tying it to some guardrails."

I don't think that you value X unless there are certain things that are off-limits. If I say, I don't know, "I value my children's well-being," but this doesn't stop me from doing anything, including not taking them to school or not feeding them, then I can't really say that I value my children's welfare. At a minimum, valuing X, caring about X, means that there are certain things you just won't do. One way to articulate values in a way that begins to operationalize them, it's just the beginning, it's not the end by any means, is to say, "We value X, and so we will never Y, and so we will always Z."

I'll give you a simple example. Because we value people's privacy, we will never sell your data to a third party. Okay, that's meaningful. That's a guardrail.

Alexandra: I've heard that before, and then heard some reports.

Reid: Okay, look, no statement can make anyone comply with it, right?

Alexandra: Sure.

Reid: It's part of the puzzle or part of the overall effort at organizational change, which, as we said earlier, is a people problem.

Alexandra: Sure, and to your point, you said it's just a starting point, so, at the beginning.

Reid: Right. It's a starting point, but it starts it in a way that says, "Look, we're just not going to do certain kinds of things," or, "We're always going to do these things. We will never push out a model that has not been vetted for bias," for instance. If they fail to live up to the ethics statement, the problem is not the ethics statement. It's not because you wrote an overly vague ethics statement, it's because you lack integrity as an organization. That's a different kind of problem, one that's not going to be solved by any kind of statement. If the problem is that statements are too wishy-washy to mean anything or to direct action, then one solution is to make it the case that you're not allowed to articulate a value unless it's tied to particular guardrails: "We will always X," or, "We will never Y."

Alexandra: That makes sense.

Reid: That's step one. The nice thing about that is, whenever I work with clients, we always get that list. That always happens, but there are some things, "We will always X," or, "We will never Y," where there's disagreement on the team. "Maybe if this were the case, maybe if we're working with this kind of partner--" What's nice about that is you then take those statements about which there was disagreement, you put them to the side, finish your ethics statement, and then you are in a really good position to create what I call your organization's ethical case law.

You get the relevant senior leaders involved, we can talk more about that if you like. Now you go deep on the conversations in those places where there was disagreement, and say, "Okay, here's this thing. What are the cases we're thinking of where we would do X, or we wouldn't do X? Let's deliberate, let's decide, let's write it up, and make that now part of how we make decisions." For instance, take the Supreme Court of the United States, or take legal deliberations generally. Legal deliberations proceed by way of, one, thinking about the general principles or laws, and two, thinking about the previous cases, how those cases were decided in light of those laws, and the reasoning for those decisions.

Then investigating whether the current case is sufficiently analogous to warrant the same decision, or sufficiently disanalogous that it should warrant a different decision. That's how legal deliberation goes. If you've created an AI ethics statement that spells out your guardrails, and that process lends itself to identifying where there's disagreement among the team, which then leads to a process of settling those disagreements, which then becomes your ethical case law, then when you're faced with real-life cases, if you've got the organizational infrastructure to handle it, which hopefully you do, you've got a lot of resources for making decisions on tougher cases.

Alexandra: That makes sense. First step, coming up with this AI ethics statement and including what you're not going to do. Then we also talked about not having the data scientists be the only ones making these tough decisions, but having a broader group of people involved. You also mentioned senior leaders, you mentioned case law. How many steps, just out of curiosity, are there to take to be in a good position when it comes to your AI ethics? Can you give us a brief outline of all the points that need to be considered, just at a headline level?

Reid: Yes. One of the chapters of the book, I think it's called Conclusions Executives Should Come To. The way that I write the book is, first half: here's what's at issue, here's why it's a problem, let's go deeper on these issues, and you'll understand the many sources of the risks. Rather than just the headline-level advice of "this is bad, what do we do about it," you're not going to know what to do about it unless you actually have a deeper understanding of the problem. That's the first half. Then I think that once you really understand that stuff, I almost treat it as a kind of conclusion that people should draw about what they should do.

Like, "Of course, we should do X in light of this." For instance, once you see that you're not going to math your way out of the problem of biased algorithms, you're not going to just be able to take some quantitative metrics and say, "Let's just live up to these quantitative metrics across the board, and we're all good." That's not going to happen. Once you see that there are substantive and qualitative ethical and business decisions that need to be made when determining, or whether when designing a model to mitigate bias, you're going to see that you need, for instance, something like an Ethics Committee or a Risk Board staffed with the right kind of people to make that kind of decision for tough cases, or potentially for all cases.

What do you need? I could list six, seven things. I think you need something like an Ethics Committee. I think you need a senior leader to own it. I think that you need training and upskilling for your organization as a whole, to be able to flag or spot ethical risks, and then pass it along to the right people. For instance, an Ethics Committee. You need training for your data scientists and product owners and developers. They need a handbook.

Alexandra: And procurement officers.

Reid: And procurement officers. Maybe, for procurement officers, it could be really simple: this is machine learning, give it to them. That's the extent of the protocol: "I need the green light from that department," or maybe it's, "Give it to my colleague," or whatever. Crucially, I think financial incent-- This is obvious, I think, but it very rarely gets discussed. Certainly, by companies, it doesn't get discussed out loud. Making sure that financial incentives aren't misaligned. You think about a place like, rather infamously, Wells Fargo.

There was this massive scandal because the people who worked there were financially incentivized by the quantity of accounts opened, which incentivized them to create a bunch of phony accounts so they could hit their numbers. If you don't have financial incentives aligned with how people are doing with AI ethics, whether they're compliant with existing policies, then forget it. It's just not going to work.

Alexandra: Yes, and it's going to fail from the start.

Reid: Right, because your employees, fairly enough, they care about raises, and promotions, and bonuses. They're thinking about other things as well, but whatever you financially incentivize with raises, promotions, and bonuses, that's the activity that you're calling for. If every time you consider promotions, raises, and bonuses, you never take into consideration the extent to which people are compliant with, say, your AI ethics policies, then they're not going to take it seriously. On the other hand, if it's really part and parcel of that evaluation, they will take it seriously.

If you leave out ensuring that this is woven in, that people are accountable in both a carrot and a stick sense, then forget it. Forget the whole thing. You can throw as many tools at them as you want, it doesn't mean anything.

Alexandra: Makes sense. Would you say, in general, that putting up AI governance and processes to ensure AI ethics is just business as usual, like other procedures, where you just have to come to an understanding of what's peculiar about AI to make sure that it works for AI, or are there some elements that are truly new territory for organizations because they haven't encountered these aspects in their change management, risk mitigation, and governance processes before?

Reid: Yes. Again, it's going to vary by institution, by organization. Take a financial services institution. They have model risk management procedures in place, and so you can dovetail existing model risk management processes, procedures, and policies with newer, updated ones that accommodate the AI ethical risks, because you're just talking about a different kind of model. They need upskilling, they need training. They don't know much about explainability.

They don't know much about fairness or discriminatory AI, and they don't know much about bias mitigation techniques, and the potential ways that those bias mitigation techniques may actually run afoul of anti-discrimination law, which is ironic because their very point is to mitigate discrimination, but it's nonetheless [crosstalk] problematic. I think there's a fair amount of education needed there. Then other organizations don't have that kind of risk culture, and so they have to create it, marketing, for example. They don't look at their models with an eye towards risk in the way that financial services organizations do, which desperately need to be compliant with existing regulations.

It's just going to vary by organization, how much updating needs to be done.

Alexandra: And how much you already focused on risk in the past. Reid, this was a truly exciting conversation. There were so many other topics I would love to dive deeper into, but we're already over our hour. My last question for you, besides hoping that all of our listeners and all of the executives thinking about applying AI in the near future buy your book and make sure that they have actionable, workable AI ethics in place: what do you wish to see in the context of ethical AI in the next, let's say, three to five years?

Reid: Oh, that's an interesting question. Aside from, say, buying my book, right.

Alexandra: Exactly, but don't take five years to buy that. It's out tonight.

Reid: [laughs] Thank you. I think the biggest challenge, but the most important one, is that organizations acquire, or develop, a capacity for ethical risk deliberations about tough cases. Some call it an Ethics Committee, call it an IRB, call it a Risk Board. I don't really care what you call it, but there are always going to be tough cases. There are always going to be ethical, and legal, and business decisions to make around how to design, how to deploy, whether to procure this AI. Shoving it off to data scientists is unwise because they don't have the training, and it's unfair because they don't have the training.

The burden shouldn't be on them. You don't want to put your brand's reputation in your data scientists' hands like that. That just seems like a foolish move. I think both for the sake of brands and for the sake of society, the thing that I really want to see is a recognition of the ineliminable qualitative assessments that need to be done, the kinds of people that need to make them, and getting those people involved in the organizations and giving them sufficient power such that they can mitigate those risks.

Alexandra: That sounds like a good point, and potentially there are even some areas where we want to have it outsourced, not handled within the company but maybe by a regulatory body or certification body, to tackle the super-sensitive cases.

Reid: 100%, that's ideal.

Alexandra: Thank you so much for being on the show, Reid. It was my pleasure.

Reid: Yes, my pleasure. Thanks so much for having me.
