Episode 11

Hands-on ethical AI with Nancy Nemes, leader at Microsoft, founder of HumanAIze

Hosted by
Alexandra Ebert and Jeffrey Dobin
Nancy Nemes is a seasoned AI expert with 20 years of hands-on experience implementing frameworks as a leader at Microsoft and Google. She is the founder of HumanAIze, an organization that aims to make AI more inclusive. If you are looking for practical advice on how to implement responsible AI right now, then look no further: in this episode you will learn about all the necessary steps to perform ethical AI.
  • What are the most important ethical AI best practices?
  • What does the regulation aspect mean for the financial sector?
  • What kind of skills are necessary to adapt to this AI-driven environment?
  • How to identify the right metrics for understanding privacy and fairness when developing AI systems?
  • Which kind of privacy enhancing technologies should you implement?
  • Who are the experts that you should gather? 
Subscribe to the Data Democratization Podcast on Spotify, Apple Podcasts, or wherever you get your shows! Listen to the previous episode and read the transcript about data ethics best practices from Nicolas Passadelis, Head of Data Governance at Swisscom.


Jeffrey Dobin: Good morning and welcome to the Data Democratization Podcast. This is episode number 11. We’re going to take a deep dive into ethical AI with Nancy Nemes. I’m Jeffrey Dobin, privacy lawyer and tech expert at Duality. I’m here with Alexandra Ebert, Chief Trust Officer from Mostly AI. Alexandra, what’s up and good to be back, how are you?

Alexandra Ebert: I’m doing great. Happy to be back as well.

Jeffrey: Fantastic. Tell me a little bit about Nancy. What can our listeners expect from today’s conversation?

Alexandra: A lot. Nancy is a seasoned AI expert. She has 20 years of leadership experience within Microsoft as well as Google. She founded HumanAIze, which is an organization that aims to bring more diversity into the AI space, especially also more women. She’s a speaker and she’s an advocate for ethical AI, responsible AI diversity, and inclusion. Although ethical AI is a relatively new field, she has hands-on experience in implementing frameworks at different organizations so there’s lots she can share.

Jeffrey: Sounds like someone worth listening to, the type of person I’d want to work with. Let’s go ahead and meet Nancy.

Alexandra: Hi, Nancy. It’s really great to have you on our Data Democratization Podcast today. For those of our listeners who don’t know her, Nancy has a really impressive resume. She worked for Microsoft for years and years, and she also worked for Google. Can you maybe highlight some of the most important aspects of your career and how AI became an important part of it?

Nancy Nemes: Well, Alexandra, thank you and thank you everyone who is listening. It is a pleasure and an honor to talk to you today in this wonderful podcast. As you mentioned, I spent about 20 years at Microsoft and at Google working in mostly what we call emerging businesses. I really have a passion for new technologies. Already 20 years ago we worked with embedded systems.

It’s basically, more or less, what we call IoT today, the Internet of Things: connecting devices, smart homes. We also had smart devices such as wireless monitors that we created almost 20 years ago, 19 years ago, and brought to market well before anyone had even seen an iPad. Then, as we may remember, Microsoft had the Windows Phone. I worked in that division as well, and then I moved to Google and worked in the hardware division there.

I really like to pick new technologies, cutting edge, futuristic innovation, and then bring that to market. That’s what drove me and how I actually shaped my career over the years. I also wanted to work in the large parts of the business, so I did work in Windows and Office as a division just to learn the general technology and how the big companies really are thinking about these technologies, but I always preferred a bit of the new cutting-edge devices, software as well as hardware.

Sometimes, I took a bit of a different path when I joined Google, basically, to drive their hardware division and nobody was aware, “What is hardware? We didn’t know Google makes hardware.”

Alexandra: This was actually my question though.

Nancy: Everyone knows Google as an advertising and search company. Not many people knew back then, and this is already six, seven years ago, that Google also has devices such as the Pixel phone and Google Home, which is their voice assistant, Alexa, which are actually at the base of what we call artificial intelligence. The same goes for Siri, Cortana, and the other voice bots.

Alexandra: Was it Alexa from Amazon?

Nancy: What was it?

Alexandra: Wasn’t Alexa from Amazon, this voice assistant?

Nancy: Alexa was from Amazon and Google Home is from Google.

Alexandra: Google Home is just– They don’t have a name for it? Just–

Nancy: Yes. They call it voice assistant, basically. Your voice assistant. They don’t have a proper name for it like Alexa, Cortana, or even Siri. The device itself was called, or still is called, Google Home. They also have a Wi-Fi device. In a way, Google decided to have a family of hardware products that enable smart usage of technology at home, just as you can use your voice assistant and voice recognition on all your devices on the go. These are the base, and that’s how I came across it.

We call it artificial intelligence. It’s not new. For those of you who work in this space, this is not a new field; it has existed since the ’50s. That’s basically how it has become a lot more democratized today, under this interesting name that we happen to use at this point: artificial intelligence.

Alexandra: Maybe before we dive into the different topics we want to cover today, let’s start with clearing up a few things. How do you actually define artificial intelligence and also, how do you define responsible and ethical AI? I’m just asking because AI can mean so many different things for different people.

Nancy: Yes and you’ve just put it really well. It means different things to different people. I tend to be a generalist. That’s what I always wanted to do to have a general overview, to have an end-to-end overview. For me, at the highest level, artificial intelligence is intelligence in computers, if you will. It’s a device or a machine that can basically perceive its environment and basically take actions to achieve specific goals that we, humans, predefine. Really, these are human programmed machines or devices.

I think they are artificial, but not really intelligent. There is a long way to go to get to the intelligence aspect. Really, at the highest level, there are different definitions. Some people prefer to say it’s just machine learning, or just one specific terminology. For me, it’s an intelligent device that can sense its environment, respond with specific actions, and achieve specific goals that are predefined by humans.

Alexandra: Really more the narrow artificial intelligence definition, and not general-purpose artificial intelligence?

Nancy: Exactly. General artificial intelligence, which is basically also a very big hype word, or strong AI, if you will. It is labeled as artificial intelligence, but I think it will take many, many years to get to these futuristic visions of machines being as intelligent as the human brain. Not to forget, we only know about 1% of how the brain functions. I don’t know how the human brain can ever emulate its own intelligence. It will be a very hard undertaking.

Of course, there too you have different views, different academics who look at it from different points of view. Then there are the theories and the practical implementations; the more you are in the practical implementation, the more you see how difficult it is to achieve even general artificial intelligence.

Alexandra: Yes, absolutely. This is why I’m sometimes amused and also partially worried that many regulators and politicians are worrying about general artificial intelligence, the singularity, and AI taking over the world, but are not that aware of the problems we have with AI today, like bias in artificial intelligence. Maybe to touch on responsible and ethical AI again, I know this is such an important topic for you, and you’re also very active with HumanAIze and other initiatives. What is responsible and ethical AI for you?

Nancy: I would say ethical AI is just one part of Responsible AI. Again, here, I would define Responsible AI really as a general term. At the end of the day, the key question we need to ask in order to define these things is: who is accountable for the AI system that you create in your company, in your organization? The foundation for Responsible AI is, for me, an end-to-end governance framework, if you will. It should really focus on the risks and controls along the organizational AI journey, from top to bottom, from left to right.

Basically, it includes many different aspects. I have four or five that I like to include in the general term Responsible AI. I’ll start with what we call Comprehensive AI, which means you define testing and governance criteria that ensure the machines you build cannot be hacked easily. The machine learning we create should not be easy to hack. For me, that’s a comprehensive view of the responsible part of technology.

Second will be Explainable AI. Are you able to explain and to program your software and your hardware in a way that is clearly describable? You can tell us what is the purpose, what is the rationale, what is the decision-making processes, and what do you do in a way that really everyone can understand? That’s why we call it Explainable AI. Then we do have the ethical AI, which means basically, you have processes in place that seek out and really eliminate bias in machine learning.

There are so many examples of bias that are now quite famous, so that would be the ethical part of it. Finally, I also like to look at it from a business perspective: Efficient AI. How can you run your programs continually, respond quickly to changes in the operating environment, and make it all efficient? For me, it’s these four areas, but many other people have different ways of looking at it. Maybe I can give you some more examples.

Alexandra: Absolutely. Absolutely. Maybe since you mentioned that you were always at the forefront of new technologies, what made AI so particularly interesting for you? Also, what led you to focus on responsible AI for quite some time now?

Nancy: It’s precisely this aspect of really being able to know what to do and to explain it. As machines develop intelligence, it will be really important to have specific principles in place in terms of fairness. For me, it was really important to be able to articulate principles of fairness, transparency, explainability, and human-centeredness. This is why I started this initiative.

Initially, our initiative was called Ms. AI because I wanted to really drive home the importance of diversity in order to avoid bias, but also the importance of the human-centric approach. That made me really look into this and connect the dots. We can’t rely only on the engineers, the software and hardware engineers who are creating these technologies; we have to have all the other experts at the table.

That’s what I did about three or four years ago with Ms. AI and HumanAIze: trying to establish a comprehensive pool of brains from different organizations and different disciplines, and to interconnect them in a way that allows us to create safe technology and democratize it for everyone around the planet. As we know, there is a huge digital divide between east and west, between north and south. Connecting those dots, to your point, Alexandra, is what should also be on the agenda of policymakers and politicians.

Alexandra: Yes, I 100% agree that bringing more diversity and fairness and democratization to the topic of AI is definitely needed and something that we all should pursue. Maybe coming back to both Ethical AI and also Explainable AI, do you have any real-world best practices that you can share with our listeners?

Nancy: Yes. Yes. Let me start with one that I really love because it’s my current work. I do work for Microsoft US right now. There is a team in the United States at Microsoft that– They have many teams. I’ll tell you a bit more about all these teams. One particular team is called AI and Sustainability. It’s a team that basically creates markets in this space and tries to drive insights, societal impact, workforce transformation, economic impact, specifically after the pandemic now or as a result of the pandemic.

What we do there is look at all the relevant industries, such as banking, healthcare and pharma, energy, manufacturing, or even retail, and at the specific issues in each. To give you a very specific example, in the banking sector we have a lot of issues with fraud and bias. One of the initiatives we launched at the end of October is a consortium of partners that we call the National Council for Artificial Intelligence.

We invited some of the most iconic financial companies in the United States, such as Mastercard, Visa, and Citi, but also NASDAQ. We invited very famous universities, such as the State University of New York and the City University of New York, but also the Brookings Institution, which is a very important think tank in the United States with a lot of work in these spaces.

We invited accelerators and startup accelerators, and also a number of system integrators. Together, they identified a couple of issues, such as: what are some of the standards and assessments we need to put in place in order to address the fraud issue? How does that look in this highly regulated industry? For a company like NASDAQ or a company like Mastercard, where one is a consumer company and the other is a B2B company, how does it look in these environments?

How can we come up with solutions using AI technologies to address that problem? How should the algorithms be explainable? That’s a very specific Responsible AI project we’re working on right now, very successfully, and it creates solutions together with these partners. Microsoft also has a number of other initiatives in the United States, which its subsidiaries worldwide emulate. There’s an Office of Responsible AI, whose acronym is ORA.

They have AI, Ethics, and Effects in Engineering and Research, a committee they formed to look specifically at the ethics aspect of the engineering work. Then they also have another initiative called Responsible AI Strategy in Engineering. Those are just a few of their initiatives. What I’ve described so far is more of a leadership, a management way of looking at Responsible AI, but let’s look at the engineering piece; I’d love to give you an example, if I may.

Alexandra: I would be curious to hear that.

Nancy: That was more of a leadership aspect; now let’s look at the engineering piece, at Google. Back in 2018, Google introduced their AI principles, which guide the ethical development and use of AI in their research and products. It’s very important for them to articulate their specific goals, which are around privacy, security, fairness, and interpretability.

Then I think there’s also accountability. Look at TensorFlow: it’s one of the most popular machine learning frameworks in the world, used almost everywhere. They have incorporated a Responsible AI toolkit into the TensorFlow ecosystem so that developers everywhere can embed these principles in their machine learning development.

For example, developers can ask specific questions that are part of this Responsible AI consideration. They have identified very specific steps and, for every single step, what it means to have Responsible AI. Let’s say you’re starting to define the problem within your software development process: when is AI actually a valuable solution? What problem is it addressing?

As you define that, how do you make sure that the different users of your software really understand what your programs do? We talk about algorithms, but at the end of the day these are computer programs. For example, if you’re building a medical model to screen individuals for disease, even COVID tests, the model may learn very effectively yet work differently for adults versus children, women versus men, or old versus young.

When that model fails and all of a sudden gives you data points for men when your patient is actually a woman, that’s a big problem. It may have pretty serious repercussions that both the doctors and the users need to know about. That’s one of the key aspects TensorFlow allows you to look into today, in very specific areas, and that’s just the initial step of defining the problem and having the right questions in place.
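The per-subgroup check described here can be sketched in a few lines of plain Python. Everything in this sketch, the toy predictions, the group labels, and the helper names, is illustrative and not part of TensorFlow:

```python
# Hypothetical illustration: checking whether a screening model performs
# differently across patient subgroups. Predictions, labels, and groups
# are made up for the sketch.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy separately for each subgroup (e.g. men vs. women)."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = accuracy([predictions[i] for i in idx],
                                [labels[i] for i in idx])
    return per_group

# Toy run: the model is perfect for "men" but only right a third of the
# time for "women" -- exactly the gap that must surface before deployment.
preds  = [1, 0, 1, 1, 0, 0]
truth  = [1, 0, 1, 0, 1, 0]
groups = ["men", "men", "men", "women", "women", "women"]
print(accuracy_by_group(preds, truth, groups))
```

At scale, tooling such as TensorFlow Model Analysis computes these sliced metrics automatically, but the core idea is exactly this group-by-group comparison.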

Alexandra: Yes. If I understand you correctly, it’s important that you not only look at it on a strategic level, but also on the practical and engineering side, at how to actually implement it, and basically that everybody is involved in these discussions.

Nancy: Correct.

Alexandra: In your role as an AI consultant, you have the pleasure of working with different industries. Do you see different approaches to ethical and Responsible AI across industries, or is everybody currently at roughly the same maturity level in being more responsible in their AI endeavors?

Nancy: That’s an excellent question. Unfortunately, no, they’re not at a similar level. We see very big discrepancies, first of all by industry, as you said. The closer the industry is to high-tech, let’s say if you’re an e-commerce platform, the more advanced it is than a bank or a government would be. But it’s also regional. Clearly, the United States is leading. Europe is, as always, a little bit behind. China is doing a lot of great things.

You now see a lot of focus on AI in the Middle East and Africa as well. Maturity levels differ strongly even within a region; as some famous examples show, you are more agile in Austria and in the Baltic countries than you are in Germany. It’s really interesting to see the ecosystem. A company like Mostly AI, the company you work for, is an amazing example of that, starting from a smaller initiative.

Now you guys are becoming a significant force. We can see that in the smaller countries; they are more agile than the larger countries. But also within industries: between regulated and non-regulated industries there is a big discrepancy as well.

Alexandra: I can imagine. We see it too; we work a lot with financial institutions, and due to all the regulations they face, and of course the importance of customer trust, they really have a drive to do AI responsibly. We now get many more requests for synthetic data for the purpose of explaining AI systems and mitigating bias. Especially with the new AI regulation proposed at the European Union level, which takes a risk-based approach and adds obligations for the financial sector, I think this topic is really growing in importance.

Maybe coming to regulations, I’m sure you have taken a look at the AI proposal in the European Union. What’s your take on that? Is this a step in the right direction? Is there anything you’re missing in this piece of legislation?

Nancy: I think it’s the right step to ask the questions. I think in Europe, we need to really find a balance between driving innovation and taking risks, which is not one of our strengths. Really, it’s very hard. We need very smart politics. It’s not easy to create smart regulation and smart politics especially in a region that traditionally carries a lot of historic baggage.

It is a step in the right direction, but it’s a very fine line, even with GDPR, where regulation might hinder innovation simply because there is too much of it. It’s great to see the progress. It took quite a bit of time until this April, when the proposal was published. It demonstrates that we need these proactive efforts. We really need not only to think about these questions but also to regulate the way we work now.

We also should have measurements. How do we measure success? How do we know that what the EU is doing will have an impact? It’s really important to adjust and to really connect.

I think for the European Union, it’s crucial to have representatives from companies, but not just from the lobbying side; also from the technical part, from the engineering part. I think they have a good mix of experts at the table.

There are also voices that are criticizing the way the EU is doing it because sometimes you can see it’s basically largely focused on big companies. It’s important to have that diversity of having everyone at the table. It’s becoming hard to manage something like that. Having rigorous processes around how we create these regulations and the diversity of decision-makers in those regulations that can bring those multiple perspectives is really important.

The other important thing the EU is doing is bringing a cross-cultural perspective, because we have people from all the different countries here in the EU. You can ground your work in this human-centered design. I’m very proud to see that this actually originated in Europe; the whole discussion around ethical AI really started here. The United States is focusing on the tech aspects, on innovation, on risk-taking, on capital, while China, of course, is focusing a bit more on its own view of the world, a bit more on controlling society through technology.

It’s great to see, but it’s also really important to operationalize this and turn it into best practices. What I’m missing in what the EU is doing is real best practices from around the world, which is not easy, because companies cannot yet share much of their work; they’re in the middle of it. Building products with ethics in mind is absolutely the right step for Europe to take.

Alexandra: I think the intention is definitely great. One aspect that has been criticized is that most of the assessments will be self-assessments, and that, for example, no public AI audits or anything similar are planned. Do you think it’s sufficient to leave this to the companies themselves and trust that they will do the ethically correct thing, or would it be desirable to have a public body also doing AI audits, certification, or something like that?

Nancy: Absolutely, definitely. I think we need a multi-faceted and basically fundamentally socio-technical view of this. You have to have technical and societal elements in it. It’s really hard. It’s becoming hard to manage because you have so many countries represented in the EU. In each of them, you have so many committees, and then having the public voice. I think there is a public voice. I think if you want to contribute, there was this huge effort done by, what’s her name? I forget the name of the lady that’s running this in Brussels. She did a brilliant job of basically bringing public voices into this.

I think they moved away from that a little bit because it’s very hard to manage. For me, what is important is to have these multiple views. It’s not just from a policy societal perspective, but they really need to look at it from technical, from a product perspective, from a policy perspective, from a processes perspective, and the cultural factors. If you don’t have the cultural factors as part of the discussion, you will always have bias.

Alexandra: Absolutely, because so many different cultures are affected by artificial intelligence, and currently it’s created mainly by one or two cultures, the United States and China. One other question I wanted to ask concerns being forced to do the correct thing versus being intrinsically interested in doing ethical and responsible AI.

What’s the driving factor for all these brands you mentioned that you’re consulting and that are collaborating on this AI strategy, like the Microsoft project with Mastercard and Visa? Is it profit, in that they believe doing the right thing will also translate to the bottom line? Is it their moral obligations and their values? What’s the driving factor for them to engage in responsible and ethical AI projects?

Nancy: That’s a beautiful question. I really believe it’s multiple aspects. Of course, there is a profit aspect. All of the companies at the end of the day will need to sell their products. They know that in today’s transparent society, which is more transparent than ever in the past, the consumer has access to information. I think the newer generations that work in these companies also have a much more critical view on what the end-users and democratization needs are.

I think, in a way, the new generations coming into the workforce will bring even more attention to democratization in general. I also believe the driver is that companies really trust and believe in their values. If, as a company, you are not clear about who your employees are, if you don’t really understand what makes them tick, why they are working for your company, and how you can make them happy, you will not be successful. Everyone today understands that the success of a company, that profit, is driven by people.

Nancy: I think people come with values, and companies create their own corporate values. It’s a very big factor that they want to drive thought leadership and really impact local communities. And they can; they also have a lot of money. Most of these companies have so much money, call it profit, and they want to contribute to society. I think it’s really a combination. It’s great to see how far we’ve moved since industrialization; if you look at the conditions of workers just 100 years ago, they were horrible, right?

Alexandra: Sure.

Nancy: Today, look how we work. Now even more companies are embracing this whole working from home style. That was not something that you would see happen before the pandemic. I think really, again, it’s a combination of cultural values, corporate values, of course, the need to drive profit, and to bring the company forward but also an important aspect for them is to be supporters of the local communities.

Alexandra: Very good points. I definitely see that it helps with talent retention if you have positive company values, really uphold them, and your employees can be proud of what they’re working on. That’s super important in today’s times, when everyone is fighting for talent. Our listeners always love practical examples and actionable advice. Do you have any scenarios you can share? You mentioned Explainable AI, for example; how do you actually achieve that, or anything else in designing AI systems? Any practical examples for our listeners on how to approach this?

Nancy: Sure. Let me think. We were talking a little bit about the importance of data and a responsible view in medicine and diagnostics. Another example builds on the TensorFlow model I mentioned, as you build and train a model, just to make it a bit more palpable. One of the steps you take when you develop your product: there is a time when you have collected and prepared the data, hopefully with good insights, and then you start building and training that model.

You would train your TensorFlow model, which is one of the most complex pieces of development. But how do you train it in a way that performs optimally? How do you train it in a way that everyone can understand, while still preserving user privacy, which is one of the key aspects today? I know you guys also work in that space. TensorFlow provides a federated learning pipeline for this.

That’s a newer aspect of machine learning that enables many different devices to jointly train machine learning models while keeping the data local. Keeping the data locally provides benefits around privacy and helps protect against the risks of centralized data collection, such as theft or large-scale misuse. One of the key aspects here is how you evaluate performance once you’ve created the model, trained it initially, and begun the iteration process.
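The federated idea, train locally and share only model updates, can be illustrated with a tiny plain-Python sketch. This is not the TensorFlow Federated API; all function names and data here are made up for illustration:

```python
# Minimal federated-averaging sketch: each "device" computes a model
# update on its own local data, and only the updates -- never the raw
# data -- are sent to the server, which averages them.

def local_update(weights, local_data, lr=0.1):
    """One gradient step of a 1-parameter model y = w * x on local data."""
    w = weights
    # Mean-squared-error gradient: d/dw of mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(weights, devices):
    """Each device trains locally; the server averages the results."""
    updated = [local_update(weights, data) for data in devices]
    return sum(updated) / len(updated)  # raw data never leaves a device

# Two devices, each holding private samples of the same line y = 2x.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 3))  # converges toward the true slope 2.0
```

The real pipeline averages full weight tensors from many devices and adds secure aggregation, but the privacy argument is the same: the server only ever sees aggregated updates.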

Very often, the first version of a model does not perform the way you intended. Developers then have to look again, and it’s important to have easy-to-use tools. One of the most practical examples is how you use your tools to identify the right metrics and the right approaches, especially for understanding privacy, but also fairness. Privacy tests, for example, which are part of the TensorFlow Privacy library, enable developers to interrogate their model, to ask questions and identify specific instances where data points, information that has been memorized, might need to be further analyzed as part of the developer’s work.

These are the considerations around how you train models to be really private. Another key aspect, I would say, is fairness indicators, which are built on top of TensorFlow and allow the evaluation of specific metrics across outcomes.
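The intuition behind such a privacy test can be sketched as a confidence-gap probe: a model that is much more confident on its training records than on unseen ones has likely memorized individual data points. This toy is illustrative only and is not the TensorFlow Privacy API:

```python
# Illustrative membership-inference-style probe. The "model" is a toy
# function returning a confidence score for each record.

def confidence_gap(model_confidence, train_points, holdout_points):
    """Average confidence on training data minus average on holdout data.
    A large gap is a warning sign that records were memorized."""
    train_avg = sum(model_confidence(p) for p in train_points) / len(train_points)
    hold_avg = sum(model_confidence(p) for p in holdout_points) / len(holdout_points)
    return train_avg - hold_avg

# Toy "overfit" model: near-certain on records it trained on, unsure otherwise.
seen = {"rec1", "rec2", "rec3"}
conf = lambda p: 0.99 if p in seen else 0.55

gap = confidence_gap(conf, ["rec1", "rec2", "rec3"], ["rec4", "rec5"])
print(round(gap, 2))  # a large gap: records likely memorized
```

Real membership-inference tests work on the model's loss or logit distributions rather than a lookup, but the signal being measured is this same train/holdout gap.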

Alexandra: Yes, that sounds super interesting.

Nancy: Yes, and I know people like to hear these examples about training: there were so many examples involving different skin colors or dialects, or the famous HR example where your CV gets thrown out because you’re a woman. But I just wanted to give you some of the engineering specifics to show how complex this is. It’s easy to talk about it, but it’s not easy to do it.

Alexandra: That’s very true. Coming back to organizations: many organizations are just starting out with artificial intelligence. If somebody is super new to this and wants to make sure they’re approaching AI responsibly, what would be your top five first steps for these organizations? Is it initializing an AI committee within the company? What are the best practices you know from your work with corporates?

Nancy: Yes, this is an interesting question, and you just nailed it. Having a committee that looks at the different aspects of the technology is important; it connects the dots. It will be a forcing function that gets people to work outside their silos. It’s really important for all the departments to have a representative at the table. For example, the committee we created for Microsoft in the United States will not only have data scientists and AI people; it will also have ethicists, professors, and heads of research.

It’s really important to also have research represented, by the way, if you’re a larger company. That’s one aspect. The other aspect: yes, have a committee, have people. There will be a lot of new jobs created in the years to come. The data scientist is just one example; there will be many new jobs, just as we didn’t have a social media manager in the past. We will see new jobs created that we don’t even know about today.

I would say the first step for now is to create a toolkit of responsible AI. Have your committee, have them create a toolkit that should address the most pressing questions: what is your governance framework, and what kind of interoperability, interpretability, and explainability do you want to have? Very important, and a huge topic: how to make your algorithm simply explainable. One committee should work on bias and fairness, maybe another committee should work on security and robustness, maybe another committee should work on ethics and regulations. Have different committees, but it's very important that these committees are not working in silos.

We assign a lead, for example, to each committee who is responsible for connecting the dots across all the other committees, and then an overall lead, if you will, connecting across all the committees as such and making sure that everyone talks to everyone, or at least that they're aware. People are so excited, we see a lot of excitement, even from the academic side. For example, if you're a professor of computer science or AI and you're preparing the next generation, companies have an interest to really put their issues on the table so that academia knows what the pressing questions in the industry are and how to train the students so that they become a really excellent workforce in the future. Having these aspects is really important. The last thing, which I love to say, is that in medicine we have the Hippocratic Oath. I think it's important to have a Hippocratic Oath for artificial intelligence.

Alexandra: I haven’t heard about that before.

Nancy: Coming from the medical field, I was just thinking, why don't we have this in our space? It's so important. What is the oath doing in medicine? It's basically about ethics. It was initially for doctors, for physicians, and yes, it's important from a healing perspective and points to specific ethical standards. But the oath is not only for medicine in the western world: principles of confidentiality, of non-maleficence, all of these things that are in the Hippocratic Oath are very relevant to our space.

Alexandra: Yes, definitely. I think it's also a positive sign that we see more and more universities including ethics classes in their computer science studies and courses, but oftentimes this is criticized as being too far away from practice, so people still have the feeling that they can't apply it in practice. For example, one of my last guests, the head of data governance at Swisscom, shared that his engineers are actually super happy about the data ethics framework that they have in place because it enables them to upskill themselves.

They not only hone their development skills but also gain a more holistic perspective on whether what is done with data is ethically correct, and so on and so forth. That is intrinsically motivating for them, but also a valuable business skill to have in today's times. I think more educational work here is definitely beneficial to prepare the workforce for the new challenges of artificial intelligence. Maybe now that we touched upon humans and skills, what would you say are the most important skills that we as a society should obtain to be prepared for what is yet to come in the AI field?

Nancy: Yes. I know that many people say it's about the soft skills. When you look at the AI space and how we prepare the workforce that will be working with this tomorrow, oftentimes they are worried machines will take over the mathematical or analytical pieces, so humans should focus more on creativity and innovation. I think that's important, but I believe that one of the most important skills for every generation, and in particular the new generation coming into the workforce, is analytical thinking.

I think that's really an important aspect of combining the right elements, because we will need people who understand data, people who can derive insights. That's what humans can do: deriving insights, bubbling up information. There are these crazy amounts of information that are very hard to keep track of. Analytical thinking, I think, is a paramount skill.

Then, of course, what I believe is really important as part of the soft skills is resilience, because in this space you need to be very patient. You need a lot of resilience to work in this space and to come up with new innovative ideas. That means trying multiple times, failing, learning from the mistakes, not being afraid of the mistakes. Really crucial for me is creativity, which basically comes out of many, many different trials. To have that, you really need to be able to analyze the world around you and bubble up to an executive view the information that should inform your own decision-making.

Alexandra: That's a good recommendation, I think, for everybody wondering what skills to obtain to be fit for the future and the new AI work environment. At the end of our podcast episodes we usually have a this-or-that game. Just say whatever comes to your mind, don't think about it too long. Are you ready for our game?

Nancy: Yes.

Alexandra: Wonderful Nancy. First question, Microsoft Office or Google Docs?

Nancy: Microsoft Office.

Alexandra: Why is that? Because you’ve been with Microsoft longer?

Nancy: Because I think in terms of business tools they're better. They serve me better, and I believe I do PowerPoint more easily and Excel more quickly than what I can do with the Google tools. In the business world, the collaboration aspect is excellent in Google Docs, and Microsoft has collaboration features as well, but if you create content in PowerPoint, Excel, or Word, I think Microsoft is unbeatable.

Alexandra: Understood. Europe or United States?

Nancy: Both.

Alexandra: As somebody who travels a lot, you wouldn’t want to decide?

Nancy: Yes, I guess United States for your professional development always especially in this space. Europe for your heart and cultural development.

Alexandra: Beautiful words. AI for profit or AI for society.

Nancy: Both. You cannot do anything for society without deriving some profit. You have to feed society, and how do you feed it without some real profit?

Alexandra: It's really about reconciling making a profit with doing it in the right, ethically correct way.

Nancy: Exactly.

Alexandra: Perfect. Wonderful, Nancy. Thank you so much for everything you shared. Before we end, any final remarks, any final comments to our listeners, anything you want to share with them for their journeys on making AI more responsible?

Nancy: Thank you so much. This was absolutely a fantastic talk, and it's so important that we do this. I would recommend everyone to ask a lot of questions, go out and get a lot of information. I do believe it's critical to anticipate problems and to future-proof your thinking and your systems so that you can realize the full potential of AI. Anticipating problems is a huge, huge advantage, I believe, for the future. My last word would be: everyone is responsible for driving this. It's not just the engineers, it's not just the ethicists, if you hire some in your company, and it's not only the CEO. It's really everyone: board members, CEOs, business unit heads, AI specialists. All of these people are responsible for making this world a beautiful world for everyone.

Alexandra: Wonderful last words. I think it definitely needs to be very inclusive, this discussion that we are having. Thank you so much, Nancy, for your time. It was a pleasure.

Nancy: Thank you so much, Alexandra. My pleasure.

Jeffrey: Wow. Lots of actionable advice for those looking to approach responsible AI at the moment. Let’s pull together the most important takeaways for our listeners.

Alexandra: Sure. Let's start with Nancy's definition of responsible and ethical AI. The foundation of responsible AI should be end-to-end governance, and it should focus on the risks and controls along the organizational AI journey. Responsible AI is made up of four different fields. First, comprehensive AI that defines the testing and governance criteria. Second, explainable AI that explains the purpose and the decision-making process of the AI system. Third, ethical AI that seeks to eliminate bias. And fourth, efficient AI that takes the perspective of the business, making sure that the AI system can respond to changes quickly.

Jeffrey: That's excellent. Nancy believes that the new AI regulations are a step in the right direction. Asking questions about responsible AI is very important, but Europe needs to find the right balance between AI innovation and regulation. How the success of these AI regulations is measured definitely needs to be defined.

Alexandra: Absolutely. In the European Union, we can have a cross-cultural perspective between different countries. This is especially valuable for creating a human-centric design of AI systems. Without the cultural aspect, we’ll always have bias. True responsible AI best practices however are still missing from the AI regulation proposal.

Jeffrey: The driving factor for most brands to create responsible AI is multi-faceted. Business reasons are important, as are social factors, and the younger generation in our workforce demands values which reflect their own. These must be incorporated into the company values of their employers. It's one thing to say you value something, but it's another thing when you actually demonstrate it through actions.

Alexandra: It for sure is. We also received some practical tips from Nancy for designing AI systems. For example, using privacy-enhancing technologies like federated learning can help with protecting privacy during the AI training process.
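The core idea behind federated learning is federated averaging: each client trains on its own data locally, and only model weights, not raw data, are sent back and combined. Here is a minimal illustrative sketch of that aggregation step; the function name and toy numbers are our own and not from any specific library:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average the clients' weight vectors,
    weighted by how many local training samples each client has.
    Raw training data never leaves the clients; only weights do."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients trained locally; client 2 has 3x more data, so it
# contributes more to the new global model.
global_weights = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(global_weights)  # [2.5, 3.5]
```

A real federated system repeats this round many times, redistributing the global weights to the clients between rounds, and often adds secure aggregation or differential privacy on top.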

Jeffrey: Yes, and it’s crucial to identify the right metrics for understanding privacy and fairness when you’re developing these AI systems.

Alexandra: Absolutely. To approach AI responsibly, Nancy also recommended having representatives of all departments at the table, for example in an AI committee within your company that looks at the different aspects of the technology. Include ethicists, academics, and research leaders in these committees, and basically create a toolkit of responsible AI to answer the most pressing questions.

What is your governance framework? How do you create interoperability, interpretability, and also explainability? Aside from this general committee, there should also be some specialized committees. One should work on bias and fairness, another on security and robustness, and a third potentially on ethics and regulations. Nancy also emphasized that it's really important that these committees do not work in silos and that you assign a lead to each group who is responsible for connecting the dots.

Jeffrey: Totally agree. We should have an ethical oath like the Hippocratic Oath in medicine, but for artificial intelligence, to hold ourselves to higher ethical standards.

Alexandra: Yes, I particularly like that suggestion of Nancy's. One other thing we talked about was preparing the workforce for this new era of AI and responsible AI. Nancy shared that she thinks analytical thinking skills will be very, very important, as well as soft skills such as resilience and patience, and a mindset that allows you to try and fail, because this will be part of the work that you're going to do in the field of AI. This is really necessary, and then also creativity.

Jeffrey: 100%. Finally, the most important thing she says is to ask a ton of questions. Anticipate problems by being proactive, and, of course, future-proof your systems.

Alexandra: That was also valuable advice, absolutely. Thank you, everyone who listened in today. If you have any questions or comments about responsible AI, or even some suggestions on how you approach it in your organization, please send us an email or a voice recording to podcast@mostly.ai. See you next time.

Jeffrey: Adios.

The Data Democratization Podcast was hosted by Alexandra Ebert and Jeffrey Dobin. It’s produced, edited, and engineered by Agnes Fekete and sponsored by MOSTLY AI, the world’s leading synthetic data company.
