Episode 20

Ethically aligned design for AI with IEEE's Dr. Clara Neppel

Hosted by
Alexandra Ebert
In the 20th episode of the Data Democratization Podcast, Alexandra Ebert, MOSTLY AI's Chief Trust Officer, talks to Dr. Clara Neppel, Senior Director of IEEE. IEEE is the world's largest technical organization, helping companies develop industry standards and safe practices for the latest technologies. Clara will share her knowledge and ideas around her latest project, developing ethically aligned design principles for AI systems. Listen to the episode if you want to learn more about:
  • what is the difference between standards and certifications,
  • what role regulators should or shouldn't play in certifying systems and setting up standards,
  • how ethically aligned AI systems should be designed,
  • what is the role of values in AI systems, and how should we handle them,
  • why synthetic data is a must-have for dealing with sensitive data,
  • how providing safe, high-quality synthetic data is a public service that forward-looking cities like the City of Vienna provide.
Subscribe to the Data Democratization Podcast on Spotify, Apple Podcasts, or wherever you get your shows. Listen to the previous episode and read the transcript about Implementing data privacy using synthetic data with MOSTLY AI's co-founder, Klaudius Kalcher!

Transcript

Alexandra: Hello and welcome to the already 20th episode of the Data Democratization Podcast. I'm your host, Alexandra Ebert, MOSTLY AI's chief trust officer. Today, I have another wonderful guest on the show, Dr. Clara Neppel. She is the senior director at IEEE, the world's largest professional tech organization that's dedicated to advancing technology with, for example, publications, conferences, and very importantly, technological standards. Their most famous one is arguably the Wi-Fi standard. Even if you haven't heard of IEEE so far, you definitely know that one and use it probably every day.

But enough about the internet and back to Clara. I know Clara as someone who is deeply passionate about human-centric innovation and ensuring that modern technology positively impacts us as a society. Before joining IEEE, Clara spent many years with the European Patent Office, where she was involved in various aspects relating to innovation, intellectual property, and public policy. She not only holds a Master's in intellectual property law but also a PhD in computer science, where she focused on artificial intelligence and big data.

In today's episode, we will cover so many interesting developments at the intersection of technology and humanity that I really had a hard time summarizing it. First and foremost, we spoke about the IEEE 7000 standard on ethically aligned design, which was just officially launched two days ago here in Vienna. What is IEEE 7000, you might ask? It's the world's first normative standard for value-based engineering. To put it very simply, it's a standard on how to build AI and other technological systems that are ethical, for example, by defining processes for how engineers can translate stakeholder values and other ethical considerations into system requirements and design practices.

Besides talking about IEEE 7000, we'll also cover the implications of the metaverse, why designing technology that's age-appropriate for children is so important, and what role synthetic data plays in the digital society. We'll also talk about the standardization of synthetic data and why that might be useful. Besides that, we'll speak about digital humanism and even the Vatican and what it's up to when it comes to artificial intelligence. I hope you will find this conversation as delightful and insightful as I did. Let's dive in. Welcome, Clara. It's so nice to have you on the show. I was really looking forward to having this conversation with you today.

 

Alexandra: Before we get started, can you introduce yourself and maybe also the IEEE to our listeners?

Clara: Hello, Alexandra. The pleasure is on my side. I'm really looking forward to this conversation. My name is Clara Neppel. I'm the senior director of IEEE. IEEE is the world's largest technical association. We have more than 400,000 members worldwide, basically in every country of the world. We also have a rich history, basically founded by Tesla and Edison more than 100 years ago. That's where some of the Es come from: Institute of Electrical and Electronics Engineers. Well, since then, as you can imagine, this evolved, and we now have new technical areas, which range from medical technology to AI, blockchain, and quantum.

Alexandra: Amazing. How did you end up where you are now? Also, what motivated you throughout your journey?

Clara: Well, I'm a computer scientist. I did my PhD in big data, if you want, and also on AI. Already at that time, some years ago, I was interested in what kind of knowledge we can actually get from artificial intelligence. Are there insights, is there something that we can deduce from the insights that artificial intelligence gives us? I think this is still a challenge. We, of course, have artificial intelligence which will predict with a lot of precision, but still, the question is, what is the knowledge that we get from the system?

I still see AI as generating hypotheses that we humans can then follow up on, just as in the famous Go game, where whole new schools of Go have emerged after that famous match. I think it's the same when we are looking at drug discovery: AI is giving us all these wonderful possibilities, pointing to places where humans didn't look until now, which can of course open up a lot of new opportunities. Following up on this interest, I worked for a long time at the European Patent Office. I was always interested in how technology impacts society, so patents and open source and standards and climate change, and so on.

With IEEE, I think this brought me to another dimension of how I can engage with society on technology topics, through our Ethically Aligned Design initiative. Basically, that was the starting point: a conference, I think five or six years ago, that I attended with IEEE on this topic, where I said, "Well, this is something I would like to do as well." It was a wonderful opportunity that IEEE opened its European office in Vienna. That was also the starting point of me leading this office and being the bridge, if you want, between the technical community and different stakeholder groups, including industry and government.

Alexandra: Sounds like an amazing journey. I'm always inspired by all these great panels that you are invited to. Just recently, you spoke at the Vatican about artificial intelligence and how different religious beliefs and their common values could find their way into AI systems. Really amazing topics that you're engaging with. Of course, due to IEEE's closeness to industry, you also see a lot of AI in the wild through your job. Can you share with our listeners what you think are the biggest challenges you notice when companies are using or planning to use AI?

Clara: Well, of course, we have all these wonderful opportunities. I think we should absolutely use this opportunity to, speaking of the Vatican, also see how we can bring humanity, if you want, to the next level. Also, the tagline of IEEE is advancing technology for the benefit of humanity. I think that is the challenge since, well, we both know as technologists that technology is not neutral. Technology reflects the business models and the values of those who develop it: the developers themselves and, of course, the organizations that they are in.

When we are optimizing this technology, when we are developing it, I think the big challenge is really to take into account the impact it has. Technologists, including myself, were not trained for this. I think that we need new instruments, new ways of assessing these impacts, and also ways of taking these impacts into account from the design phase to the deployment phase.

Alexandra: That absolutely makes sense. Also, these aspects should already be covered during university education and not left as an optional side subject, because the impact these systems have is just too important. Before we come to this in more detail, and of course I'm also curious to talk with you about the work on ethically aligned design, first I want to touch upon regulation, because there's so much happening at the moment. What's your take on all these emerging AI regulations? Are they going in the right direction? Is something missing? What's your perspective?

Clara: I think the EU did a wonderful job. I think it is really important that we have this proposal on the table, because now we already have the possibility to discuss it, which would probably have taken some more time otherwise. I find two things really remarkable about this proposal. Specifically, there is the redefinition of risk, because the EU has taken a risk-based approach, basically building on existing regulation as regards product safety.

I find it interesting and very forward-looking that they redefined risk from taking into account only safety and security to also include new risk factors, including fundamental rights, and even going so far as to say that some of these applications will not be a possibility in the European Union. I think that is important for creating the trust in the technology that we all need in order to deploy it. I think there are also new things which might still need to be taken into account. I was recently discussing AI and climate change.

If we are talking about climate change, of course, we have impacts which go beyond the impact on the individual, which transcend the individual impact. This is also true for social matters. If we have an AI that is profiling, that is manipulating, then this of course has an impact on the individual, but it also impacts us as a society. I think that this is maybe something which also needs to be taken into account.

There need to be means of redress when you are not impacted immediately as an individual, but we are impacted as a society. There are analogous instruments, such as the environmental agencies, where you have different possibilities to look into, for instance, transparency matters, even if you're not impacted directly. I think that this is something which we have to look into, both as regards social as well as environmental impact.

Alexandra: Yes, absolutely makes sense. If we're talking about the impact on a larger societal level, are there some aspects that concern you the most, or that you think we really should focus on as a society, to address them and also ensure that AI is moving in the right direction? Not only regarding the impact it's having on individuals, but also how it influences how we behave as a society?

Clara: Well, definitely the profiling mechanisms, and also, with the metaverse being created right now, I think there are really very important things that we have to look into. Starting with children: children are one-third of internet users. I think that the way they are building up their worldview is happening through different lenses. Some of these lenses are socio-cultural, others are environmental, and now there is this new environment which they are currently in, and they are even going to be confronted with this new metaverse. I think that there is a special need for care there.

I think that this has already started with some initiatives, for instance in the UK, where we even have a bill on age-appropriate design, where society has to say, "Where are the red lines? What is it that we want to have? What kind of safety measures do we have to set up for the digital age?" These might be the same as, or even go beyond, those in the physical world. Well, one thing is really to set up the requirements, and the other thing is really how to achieve those requirements. I think that's where we have to work together.

Alexandra: Absolutely, because this would actually be my next question. Many of the regulations tell you what we don't want to happen, which of course poses the challenge of what exactly you should then build. In your opinion, are the different regulations that we currently see being drafted, like the European AI Act, sufficient to ensure that AI is developed and also deployed responsibly, or what else is needed to really achieve this goal?

Clara: Yes, so I think that regulation very correctly needs to set the aim of what we want to achieve and what we don't want to happen. It has to be technology-agnostic. It cannot regulate AI technology as such. It has to look into the different application fields and set the desired outcomes. Now, when it comes to how to achieve this, I think the proposal also very rightly looked into defining those standards together with the stakeholders, which includes the technical community, but also those that are directly or indirectly affected by the systems.

I think that here standards and certifications are very important because we need to have a consensus, first of all, on what it means. When we are talking about principles, what does it mean in a certain context to be transparent? Does it mean, for instance, that I would like to have an explanation of why the AI system, like a care robot, made a specific suggestion, for instance to take a medicine, in an elderly care setting? Or does it mean really having the source code, which might be important for the technologist who needs to repair it, or something like that?

I think it is important for us to have a common understanding of the terminology, and then, once we know what we want to have, to know how to achieve it; there are certain ways to translate it into technical requirements. Standards can do this. At IEEE we are developing both sides, if you want: the certification, which looks into how we want this product or service to look, and then the standards, which are an aid for the developers on how to achieve this goal.

Alexandra: Yes, makes sense. I really liked the analogy that you shared in one of our earlier conversations: the standard is more or less the cake recipe for developers and project managers on how to actually build something, and the certification, of course, requires a criteria catalog based on which you can assess whether the goal was achieved and how the system is behaving.

Clara: Yes. Actually, I'm very excited because next week we are going to have the celebration of the successful finalization of the certification project with the City of Vienna. This is going to be at the city hall. I think it was a very interesting and very important project, because for public services it is especially important, both for procurement and for services they are developing themselves, to have these marks, which signal to the outside world that an independent assessment has taken place. This can serve as a mark, just like food labeling, to show consumers and citizens that what they are consuming, the services, is aligned with their values and is ethically certified.

Alexandra: Yes, absolutely. That's definitely important, and it's such a pity that they'll miss out on this event. When we air this episode, I think it will be two days after the official announcement, on the 16th of November, if I remember correctly.

Clara: Yes, exactly.

Alexandra: Really great stuff. This project can now be celebrated as concluded. Can you share a little bit about all the work that went into the ethically aligned design standard and certification? I believe over 700 people from all over the world were involved. What were the motivations behind it? What did this ethically aligned design standard and certification set out to achieve?

Clara: Sure. As I mentioned, this was also one of the reasons I joined IEEE. It started quite a long time ago, I would say almost five to six years now. I think we were one of the first to start this journey, from the realization within the technical community that we have a responsibility.

As I said, technology is not neutral. We have a responsibility, especially in this field of, well, AI. We actually don't use the term, we don't like the term artificial intelligence, because it is actually misleading in my view. It is as if the technology appeared from nowhere and we have to deal with its consequences, but actually it is a technology that we are building, and it's our responsibility how we are building it, whether we call it artificial intelligence or autonomous systems.

Alexandra: What is your preferred term?

Clara: Well, we are actually using autonomous systems, also autonomous intelligent systems. It's important that it is systems; it is something that is actually built. For this journey, as I said, we started with something which I think is very important: bringing the technical community together with philosophers, with people from the medical field, and with policymakers, and starting to build up this dialogue to see what challenges we are facing and also what the possible solutions are. Some of the solutions address us, the technical community, so how we are developing the systems, but then there are also a lot of solutions or recommendations which go to the policymakers and decision-makers.

As part of this journey, we have a report which is now already in its fourth iteration. It covers different aspects, from classical ethics, so how you take different moral philosophies into account, and, importantly, it takes a global view, not focusing just on Western philosophies but also on Eastern and other global philosophies, including Ubuntu and Shinto and so on, which bring completely new ideas to the table. It also looks at how to translate them into system design for specific applications, such as justice, or when looking into affective computing, and so on.

Very concretely then, besides the report, we have different specific subcommittees, if you want: one specifically for finance, one specifically for business maturity models when it comes to AI ethics, which can range from being aware of certain ethical problems but not doing anything about them, up to making it part of the business strategy, which is the desirable state. Then, of course, it ranges to standards. We are now developing a range of so-called socio-technical standards, from value-based design, to transparency, to ontologies for robotics, to the certification that we just discussed.

Alexandra: Maybe for clarification, what's the difference between a socio-technical standard and a pure technical standard?

Clara: Sure. Well, standardization organizations like IEEE, but of course also the others that you all know, ETSI and ISO and so on, were traditionally focusing on technical matters. For instance, our best-known standard is the Wi-Fi standard.

Alexandra: What would you do without it?

Clara: Exactly, the one we're using right now. As you can imagine, there are lots of technical things that need to be taken into account so that the technology works the same way in Austria and everywhere else in the world, and we develop it with a community of 10,000 technical experts. That is, of course, a completely different matter when we're talking about ethics. Ethics is something which concerns the rest of society as well, and we have to bridge different understandings, let's say, even different concepts. We even developed a glossary as part of Ethically Aligned Design to bring these different communities to the same level of understanding on a specific matter.

That's why we call them socio-technical standards, because they encompass, of course, a lot of things which are related to values and moral philosophies, but then also system design. We have to bring them all together in order to end up with something which can be used in a practical manner.

Alexandra: Yes, absolutely. One question I have for you: when I talk with opponents of standards, they oftentimes claim that AI is such a rapidly evolving technology that you can't really effectively standardize it, or that it would slow down innovation. What would be your answer to those people?

Clara: Well, I think, first of all, the question is always to what degree we can really go on with this very quick development. I would question this as well. Even in AI, there are now more and more opponents, I'd say, of this very rapid development that doesn't really take the impact into account, really using the users as beta testers, if you want. It is more and more contested. I think when we are talking about ethics and responsibility, it will be increasingly important to involve people in the design in a participatory manner. This will, of course, require some time and some additional effort.

I'm convinced, and our proofs of concept also show, that in the end you will have a better product, a product which is better accepted, and one you will not have to spend a lot of time and energy on to redesign, because very often this is not even possible at a later stage. This would be my first answer. The other is, of course, that standards and standardization have to be adapted to this rapidly changing environment, which is indeed the case. This is happening in different ways. With IEEE, I think what is quite unique is that you can start the standardization process very quickly because you don't need any national representation.

Basically, you can start a standardization effort as five individuals who think it's important, and we will scale it up to a global level. A lot of these pre-standardization activities can be resolved quite quickly. Then, of course, the standardization itself also has to be more adaptive. For instance, the certification that we developed has this criteria catalog, which is horizontal for the moment and takes into account three aspects: accountability, transparency, and algorithmic bias, but it can be adapted very quickly to a specific use case. For instance, we made a use case on contact tracing applications, which could be developed in two months.

There are new ways to achieve consensus, because basically it is about consensus: how to achieve consensus more quickly and how to adapt it to a specific context.

Alexandra: Yes, makes sense. You mentioned how important it is to involve the stakeholders that will be impacted. This, of course, takes time on the one hand, but on the other hand it also improves the product, or whatever is being developed. You shared two very nice stories with me in an earlier conversation. Can you tell them again for our listeners?

Clara: Yes. I think one which I really like is a proof of concept that was done with UNICEF in Africa. The initial idea was to develop a tool that devises something called a talent score for local people, so that this talent score can be used to match them with potential employers. The initial idea was to base this talent score on their mobile phone usage, a quite common business model that we know, which is also not at all transparent and which doesn't give a lot of agency to the people.

After using the P7000 model, which was a draft at that time and is now a standard, what happened is that we came up with a completely different system design. The system design basically resulted in the young people being able to engage in different activities for which they earn, let's say, different scores, and they decide how to use those scores to engage with others in the community or with employers. This gives them agency, of course, and also allows for participation and collaboration locally, which was important for other stakeholders.

That, I think, is a really interesting way to show how, by taking stakeholder needs and values into account, you can actually achieve a much better product or service. There was also another, let's say, survey looking into AI-based tools. Again, it was just a proof of concept. We asked investors if they would invest in these tools, and there was a certain percentage that said no before any ethical analysis was done. After an ethical analysis, a lot of things came up regarding privacy and so on, and investors were suddenly, of course, less likely to invest in these AI-based tools.

After a redesign using this standard, which takes the stakeholder values into account, the investors were again much more likely to invest and users to buy the tool. I think these two examples show quite clearly that by taking this time and effort at the beginning, you will, in the long run, also from an economic point of view, end up in a situation in which you have a better value proposition, which of course is more beneficial for a sustainable business.

Alexandra: Absolutely. That's definitely a very nice example. Just out of curiosity, when the survey was done, did you also collect data on why the investors changed their opinion? Was it because of the lower risk of a public scandal if something goes wrong with the product? Or did you get any specifics on what drove them to revise their decision?

Clara: Yes. I think for this one, we would have to ask the Vienna University of Economics and Business. Basically, Sarah Spiekermann, who is now the co-chair of IEEE 7000, was in charge of this project, so I would refer to her team there. I can say from my experience, also from talking to investors, that they are really interested in these sustainability and ethical aspects because, well, first of all, it is really also about money. Look at the many services appearing in the press that have to change their name and completely redesign their PR, and so on. There is, of course, this risk; they have to think about risks, and ethical risks are becoming more and more important.

First of all, I think, it is really a risk-based mentality that investors have to have; they also have to take these new categories of risk into account. Then, of course, it's also about the sustainability aspects. If you are looking to have a product or service that is going to last for a longer period of time, people who are investing their money are interested in investing more and more in such products. There are their funds, their ESG reporting, and so on, which already take these environmental, social, and governance aspects into account.

Investors are more and more involved in this. We will actually have a workshop next week with investors talking about this important tool. If you can make it, please come. Definitely, I think this is the future; we need to look into that. It is also going to matter from the regulatory point of view: it can happen that if you invest in a product that later on does not get the certification, that product may not be viable anymore.

Alexandra: Yes, absolutely. That's definitely a risk. Talking about risks, one other thing came to my mind when you shared about the certification that's part of ethically aligned design. Just a few weeks ago, in Singapore, there were some discussions about regulating AI and also certifying the ethical and responsible use aspects of artificial intelligence. It was part of the discussion of the high-level expert group there. Initially, some suggestions were made that the government should issue the certifications.

Then a point was raised that, of course, in the beginning, where everything is so rapidly evolving, so new, you potentially don't know whether you have taken everything into account, and there is a risk that something could go wrong: that a system that got a certification would still, for example, discriminate against certain groups of people or something like that. So they were more inclined toward the idea of letting the industry start with self-certification.

Therefore, I would be curious to get, on the one hand, your take on self-certification, which purpose it could fulfill and what its limitations are, but also how IEEE is planning to deal with the risk of being the institution that independently certifies AI systems as ethical or not. What if something that got the certification still behaves in a way that is, to put it like that, not as intended?

Clara: Yes, I think this is an important question, especially when it comes to certifying services which are adaptive. They are adapting to the environment, they are learning from new data, and of course their behavior or output can then change over time. Well, first of all, what I would like to clarify is that we would not like to be the only certification body. We have a certification service, but the idea is really to be the authority in charge of developing the criteria, as we do with the standards, basically convening the stakeholders to agree on what will be necessary in the future. That will, of course, evolve depending on the use cases.

Coming to this aspect of how long the certification can be valid: this is also something that is being discussed in the European Union proposal right now, including re-certification if there is a substantial change of the product or service, and what this substantial change means. I think that, again, it will depend on the risk and the impact, and also on how much the service we are talking about changes.

I think that everybody will agree that if we are using AI services for controlling an energy plant or anything like that, there is a need for certification. The question is really to what degree we are allowing this service to change over time. By analogy, if you are looking at a service which involves fundamental rights, such as in a justice system, that is also something where everybody agrees we would like to have more control. In my view, a lot of the things we look at in certification are actually long-term, because they look into governance aspects that have to be put in place.

A lot of our certification, when it comes to accountability, is, let's say, about the basis of what you have to put in place inside your organization in order to guarantee the ethical use of the systems. For instance, what are you doing when you have false positives? What are you doing in terms of stakeholder involvement? How do you document things? These are quite static things, which don't have to do with how the system evolves, but with how you're actually dealing with possible problems that can come up.

Then, of course, when we are looking at a system which will be used, for instance, in autonomous cars, I think you will already have regulation which says, "Well, you need to have something like a watchdog, or you really have to freeze the AI system so that it can only be used in a certain way," so that it cannot, let's say, move out of the expected boundaries. I think we will have a lot of things: on the technical side, when we're talking about these watchdogs and how we are going to put them in place, or continuous certification, in the sense that the boundaries are set up and then there is an automatic verification that the system is still within them; and up to the social aspects, where we are looking at manipulation and so on, and where there may be a public board, a public oversight body, which is also being discussed, where these certifications, if you want, will be done on a broader level, analogous to the environmental oversight that we have right now.

Alexandra: Yes, makes sense. You mentioned AI governance and how important it is to really also keep monitoring and governing your AI systems, once they're in production. Can you share with our listeners, what some of the most important steps of AI governance are, especially for higher risk use cases?

Clara: Well, I think that, first of all, the most important one is to put in place a certain level of responsibility: who is accountable for which part of the AI system. Again, coming back to the initial discussion point, there has to be someone who is responsible for the AI. This should start at the developer level, but it should not end there. It has to go up, of course, to the board level; it has to be part of the discussions there.

I think it is also important, in my view, not to make it just a compliance issue. Ideally, this should be part of the product strategy, of the board strategy, on the direction in which we would like our products and services to evolve, and then also to look at the beneficial side. Compliance is always seen as something we have to do; actually, it should be about how we can embrace these aspects that come from society and from our users and make better products, just as I said before. Of course, once you have these lines in place, a lot is about what you are putting in place to deal with different problem situations.

Also essential, in my view, is providing the right training and the right instruments. Here again, whether you're using external standards or internal best practices, it is absolutely important to provide them to the developers and to the different entities across operating units in a bigger organization, so that there is a common understanding of what needs to be achieved and how. Of course, I know that one of the critiques of standardization is that it works against creativity, but I think these standards and best practices should really be a help regarding what can be done.

Of course, it also gives the right framework for how it can be done, but it still leaves a lot of freedom to the individual developers, specifically on design decisions when it comes to, for instance, value prioritization. As we all know, there are different trade-offs that need to be made. I think that here, again, it is the product owner and the team, together with the stakeholders, who say, "This is what's more important to us," and document why, so that it is again something that can be used for an audit if necessary. I think that is also important: documentation and transparency.

Alexandra: I think also, when it comes to the aspect you just mentioned, that standards could potentially limit creativity: especially with ethically aligned design and the broad range of groups that were involved in creating the standards, not only the developers and the technical people, but also ethicists, philosophers, and so on and so forth.

I could imagine that the questions that need to be asked in the process of developing an ethically behaving system really broaden the horizon of the more technical folks. Since you pointed out that previously, ethics and ethical design were not really a core part of the education they went through at university, I could imagine that this also positively impacts creativity and how to create something that's of benefit to users and to society.

Clara: Yes, I think here I would again refer to the work done by Sarah Spiekermann from the Vienna University of Economics and Business. She did amazing work there and actually conducted studies which show that by using this value-based design, you actually come up with more creative results than by using traditional road mapping.

Alexandra: Oh, yes. There was this one example with the system for the elderly.

Clara: Yes, exactly.

Alexandra: Can you please share that story?

Clara: Yes, I think that was really interesting. The story was to develop a tracking application for elderly people, first of all for their security, but also to help them more easily find their way to certain things that they're looking for, even in a shop.

Alexandra: Sorry, a tracking application to track where they are, at the mall or...?

Clara: Yes, where they are, for their relatives to be sure where they are, but also for them, so that they can use this service to say, "I'm actually looking to buy milk" and-

Alexandra: Where to go?

Clara: -where do I find it? Maybe I'm already jumping to the solution that I wanted to share with you. When the product was about to be developed, of course, the question was, what values need to be taken into account? What risks do we have? The first thing the developers thought of was the privacy aspect. Of course, when it comes to tracking, privacy is important.

They assumed this was also important for the elderly people, as it's the first issue that came to their minds. It turned out that what was important to them was support. What they wanted was something like a big green button which they can push for help if, for instance, they are in a shop and don't find the milk, so that somebody, a human, actually comes and helps them find that product or service. This was one of the examples which showed that if you consult your users, you're actually more creative, and you integrate features into your product that you didn't think of before.

Alexandra: Initially, it was more of a tracking application for relatives to know where their older family members are, but thanks to this value-based approach, they learned that people would also really appreciate the opportunity for support, and then they included the support functionality.

Clara: Exactly.

Alexandra: It makes sense. Yes, that's definitely a nice example. You mentioned that initially they thought about privacy, which of course is a very important value, and then they learned that support is also a real priority for the consumers and users of this product. Does this mean that privacy was then not taken care of, or was it just that the values were rebalanced and some features were introduced that would not lead to maximum privacy preservation, but allow for both privacy and support?

Clara: One of these features, and I really encourage you to take a look at the 7000 standard, is really value elicitation first: looking at what is important. You will then have a lot of values, and then there is a very important second step to see how to prioritize them. Of course, you're bringing up a very important point there that you cannot ignore. Privacy, for instance, will of course always be important, and then you have to implement it. The question is to what level. There will always be something which is set, let's say, by regulation that you need to implement.

Then again, the question is, if you have something which has a bigger priority, it might be that you will not end up with a product which is very privacy-oriented. You implement, let's say, what is necessary from a legal point of view, and then you focus on the highest priority, on what the users are actually expecting from the product.

Alexandra: Yes, it makes sense. Since you just highlighted the importance of privacy, this of course brings me to synthetic data and the work that we are currently engaged in, which is basically launching the IEEE Industry Connections program for synthetic data and then soon, hopefully, also extending its scope to standardizing the privacy and accuracy of synthetic data. My question to you, Clara, would be: where do you see the role of synthetic data in today's digital economy?

Clara: I think that there is this famous study from Gartner which says that in a very short time, 40% of all the training data will be synthetic data. I think this is still accurate.

Alexandra: I think it's been upgraded. I don't know the exact number now, but it will be a huge percentage of data that Gartner predicts to be synthetic.

Clara: Yes. Already from this study, we can see that when it comes to sensitive data, in health care or even in emotion detection, which we are talking about more and more, if we are using this data at all, we have to take special care in how we use it. And even when we have different anonymization techniques, as we all know, there is still a possibility to track down certain individuals. I think that in that respect, synthetic data will be a must.

I very much like the analogy from when I talked to the City of Vienna. They said that the way they see it, the provision of health data to the public would be something analogous to what the city did 100 years ago for clean water, where, again, due to the growth of the city, they needed to look for an alternative way to provide clean water to their citizens. They went out to the Alps and bought a big portion of land. For more than 100 years now, they have owned this land and made sure that the water which comes from there is clean enough for the citizens.

I think it's the same analogy now when it comes, for instance, to public services and how to provide this clean data. It has to be of high quality, as something that startups can build on, that new public services and new companies can build their business models on, and that is reliable. That's why I think it is going to become more and more important, and what is of course important there is the quality.

The question is how we are going to define what quality is, what good-quality synthetic data is. I think it comes again to a consensus between different stakeholders on what they consider to be good quality, and then again the whole question of how we achieve this. I think that here, especially, it is important to have this dialogue, in terms of convening the stakeholders, and then, very importantly, to have clear metrics in the end, which can be used by the public as well and which can be reported. This can be part of a standard, it can be part of a certification. This is what we are also very happy to start with you, Alexandra.

Alexandra: I'm very much looking forward to this project. I really like the analogy that you just made, which I hadn't heard before: supplying clean water, and now, in a more digital age, supplying clean, safe, high-quality, privacy-preserving data to the people of a city. I really like that. Maybe on that note, you're also involved with the City of Vienna on their digital humanism project. Can you introduce this to our listeners and give them a few details about what it is about?

Clara: Yes, I think that the City of Vienna is really a pioneer in this area. It recognized, I think very early on, the special responsibility it has when it comes to the impact of digital technology on its citizens, and in both directions: I like that it's not only about the risks, but basically also about the opportunities that this digital age provides us. It is something similar to the invention of the printing press some time ago, which brought a transformation, a social transformation, without which we would not have many of the things that we consider natural right now.

The question is really how to take these insights, the knowledge that is being generated by AI, and make it available to everyone, basically. How to integrate it into education, into the arts, and how to establish this dialogue so that nobody is left behind, and, again, to bring something like what the humanists brought us and use it to bring us to a new level when we talk about humanity.

I think, and this is now my personal take, this is particularly important when we're talking about humanism, because we now have something like, I would say, a redefinition of what human is. You also hinted at the discussion I had with the Vatican: we have more and more technologization of the human, with wearables, with the different technologies that we are using right now.

Alexandra: The lines are basically blurring between technology and human.

Clara: Exactly. Then we also have it the other way around: the humanization of technology, with affective computing. The question is really about these lines. What does it mean to be a human in the digital space as well? Again, in the metaverse, I can have different identities in different spaces, if you want, which are very different from what I am. What does that mean in terms of rights, in terms of interactions, in terms of our worldview? I think it's going to be very interesting, and it's going to be disruptive, I'm pretty sure.

The question is really how to use this disruption not to create new monopolies of power, but to make it equitable, so that it is in line with our social values and also in line with what we need for sustainable development and the environment.

Alexandra: Yes, absolutely. I think that with all the amazing work that IEEE and everybody involved have done on this ethically aligned design standard, this will help us move in the right direction. I'm also a big believer in data democratization and open data, and also in increasing data literacy among the broad public, to really get us to a level as a society where more people can benefit from data, and where we also break up the power imbalances that we currently see between the big tech companies and the rest of society.

I think these are some of the important steps that are currently being taken and that need to be taken in the years to come.

Clara: Absolutely, yes. I think it is important that we are working towards this and also raising awareness. I think it's also good to see that a lot of international organizations have taken this up very seriously. We are working with the OECD, with UNESCO, and also with the European Commission; there is the AI Assembly, and there is the Council of Europe, which has a very important initiative on AI and how to use it in line with fundamental rights. It is really good to see all these initiatives, and I think it is important because we need to do this together.

Alexandra: 100% agree, yes. Perfect. Clara, I think we could continue our conversation for hours, but unfortunately, we are approaching the end of our recording. My second to last question to you would be: since you work in such an exciting field, what's your advice for everybody who's looking to work at this intersection of technology, regulatory developments, and ethical AI?

Clara: I would say what is important is to be aware of what is happening. Also, if there is time and if there are resources, be part of the conversation on a higher level, but definitely, when it comes to your business, to your context, you need to be part of the discussion which is going on with the users and with the regulators; we're talking now about regulatory sandboxes, and so on. I think, even for startups, if you're developing a product now and the regulation is going to come in two years, that is going to have a lot of impact on your product.

It is important to follow these discussions and, in my view, to participate as much as possible. We now have these discussions at the regulatory and international organizational level, but it is also happening as part of public-private partnerships. It's happening at the Vatican level. I think that there are a lot of opportunities. I think that bringing in your views, and also learning from others, is really something that I would recommend to everyone.

Alexandra: Wonderful, wonderful. Clara, thank you so much for everything you shared today. Do you have any final remarks for our listeners, anything that you want to share with them, or maybe any resources you can recommend if they would like to dive deeper into responsible AI and ethically aligned design?

Clara: If you are starting out with this matter of ethically aligned design and want to know what the real question marks and important points are, I would really recommend looking into the first edition of our report on Ethically Aligned Design. You can Google it, you can download it, and there you will find all these issues and possible recommendations. All our standards and standards initiatives are also on our web page. Again, you have to look for IEEE, the Standards Association, and Autonomous Intelligent Systems.

Then, of course, beyond that, I would really look into the proposed AI regulation that is on the table. Also, I think, the really exciting work that is being done by UNESCO, because they are even talking about a moratorium for certain applications. It's really something global: what are the principles, at least, that are important for everyone? I would really recommend looking into that.

Alexandra: Perfect, thank you so much, Clara, for taking the time, as always, it was a true pleasure having this conversation.

Clara: Thank you, Alexandra. Thank you for the invitation. Pleasure was on my side.

Alexandra: Thank you, Clara.

Alexandra: You see, I didn't promise too much. Clara and I covered a lot of ground today. Here are some of the key takeaways. First, like any other technology, AI is not neutral, but tends to reflect the business models and values of those who build it. Therefore, Clara sees it as one of the main challenges of AI that technologists correctly assess the impact it will have, particularly because this is something we're not trained to do. Plus, it's not only about the impact AI can have on individuals, but also on society at large.

One example of this is the impact AI can have on children, who really soak up their environment. Clara highlighted that age-appropriate design of technology is a topic that needs to be specifically addressed. She shared that there are already some positive initiatives ongoing, such as the UK's bill on age-appropriate design, and that this is moving in the right direction.

When we talked about regulations, Clara thinks that the European AI Act is moving us in the right direction. Plus, she finds it remarkable that it redefined risk: traditionally, the focus of technology risk mitigation lay on safety and security, but in the AI Act, fundamental rights are also included, and certain use cases are even explicitly prohibited. While regulation is necessary, it's not sufficient.

Clara mentioned that even AI principles are not enough and that we really need standards and certifications, because different principles, for example transparency, can mean many different things depending on whom you ask and on the context. As a rule of thumb, you can think of certifications as defining how we want the product to look, and standards as the recipe or the aid for developers on how to actually achieve this goal. Next, Clara shared that she doesn't like the term artificial intelligence. It's misleading and creates the notion that AI just appears out of nowhere and that we have to deal with the consequences, whether we like it or not.

In fact, and she was really firm on this, it's technology we are building and it's our responsibility how we are building it. Therefore, her preferred term is autonomous intelligent systems, because systems, as opposed to intelligence, indicates that it's actually something that is built. To achieve this, we have to work together. Society has to say where the red lines are, but it's important not only to set up the requirements, but also to define how to achieve them. This is what the ethically aligned design standard set out to do. Next up, we talked about IEEE 7000 on ethically aligned design. Clara shared that IEEE realized early on that we have this responsibility to make AI and technological systems ethical.

They were one of the first to start this journey by bringing together a community of technologists, philosophers, policymakers, medical professionals, and various other domain experts to identify the challenges, but also to come up with solutions on how to approach these problems. The result is what we find in the ethically aligned design standard and certification, but also in the various reports and sub-standards that were issued by this group. One very important key takeaway of ethically aligned design is that you should involve the users right from the start.

Not only will this help you achieve an ethical system, it will also result in better products that better cater to the needs and better respect the values of your users. On the flip side, not doing it and having to redesign the system afterwards is not only costly; sometimes it's even impossible. If you take away one thing, involve the users and ask about their values right at the beginning of your AI projects.

Next, another aspect I really found memorable from this conversation was the research that Clara mentioned, where they found that investors and VCs are much more likely to invest in a product that was developed ethically because they are not willing to put up with the ethical and regulatory risks of non ethically developed systems. Next takeaway. One critique point that's oftentimes raised when it comes to AI certifications is that AI is too fast-moving and changes too quickly for certifications to work efficiently and effectively.

Clara dismissed this and pointed out that the certification that is part of the ethically aligned design, in fact, is something that can be valid for a longer period of time, because it looks into foundational governance processes that have to be put in place regardless of an AI system updating itself or not. Of course, recertification will be necessary at some point in time but how soon depends on the risk profile of the specific use case.

Another key takeaway: critics of standards oftentimes raise the point that standards work against creativity. The example that Clara shared about the monitoring and support system for elderly citizens showed that this is not the case. The ethically aligned design standard gives a framework, but developers and product owners still have lots of freedom in which values to prioritize, and the framework might even get them to think outside the box and, by interacting with the end users, build a product that's even more user-friendly and valuable.

When we talked about AI monitoring and governance, Clara highlighted that accountability is the most important aspect: having clearly defined responsibilities for each part of the AI system. Then, of course, it's also about having processes in place for how to deal with specific problem situations once they arise. Lastly, training is crucially important, as is providing developers with the right instruments for how to actually achieve what's required, like internal best practices or external standards. Next up, we touched upon synthetic data, which, according to Clara, will be a must in our digital economy, specifically to deal with sensitive data and to share healthcare data.

Also, due to the pitfalls of legacy anonymization techniques, it's something that's needed. I also really liked the clean water analogy of the City of Vienna, which 100 years ago saw an increasing need for clean water and, by securing springs in the Alps, found a way to provide it to its citizens. Now, it's about providing clean, high-quality, and safe data to citizens to facilitate research and also innovation by startups, but also to have reliable data that new public services can be built on, and the privacy-safe option to achieve this is synthetic data.

Lastly, we spoke about the digital humanism project of the City of Vienna, which set out to shape the social transformation that's currently ongoing in a way that nobody's left behind. It's about establishing a dialogue between a diverse set of stakeholders and about ensuring that technology is used for the benefit of everyone, which in general is a notion that's taken very seriously by various international organizations as well, like the OECD, UNICEF, the European Commission, or the Council of Europe, who are all creating awareness.

Clara was clear that this undertaking is something we all need to do together. Thus her call to action for you was to be part of the conversation and to find ways to participate as much as possible. As always, thank you very much for listening. I hope you found this episode as insightful as I did. If you have 20 seconds and could follow us on Spotify, Apple Podcasts, or whichever platform you listen to, we would highly appreciate it.

As always, if you have any questions or comments, we're looking forward to hearing from you. Just shoot us an email at podcast@mostly.ai. Until then, see you next time.
