Episode 24

The EU roadmap for AI with Axel Voss, MEP

Hosted by
Alexandra Ebert
In the 24th episode of the Data Democratization Podcast, we had the chance to talk to Axel Voss, Rapporteur of the Special Committee on Artificial Intelligence in the Digital Age in the European Parliament. Axel has been working on a report outlining important policy recommendations for the upcoming European AI regulations. Among other things, Axel told us:
  • why the GDPR is due for an update
  • what the future of data protection holds
  • how the EU should compete globally in AI and digital innovation
  • why access to high-quality data is so important 
  • how synthetic data can help bridge the gap between data protection and data-driven innovations
  • what the upcoming European AI regulation will look like
If you would like to learn more about the legislative privacy landscape around the GDPR and the upcoming AI regulations, listen to one of our previous episodes: a conversation with Axel von dem Bussche about the GDPR, the new European AI regulation, and the privacy landscape.

Transcript

Alexandra Ebert: Welcome to a new year of the Data Democratization Podcast and what is already our 24th episode. I'm Alexandra Ebert, your host, and MOSTLY AI's Chief Trust Officer. Today, I'm happy to kick off the new year with not only a truly exciting guest but also a tremendously important conversation. Today, we will discuss what it will take for the European Union to become the responsible leader of this global AI race. Also, what role synthetic data will play in this, and how it can facilitate a European data ecosystem where data flows freely while privacy remains protected.

Who is this exciting guest joining me today? It's Axel Voss, a member of the European Parliament who focuses on digital and legal policy issues. I invited him on the show because, just recently, Axel and the members of the Parliament's Special Committee on Artificial Intelligence published a report on AI in the digital age. In this report, they covered exactly the topics I want to discuss today.

What's needed? Which steps do the EU and the individual member states need to take now to make this ambition of a leading global AI position become a reality? What should already have happened, and how can we course correct? All of this we will cover in today's conversation. I'm convinced you will find listening to this episode worthwhile, if not enlightening, in one area or the other. Let's dive right in.

 

Alexandra: Welcome to today's episode of the Data Democratization Podcast. I have the pleasure and the honor to have a very exciting and special guest on the show with me today, Mr. Axel Voss from the European Parliament. Before we jump into today's topic, which will be how the EU can responsibly lead this global AI race, and also a very exciting report that Axel recently published, I first want to hand over to you, Axel, and ask you to briefly introduce yourself and maybe also share your motivation for focusing on artificial intelligence and all the digital aspects that influence society nowadays.

Axel Voss: Thank you, Alexandra. I feel very honored to be a part of your podcast, and thanks for the invitation. I've been in the European Parliament since 2009, and by background, I'm a lawyer. I was keen to move more and more into these legal issues as well. When I started here in the Parliament, I was confronted with, at that time, the so-called SWIFT agreement. I was the only one in my group who was interested in dealing with all of these data issues, and then it continued with the General Data Protection Regulation.

I moved more and more into the subject, and I developed a special interest in all of this new, upcoming technology and how to balance it, and so on. This has brought me to the situation where I got the image internally, in my group, that I'm the one who's dealing with all these technology and new issues, and so on. That's why I'm very much interested in this new legislation and in trying to balance everything against other issues. This is totally challenging and contested, on a political level and on a personal level, and this is quite interesting.

I dealt with the GDPR and also the so-called SWIFT agreement, then also the passenger name records issue, the ecosystem in cars, and so on. I try all the time to show interest and to be a part of all these issues around new technologies. That's also why artificial intelligence is extremely important and of strategic relevance. Therefore, it was also my motivation to try to be a part of it. Then the Parliament created a special committee here, and then there was the question of who would like to lead the committee and who would like to be the rapporteur.

Therefore, of course, I was very much interested in being the rapporteur for it, because then I could also underline once more how important this subject is and where we should go. This is a little bit about my political situation in the Parliament. My political interest started more or less at the beginning of the '90s. I was already over 30 years old, and I hadn't been interested in politics until that time. Then I had the opportunity to do an internship at the UN in New York, and there-- The United Nations tried at that time to resolve the conflict between the US and Iraq.

This was totally interesting, and then my political interest grew. This is why I more or less have this political interest and am always looking for challenges regarding political issues, because I think it's always a good operational field for coming up with new ideas and so on. This is something that motivates me very much.

Alexandra: I can only imagine. One can actually say that it was the new technologies and the data that found you, and now you are deep in this topic. Speaking of challenges, I think there are not many regulatory fields that are more challenging right now than how to regulate all these innovative and emerging technologies. As you already mentioned, they cannot be looked at in an enclosed box, but actually have an impact on so many different levels of society and the economy. I can completely understand that you got into this exciting field.

Axel: Coming from the background of being a lawyer and trying to deal with all of this legislation, it's a really fantastic job. You have to keep in mind a lot of different issues and experiences from former legislation and how we can move on with this and that. This is totally interesting.

Alexandra: Completely agreed. One topic I absolutely want to talk about today is the report on artificial intelligence in the digital age that was published by you and the committee mentioned earlier. I was astonished by how well-written and how on point it is with its suggestions on what the EU should do to become this responsible leader in the global AI race. I'd be curious to hear from you: what was the motivation behind the report, why is it needed, and also why now?

Axel: When we started this term, in 2019, you of course have some time to think about what you would like to do and how you might come forward with it. I asked myself, in a way, how we can catch up with other regions in the world, namely China and the United States. How can we catch up in competition, in technology issues, in digitalization, and so on? This, all of a sudden, became a wider project, because once you start to think about how you can survive in the digital age, then all of a sudden you are in all different subjects, be it investment, procurement, or skills and talents.

Alexandra: Sustainability, the green deal.

Axel: Yes. Sustainability, everything. Also, legislative procedures, because speed is already a competition issue. In this digital world where you have all of this innovation, if you come first with an idea, then all of a sudden you're a monopoly. Here, we are falling totally behind the US and China. That's why I asked myself what we should do now so as not to become a data colony of other regions in the world. Because I always think we need to have this motivation of surviving, because this has to do with prosperity, this has to do with wealth.

That's why I thought we should create something where we have a guideline or a roadmap for ourselves in trying to come forward. This was the starting point. We finished this, and then we created this digital manifesto for ourselves. In the second step, we thought about how much the data protection regulation is hindering us from being a good competitor to others. Then we tried to list all the shortcomings and how we should think about fixing the GDPR, knowing that politically there is no majority in the house so far to solve these problems.

It's very astonishing and amazing. As a politician, I always thought if you see a problem, then you have to solve it somehow. Here, they are totally ignorant of some problems regarding the GDPR, because they are saying, "Oh, this is our gold standard in the world and we can't touch it." This is the wrong attitude if you would like to survive. Not meaning we should undermine data protection or undermine privacy protection. No, but better balancing would be a goal and a solution.

Alexandra: Yes, exactly.

Axel: Yes. Then, also, artificial intelligence is politically more of a buzzword right now. That's why we have been interested, and we saw the special committee and thought, "Oh, yes, it might be quite good to be a rapporteur there," and also the AI proposal from the Commission, where we have been interested in leading or playing a role as well. This is what we are doing with - by we, I mean my office.

Alexandra: Sure, sure, it makes sense. It's definitely a pity what you pointed out about the attitude of certain members of not tackling problems that arise. Since you mentioned some shortcomings of the General Data Protection Regulation, can you give us a few examples? Of course, I can think of quite a few, but I would be more interested in getting your perspective.

Axel: Yes. We have problems on, let's say, five different levels. The first one is that we are seeing that the GDPR is not handled in a harmonized way. In Germany, we already have some problems with the 16 Bundesländer. On the European level, it is the same. We do not have the degree of harmonization that we would like to have. Secondly, the data protection authorities and also the European Data Protection Board are not acting very fast to create guidelines, and to create guidelines that are pragmatic in a way.

You still have some unclarities or uncertainties in these guidelines, and then all of a sudden everyone is applying them differently. This is something we can already fix at the lowest level. The second step, I would say: if we still have some legal situations where it's not clear enough what you should do, then you should have a better legal text. Therefore, I would -

Alexandra: The uncertainty that-

Axel: Yes, I would rather ask the legislator, saying we can be very precise in fixing these points, but of course, you have to open up a legislative process. Only on certain elements, not the whole GDPR. This would also be something. But then, if we come to new technology, all of a sudden everyone is saying in a stereotyped way, "Oh, the GDPR is technology-neutral." This isn't the case.

If you come to blockchain, for instance, you can't delete your personal data any longer. If you come to AI, to artificial intelligence systems, where we as a legislator are also asking for good results, you need quality data. Therefore, you also need personal data, but you're probably not allowed to use it, and so on. This also has a kind of competition element, because other regions can train their algorithms in a way that we can't do here right now.

Therefore, we have to provide more data anyway. The same goes for cloud computing. The GDPR is not precise here, because the GDPR always says, "Oh, you need someone who is processing this." In the cloud, the responsible person behind the processing is mixed up all the time. You don't even know if this is still the person who is doing it. You have biometric data, where you can't really get fantastic results out of the GDPR.

You have the home office situation right now. All of a sudden, everyone should probably be an expert in data protection issues, but no one is, of course, because it's too complex. Then I will stop. Also, we have this situation with some principles, which sits on top of these problems. Is it really adequate any longer, in a data-driven world, that you are deleting data or that you--

Alexandra: Ask for data minimization?

Axel: Yes, data minimization. You're totally right. This is not the way forward, because the more data you have, the better your result and the knowledge out of these algorithms in the end. Also, processing data only for one purpose is probably not the way forward to survive in this data-driven world. Therefore, I would like to see the data protection regulation be a bit more future-orientated. Not meaning, once again, undermining the protection of data or the protection of privacy, but we have tools in place with which you can better balance everything.

Alexandra: Absolutely.

Axel: It's anonymization, synthetic data, and so on. These would be the kinds of solutions we can think about, but if you do not have a majority in the house that is open enough for new ways forward, then it's -

Alexandra: Hard to achieve. I completely agree. Of course, I really appreciate that I live in the European Union, where we set such a high standard around privacy and data protection. But on the other hand, as we mentioned earlier, data and artificial intelligence will have a tremendous impact on so many areas of human life and human prosperity, so I also agree that it would be better to have a different balance between protecting privacy on the one hand and, as you pointed out, making granular, accurate data readily available for the development of AI on the other.

I think this is also what we try to achieve, and help organizations achieve, with synthetic data: that you really can combine both of these aspects. You protect privacy, but you have super realistic data that can be used for AI development. Therefore, I was so happy to see that this was also one of the policy recommendations in the report that was published.

Axel: Now you can see, on the European level and on the legislative level, that we need a data strategy somehow, because we are already seeing that with our high level of protection you can't really come forward. Now we have this data strategy, we will have the Data Act, and we will have the Data Governance Act, and so on. From my interpretation, this is just because we are protecting personal data in a way that means we can't compete any longer.

Alexandra: It's definitely a level of legal complexity that, based on the feedback I get from the industry, is really frightening many practitioners, because they lack the lawyers and the professionals to even interpret all of this legislation and what it means, in fact, for their business operations. This, of course, contradicts our goal of moving faster and not falling behind in this global AI race. We already touched on a few areas of why the European Union needs to change and move faster. Can you give us a brief overview and introduction of the contents of the report? What is covered in there?

Axel: We have more or less four or five chapters in this report. The first one introduces AI, where we are saying that whoever is leading in AI is also leading the world. This is more the situation and the impression of AI. It underlines how important AI is and that we have to play a role here.

The second chapter then deals with what we are calling use cases, where we concentrate a bit on, let's say, health and competitiveness. It's about the labor market, security and democracy, and also the sustainability question, where we would like to combat climate change via AI and so on. We describe these six use cases a little bit in this chapter, and what can be done.

Alexandra: And what the risks and obstacles are, I think, if I remember correctly. This is what I also really enjoyed when I read this section: that it takes a quite objective and calm perspective on these technologies, and that the report points out, in different places, that there are so many hypothetical fears around artificial intelligence, singularity, and areas that might or might not impact us at some point in the future, but that this is not the-- Or basically, that there is so much positive that can be accomplished with artificial intelligence.

That this should not be forgotten, and that the public debate should shift to a more balanced perspective on artificial intelligence, not only creating panic around potential or hypothetical fears and negative impacts.

Axel: Yes. No, you're totally right. The third chapter then deals with where we stand as the European Union, describing a little bit the deficiencies. The fourth chapter then deals with the roadmap: how we can become a leader in AI and a leader in this digital race. The fifth chapter, if you don't mind calling it that, is the conclusion of everything: where we are, but also recommending that the European Parliament keeps acting on this report in the future, because we do not want this report to have no impact. Just reading it, finding it wonderful, and then-

Alexandra: Yes, not doing something wouldn't do the trick.

Axel: Yes. This should be different.

Alexandra: Yes, absolutely. What is your hope? What impact do you expect this report to have, and how can you contribute to increasing the impact that it will have?

Axel: First, I hope that we can take away some of this danger everyone is feeling, this shifting of power from humans to a machine or to an algorithm, because so far we do not need to have these threats in mind. Of course, we should think about how to safeguard against this, so that it does not happen and so on, but here, probably some movies are giving you the impression that AI is now taking over our day-to-day lives, in a way.

Therefore, we think it might be quite good to give a more realistic approach, where we can get these benefits out of artificial intelligence. If you look at the health sector: finding new drugs, new treatments, new medicines; you have telemedicine possibilities, and you have these health records in place, which might also be helpful for seeing the whole situation. Here, we can start a lot of these initiatives where this can really be a benefit for our societies.

Alexandra: Absolutely.

Axel: We could save millions of lives or improve our standards of living. We can detect diseases that humans are not able to, and detect abnormalities at an early stage, and so on. This is-

Alexandra: Absolutely. Maybe to quickly add here, this is also one of the projects I was really looking forward to. We are currently collaborating with a public sector organization on synthesizing data from millions of cancer patients. Due to privacy reasons, these datasets couldn't be made readily available in the past for researchers to figure out certain indicators for cancer, better understand how cancer is impacting our society, and also see which policy implications follow.

For example, if you see that there are different stages of cancer in a certain area, then you need to take different measures on, I don't know, early-stage diagnosis or something like that. There's so much power in this data. Synthetic data helps to open this up and get all the creative and ingenious people that we have here to collaborate and to do great things, not only for the economy but, in the end, for society. This is really something that motivates me to do the work I do.

Axel: Yes. If you look at the issue of combating climate change, there you also have a lot of possibilities. If you look at transport systems that might become more autonomous and so on. If you look at energy being used more efficiently, and so on. Agriculture, to reduce pesticides or whatever. This is everything that AI can do.

Alexandra: Or smart cities, for example, that are more efficient in all these aspects.

Axel: Yes, you're totally right. Also, for single houses, you can deal with it: do you need to heat this room or not, and so on? This might all be more effective, trying to reduce emissions. Therefore, you have lots of benefits from what AI can do, and we shouldn't be too restrictive. This is what I'm trying to do. If you focus only on threats, then you are totally lost, and also then-

Alexandra: You're not going to get anywhere.

Axel: -once again, we will only be a data colony if we are not innovating or inventing things. We have the talents, we have the skills, but we are not building the business models out of them. We might face these killer acquisitions by the big techs and so on, but this is not something we should look for. Therefore, we should be more motivated and more ambitious here in coming forward. That's why we are trying to say what benefits may come out of AI, and also how important it is to be a good competitor, so that we try to lead here in AI issues as well.

Alexandra: Yes, absolutely. Maybe coming back to the challenges, or what is holding the European Union back at the moment, we already covered a few things today, like the regulatory complexity and not having achieved the digital single market. You also mentioned the level of investment, and that we have a hard time actually putting our great talent and scientific results to work and keeping our great people from migrating to the US and being bought up by investors there.

If I remember correctly from the report, a few other points that were highlighted as factors holding us back are the digital infrastructure, then legal uncertainty again, but also that we, of course, need to step up our efforts in building skills and talents and, as you mentioned, focus more on the positive aspects. One thing I'd be curious to understand from your point of view: what are the strengths of the European Union that will help it become this responsible leader in the AI race? Are there some aspects we can or should build on?

Axel: Yes. First, our strength is standard setting. Here we can help a lot, we can do a lot, but we have to do it. This is what we are also asking for. So far, the European Union is too fragmented. The national political leaders, I would say, should change their mindset: the European Union can survive in this digital world only if we join forces and come up with European projects. Therefore, you also need this strong will in political leadership and in setting priorities. One priority should be, from my point of view, AI, but you can also have other priorities. We can talk about this later if you want.

It is, from my point of view, very necessary to concentrate on some projects where we can lead. Then you have to act: we need more active execution of this idea, where you then put skills into it, where you have the investment, the money to invest, and so on. Then we are coming forward, but we need this concept and strategy in the first place, and also the conviction to try to catch up with others. This is what I'm missing right now. Just sitting there and saying, "Oh, we have to protect everything," is not very wise if we are not also inventing and developing here.

That's why I think we also have to change our way of legislating, so that we have more speed in it and can really try to solve problems beforehand and not afterwards. Just to give you an impression: if Facebook is now coming with this metaverse, this merging of virtual reality and reality, here we should act as a kind of future-orientated legislator, already saying, "If you would like to invent this, then please have in mind this, this, and this, because these are the safeguards for our values, and then you can move on."

If we just let this grow, all of a sudden we find ourselves in the same typical situation where we then think, "Oh, now we have to deal with this or work on this differently, and now we should put values into it." Then it's sometimes too late. This is what I mean.

Alexandra: Would like to see here. Yes?

Axel: Yes.

Alexandra: I definitely agree that the regulatory process really takes years while, on the other hand, technology races above and beyond regulation, and that this is something that should change at some point. This also brings to mind a nice analogy you made in one of our earlier conversations about the 100-meter sprint and winning it. Maybe you can share it with our listeners, because I think it can be a good illustration of the problem here: that once regulations are finally out there, some people are of the opinion they should be set in stone, which I don't agree with.

Axel: I think in these extremely changing times, with changing technologies and developments, we should think of legislative tools more as a kind of living document. We know that today such and such is the case, and therefore we have to regulate something, but tomorrow a problem might appear that we are already seeing, like with the GDPR, where we are seeing there are some problems, and we should fix them immediately so that development can still go on. This is what I'm thinking about with these living documents, where you can just fix some problems easily without having to wait through the whole process.

This is also what I mean with this 100-meter race situation. We can't say, "Oh, I have won this race, and I do not need to train any longer because I have always won this race." Then you probably won't win the second race. Therefore, you have to train, and in parallel, as the legislator, we have to train or fix the problems that might appear along this path of further development. Therefore, I think, if we do not want to be second, third, or fourth in the race, then we should be sitting there in the evening, all the time, saying, "What can I fix tomorrow?" This would be my ideal world in legislation and the digital race.

Alexandra: Sounds like a world I want to live in. Maybe for those of our listeners for whom it's not immediately apparent why it would be good for the European Union to lead this race and not be second, third, fifth, or, as you pointed out, the data colony of some of the leading nations: what is the downside of being left behind? What are the consequences you think we can expect if we don't manage to keep up and come into a better position?

Axel: All of this has to do with prosperity and wealth. Also, we are becoming totally dependent. Just as an example, if we say we do not want a certain technology in place, we never know what might be built next upon this technology. All of a sudden, you do not have this experience, you do not have the surrounding conditions, and if you then create something, you are totally dependent.

This might apply to all of these areas. Also, if you come to military issues, just saying, "Oh, no, we do not want to have these algorithms or drones or whatever" in military defense, then all of a sudden you will notice that the others will have them, and they probably will not use them for good. Here, all of a sudden, if you then come to the idea, "Oh, now we should use it," you are totally dependent on others. You can also apply this to the idea of cryptocurrency. I'm very glad that the European Central Bank is doing something regarding a cryptocurrency for the euro.

Otherwise, this will take over our banking system. If we are not the ones in the lead, they will lead and they will take over. All of a sudden, you find yourself no longer in first place. This is something I think we should avoid, but it needs different thinking. We are not doing the digital euro for financial issues or other issues in the financial sector; we are doing this for competition reasons.

Alexandra: For sovereignty.

Axel: Here you need a different-- Yes, sovereignty. This is included if you try to lead here and have this in place. Otherwise, you will be dependent. This all pays into sovereignty: if we would like to remain sovereign, also in digital issues, we need to think differently, and we need to catch up with all these developments. Therefore, it's necessary to be open to these things, but we do not have to do it in a way we don't like. It's more that we should do it the way we like, we should put the safeguards in, and then we have a wonderfully balanced world for everyone. This is what is driving me.

Alexandra: I can imagine. Basically, tackling it proactively gives us the permission and possibility to shape it in a way that suits us and corresponds with our European values, whereas not doing it will lead to us being influenced by systems that may not uphold our values. I think this is why I'm sometimes surprised, when I talk with people coming mainly out of the data privacy and data protection corner, that they sometimes fail to recognize how interwoven everything is nowadays. To be clear, I would never dispute that privacy is a fundamental right that needs to be and should be protected.

But we have already seen so many examples in the past of how technology has influenced our democratic systems and can have an impact on the security level. You mentioned economic prosperity. All of these are also important values that need to be balanced against it, which reminds me again of something you said in our last conversation. I think it was that you sometimes get the impression that data protection and also freedom of speech are considered-- What was the term you used? Super fundamental rights?

Axel: Yes, you're right. This is used politically here for everything. Both of these fundamental rights, yes, they are important, but they also have to be balanced against other fundamental rights. Sometimes this is not done here in the house by the political groups, and this, of course, can't work in the end. That's why, rather than thinking in this dimension of super fundamental rights, we should balance all the time. When you say, "Oh, this is a value we would like to protect," you have to do this balancing all the time, again and again. There might be another problem the next day, and then you have to balance this.

Alexandra: Again.

Axel: This is what I'm asking for more and more. So far, I don't know why, but some of my colleagues are not really interested. They think it's easy to say, "Oh, this is data protection, and therefore everything else has to be undermined." This is not the case. We can't say, regarding security, "I don't care, because I have data protection." No. We should also secure our day-to-day lives by analyzing data and then balancing how to safeguard it, not by saying, "Oh, we need to have protected data everywhere," so that you can't really analyze the situation. We also saw in COVID times that if we only think about the protection of data, then you can't really manage the COVID pandemic situation. We do not have to have 150% data protection; 100% would be enough. Then we can also get more out of the situation for other issues that are also important.

Alexandra: Yes, I completely agree. This actually brings me back to the section in the report, as part of the roadmap and the policy recommendations, that is about the favorable regulatory environment. One subchapter there, I think, was called the European Data Challenge. Some of the policy recommendations in there are, I think, perfectly on point and really should be implemented.

In general, it's highlighted all over the report how important access to high-quality data is, and how important opening up data silos is. This data doesn't have to be personal, because especially with artificial intelligence you're only interested in the generalizable patterns, but granular data is needed. Otherwise, I can't see how we would ever become a leader in this AI space. Maybe coming back to this chapter on the European data challenges, can you share a few of the policy recommendations that were in there with our listeners, like, for example, the creation of a single European data space, or whatever else comes to your mind?

Axel: Of course, sometimes we have to think in sectoral approaches, like health data, for instance. This is probably a trillion-euro market. Therefore, if we are not fast enough, and if we are not providing enough data, or enough quality data, then we probably won't be a part of this competition, because we are always behind, and this shouldn't be the case. It's always the same problem with data. If we overprotect everything, then we won't move forward. I would say we need to have these possibilities in place where we can use the data. We now have the Data Governance Act, where we think about more sharing of data, of algorithm data.

There are possibilities with third parties who might be a trustee, someone we can at least provide the data to. We need the European data market, or European data single market, meeting the digital single market at the end. Otherwise, we are always at a kind of disadvantage from the start compared to other regions. In Europe, we always come with 20-- Let's exaggerate a bit: 27 different languages, 27 different legal systems, 27 different mentalities, traditions, and so on. This is what we try to solve in the single common market, and for digital issues we should have this in place as well. This then also affects some of the competences of the member states.

If they keep insisting on these issues, we will never come to a digital single market. That's why here, too, we need this change of mindset in the political world. Then we might come forward so that we have this one digital single market, where we have a very good starting point for our, let's say, digital industry, data industry, and so on. Then we can compete as a group.

Alexandra: Yes, move forward and also scale effectively, because with this unified legal space and data space it's much easier for organizations to grow, scale, and then compete on a global level. Really a good point. A few other policy recommendations I remember and can only support concern the legal uncertainty around anonymous data: whether something is sufficiently anonymized, and when it can and cannot be shared. The report pointed out that this uncertainty oftentimes leads to organizations backing away from data initiatives, not sharing data, not approaching the project, which, of course, contradicts what we actually want to achieve.

This is also why we just recently started an industry group and a standards group with the IEEE, where we will also collaborate with regulators on setting standards for synthetic data, so that we have a common understanding of when it can be considered fully anonymous and then safely shared.

This is another project I'm looking forward to quite a lot, because given the importance of having to share data, I think it's something we can't miss out on. The report also highlighted that we should do more in regard to privacy-by-design technologies, encryption technologies, synthetic data, and all the different approaches that already exist out there to strike a nice balance between data utilization and privacy protection, but that are not yet widely used and not really regulated.

Axel: Unfortunately, there is also this question of facial recognition. In the house, we are discussing mainly all the threats, what can come out of it, what you can detect, tracking and profiling, whatever. But you already have technology in place with which you can safeguard these data protection issues. Here, it would be quite good to be more open-minded to all the possibilities as well. You also once introduced synthetic data to me. I was very surprised that this exists, but it's a wonderful idea.

Alexandra: Absolutely. I'll also be curious to see how this will play out on the regulatory stage, because now, with the proposed AI Act, we have even stronger requirements for fairness in artificial intelligence and for avoiding discrimination. Oftentimes the reason that algorithms discriminate in the first place is actually a lack of representative data, which is also something that's influenced by privacy regulations and locked-up data. Here again, we're trying to contribute to fairer algorithms by making synthetic data available that can then give insights into minority groups or sensitive attributes.

For example, just a few weeks ago I talked with a member of the Queer in AI community who highlighted that not being represented in datasets actually leads, by design, to discrimination. This is something that organizations can't overcome from the regulatory side, because on the one hand they are prohibited from using these attributes, and on the other hand they have to use them, because otherwise they could never measure and make sure that the algorithms cater to the needs of everyone and don't discriminate. I think here we will also need some clarification on the legal front.

Axel: Yes, you're totally right. Probably the European Parliament will ask for some conditions on the results of algorithms. As you mentioned already, they should be non-discriminatory, they should be gender balanced, they should not be biased, and they should be sustainable and -

Alexandra: Transparent, explainable.

Axel: Explainable, everything. All of a sudden, you need quality data to meet all these requirements. But if I'm not providing this data because of the GDPR while asking for all these other things, then this can't work. That's why synthetic data might be a very good step forward in this direction. I'm very much looking forward to how we can introduce this in the AI Act as well, because it's a good idea. Then the sandboxes that are in the AI Act are, from my point of view, totally necessary, because everyone who is developing algorithms needs to test whether they are in line with what we are asking for, and there, probably, we can also use some, let's say, synthetic data, or personal data, or whatever, just for training, so that this is allowed.

Alexandra: Even for validating and testing. We're also having conversations with regulators, because privacy is sometimes used as a fake reason, especially by large organizations, to prevent any external auditor or regulator from looking closely into their systems. You can't only look at the code of an artificial intelligence system; you need data to understand how it performs and how it interacts with the world. And here, of course, due to privacy reasons, unfortunately, you can't share this data.

Synthetic data allows you to access this data, or even create sets that also include examples of marginalized groups or certain minority groups, which would help auditors or regulators to really thoroughly test an AI system and see: how would it perform for this or that minority group? Do we see any discrimination in there? This is something I think will also be beneficial, because the more people can access and look at data, the easier it will be to detect biases in the data itself, and also in the algorithms. I think we need to open up this resource nowadays. In the report, data was also described as being something like the fifth element nowadays, next to earth, air, and everything else.

I really enjoyed this. I think it's on point that we need to consider data on a different level than we did before, because it simply plays a much more important role compared to a decade or two ago.

Axel: I'm totally in line with what you have said.

Alexandra: Happy to hear.

Alexandra: Maybe coming back to the ideal world you painted and said you would love to see: how can we achieve this world? Can you think of three levers that can be pulled, so that if these three big aspects are achieved, it looks super positive for the European Union to venture towards its position as responsible leader in the AI space? Or are there so many small levers, so many different puzzle pieces, that we need to get in place to make this work? What's your perspective?

Axel: Of course, it starts with political leadership. I can't see this so far on the European level, but once we have it, then I think we need to be open to joining forces, to European projects first, then having the strong will to survive, and then coming up with a concept where you can deal with the investments, with the training of skills, with the infrastructure and all the issues, and also the programs to fund research and development, and also setting priorities, and these should be AI, self-driving cars, smart farming, green tech, 3D printing, quantum computing, or whatever it is, and so on.

Everything might be very important, but if you have a priority in place, then you know what to do as a next step. If we leave everything like it is, with every member state doing its own thing and so on, then probably, I would say, we are not really competitive. Then, of course, you also have to prepare your budget and say, "Now we are investing. We will have our lighthouse projects here and there and so on." Then we really might come forward. On the security issue, of course, we need to have cybersecurity tools in place, and we also need to be aware that upcoming infrastructure shouldn't include spy devices from other-

Alexandra: Nations.

Axel: -nation states in the world. This is something I would see in my ideal world, but once again, having the strong will to survive in the digital age would lead to something I would consider a perfect world for us. We should be ambitious here and say, "No, we would like to lead in this and this and this." Then I would be satisfied.

Alexandra: Me as well. I'm hoping that in 10 years' time we can come together again, look back, and find ourselves living in this ideal world that you've just described.

Axel: I think we do not have that long a time period.

Alexandra: Yes, absolutely. Talking about time, maybe just out of curiosity: for the draft report, what are the next steps, and how will it continue?

Axel: The deadline for amendments will close, I think in two days.

Alexandra: Tomorrow, I think.

Axel: Oh, tomorrow already. Yes. Tomorrow.

Alexandra: At the time we publish this, it's already in the past. I think the deadline was the 16th of December.

Axel: Yes. Then, in January and February, we will try to negotiate the amendments and try to come to one single approach.

Alexandra: Within the committee, you will negotiate.

Axel: Yes. Then we will vote in the AI committee in March, and in plenary in May. This is the timeline, and then I hope that the recommendations we have will be taken up in the standing committees, so that they can work on them and we can take a step forward.

Alexandra: If the vote in May is positive, is there an immediate next step afterwards, or how does this -

Axel: Probably we have to motivate the standing committees to take over the duties we just decided on, and remind them that they still have duties to do here. It is still a question how we can do this. Probably after the summer break, and then next year, we might politely remind them that they should come forward with this issue. We have to think about how, within our structure here, we can once again make these recommendations to the standing committees very clear, but let's cross our fingers.

Alexandra: I will definitely keep my fingers crossed for the next few months that this goes as positively as I hope it does. Perfect then. Thank you so much for everything that you shared today. It was a true pleasure to talk to you. Once again, congratulations on the work you and the committee did. I think this is really what we need to achieve as the European Union, and I'm on the same page with you: rather now than later. I hope that this will happen.

Axel: Thanks a lot. It was a pleasure for me. Thanks for your initiative.

 

Alexandra: What a great conversation. It's been a real pleasure talking to Axel. I hope you have found this behind-the-scenes peek into the European Parliament's work on AI and data regulations as fascinating as I did. I'm personally very much looking forward to the impact Axel's report will make on the upcoming AI Act, and also to the European Union's ambition to become the responsible leader in this global AI race. As always, if you have any comments or questions, or even a suggestion for a guest I should absolutely talk to, please reach out and send us an email at podcast@mostly.ai. Until then, see you next time.

Ready to try synthetic data generation?

The best way to learn about synthetic data is to experiment with synthetic data generation. Try it for free or get in touch with our sales team for a demo.