Episode 27

Stop relying on the Brussels Effect: On effective digital policymaking and negotiating transatlantic data flows with Andrea Renda, CEPS

Hosted by
Alexandra Ebert
In the 27th episode of the Data Democratization Podcast, Andrea Renda, a Senior Research Fellow at the Centre for European Policy Studies (CEPS), Advisor to the European Parliament, and a former member of the European High-Level Expert Group on AI, joins us on the show to discuss:
  • What it takes to create effective digital policies
  • Why the AI Act only goes halfway - and what else needs to be done
  • Why data is a strange beast to tackle for academics and regulators
  • The challenges of transatlantic data flows & why the EU should stop relying on the Brussels Effect
  • Synthetic data - and how it could enhance trust and enable data sharing
  • The geopolitical battle on data, AI & digital policies between China, the EU and US 
  • What’s good about the GDPR - and where are its shortcomings
  • And many more exciting and timely topics
If you want to hear more about the EU roadmap for AI from another point of view, listen to our podcast episode with Axel Voss, Rapporteur of the Special Committee on Artificial Intelligence in a Digital Age in the European Parliament.

Transcript

Alexandra: Hello, and welcome to the 27th episode of the Data Democratization Podcast. I'm Alexandra Ebert, your host, and MOSTLY AI's Chief Trust Officer. Today, I'm super happy to be back with yet another great guest for you. I have the pleasure of welcoming Andrea Renda, who is both a think tanker and an academic. He's a Senior Research Fellow at CEPS, the Centre for European Policy Studies, where he's responsible for global governance, regulation, innovation, and the digital economy.

Besides being a think tanker, he also works as a professor, is an advisor to the European Parliament, and was a former member of the European High-Level Expert Group on Artificial Intelligence. As you can guess, I'm very much looking forward to the conversation with him. In today's episode, you will hear our conversation about the AI Act and why it's an important step in the right direction but still only goes halfway. I really enjoyed what Andrea shared about what else will be important to ensure that once the AI Act is in effect, we have effective regulation and oversight.

We also chatted about the upcoming data and digital acts on the European horizon and why data is a particularly strange beast to tackle, both for academia and for policymakers. Andrea made clear why principles- and outcome-based regulation will become a necessity and shared why GovTech will become essential to ensure effective regulation and oversight.

Then, of course, we talked about one of the hottest topics nowadays, which is transatlantic data flows and the challenges from both a European and a US perspective. We also briefly chatted about synthetic data in that context and how it could ensure the necessary trust to enable these cross-border data flows. We even covered similarities between the European approach to regulation and, who would have guessed it, the Chinese approach, with all the regulation we see as part of China's Digital Silk Road initiative.

Then, we also talked about GDPR shortcomings, but many, many other topics that I'm sure will be of interest to you. Let's not wait any longer and dive right in.

Thank you so much for coming to the show today. I've actually been looking forward to having this chat all day, so I'm really, really glad that you're here now. Before we jump into all the topics that I want to discuss, could you briefly introduce yourself to our listeners, and maybe even touch a little bit upon what motivates and drives you to do all this work on AI policy, privacy policy, and tech policy?

Andrea: Thank you very much, Alexandra, and thanks for having me on this fantastic podcast. It's a pleasure to be here. I'm a researcher, I'm an academic, originally an economist, not necessarily the most useful profession in the world. I realized this early on in my career, and I started contaminating economics with a number of other social sciences. Initially law, and the economic analysis of law, which brought me into regulation. One of the most intellectually challenging and rewarding parts of the regulatory governance world is really: how do you regulate emerging technologies?

This brought me very soon in my career to research emerging technologies and digital technologies. These were the years when the World Wide Web was really emerging. The world of cyberspace and cyber law has fascinated me since the very beginning. This is an activity that, as an academic and then also as a think tanker, has kept me busy for probably 25 years now.

What motivates me is the intellectual challenge, but also the fact that after all these years, I have come to a point where I see the potential to have an impact in a debate that is sometimes very polluted, and very often polluted with specific interests. Whereas I have no hidden agenda, meaning I'm an academic working and trying to improve the quality of the regulation of emerging technologies for people's wellbeing and for a better society in the future.

Alexandra: That sounds indeed fascinating. I want to dive right in discussing the AI Act since it's such a timely topic at the moment. What is the good, the bad, and the ugly about the AI Act in your opinion, especially when it comes to AI ethics?

Andrea: Well, there are many good things about the AI Act in my opinion. I have been involved with the AI Act since the early days of its preparation, if you wish, because I was a member of the High-Level Expert Group on Artificial Intelligence, an independent expert group appointed by the European Commission. Then I worked as a researcher on the study that backed the preparation of the AI Act, the impact assessment study of the AI Act, where my colleagues and I mapped emerging risks and opportunities of AI, particularly the risks for fundamental rights and security. We also tried to assess the costs of various options of how to regulate and how much to regulate AI.

The good thing about the AI Act is that it is a comprehensive regulatory framework. It's the first of its kind. It tries to adopt an all-encompassing approach to the regulation of artificial intelligence. It is perhaps not entirely true that it regulates every aspect of artificial intelligence, because the AI Act stops where other pieces of proposed legislation start. I refer in particular to the Digital Services Act and the Digital Markets Act, which also look at specific forms of algorithmic practices, potential discrimination, potential anti-competitive conduct, and so on and so forth.

The good thing is that it is a little bit like GDPR was. It's not an attempt at regulating specific sectors, but an attempt to take in the uses of a family of techniques such as AI across all sectors, and to at least establish a process and governance around AI, and a mechanism for identifying the uses that are more risky, imposing obligations, and monitoring those obligations on the market.

It's the first of its kind. It's been observed and studied in other parts of the world. For some, it is perhaps excessively regulatory, meaning that it is perhaps too traditional in the sense of trying to regulate something that is ever-changing, and perhaps imposing behaviors that come with compliance costs and administrative burdens.

For others, it's insufficiently regulatory, meaning that it only covers what is considered to be high-risk AI, which is an estimated 5 to 10% of AI applications in the near future. Whereas the risks might in the future also come from low-risk AI, if several low-risk AI systems interact with each other; and knowing that the subject matter evolves so quickly, what is low-risk today might very well become high-risk tomorrow, and so on and so forth.

The bad thing that I see about the AI Act is that it stops halfway. It tries to define artificial intelligence. It tries to introduce the so-called risk-based approach, differentiating the regulatory treatment depending on whether an AI application is considered to be high-risk or not. But it doesn't fully explain how this is going to happen the moment this proposed regulation lands on the market and starts trying to get ahold of an ever-changing technology.

The law in the books is fairly clear. The law in practice is going to be extremely complicated and will require a degree of flexibility, a degree of adaptive regulation, a degree of agile governance that I don't see at the moment very well defined in the text of the AI Act.

Alexandra: Understood. Do you have any ideas for how this second half could be better addressed, to avoid all this uncertainty and the challenges of actually implementing the AI Act once it comes into effect?

Andrea: For me, the most important thing is understanding how the governance around the AI Act will be defined and then implemented. In particular, at the moment in the AI Act, the whole process of risk classification and the decision to impose regulatory obligations is delegated to an AI Board, which seems to me a little bit like a meeting of the Council of the European Union, meaning a board that has one representative per member state, plus most likely the European Data Protection Supervisor. But it's fairly clear to me that it takes a lot more than an AI Board with 28 members, or 27 plus one, to spread enough certainty and update the knowledge that is required for sound regulation in a situation like this one.

At the minimum, I would like to see a very independent and balanced expert group supporting the AI Board, and a mechanism by which the AI Board cooperates on a stable basis with sectoral regulators. But also, if possible, a replication of what I think has been a very fortunate experience at the EU level, which was the AI Alliance, created in 2018 together with the High-Level Expert Group on AI, which quickly gathered thousands of participants from all walks of life.

Civil society, business, academia. It became, at least for a while, a very vibrant forum for discussing emerging uses, emerging risks, concerns, things that don't work. My overall impression is that if you don't empower civil society, and if you don't beef up regulators, we're not going to get ahold of this constantly moving target.

Alexandra: I agree, and I think it's also important because it's currently being discussed who the future enforcement entity could be. Ideally, you would have one unified entity that consists of members from various sectors and areas - competition, privacy, and so on and so forth - because one of the suggestions is actually having the European Data Protection Board also take up this responsibility.

I think since artificial intelligence is such a wide-encompassing field that's now entering so many areas of people's day-to-day lives, we also need diverse perspectives on the regulatory side to make decisions that improve society in all aspects, not only in terms of privacy.

Andrea: Alexandra, if I can add something there. Involving only the data protection side has the advantage that data protection authorities, at least in some member states of the European Union, have gathered quite a lot of knowledge on data governance more generally. At the same time, it will certainly be criticized, and might also be criticizable, if you look at the way in which the data strategy is evolving at the EU level.

Data is a very strange beast. Actually, it's not very easy for someone who comes from economics to conceptualize it, because I think the academic thinking on data will have to evolve quite a lot going forward. Just to give you one example in plain terms: certainly, the GDPR has had the great merit of posing a problem - the fact that, for example, data flows should be minimized when personally identifiable data are at stake.

The data strategy, if you look at the emerging Data Governance Act and the Data Act, is not just about reducing data flows; it's actually about encouraging data flows in some cases, and data sharing in particular on the business side. This is not, like in the early days of the internet, about encouraging the free flow of data. This is much more about, in some cases, enabling managed data sharing. Meaning sharing data, for example, among peers in specific industrial ecosystems or in data spaces, but not enabling full data flows.

It's a little bit like a stop-and-go in a very diverse type of exercise. You want to encourage the sharing of data for good, for example between businesses and governments, in a more open-ended way. The EU wants to force large businesses to share data with smaller ones as a competition-enhancing remedy. You want to restrict data flows whenever personally identifiable data are at stake, and you want to encourage managed data sharing among peers and companies in value chains to enable a smoother and more effective optimization of services, the working of contracts, and fairer contractual conditions along supply chains.

Data becomes a little bit like air or water. It's like an element of our environment.

Alexandra: The fifth element, as it's now called.

Andrea: Incorporating only the data minimization element, or if you wish, the personal data protection element, into AI governance as a special angle on top of the general AI expertise would probably increase the salience of only one of the many elements that need to be taken into account to have a sound, multidimensional, 360-degree view of the evolution of artificial intelligence.

Alexandra: Agreed. I think for the regulators, there's such a tough balance to strike here, with privacy protection on the one hand, and of course improving access to data and the innovation built upon this data on the other, while also considering competitive elements and so on and so forth. Since our listeners are quite interested in synthetic data, and synthetic data is one of these tools that can encourage data flows while protecting personal data, what's your perspective on synthetic data and how it can contribute in the European Union?

Andrea: Well, I think synthetic data can be a very promising development for that attempt to reconcile different needs in the data governance space: the need to use data and to enable data-driven innovation, but at the same time, the need to protect specific types of data. Not only personally identifiable data, but potentially trade secrets, industrial information and data, and so on and so forth.

The development of synthetic data is something on which policymakers have done zero work to date, in my opinion. While academics and researchers are working on synthetic data, the questions of what kind of policy space, what kind of governance requirements, and what kind of process needs to be followed to enable trustworthiness remain open. Synthetic data is something that will perhaps become one of the points of concern in public policy going forward, together with other ways to enhance trust in data flows without having a full transfer of information - it could be through cryptography or through data slicing and other means.

That is, in my opinion, one of the big open challenges for the future, and without that, I don't think we will get very far. We're not going to get very far, Alexandra, if you pass me the oversimplification, in the data strategy and data governance with laws written with pen and paper, and judges with wigs trying to implement them.

Alexandra: Maybe before we come to the agility of lawmakers and regulators: I completely agree with what you just said. For me, it's quite fascinating because, on the one hand, I talk with lots of regulators, and on the other hand, I talk with privacy researchers. It's so fun because the privacy researchers oftentimes emphasize, "We already have all of these emerging privacy-enhancing technologies that would allow the European Union to build on its vision and achieve both the free flow of data and a world where data can be utilized, but at the same time without compromising on European values like privacy protection."

At the same time, of course, I agree that the regulators maybe are not that aware and not looking that closely into these technologies yet. Although, for example, for synthetic data, I know this work has already started. I'm happy to see that there are some policy recommendations - for example, recently from the AIDA Committee of the European Parliament - to include synthetic data and assess it, and we are conducting some work with the Joint Research Centre of the European Commission on synthetic data. It's starting, but sometimes it's still surprising why regulators are not further along in the process of assessing this.

Andrea: One thing to keep in mind, Alexandra, is that sometimes in the policy space, at the very high level, things happen because individuals want them to happen, or because there's a lack of alternatives. The area where I see the biggest promise is the Trade and Technology Council between the EU and the US, because obviously the elephant in the room is Schrems II, and there is a need to find ways - potentially technology-enabled ways rather than purely legal ways - to enable data flows between the EU and the US and enhance collaboration between the two sides.

In the end, privacy-enhancing technologies will see quite a lot of debate in the TTC, the Trade and Technology Council. Perhaps from there they will move into more international AI and data cooperation spaces, including perhaps projects that look at specific silos or areas. It could be related to health in particular, where there seems to be quite a lot of demand for international cooperation at the moment: exchanging research results and exchanging data that support future data-intensive research, but even for purposes such as pandemic preparedness.

Alexandra: Agreed. Since you've just touched upon these transatlantic discussions about data sharing: you recently co-authored a report on transatlantic cooperation in regards to artificial intelligence. Why do you consider these transatlantic negotiations and conversations to be so important right now, specifically between the European Union and the US, but of course also with other countries and continents, both in regards to data and to artificial intelligence?

Andrea: Well, it's not that I consider them to be particularly important; it's the policymakers who consider them to be particularly important. And obviously, given the current geopolitical dynamics, this has become one of the key tables where policymakers can try to nail down some important elements of cooperation that might then become a blueprint for a broader group of countries - what are sometimes called, perhaps with a little bit of exaggeration, like-minded countries, or the coalition of democracies in the words of Joe Biden.

I don't think that transatlantic cooperation will be super easy, also because there are certain things that the EU is doing, in particular on the data strategy side, which are not really resonating very well with what the US would like to do.

Alexandra: For example?

Andrea: Well, building all this complex web of regulations on data governance is something that I think will take probably 3,000 years for an American to accept, because even the most liberal or progressive of them will consider this to be a regulatory delirium. The other thing that is quite interesting is that if you look at the evolution of the data strategy in China, it looks very much like the things that the EU is trying to do.

It even, at least on paper, has provisions that are very similar to GDPR, provisions on data transfers, provisions very similar to the Data Governance Act, and so on and so forth. Obviously with some differences, but indeed there is this idea that we depart from the open internet with data flowing freely, and we go into a strange new dimension in which, in order to ensure that everybody has a fair share, you have to regulate everything. That is a little bit the direction that I see being taken in China and in the EU.

Now, this probably includes things like Gaia-X and other emerging ways to, for example, impose compliance by design with specific legislation, notably the GDPR and so on. This will perhaps become an element of bargaining in the TTC. Why is that? This is also what interests me. Because I think over the past three, four years, we've seen for the first time, compared to the past three decades, the emergence of a certain complementarity between the United States and the EU. I'll put it very, very simply here.

While the US has traditionally dominated the technology space compared to the EU - some politicians in Europe have even exaggerated by saying that we've become an American colony and so on and so forth - at the moment, there are parts of the technology stack where the US is actually nowhere.

In particular, I'm thinking about connectivity and 5G, where the US, perhaps because of a very loose approach to competition in this field, has lost track a little bit of the needed research and development. At the moment, in the 5G patent pool, you see China being quite prominent and Korea being quite prominent, but European companies as well, holding quite a lot of the patents that are essential for 5G. The US is basically nowhere.

I see an emerging complementarity: the EU - not only as a super-regulator but also as the home of important technology providers in specific parts of the technology stack - can give the US the self-sufficiency, if you wish, that it is looking for to then go out there and compete with China, even in developing countries, as we see the Digital Silk Road, part of China's broader Belt and Road Initiative, spreading into African countries, Latin American countries, and so on.

It's a big geopolitical battle where the US needs allies a little bit more than China does, and the EU is the inevitable one. This is why everything lands on the TTC. When it comes to the TTC, I see the EU getting there, in my opinion, a little less prepared than it should be.

Alexandra: Why is that?

Andrea: My friends from the European Commission or the European Parliament will listen to me in this podcast and will probably hate me, but here I'll try to set out my reasons. Everybody knows that I'm passionate about the European Union. This is my project as well, but that doesn't mean that as a free thinker I cannot criticize certain things. In my opinion, the thing is, sometimes we fill our mouths a lot with this idea of the Brussels Effect. The fact that the EU is a great regulator, better than everybody else, so-

Alexandra: To quickly interrupt for those of our listeners who don't know the Brussels Effect: you're referring to the fact that policy decisions and regulations that originate in the European Union have in the past had a spillover effect on the regulation of other countries. Is this correct, or is there anything you want to add to the Brussels-

Andrea: Yes. It's the ability of the EU to create very well-rounded, very solid regulatory frameworks that then become models that are in some cases spontaneously emulated in other countries around the world. In other cases, I would add - this is not part of the rhetoric of the Brussels Effect - not so spontaneously, because if you think about adequacy decisions, the neighborhood policy of the EU, the association agreements, the deep and comprehensive free trade agreements, many of these things happen through negotiation as well.

You adhere to the EU acquis also because you want to be part of, or have access to, the single market. The Brussels Effect - the term comes from a very widely read book by Anu Bradford - is something that has really revived a little bit the enthusiasm of Brussels policymakers, by saying, "Here's our flag. We can carry it around."

Now, if you look at this from a deeper academic perspective, I think the Brussels Effect is something that only happens when you are a preacher in the desert. When you are the first mover, no one has thought about that yet, and the other big superpowers are not doing the same thing. You do something that appeals to other parts of the world, and then they look at you as the only reference they have.

This is what has happened with the GDPR. Once you take out this first-mover advantage, once the other superpowers start regulating on the same subject - and this is happening in this space - and once international organizations become more active in this space, academics tend to converge on the fact that rather than regulating and waiting for others to adopt or emulate, what you actually have to do is roll up your sleeves, sit down at the negotiation table, and start working on reciprocal concessions, on mutual recognition, on finding common solutions.

This requires strategy. Even with the complementarity example that I was making before, you need to know what to ask for from the US and what to offer. You need to have a strategy, strategic advice, tactical advice. You need to have the right people to negotiate. I think on this, the EU has traditionally been very weak. We will see in the TTC whether the EU can get its act together and sit down at the table, because the agenda is fairly comprehensive.

This is not just AI. It goes from the Green Deal to trade restrictions and so on. We will see whether the EU, through its External Action Service, DG Trade, and the different DGs in the European Commission, will have the ability to really cohesively negotiate a good package with the US.

Alexandra: Sorry to interrupt, but what's the process in the European Union for coming to this understanding of what should be offered and what should be asked for? What do they do to arrive at a list of what they want to negotiate about?

Andrea: I'm a little bit worried there, because I don't see a very structured process, apart from exchanges, typically on specific chapters, within the different Directorates-General of the Commission. Listen, the European Commission at the moment doesn't even have a full-fledged think tank. The European Commission used to have the European Political Strategy Centre. You could criticize it or not, but it was a center of strategic advice. The External Action Service doesn't really have strategic advice backing it.

The JRC is certainly not a strategic advice place; it's more of a research center. There are some internal groups or expert groups that can advise the European Commission, but I don't see this as a structured process. It is something that is left to the cabinets. The IDEA platform that has replaced the European Political Strategy Centre is more a convener of round tables. It doesn't have its own research capacity and strategic advice capacity. I think there's a lot of work to be done there. That worries me a little bit, because as an academic, I study the EU as an actor of global governance. There is so much to be done in terms of how you choose your strategy, your instruments of global governance, the channels, the forging of coalitions that is needed to really amplify the impact on the evolution of global governance. At the moment, I don't see the European Commission or the Parliament being well equipped for this.

Alexandra: Understood, understood. This indeed sounds a little bit worrisome. Maybe one thing that would also be interesting for me to better understand: earlier when we talked about the AI Act, you mentioned that on one side of the spectrum there are those who deem the AI Act too regulatory, too prescriptive. On the other hand, we of course, as always, have those who have the feeling that not enough is covered - only the high-risk AI systems and so on and so forth.

From some, I hear the concern that while they see the high-risk classification in general as the right approach, they're missing a stronger focus on AI ethics principles. Already back in, I think, 2019, you worked for the Centre for European Policy Studies on a report that focused on governance but also on ethical aspects of artificial intelligence and how these could be regulated.

Are there some concrete recommendations from this report that you could share with us, on how the AI Act could put a stronger focus on the ethical aspects of artificial intelligence?

Andrea: Well, I think the AI Act is sufficiently grounded in ethical requirements, although ethics is a pretty broad field, and the ethics requirements for trustworthy AI incorporated a number of elements that are not replicated in the AI Act. Both the work that I did with CEPS and the work of the High-Level Expert Group on Artificial Intelligence on trustworthy AI incorporated a number of elements in terms of better detailing fairness and transparency, but also incorporating societal and environmental well-being as key requirements for AI to be trustworthy.

Now, what happens in practice when you want to apply this as a regulatory requirement is not easily said. You could require that those that develop or deploy high-risk artificial intelligence applications think about and try to mitigate any potential negative impact on societal well-being and environmental well-being. Perhaps this might go as far as requiring, for example, the compatibility of these technological solutions with the environmental goals of the European Union.

Minimizing energy consumption or, rather than having only a data minimization principle, also having an energy consumption minimization principle, so that you demonstrate that you're choosing the technique and the process of development and deployment that minimizes the carbon footprint of that specific AI application. That is one of the many things that could be done.

Other than this, the rest is subject to further study and improvement. The AI Act cannot fully go into detail about what it means to protect self-determination, for example, or human agency. What does it mean to secure sufficiently sound, meaningful human oversight? Again, I'm pretty minimalistic on the text of the AI Act. I am very ambitious about what happens after the AI Act, meaning that it is the work of experts and civil society to set priorities and to define what is acceptable and what is not.

One thing for which I praise the AI Act, also in terms of ethics, is that it takes a rather nuanced but still rather firm approach that is precautionary in terms of identifying cases where there are risks that, as far as we know today, are not easy to mitigate and are therefore unacceptable at the moment. This is the area of red lines. The area of red lines should not be taken for granted. Until two, three weeks before the publication of the AI Act, there was no such thing.

It was still high risk versus low risk, a very binary approach. The fact that the Commission eventually decided to introduce an area where it's made pretty clear that not everything that can be done with AI should be done with AI - meaning that there are certain AI applications that at the moment should be prohibited, awaiting, hopefully, further knowledge and evidence - is, I think, a great step forward. It should be recognized as an incorporation of some key ethical and legal principles into the very texture and structure of this regulatory proposal.

Alexandra: Definitely. I also agree that this is a really important step that I hope will have a spillover effect on the regulation of AI in other areas of the world. This reminds me of a dear friend of mine, a senior director at IEEE, who oftentimes emphasizes something she doesn't see many people say. Many say, "Okay, we are actually subject to artificial intelligence and we can't stop this technological development from happening and eventually taking over the world," or whatever some scenarios are indicating, but she argues that it's actually up to us to shape the technology into a form where it's used for the betterment of humans and society.

I think this was an important step: to include those areas where currently it's just too risky and not enough knowledge exists on how to apply this safely. Then, of course, I also like what you pointed out, that it shows the iterative approach that was taken here, setting a starting point to ensure that the development of artificial intelligence is moving in a beneficial and human-centric direction. Of course, it will be iterative. There will be new insights, new steps, new expert guidance that may be derived.

Therefore, I think this is a very, very well-thought-through approach. It's also a nice argument to use against those parties who are completely against regulating AI at all, who argue that it's just too new to come up with all-encompassing regulation. I think that's definitely not the plan - it's just putting up the guardrails to ensure beneficial development.

Andrea: I agree with this, Alexandra. I think it is extremely important that we avoid AI being something that is done to us. It is pretty clear that AI is not Armageddon falling on earth or something that is completely external to us. It is a decision at the individual level whether or not to deploy AI. Thereby the liability and accountability of human beings, not machines, is the key principle.

Also, it is a collective decision, and it should become more and more a collective decision, what could be done and what should be done with AI, and what should not be done using specific AI techniques, rather than taking a leap of faith by just using AI without the necessary safeguards. As I always say, the issue here is that we need to regulate AI because it's stupid, not because it's intelligent. It is powerful though.

We've had many examples also among human beings of people that were extremely stupid and extremely powerful. Now we have a technology that is exactly like that at scale so you can decide whether you want to do something about it or not.

Alexandra: Absolutely. One thing I'd be curious to better understand, since you've been involved with the AI Act, or its draft, even before it was conceptualized: with the classification of certain high-risk systems, the focus is on applications that have a significant impact on human lives in regards to, for example, access to wealth, education, or professions.

One thing that I think is a little bit missing, or not covered, in the AI Act - and I would like to get your perspective on why that's the case and which considerations led to writing the draft as it currently stands - is that social media platforms like Facebook are not necessarily heavily regulated by the upcoming AI Act. One might argue that the impact they have on a single individual is not as severe as, for example, a single individual getting no access to credit.

But overall, given how social media shapes society and endangers democracy - I'm referencing Paul Nemitz now - and all the other impacts we see it has, some can argue that even though the individual impact is not as high, the impact social media and platforms have on society overall can be seen as higher risk. What were the considerations that led to the AI Act not heavily targeting or restricting platforms like Facebook?

Andrea: There are two things that I can say about this. One is, I think, something that happened intentionally inside the European Commission. The other one is perhaps unintentional, meaning it just happened. What happened intentionally is that the European Commission wanted to avoid big overlaps between the provisions on the algorithms used by large-scale platforms, which are incorporated in the Digital Services Act and the Digital Markets Act, and the applications of AI that will be covered by the AI Act.

This was an intentional decision for sure, meaning that I've heard it discussed in cabinets and in the services of the Commission. The idea is that these would otherwise have created regulatory creep, or inflation, or overlaps. Whether this is a good choice or not, we can continue discussing, obviously, because if you have a horizontal piece of legislation, perhaps it should apply all across the board. But here, the decision is rather whether some of these services and some of these uses of AI should fall under the determination of high-risk applications or not.

As a footnote to this first part of my answer, I could say there is always a possibility. Again, I try not to read too much into the specific prescriptions that you see in the AI Act. It should be made possible in the final text of the AI Act to change those things rather dynamically and flexibly once the AI Act finally enters into force, meaning that a group of experts - or the AI board/agency, whatever it will be in the future - should be able, following a transparent process, to reclassify any existing application as high-risk, or from high-risk to low or moderate risk, and so on and so forth, or even into the red lines, if things change dramatically.

My dream would be that they are able to do so not just by saying AI applications in sector X that do Y are to be considered high-risk, but rather by saying those AI applications that do not come with the following safeguards and mitigating measures should be considered high-risk.
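
To illustrate the difference Andrea is describing: a sector-based rule ties the high-risk label to what an application is, while his preferred safeguard-based rule ties it to what protections the deployer has put in place. Below is a minimal, purely hypothetical sketch; the safeguard names are invented for illustration and do not come from the AI Act text.

```python
from dataclasses import dataclass, field

# Illustrative safeguards; a real list would come from regulation or guidance.
REQUIRED_SAFEGUARDS = {"human_oversight", "bias_testing", "incident_logging"}

@dataclass
class AIApplication:
    name: str
    sector: str
    safeguards: set[str] = field(default_factory=set)

def high_risk_by_sector(app: AIApplication, listed_sectors: set[str]) -> bool:
    """Static rule: high-risk because of what the application is."""
    return app.sector in listed_sectors

def high_risk_by_safeguards(app: AIApplication) -> bool:
    """Dynamic rule: high-risk because of the protections it lacks."""
    return not REQUIRED_SAFEGUARDS <= app.safeguards

hiring_tool = AIApplication("cv-screener", "employment", {"human_oversight"})
print(high_risk_by_sector(hiring_tool, {"employment"}))  # True, and stays True
print(high_risk_by_safeguards(hiring_tool))              # True, but fixable
```

Under the second rule, adding the missing safeguards moves the same application out of the high-risk bucket, which is exactly the dynamism being asked for.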

That said, this is the first part of my answer. The second part is perhaps a legacy problem, which is that we see the world in one dimension - or, if you wish, in 2D - when we regulate. The AI Act is very much made for those individual AI applications that create an individual risk for an individual fundamental right, that will end up damaging one individual. Those incremental, tiny little steps, like the manipulation of political opinion, the polarization of the debate, the hyper-nudging of people, and so on - these are not easily covered by the AI Act. Obviously, it's all a matter of interpretation, and that is to some extent a problem, because if you read through the proposed red lines, you see that there is a category of applications that manipulate human judgment.

It's clear that we are talking about extreme cases of manipulation, but there's no threshold defined. You can think about Cambridge Analytica, and then you tell me whether that is a case in which there was manipulation - or what is sometimes called nudging, hyper-nudging, or algorithmic nudging. I would say those cases should perhaps not be considered red lines, but they are elements in AI uses that are more about incremental, gradual epistemic problems, or the collective distancing of political opinion from reality.

All this, for example, should warrant a specific guidance document or a specific interpretive communication that also clarifies what the links are and why there are no gaps in the interface between, for example, the Digital Services Act and the AI Act. That is a problem for this big puzzle anyway. It's a huge jigsaw puzzle with many pieces, and lacking perhaps a lot of glue at the moment.

Alexandra: Agreed. This is also what I oftentimes hear from the industry: that they're now a little bit concerned, with all the proposed regulatory frameworks on the horizon, about how to actually reconcile them and make sense of them all. Maybe to now come to a question that we already discussed in one of our earlier conversations, where you posed this provocative question: how can the EU actually survive the digital transformation without upgrading its regulatory tools? Can you maybe elaborate for our audience what you were referencing there?

Andrea: Yes. The last time I spoke with people at Google about their search algorithm - this was, I think, 2019 - they told me that typically, major updates to their algorithms take place some 160 times per year.

Alexandra: Quite often.

Andrea: Today, I think that is much more continuous. It's actually thousands of times a year. What you inspect, first of all, is only what you can inspect. Second, what you inspect is something that might change immediately - say, after two hours, it will be a different type of algorithm. There will be the need for tools and remedies that today we call GovTech or RegTech, where you actually establish a secure data exchange with the regulated entity and try to continuously monitor compliance with regulatory provisions, which is something that is being trialed.

For example, in the financial sector, in what's sometimes called SupTech, supervision technology. But it's something that is still unknown to many, many regulators around the world.
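
To make the GovTech/RegTech idea Andrea describes a bit more concrete, here is a minimal sketch of what continuous, machine-readable compliance monitoring could look like. Everything here is an assumption for illustration: the reporting endpoint, the payload fields, and the rules are hypothetical, not an existing supervisory API.

```python
import json
import time
import urllib.request

# Hypothetical disclosure endpoint a regulated platform might expose to its
# supervisor; the URL, fields, and threshold below are illustrative assumptions.
DISCLOSURE_URL = "https://platform.example/regulator/v1/model-changes"
MAX_UNREVIEWED = 10  # flag if too many updates ship without a risk assessment

def fetch_model_changes(since_ts: float) -> list[dict]:
    """Pull the platform's self-reported log of algorithm updates."""
    with urllib.request.urlopen(f"{DISCLOSURE_URL}?since={since_ts}") as resp:
        return json.load(resp)

def check_compliance(changes: list[dict]) -> list[str]:
    """Apply simple codified rules; real rules would come from regulation."""
    alerts = []
    unreviewed = [c for c in changes if not c.get("risk_assessment_done")]
    if len(unreviewed) > MAX_UNREVIEWED:
        alerts.append(f"{len(unreviewed)} updates lack a risk assessment")
    for c in changes:
        if c.get("affects_minors") and not c.get("dpia_reference"):
            alerts.append(f"change {c.get('id')} affecting minors has no DPIA")
    return alerts

if __name__ == "__main__":
    last_poll = time.time() - 24 * 3600  # start by looking back one day
    while True:
        try:
            for alert in check_compliance(fetch_model_changes(last_poll)):
                print("ALERT:", alert)  # a real system would open a case file
        except OSError as err:
            print("endpoint unreachable:", err)  # itself a reportable event
        last_poll = time.time()
        time.sleep(3600)  # poll hourly rather than inspecting once a year
```

The design point matches the interview: instead of a one-off inspection of an algorithm that will have changed two hours later, the supervisor consumes a continuous feed and codifies its rules as executable checks.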

Alexandra: Of course, and then-- Sorry to interrupt you, but the question oftentimes is also who is going to build these connection interfaces and the tools that are actually built to audit big tech players. I'm really curious to see how this will play out. In combination with that, one thing that could also be helpful is to have easy mechanisms for end-users to flag certain actions that they deem incorrect or suspicious.

That would make it easier to identify issues that need attention from regulators, and of course also from the company.

Andrea: This is where the other element comes in, which is the need to open up those algorithms to civil society - what the DSA does, to some extent, with vetted researchers. I think this is one of the elements that will need to be considered in shaping the regulatory environments of the future. When I was a kid at school, I remember I studied this paradox of motion, of Achilles chasing a tortoise. You probably studied that too, Alexandra, right?

Alexandra: I think so.

Andrea: The idea of Zeno of Elea - a pre-Socratic philosopher - is that if Achilles only looks at where the turtle is, he will never reach the turtle, because even if he's faster, the turtle will always have made an additional step. Here we are in a similar situation. If we only try to see where technology is today, and we try to adapt to where technology is today, just as we do with the AI Act, then the moment the AI Act reaches the market, it will be obsolete. With an additional problem here: it's the turtle that tries to chase Achilles, meaning that law is the slow one, and technology is the fast one.

I really do not see, going forward in the next four or five years, how we are going to regulate digital technology without extensively using digital technology. Again, GovTech or RegTech on the one hand, with algorithmic inspections, real-time monitoring, and auditing. All those things are going to be extremely important. The regulation of smart contracts and the embedding of legal values and principles in code are the real frontier of regulation in this space.
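
A quick aside for readers who don't remember the paradox: the classical resolution is that Achilles' infinitely many catch-up steps sum to a finite distance and a finite time. With Achilles' speed $v_A$, the tortoise's speed $v_T < v_A$, and a head start $d_0$, each step shrinks the remaining gap by the factor $v_T/v_A$, so

$$\sum_{n=0}^{\infty} d_0 \left(\frac{v_T}{v_A}\right)^{n} = \frac{d_0}{1 - v_T/v_A} < \infty, \qquad t_{\text{catch}} = \frac{d_0}{v_A - v_T}.$$

Achilles does overtake the tortoise. Andrea's inversion is the worrying case: when the slow runner (law) chases the fast one (technology), the gap grows at every step, and no finite sum closes it.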

Alexandra: I agree. I think all of these things are important, and I think we can find a balance, because - again pointing towards Paul Nemitz, who you mentioned is also a dear friend of yours - he sometimes has the perspective that, of course, these regulatory tools are needed to monitor and assess effectively, but at the same time, he makes the point that it shouldn't be the expectation for democracy to act as fast and as agilely as the tech players, because the process of consensus-building just needs time.

I think we will have to find a balance here: sticking to the well-working and trusted principles that democracy has used over the last thousands of years, but at the same time adding the tools necessary to really ensure effective regulation and monitoring of all the new technologies entering society.

Andrea: It can be done by ensuring the regulation is at once principles-based and outcome-based, and, at the same time, that in between the two, the process of implementing the principles towards the outcomes is as agile and flexible as possible. The governance and the process around the implementation of legislation become more important - even more important than the starting point and the endpoint, or at least equally important. Sometimes people say that the destination is less important than the journey. Here, both are important.

The destination is absolutely essential - in particular, what kind of society we want to see emerging in the future - but the process that we set in motion to implement our legislation has to always be multi-stakeholder, open, and transparent, to generate trust among the users, the citizens, and everyone who participates in society. That is essential because otherwise, technology becomes an end in and of itself when we regulate, where it should be taken as a means: something that we choose to use or not to use.

Alexandra: Understood, and agreed. As a last question: we already touched on the AI Act and on synthetic data, both topics that are super interesting to our listeners, but I wouldn't want to miss the opportunity to ask you a brief question about GDPR, since just a few weeks ago you spoke at an EPP hearing in the European Parliament on the shortcomings of GDPR.

Can you briefly summarize the key takeaways of the remarks you gave, in regards to not only fixing old issues but also the anticipatory regulation that you think is needed, and maybe also some inconsistencies of GDPR that you foresee with the AI Act, Digital Services Act, Digital Markets Act, Data Governance Act, and so on and so forth - all the proposed regulations that are on the horizon?

Andrea: GDPR, again, is our very nice flag. It's been a super important piece of legislation. It has been turning the tide, if you wish, in the approach to and the interaction between governments and cyberspace. It has established a super important principle, which is that governments can have a say, and legal principles are important, in cyberspace. This magmatic evolution of digital technology has to adapt, has to take into account that there are certain things that are non-negotiable, principles that should always be complied with. That was not the case in cyberspace until then.

That said, the overall approach of GDPR is quite traditional, quite old style. It's something that does not incorporate the more modern and forward-looking enforcement mechanisms that would be needed in order to really get ahold of these constantly evolving practices.

What we've seen to date is, first of all, that there's still quite a lot of uncertainty as to how meaningful certain provisions are. For example, what do we mean by consent? What is meaningful consent? What is explicit consent? What are the conditions under which we could say end-users have really, in a fully aware way, manifested their agreement? There's also a huge problem in terms of enforcement at the national level.

The famous one-stop-shop mechanism, because there is, to some extent, a conflict of interest for some regulators, for some data protection authorities, when it comes to being particularly effective and aggressive vis-à-vis companies, which can then choose, through the typical forum shopping, to establish themselves in other places if they can find more lenient DPAs. At the same time, there's a problem with the resources of DPAs, because those data protection authorities that potentially face more cases - because they are from those legal systems, those jurisdictions, where many of the large tech giants are located; I think about Ireland, for example - should normally be empowered with more resources, even to start thinking about whether they want to be more aggressive vis-à-vis the tech giants.

There is a big backlog. Some people consider that to be strategic; others consider it to be the result of a lack of resources. This is something that needs to be fixed, because currently, what we see is that the level of compliance with GDPR is not perfect. It's not even easy to understand exactly what that degree of compliance is, because we don't have the very pervasive observation of the market that would be needed.

Going forward, we've already seen the emergence of behaviors, in particular by large platforms, that use the GDPR as a sword, if you wish. They can justify, on the basis of GDPR, restrictions in data flows vis-à-vis potential new entrants. This is happening to the extent that we've seen, as an unintentional effect of GDPR, the further consolidation of market power - further market concentration in a number of key markets, such as online advertising and in-app tracking. That is another big problem to be taken into account.

Now, going forward, the more data protection takes place algorithmically - meaning the practices in businesses take place with the help of artificial intelligence - the more difficult it becomes to really spot the reuses, the handling of data, and the key decisions that are taken to handle end-users' data, where they're taken algorithmically. That is important.

The other thing that is worrisome to me at the moment is how companies have, over time, as a response to GDPR, had a further incentive to become huge conglomerates, because then they can become multi-service companies, and whatever they gather from the Instagrams and the WhatsApps of the world, they can still reuse internally. That's something that smaller companies cannot do. These so-called internal data free-for-all approaches are clearly a problem with GDPR.

Now, going forward, what happens with data protection? Data protection becomes a drop in an ocean that we call data governance. How are we going to enforce GDPR as the subject matter becomes much bigger, and as different regulators and different enforcers have the possibility to look at the data governance practices of companies from a variety of different angles? It could be under the DMA or the DSA, and so on and so forth.

We should make sure that we incorporate an understanding and monitoring of GDPR-related compliance behaviors also on all those occasions in which regulators look at the large tech giants - for competition purposes, for IP purposes, for data governance purposes, and so on and so forth.

Alexandra: Agreed, which potentially brings us back to what we discussed earlier about a single enforcement entity that balances all the different aspects that are important nowadays.

Thank you so much for everything you shared. I really had to stop myself from asking five follow-up questions on every topic that we covered, but I thoroughly enjoyed having this conversation with you, and I'm already looking forward to continuing it at some point. It was really a pleasure. Thank you, Andrea.

Andrea: My pleasure. Thank you very much and I hope that all the listeners will find this to be interesting and hope to continue this discussion with you going forward. Thank you very much.

Alexandra: I'm confident they will. Thank you, Andrea.

I think it became apparent that I thoroughly enjoyed my conversation with Andrea and I hope you did so too. I must say I can't wait to welcome him back to our podcast because he's just so knowledgeable and it was a blast discussing all of these topics with him. In the meantime, let us hear how you liked the episode by either commenting on LinkedIn or writing us an email via podcast@mostly.ai. Thank you for listening. Don't forget to tune in when our next episode gets published on the second

Ready to try synthetic data generation?

The best way to learn about synthetic data is to experiment with synthetic data generation. Try it for free or get in touch with our sales team for a demo.