Episode 25.

The end of AI ethics - a conversation about the EU's AI Act with Paul Nemitz, the godfather of GDPR

Hosted by
Alexandra Ebert
In celebration of Data Privacy Day in 2022, we talked to Paul Nemitz, Principal Advisor on Justice Policy at the European Commission. Paul is often referred to as the godfather of GDPR since he led the work on the General Data Protection Regulation in the EU. Today, we are witnessing the birth of the world's first AI regulation, and Paul has very important thoughts to share regarding this new, groundbreaking piece of legislation. In this episode, Alexandra Ebert, MOSTLY AI's Chief Trust Officer, talked to Paul about:
  • how big tech threatens democracy,
  • why AI ethics is out, and it's time to regulate,
  • the role synthetic data plays in privacy protection and fairness in AI,
  • the AI manifesto in defence of democracy and the rule of law, 
  • GDPR and the upcoming AI regulation.
In the previous episode of the Data Democratization Podcast, we talked to Axel Voss, a member of the European Parliament leading the Special Committee on Artificial Intelligence, about the EU's roadmap for AI regulation. Tune in for another perspective on AI legislation!

Transcript

Alexandra: Welcome to the Data Democratization Podcast. I'm Alexandra Ebert, your host and MOSTLY AI's Chief Trust Officer. This is our 25th episode. Today's guest is the Principal Advisor on Justice Policy for the European Commission, Paul Nemitz. Some also describe Paul as the godfather of GDPR, as he led the reform of the data protection law in the European Union which brought us what is now known as the General Data Protection Regulation. Nowadays, it's the AI Act he's highly involved in. Besides his work for the Commission, he's also a professor of law and the co-author of one of the best books I've ever read about power, freedom, and democracy in the age of artificial intelligence.

Today, Paul and I talked about big tech and how it threatens democracy, why the times of AI ethics are over and the time for laws has arrived, and the potential of synthetic data: on the one hand for privacy protection, by replacing the use of personal data, and on the other hand for fairness in artificial intelligence. Synthetic data not only allows us to generate data that reflects the world as it is, which forces us to perpetuate historic biases, but it can also generate data that represents society as we would like it to be. Therefore, as Paul describes it, synthetic data supports a normative vision.

Then we touched on the AI manifesto in defense of democracy and the rule of law, to which Paul made his contributions. Of course, we also covered GDPR and the upcoming AI Act, and I couldn't help myself and asked plenty of questions about his highly recommended book. If you are interested in regulatory developments, responsible AI, and the steps we as a society need to take to ensure the primacy of democracy also in the digital age, this episode is for you. I'm sure you will find it worthwhile. Let's tune in.

Paul, it's a true honor to have you on the show today. I was very much looking forward to our discussion. You are an experienced lawyer. You are the European Commission's Principal Advisor on Justice Policy. You are, as some say, the godfather of GDPR, and you are now also highly involved in the upcoming AI regulation. Apart from all your work for the Commission, you are a member of the Data Ethics Commission in Germany and you're a law professor. You authored one of the best books I've read on democracy, power, and freedom in the age of artificial intelligence, so it's a true pleasure to have you here.

Would you briefly introduce yourself to our listeners and maybe also share what motivates you to accomplish all this important work that you do on the intersection of democracy, technology, and law?

Paul: Thank you, Alexandra. [clears throat] I'm very happy to be here too. What motivates me is, I would say, a genuine interest in politics and democracy, in shaping things. I do believe that democracy really must function and that we all have to engage in it. I think it's the only form of self-governance of people which secures freedom and our participation in government. It's under threat here and there. [clears throat] We have problems in Europe and, of course, in many parts of the world. I would like to strengthen democracy for my own sake and for that of my children. I think that's the key motivation.

Alexandra: That sounds like a good motivation. Speaking about democracy, and democracy being in danger, within the framework or the context of the AI Act and the round table: you and some other thought leaders this year published a manifesto in defense of democracy, also in the context of AI. Why is democracy in danger or threatened by artificial intelligence in particular, and why should people sign this manifesto?

Paul: The willingness and ability to agree on anything in societies, but also among states, is really the scarcest resource today. It's much scarcer actually than the great new start-up idea or the great new academic paper which individuals write. That was the underlying thought which led to the convening of this transatlantic reflection group on artificial intelligence, or better, the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of Artificial Intelligence. It was an effort to bring together an interdisciplinary group from Europe and the United States to see whether we can agree on something.

We need agreement in democracies to set the rules for business and for technology, if we mean it with democracy, if we don't want to be ruled mainly or even only by technology and the business interests of the companies which develop this technology. Now, your question: why does AI put democracy at risk? I think it is not AI on its own which is a challenge to democracy, but an ideology which pretends that the global internet, combined with other technologies like big data and the analytical possibilities of AI, but also the possibilities to nudge people to do good things with AI, for example, solves the problems of the world better than democracy.

This idea that technology solves the problems of the world better than democracy, I think, is one element which undermines democracy. Another one is the reshaping of the public space of discourse through the internet and social networks. In this context, the famous algorithms and AI systems which determine what we get to see, in particular in terms of political news on the internet, also play an important role. We live now in a world in which 50% of the people, whether in Europe or in the United States, form their political opinion on the internet, and are, therefore, highly influenced by algorithms using artificial intelligence.

This is in part, I would say, a reason for polarization and for the new inability to agree which we see in many political systems and also in many political discussions actually between states. It is also something combined with the fact that the political discourse takes place on the size of the mobile phone screen. It's something which gives an advantage to populists and people who are not keen to have the complex discourses of democracy. If the mobile phone screen is the space in which you can argue, there is not much space for a differentiated democratic argumentation.

Alexandra: That's a good point. Quick question here. Basically, what you're referring to is that, on the one hand, the algorithms are steering what information I get to see. With this kind of personalization, different individuals are getting different news streams, news feeds, and therefore don't even know what information others are getting, which makes it incredibly hard to have a fruitful discussion and argument about something.

Because in times when we only had the newspaper, we at least saw, "Okay, these are the different newspapers and this is the information that everybody is getting." Now we don't even know what somebody else is getting. Sometimes we have the feeling, when they present their argument, that they're completely out of their mind: "I've never heard this before. This doesn't make sense."

Paul: Yes. There are a number of issues in this new way we are being informed or we are informing ourselves. The first one is that, from an economic point of view, the success of the targeted advertising model on the internet has basically deprived the press of a lot of money, of financing. It's very cheap to buy a newspaper today, and you can see it in the United States: Jeff Bezos, the founder of Amazon, just bought the Washington Post. You can see the same in Europe, newspapers are really struggling. Two companies in this world, namely Facebook and Google, today concentrate 80% of the revenues from internet-based advertising with them, which is an incredible concentration both in the US market and in Europe.

That's, let's say, the first problem for the public sphere, because there is a reason why the press is called the fourth estate, the fourth power in a free democracy: it fulfills an important function of a constitutional nature. We need journalists who go after the truth, who investigate, who ask painful questions of power, whether public power or private power. This function of the press is not replaced by the cacophony in the social networks and on the internet. That's, let's say, the first issue we are facing.

The second issue we're facing is indeed, as you said, that we are lacking a common basis from which to start. But even more dramatically, we cannot judge the political biases of an internet algorithm which gives everybody what they want to see, because we don't see what this algorithm gives to everybody. This is the very important difference to a newspaper, which may be tendentious, but we all see it. We all know that tabloid X is classically lying and is classically politically oriented to the right. No problem in a democracy, because we all know it and we all see it on a daily basis.

In this world of individual news algorithm targeting, we do not see anymore what news other people are getting, and so we cannot judge the overall bias of the algorithm, let's say of Facebook, in terms of politics. This leads us, not only in public discourse but most dramatically in elections, to a total blindness of the public sphere. In elections, parties are now able to target their voters with a huge variety of messages.

Everybody gets to hear what they want to hear, and we don't actually know anymore what the party stands for, because we don't see the messages the party is sending to all these individuals. This is something which we now want to compensate for, and at least try to address a little bit, through legislation. The European Commission has just proposed a regulation creating an election publicity register, which tries to create a little bit of transparency on this, so that people know what a party actually stands for. One way of knowing this is to get an overview, hopefully a complete and comparable overview, of all the different messages that this party pays for on social networks.

With this proclaimed new freedom and access to information, and all of this is good, I don't want to deny that there are positive sides to the internet and so on, come a lot of downsides. We have to catch up with these downsides. We have made the mistake of being talked into slow regulation and doing nothing for decades, following the classic neoliberal approach that laws are just an obstacle to innovation, just a cost factor. The political powers which stood behind this, and in practice made sure that this ideology prevailed, at least in the past, have a great responsibility, because they did not only hinder the progress of rules for the internet. They also actively talked down the law as the most noble expression of democracy.

I think the good news is that these times are over. I think both in Europe and also in the United States, more and more people start to realize that the downsides of the internet, leading to the storming of Congress in America, need to be addressed. They need to be addressed holistically and through law, which can also be enforced against these bastions of power which these companies are. I think we need to come back to a very simple principle which has served us well in the pre-digital and pre-internet age, which is the primacy of democratic politics over technology and business interests.

Alexandra: That's a good point. Talking about faster regulation and the importance of regulation: can you point to a few of the other aspects of the manifesto that were recommended to counteract these developments and strengthen the position of democracy?

Paul: The first and, for me, the most important thing about this manifesto, as I said, is that it exists as a consensus document of many. It is a document which was produced by a multi-disciplinary group, people with backgrounds in technology, political science, and law, and from both sides of the Atlantic. Then there is a conceptual point, before I get to the individual asks of the document, which for me is of overarching importance. That is that we must rehabilitate, as we formulate it, the language of humans before or above the code. What does this mean?

We have slid, and been pushed, into a perception of the world in which the human being is seen as weak and the AI and the machines as perfect. With this goes an ideology which says, "The law should be as precise as the code, and the code is good, and all this text of humans is imprecise and no good." This is the engineering view of the world. Also sentences like "the law has to adapt as fast as the code", a typical lobbyist's sentence which one hears often in Washington and in Brussels. All this is wrong, and I'll tell you why it's wrong: because code is written for stupids. Who are these stupids? These stupids are the machines, the computers; they cannot think by themselves.

The law, with human language, is written for humans. Humans can think by themselves and they can make good use of the openness of language to future interpretation. You see, the openness of our language has a corollary in the openness of the future: our future is open, we don't know how the future will look, and we can shape it through our actions. We can use language in our laws which allows future generations, and also ourselves, maybe in 10 or 15 years, to reinterpret the law, to reactualize it, to give it new meaning in line with developments of technology or business.

The reverse of what the engineering worldview is claiming is true: the best laws are those which stay in place for a long time. The best examples are constitutions, because they have an open language which always allows new actualizations through new interpretation. These laws don't need to be changed, they stay the same. But code must be changed every four or eight weeks; on our mobile phones we need new code, because the computer, namely our mobile phone, cannot think.

Let's not talk down language, and let's not talk down law. I would say another thing: democracy requires deliberation between people in order to understand each other and to convince each other, so that eventually we at least have a majority to decide. Again, by talking down language and by talking down the law as a means of expression of a large enough consensus in a democracy, some people in the scene, I would say, have either not understood what democracy is about and how it secures our freedom, or they are actively pursuing intentions which are anti-democratic.

This certainly plays into the hands of populists who speak exactly the same type of language, who say, "Democracy is a talking shop," and so on. We have all that in history. This is for me the key point at this moment in history: we must make it clear that neither are humans failures, we have the ability to think, which machines don't have, nor are our instruments for organizing how we live together, namely law, language, and democracy, weak and useless instruments. They are actually in many respects much more powerful, much more persistent, much more useful, certainly for deliberation between humans and for practical democracy, than code and technology.

From this, let's say, general point which we discuss in the document flows, of course, also a very general statement: that we don't want tech absolutism to replace democracy. Then from there follow very precise demands, starting, of course, from the demand that big tech contributes its fair share of taxes to the public good, and that it must be regulated with competition law which is effective and which addresses the new types of power exercised by these companies, not only in the market but also over democracy.

These demands then also going into the regulation of platform behavior and so on are detailed in the manifesto. The manifesto, I would say in the end is not so much about the detail of this or that formulation, it is about the primacy of democracy over technology.

Alexandra: On this general mindset, one point about regulation that I also found really worth sharing with our listeners, it's not from the manifesto but from your book, is that it's important not only to have regulations in human language, with this openness, and on this more abstract level, but also to have general laws and not specific laws for each different area. Because specific laws open up complexity that overburdens our enforcement and regulatory capacity and, on the other hand, also leave loopholes, especially for the big tech firms. This is why, in general, if I understood you correctly, you are a supporter of the more abstract, overarching laws and not of specific niche laws.

Paul: One thing is clear: the battle cry of the lobbyists from Silicon Valley is, "We want sector laws, the law has to precisely address the problem at hand." This battle cry has, of course, the function not, as they state, to make the law more effective but, on the contrary, to soft-wash it. How does this work? It works like this: first, by narrowing the scope of the law, many areas stay unregulated. Second, it is impossible for a democratic legislator to address every detail of this or that business model in this or that sector through law.

Third, it just overburdens the capacity of lawmakers, and that is again to the advantage of those who actually don't want the law. Fourth, very specific laws for all the different areas overburden the citizens. People who deal with all these companies cannot manage and understand all these complex laws. Even general laws people don't understand anymore. The only ones who benefit from this complexity are the rich, powerful companies who can manage this complexity and actually love to create complexity to their benefit.

The battle cry of the lobby starts with, "Yes, sure let's have a discussion, and let's make it complex, and let's differentiate, and let's go further in making formulations, which at the end, nobody understands anymore." That's the master-

Alexandra: -cry of the lobbyist, and also drawing it out for years and years.

Paul: Absolutely, and this works in Washington even better than in Brussels. What I'm saying is, we have to look at the law not from the side of the interests of these companies, and we're also not writing the laws for engineers, who, of course, need very precise instructions. We are writing them to shape society, and our perspective has to be that of the citizens. For example, we need a simple law which applies everywhere where AI is put on the market and where people come in touch with AI, which is, for example, that people know that they're dealing with AI.

When a voice speaks to you over the telephone or over the internet, or when you get a written message, you need to know: is this a machine talking to me, or is this a human? This is a rule which has to apply everywhere. We must go through the intellectual effort to first define these general rules, and then, of course, afterwards, if we see that in some areas there are specific issues which need to be addressed beyond that, we have to think about how this can be done. Sometimes it has to be done through additional specific legislation. Sometimes it can be done through secondary rules or standard-setting, for example.

Alexandra: That makes sense. I would even argue and say, of course, letting people know that they're interacting with AI is an important step, but it's not sufficient. With the future scenario, quite likely being that AI is more and more widely used, you will see this type of notifications everywhere. I think, and even more important step is also providing effective means and measures to people to provide feedback. Or actually get some modes to change decisions that were made about them based on artificial intelligence.

Maybe, now that we already addressed the AI legislation before, I have a question on the proposed draft AI Act for you. What's the status quo of the EU when it comes to AI, and where do we need to improve? Are there some areas, geographically or topically, where we are already quite successful? What's your position on that?

Paul: I think in terms of technology generally, and also in transforming research into business ideas which are good for the world, Europe is very, very strong. I think we have very, very strong innovators everywhere. The fact that in the consumer industries, in the consumer markets, there are US companies which are very strong, that has always been the case. Also in the pre-digital age, Kellogg's cornflakes were eaten and are still eaten today, everywhere; it's an American company. We wear jeans, and we listen to pop music, and we watch, in the majority, American films. All these things have been criticized, of course.

I think that's the normality of the free Western world, and let's not be pushed into a mindset which says that we don't innovate or we can't innovate. The contrary is true. I would say that, of course, we must, and this is the strategy of the European Commission on AI, get better in research and in the adoption of new technologies, in particular AI. We have to prepare our people through continuous education. We also have to think about the social consequences of AI in terms of labor needs, and make sure that there are no, let's say, structural disruptions taking place in regions of Europe, and we need legal rules.

I'm not talking about ethics anymore, because that time is over. No problem with company ethics which go beyond the law, but now we are at the phase where I think everybody has understood that the wishy-washy talk about self-regulation and ethics is over, and it's time to make binding rules. These rules will create a level playing field in the internal market, in contrast to ethics, which doesn't create a level playing field, because these rules will be binding. Every competitor has to play by these rules, and that's actually also good for companies in Europe: they know what the rules are, and they apply across the common market.

Now, where are we strong? I would say, that's not my area, I'm not a market analyst of AI, but just to give you an example, Europe is very strong in AI when it comes to health. We have companies which already have systems on the market which recognize, for example, in a breast scan, that the scan is normal and no treatment is necessary. If there is a problem, then they hand over to a pathologist who has a closer look. In medicine, definitely, very strong companies. We have also strong companies in other sectors. What is very close to my heart? You mentioned GDPR, I was the lead director on this: technology which helps us to protect people's data and their private life.

In this area, we also see a good development of technology, not least inspired by civil society, by people like you in Vienna, and that's why I'm happy to do the interview with you. Vienna has a very, very strong scene of, let's say, what I would call critical data science and NGOs. This critical potential, which is really concentrated very strongly in Vienna, in turn, I think, leads to new business opportunities. As companies realize that data protection is a serious matter, they will invest both in technology and in people who make sure not only that one complies with the law, but actually that one can use data protection as a marketing argument, because that's the nature of democracy.

GDPR was not put in place because some terrible bureaucrats in Brussels wanted it; it was actually voted in the European Parliament. If you look at opinion polls in the United States and in Europe, people don't want to be snooped on, neither by the government nor by private companies; they want their data to be protected. It is, unfortunately, I have to say, American companies like Apple and Microsoft which run big marketing campaigns about how great they are in data protection and privacy. I would hope that European service providers also go in the same direction.

What I'm happy to see is we have a segment of companies in Europe who are also investing in, let's say, application technology and enforcement technology for data protection and that's a good thing.

Alexandra: I agree. Absolutely agree. Since you brought up the topic of data, and we talked before about the AI Act: naturally, high-quality data is an important topic in AI, because without high-quality data, you won't have high-quality AI. We have the common EU data spaces in there, we have the need for testing and training data sets, and also the obligation to look into data sets to identify gaps and examine them for biases. I'd be curious to hear whether you see a role for synthetic data when it comes to complying with the future AI Act. Of course, it's currently only in the draft stage, but just out of the nature of synthetic data, it helps with opening up access to data.

We also talked about it earlier: you have the capacity to make datasets that are more inclusive, fair, and representative of minority groups, which is something that I would love to see contribute to helping us better understand and test algorithms, whether they really behave in a way that's intended and desired by us as a society.

Paul: Yes, now we come to the business model of your company, and I have not investigated this business model further. What I can say is, first, wherever it is possible to avoid using personal data to make progress in science or in business models, I would say that's a good thing. Because personal data, apart from having to comply with GDPR, also has huge other downsides: it becomes a honeypot for people who love to concentrate it, it becomes a honeypot for crime, and so on.

If there is a potential to avoid using personal data, or for that matter to replace it with synthetic data, which is not personal data, that's good news. Second, I have actually visited some technology companies in Israel and in other parts of the world, also in Europe, which produce synthetic data and which make the argument that synthetic data from the outset is very well structured, it's very well tagged. The congruence between the picture and the content descriptor, which makes it possible for AI to learn fast, is very precise, in contrast to personal data, where the tagging is an ex-post function.

I can imagine that there are certainly, also from, let's say, a data quality point of view, advantages to synthetic data. It's not only a matter of compliance with data protection law; in AI, where clean data sets and quality data sets are so important, synthetic data can, I'm sure, play a positive role in terms of providing the necessary preciseness and clarity of the content of the data, of the data and the descriptors. The third thing which I'm interested in is that we don't move into a structurally conservative society through technology.

What do I mean by this? If we program artificial intelligence systems, let's say for functions of the state, only on today's and yesterday's empirics, namely only on existing and old data, and if the machines can only learn from today's data, then we live in a world of what is. And what is, is traditionally always subject to improvement. These machines, on their own, are not able to improve. They may be able to optimize within a certain frame, but if they only work on empirical data, they will replicate the biases, let's say, in employment against women.

I think the idea is to feed data sets with artificial data which contain a vision of how it should be, a normative vision. For example, take a company which wants to use an algorithm to select personnel, and so far the winners in this company, in the career progression, were always white men. Let's not talk too American, let's just say men. If the AI then learns, okay, it's men we have to recruit, it will just replicate the bias of the past.

If we inject into the data set from which the AI learns synthetic data about women, then the AI learns: aha, we recruit 50/50, or we even need 75% women in recruitment right now, because we have too many men in leading positions in the company. Synthetic data could make it possible to train AI in a very flexible way to new priorities. For example, if a government has AI in place, and the government changes and a new government comes in and says, "Now we want to do other policies, we want to do better policies, we want to be fairer, we want to be more just," synthetic data might then make it possible to quickly retrain the AI in light of this new political orientation.

If that could be demonstrated, I think that would be a very, very important argument in the, let's say, cost-benefit analysis of using AI in the democratic context. You see, I don't think we want to live in societies in which we have elections, but once the elections are over, the new majority comes to the ministries, and the ministries say, "Oh, it's very, very difficult to change anything here, because everything has been pre-programmed for years in our AI system. It costs millions to retrain it, sorry, too expensive."

There is a relationship, also actually in such a, let's say, blunt and direct way, between democracy and artificial intelligence in the public sector, because we have to ask the question: who will be more flexible? The civil servants, who can receive new instructions and, as humans, are able to follow the new rules quickly? Or the machines, which need complex retraining? Anything which allows a quicker retraining and which doesn't cement the decision-making of the machine to past empirics and biases, but allows us to inject new reformist elements, would be very interesting.
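To make the rebalancing idea Paul describes above a bit more concrete, here is a minimal, purely illustrative sketch. The column names, the 50/50 target, and the use of simple resampling as a stand-in for a real synthetic data generator are all assumptions made for this example; in practice, the additional records would come from a trained generative model rather than from duplicating existing rows.

```python
# Illustrative only: rebalancing a historically biased hiring dataset toward a
# normative target (here, a 50/50 gender split) before training a model.
import pandas as pd

# Toy historic data: past applicants and hires skew heavily toward men.
historic = pd.DataFrame({
    "gender": ["m"] * 80 + ["f"] * 20,
    "hired":  [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

def rebalance(df: pd.DataFrame, column: str, target_share: float, seed: int = 0) -> pd.DataFrame:
    """Top up the minority group until it reaches `target_share` of the data.
    Here we naively resample existing rows; a real pipeline would instead draw
    fresh records from a synthetic data generator fitted to the minority group."""
    minority = df[column].value_counts().idxmin()
    n_min = int((df[column] == minority).sum())
    n_maj = len(df) - n_min
    # Solve (n_min + n_extra) / (n_min + n_extra + n_maj) = target_share for n_extra.
    n_extra = int(round(target_share * n_maj / (1 - target_share))) - n_min
    if n_extra <= 0:
        return df
    extra = df[df[column] == minority].sample(n_extra, replace=True, random_state=seed)
    return pd.concat([df, extra], ignore_index=True)

balanced = rebalance(historic, column="gender", target_share=0.5)
print(historic["gender"].value_counts(normalize=True))   # ~80% m / 20% f
print(balanced["gender"].value_counts(normalize=True))   # ~50% m / 50% f
```

The same pattern extends to the retraining scenario Paul mentions: changing the target shares and regenerating the synthetic portion of the training set is a comparatively cheap way to steer a model toward new priorities.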

Alexandra: I agree. I think, of course, being able to quickly retrain will be an important aspect, even more so in the future. Right now, we are at the stage where many organizations and institutions are first starting out with their AI initiatives, and already there we have the problem that they train on historic and, therefore, oftentimes biased data. This is why we as a company are also researching quite a lot, and have published on fair synthetic data, with exactly this goal of reflecting the world as we, or not we as a company, but the organization using the technology, would love to see it, and to make exactly that possible.

What you just described is, of course, more early-stage research, but I fully agree that this imagination that you can put into synthetic data, that you can mold it to your vision, is something that I hope will positively impact our future. We also mentioned discrimination and fairness. One aspect I really liked seeing in the proposed draft of the AI regulation, I think it was Article 10, is the permission to actually use sensitive attributes for the training, monitoring, and testing of AI systems, to the extent necessary for bias mitigation and bias monitoring.

Because this is an issue that's quite often addressed in the scientific community as well as in the industry: in the past, some laws prohibited them from using this information in the first place and thereby forced them to operate blind, without seeing the sensitive attributes needed to counteract and correct biases. So I was really positively surprised to see it in there. What I wanted to ask you is this: the AI Act talks about anti-discrimination, since it's a fundamental right that we're not subjected to discriminatory behavior, but still, the whole field of fairness, what's considered fair treatment, is something that's quite hard to put into a mathematical definition.

There are also some papers out there outlining that the different concepts we have in law, when you translate them into mathematical fairness definitions that can be digested by AI, have some conflicts, so that it's impossible to fulfill the different required fairness definitions at the same time. My question to you would be: how could AI practitioners, when they're monitoring for bias, better understand what actually has to be seen as bias according to the law, and what is within their room for decision-making and could be decided by the company? Because this is still such an open question, and a problem I'm currently working on where I haven't yet found the answer.

Paul: I think you are referring to the paper by an Oxford University professor who described the discrepancy between the definitions of direct and indirect discrimination in EU law, on the one hand, and the 18 algorithms of fairness, which don't capture this, on the other. How shall I say this: here we have an area where it's very important to foster a culture of dialogue between AI makers, whether they're engineers or mathematicians or whatever, and other fields. Of course, it's not the job of the technologist to reinvent the world by just ignoring the fact that there are already definitions of discrimination and hidden discrimination in European law. This is all very sophisticated.

The law on equal opportunities and equal treatment is actually based on the treaty itself. Already the treaty has very strong provisions on equal wages and so on. This has been further specified in secondary legislation, on which there is also jurisprudence. Whenever the technology goes into this field and one wants to check whether the systems comply with it, it's very important to have people on board who have a very clear knowledge of, or work themselves into fully understanding, this law and jurisprudence, which is long-standing. It's not just yesterday's invention; we have had this jurisprudence from the Court of Justice for decades.

The Court of Justice of the European Union has always been very active on equal opportunities, with very many, very good judgments. That's the first thing. The second thing is, there are limits to math understanding the real world. I think we have to be honest about this. Stuart Russell, one of the foremost professors on AI, in his book Human Compatible: Artificial Intelligence and the Problem of Control, basically said, "Well, you cannot program an algorithm to comply with values. You may be able to program the algorithm to return to the human and ask the question, when the algorithm comes to a point where a value decision is necessary," maybe.

I think, if you come to the conclusion in your research and your work that mathematically it's not possible to program into the system a function which fully reflects what the law is today, then you need to do a number of things. First of all, you must be very transparent about this when you try to sell your AI program to those who are obliged to comply with the law. You must tell them very clearly which points of the law this system does not comply with. You must foresee the point where the algorithm returns to the humans, who then apply the law. I don't think that's a failure. I think math is limited in its scope to understand and describe the world.

It certainly does not describe it as fully as language can. I would think it's probably natural that not everything is accessible to a mathematical or AI formula. I don't even know whether we needed to learn this; I would say that's motherhood and apple pie, if you're not totally sunk into an engineering view of the world. That's what I can say to this. It's not a question of despair, of "Oh, my God, the technologists can't do it." That's not the issue. The issue is to understand the natural limitations of math and machines when they face the real world.
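To illustrate the tension Alexandra raised between different mathematical fairness definitions, here is a small, hedged sketch. The data is entirely invented, and the two metrics shown, demographic parity and equal opportunity, are just two common examples from the fairness literature; the point is only that a decision rule can satisfy one definition while violating the other, which is why the legal judgment Paul describes cannot simply be delegated to a single formula.

```python
# Illustrative only: two common group-fairness metrics applied to the same
# toy predictions, showing that they can disagree.
from dataclasses import dataclass

@dataclass
class Group:
    y_true: list  # ground-truth outcomes (1 = actually qualified)
    y_pred: list  # model decisions (1 = selected)

def positive_rate(g: Group) -> float:
    """Share of the group receiving a positive decision (demographic parity)."""
    return sum(g.y_pred) / len(g.y_pred)

def true_positive_rate(g: Group) -> float:
    """Share of truly qualified members receiving a positive decision (equal opportunity)."""
    selected_among_qualified = [p for t, p in zip(g.y_true, g.y_pred) if t == 1]
    return sum(selected_among_qualified) / len(selected_among_qualified)

# Both groups are selected at the same overall rate (demographic parity holds),
# but qualified members of group B are selected less often (equal opportunity is violated).
group_a = Group(y_true=[1, 1, 0, 0], y_pred=[1, 1, 0, 0])
group_b = Group(y_true=[1, 1, 1, 0], y_pred=[1, 1, 0, 0])

print("demographic parity gap:", abs(positive_rate(group_a) - positive_rate(group_b)))              # 0.0
print("equal opportunity gap: ", abs(true_positive_rate(group_a) - true_positive_rate(group_b)))    # ~0.33
```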

Alexandra: That's, of course, important. Since we talked about changing data in a way that it's not reflecting the world as is, but reflecting the world as we would like to see it: one thing that comes to my mind, since you mentioned that our goal should be to build algorithms that fulfill the anti-discrimination requirements of the law, which of course makes absolute sense, is a presentation by a statistics professor from Imperial College London, who pointed out that sometimes regulations have an effect in the real world that's counter to their initial intention.

What he pointed out was that, I think, the UK Equality Act stated that people must not be treated differently according to their membership of certain groups: ethnicity, gender, and so on and so forth. Then, I think in 2004, the EU gender directive allowed that proportionate differences between individuals can lead to different insurance premiums. The context of this example was insurance premiums for car insurance. In the past, insurance premiums for young men were higher than insurance premiums for women because of their, in general, riskier driving behavior.

He mentioned that in 2012, a ruling by the European Court of Justice found that this clause of the EU gender directive is incompatible with the principle of equal treatment for men and women, which led to insurance providers not being allowed to offer differently priced premiums to men and women. The effect was that the premiums for women became pricier, which, if we look at the gender pay gap, is something where we could ask the question: "Is this the intended behavior, that we're making it harder and more expensive for a group that's already disadvantaged in certain areas?" I'm just bringing up the example of men and women here.

Of course, we could have the same discussion about, I don't know, requirements that are put on members of certain groups, for example of different ethnicities, which historically and systematically have been discriminated against and, therefore, are now in a position where it's harder for them to, for example, reach a certain threshold of education, money, and so on and so forth. I'm just curious whether, with your expertise in the law, you could imagine some areas needing to be specified, so that we could fulfill the intention that we have, to not discriminate against people, as opposed to just bluntly fulfilling laws.

Paul: I think your example is very interesting, because, of course, what I don't want to say is that AI and big data should not maybe also be used to test the law and to see whether the law actually is fulfilling its intention. That is, of course, part of the democratic discourse that one always-- This is not something which is only left to law professors, that one returns to the law and asks, "Shouldn't the law be changed?" Finding deficiencies of the law can be-- Now you've inspired me a little bit. It's an interesting task of big data and AI in the way you have described this. I certainly encourage participation of the engineering world and of technologists in democracy generally, and also in the discourse on the law. I'm very interested in this presentation you refer to, which I don't know.

This is the constant work of the European Commission: to always look at the law again as it stands and to reflect on whether reform is necessary, and how much reform takes place. It depends also on the elections to the European Parliament, because the majorities there and the majorities in the Council of Ministers determine the composition of the Commission, in terms of which political parties send which commissioners, who becomes president of the Commission, and who become members of the Commission, and they are the college which votes on the proposals.

Then, of course, in a second step, the political compositions determine, in particular on such important societal issues like equal treatment, what is possible politically and what is not possible politically. Here, you see, you could even construct a very direct link between the findings of technologists and the impact they could have on election campaigning, for example, if the findings serve certain parties to make their point that this law must be reformed. That's all I can say. I remember these judgments at the time; the judgment of 2011, Mrs. Reding, at the time Vice-President of the Commission, of course commented on it, and I don't actually know whether, since 2011, there have been further judgments or attempts to change this law. I don't know.

Alexandra: I don't know either, but it's definitely a topic that's occupying my mind at the moment, and we're really looking forward to seeing what we can achieve there. We haven't yet talked that much about your book, which is really a book I recommend to everybody interested in the topic of AI, democracy, and law. Can you maybe share what motivated you and your co-author, Matthias Pfeffer, to write this book, and what it is about?

Paul: That's very easy. We are friends from university times and we have always been very close friends. After 30 years of friendship, of always talking about politics, talking, talking, talking about this and that, we said, "Okay, now the talking was enough, now we're going to write something." This book is, first of all, a work of friendship.

We spent a lot of time there together. It was really great fun. There was also a complementarity which continues into the future in our work, we do a lot of things together now. Matthias, my co-author has actually studied philosophy. He is a philosopher. Everything I have learned on philosophy is based on reading suggestions from him and then discussions with him.

I really have to say, he is a great journalist. He has been very, very influential in German politics. For example, a defense minister once had to step down because Matthias brought back pictures of German tanks in Turkey being used against Kurds. This defense minister had promised that this would never happen. He had to step down and he was also very active in changing the media landscape through some legal procedures, which he brought.

He's very, very strong on this whole question of the public, the European public, and the public as it's been shaped through the internet breaking into this press and television world. I think we were very complementary here, and you can see this also in the book. I must say I have benefited greatly from working with him on this. We will continue this work in the future. We will be in Italy, in Bergamo, where we will do the Bergamo lectures, two weeks of intense lectures every day, which I hope will lead to a new edition of this book, this time in English.

Alexandra: I hope so too. Quick question: are these lectures something the public can join or sign up for, or are they just for a limited audience?

Paul: It's for the students at the University of Bergamo, and we have not discussed yet whether it will be recorded or streamed, but it's definitely already a focal point in our minds where we will sharpen our thinking, with a view to addressing both the epistemology of AI and the epistemology of the regulation and the rules for AI. We envisage next year as a focal point for us to then really bring the materials together for a new book. Matthias Pfeffer has already written a second book, which is a little bit easier to read than our joint book.

Alexandra: Artificial intelligence and human thinking.

Paul: Yes. It has a little bit more philosophy in it, and a little bit more on China. Many people have criticized the lack of dealing with China in our book. We were conscious of this, but it was not possible to also put that into the book. Matthias also pursues something which was born out of this book, namely the idea of a European television streaming platform for political and quality content, for news and documentaries. That's basically the idea that we have. We have all the technologies now: we have streaming technologies, we have automatic translation and automatic interpretation, we have search. We want to make accessible to all Europeans all the good content of public and private producers in Europe which produce news and political documentaries, so that people in all countries have the possibility, in their own language, to get a plurality of views on their own country.

Alexandra: That sounds great.

Paul: Yes. The book started as a work of friendship and it has led to a lot more work, which, however, is also related to Europe and to this triangle of democracy, law, and technology.

Alexandra: As long as all of this work doesn't endanger the friendship then-

Paul: [chuckles] No.

Alexandra: -perfect. What are the most important takeaways from your book? That's an unfair question to ask, since it's so many pages with so many important points and insights, but maybe specifically referring to the last chapters: what to do about all the problems you outline in the book, about how the big tech companies, mainly from the United States but also from China, are undermining democracy and changing so much of how our society functions nowadays? What are some of the action points and important things that we as a society should change?

Paul: Well, I would say that the book starts and ends with a theme in which we can abstract from the individual companies. In a world in which technology becomes ever more important, in which we live surrounded by artificial intelligence algorithms and the related machines, the control of technological power becomes a central function of democracy. It becomes a central function of democracy both for the sake of the good functioning of the market, because if we don't control the powers in the market properly, we don't benefit from the market in terms of choice, in terms of price competition, in terms of innovation, but it also becomes a key function for maintaining and strengthening democracy, because if we let technologies just run wild, as we have seen, it can become a problem and a risk and it can undermine democracy.

This is, I think, the key message of the book: we have to take much more seriously, both in terms of lawmaking, but also in terms of political discourse and in terms of enforcement, the control of technological power; that will be a central function of democracy. This ranges from the control of the use of AI for military purposes, through the control of the algorithms on social networks and obligations of interoperability for social networks, right through to the thinking and the measures which are necessary to keep the fourth estate, the free press, privately financed, viable, and not dependent on Google and Facebook graciously giving them grants and thereby getting to know all the good ideas, because of course the grants are only for innovation projects.

Google and Facebook become the most knowledgeable companies in the world about innovation in the press, just waiting to buy Springer in Germany or other big publishing houses. We have to be careful here because these platforms, the nature of the platform economy is its multi-sided nature and it starts with one sector and it goes into the next and the next and the next.

These platforms have already gone into many media and cultural sectors. They have gone into film, they have gone into music. There are certainly no economic reasons why they wouldn't one day go into news. I mean it: I think Europe and the member states have to prepare to be able, legally, to say no if one day Facebook says, "Now we are buying the majority of Springer," which is held by an American investment bank, KKR, which wants to make a mega profit.

How can KKR make a mega profit from being a stakeholder in Springer? Well, by selling it to Google or Facebook or Apple or any other of the giants of the internet industry. This is something which certainly nobody would want to see. We have to, for example, adapt the competition laws which are often specific to media, the media competition laws, also at the level of member states, to make sure that in such constellations as I've just described, the platforms are considered like media.

Not in every respect, but in this specific respect, if one wants to call it a legal fiction, that's fine with me, but we would then have to count together the market share of the platform with the market share of the publishing house and basically count the market share of the platform as a market share in the media market in order to arrive at the same preventive effect which we have now when it comes to media mergers.

If one publishing house buys another publishing house, then the law hits. If today a platform buys a publishing house, there is no specific law of press competition which comes in. That's one of the niceties which we have discussed in the book. What I would say is that the book is an effort to lead readers from the general considerations, which can always be your compass in thinking about this technological world and what it means for democracy, the rule of law, and individual freedom, to the very specific nitty-gritty measures, and there are many which are necessary.

I think that is also the challenge for politics today that we must be ready to enter into the detail because the detail can make a difference in regulating these complex systems and these complex markets, but at the same time, we also always have to keep our radar on the big principles.

Alexandra: Can we become ready? Is it about upskilling regulators on digital topics and getting the big tech companies to pay taxes, so that we have more resources on the regulatory side? What is necessary?

Paul: Well, I think the compromise brokered by the former German finance minister, now Chancellor, Olaf Scholz, on the taxation of big tech, 15% corporate taxation, that's the first step. It's a good start, but I would also say that in the longer run, there should be no reason why these companies overall pay less than normal industry. I think we have to move to a world where it's just normal, and not even an issue anymore, that they pay their fair share where they created the value. I don't think that there's a lack of money in the public purse for people to be able to understand the problems. There is just one problem, which is that we cannot rely on the information these companies provide.

Unfortunately, big tech has not only a tradition of breaking the law, as we see very regularly: every week we have new decisions on big banks and big tech not having complied with this or that regulation. If you look around the world, it's really becoming a gentleman's game and nearly a normality that these companies just break the law and that's it.

It's a newspaper headline, and the next week it's someone else who breaks the law. I think it's a very bad culture, but what is also a very bad culture is the culture of lying. These companies lie. The most recent examples are at Facebook; for Google, it is documented in the book by Shoshana Zuboff. There's a culture of lying in Silicon Valley and in these companies which makes things very, very difficult for governments and for parliaments.

I think we have to develop means to oblige these companies to speak the truth in parliaments too, not just before the courts. For example, in European competition law, we have an obligation to tell the truth to the Commission. The Commission has already fined Facebook once for not telling the full story on the merger with WhatsApp. I'm starting to think that maybe we need a law so that if these companies lie before parliament, there are heavy sanctions. This just has to end. That is one problem for politics doing the right thing: if we can't rely on the information we are getting from the private sector, that's a problem.

Alexandra: I agree.

Paul: Second, we must maintain the independence of our science, of academia. It is not a good thing that in very, very many universities where there's research going on about the societal impact of the internet and digital technology, there's immediately money from some of these companies. Find me someone who can speak with the authority of a university chair on the big tech question, or on big tech's societal impact, where there has not been some money from big tech in the vicinity: in the institute, in the university, among colleagues, or even for the person himself or herself. That's a problem.

Alexandra: That's definitely a challenge. There are also many scientists outlining that this leads them to still apply academic rigor to the questions they research and to do it properly, but to stop asking the critical questions that could offend or step on the toes of the people who give the money, which in this case are the big tech companies.

Paul: I think, if companies want to do good, they should set up foundations. They should give their 10 million and then walk away and never be seen again. But this idea that they just pay the salary of professor X for the next two years, and then we talk again, creates dependencies and creates limitations on the freedom of academia.

I would think we also need this debate on the form of support to universities. Capital foundations, fine. Short-term operating income, certainly not a good idea. Well, if it's a 10-year commitment, I would say, all right. A commitment to pay the salary of one professor over 10 years, for example, okay.

If there are no clauses, that is. As we have seen with the Facebook "donation" to the University of Munich: when the contract came out, there were some clauses in there which basically allowed the donor, which was Facebook, I think it was 7 million they gave to the University of Munich, to end the donation anytime. The money was staggered and they could end it anytime, and you wonder: who negotiated this? Where is the independence of academia? Apart from their grip on media, public relations, lobbying, journalism, and academia, we need to have people on the political side who are ready to dig into it, and I think there are many. I think our parliaments and also governmental authorities are getting more and more qualified, but unfortunately, well, we have to claw information together.

We have to go to proxies as I call them. Proxies are small competitors who may do the same and where you can hope that maybe they tell you the true story, while big tech is just telling a story which is serving their business model. I would say, if these companies really mean it, that they do good for the world and good AI and that they serve public interest, then they should strictly, they should-- Let's talk about ethics here. They should have an ethics code, which obliges their lobbyists to make strict abstraction of their profit interests and business models when they're asked questions by parliament and government, but just give truthful answers.

I bet Microsoft, Google, Facebook, Amazon, and Apple would still be very, very profitable if they did exactly this. If they had a culture of always speaking the truth and always giving the full picture, even if their business interests in this or that question go in the other direction, they would still be very profitable, because after all, they have a business model which hopefully also works without the lies.

As for AI regulation, I believe these companies are only to some extent able to respond to appeals, talk, ethics talk, and so on. In the end, I think we need a law on this too, because unfortunately, without the hard hand, without the obligatory force of the law, I have not seen a lot of good development among the worst offenders in this segment of business over the last years. It's really not there.

Alexandra: Yes, I agree. I also think it's very positive that the European Union is moving in this direction, because there are so many calls not to regulate AI yet, with the argument that we don't yet know enough about it. But I think it's really a technology that, on the one hand, has, or can have, tremendous positive impact. On the other hand, I think it's highly important that we set the guardrails and really try to shape the way we want AI to be used for us as a society.

I think it's a really positive undertaking by the European Union. One last question for you, speaking of positive undertakings where the parliament and the European Union really dug deep into a topic and produced a great piece of work with global impact: I'm talking about the GDPR, a law whose global, extra-territorial scope achieved that the big tech players now take privacy much more seriously, or, as you pointed out earlier, even treat it as a competitive advantage.

Since the GDPR came into force, many say that, while its intentions are in no way something to argue with, because privacy is an important and fundamental right to protect, they fear it will hinder Europe's efforts to become the global leader in responsible AI, and that the GDPR therefore needs to be either re-written or interpreted in a more innovation-friendly way. I would be curious to hear your position on that, what you would answer to people with that opinion.

Paul: Well, I would say that AI needs to be a technology which is fully in line with our constitutional principles. After all, the GDPR only implements the highest law, namely the Charter of Fundamental Rights of the European Union, which has constitutional rank and provides in a very easy-to-read but very clear article that people's personal data must be protected.

I don't think these calls to re-write the GDPR are either helpful or wise. Why? Because it's not the law that has to comply with the technology or the business interest; it's the business interest and the technology that have to comply with the law. That's what the primacy of democracy, and also the rule of law, is about. I would also say that, in the same way that Apple and Microsoft have been able to run huge publicity campaigns for their worldwide compliance with the GDPR, industry, and European industry in particular, should be able to develop AI that people can trust is fully in line with the GDPR.

Honestly, there are so many applications of AI outside the advertising industry, let's be clear, where you don't need personal data at all. Let's not create the impression that AI is mainly based on personal data. Perhaps today it largely is, because many of the applications are in targeted advertising and so on, but that is certainly not the only direction where the future of AI lies.

Secondly, in medicine and medical research, for example, the GDPR is very clear, and we have seen this now with COVID. In no way is the GDPR an obstacle to anything people want to do in terms of medical research or developing systems that are good for public health. The exemptions are very broad. There are openings, and people are able to use personal data if they cannot reach their goal without it, because the legislator has already balanced the public interest in public health against the individual's fundamental right, and this is how it should be. It should be the legislator that strikes this balance, not just any company.

I don't see concrete examples right now of AI projects failing because of the GDPR. I'm ready to discuss with anyone, based on a concrete project where they see an obstacle, and to help find a solution that is within the law. The European Union has a lot of experience making laws that are good for the common market and for economic growth, and I remember that during the GDPR negotiations some lobbyists claimed Europe would lose, I don't know, 6% of GDP if it adopted the GDPR. I never believed that.

I also don't believe in blanket claims that AI cannot be developed under the GDPR. What I would say is: the AI that is developed under the GDPR will be AI where people all over the world, if they use it or buy it, can be sure that their individual right to data protection is respected, and that's a good thing.

Alexandra: Definitely. And if it's AI developed in compliance with the GDPR and also with the upcoming AI Act, then people will not only be sure that their privacy is respected, but also that great care was taken to make the algorithm fair and non-discriminatory, plus we will also see transparency and explainability requirements.

I think there are many good points in there which will help us become a group of nations where AI is deployed in a responsible manner. On the other hand, I also understand some concerns in the industry: not that AI development is blocked entirely, but that companies are slowed down and therefore fear falling behind faster-moving organizations in countries like China or the United States.

Paul: Well, I would say to any government of the free Western world: I wish you good luck if you buy hardware from Huawei, and I wish you good luck if you buy AI from a dictatorship like China. The next elections are coming and you will feel the consequences. Let's be clear about it: an AI developed with the values of a dictatorship built in can only be sold to dictatorships. An AI developed with the values of fundamental rights, the rule of law, and democracy built in, including data protection, is an AI that can be sold all over the world. If one wants to, one can make a marketing point out of this, including out of the obligations under the AI Act, as you just said.

The marketing pitch goes like this: "Not only are we a great company, perhaps not yet very well known all over the world, but we promise you that we are doing the right thing." That's the first sentence. The second sentence is: "We don't only promise you; we are actually obliged and controlled, and we are subject to sanctions at home if we don't comply with the following elements: one, two, three, four, five." The more of that you can enumerate from the law, the better your marketing pitch will be to clients who want this.

Alexandra: Perfect, Paul. Thank you so much for everything you shared; it was really a pleasure to talk to you. As mentioned, I can recommend your book to everybody. Hopefully there will be an English translation soon so that even more people can benefit from the points made in it, and until then, I'm eagerly awaiting your next book, which will hopefully be out sooner rather than later. Thank you so much.

Paul: Thank you, Alexandra.

Alexandra: Wow, with a guest as knowledgeable as Paul, I thoroughly enjoyed the conversation. I hope you enjoyed learning more about his perspectives on privacy, ethical AI, big tech, and democracy as much as I did. As always, if you have any comments, questions, or suggestions for guests I should absolutely talk to, please reach out and send us an email at podcast.mostly.ai. Until then, see you in two weeks.
