Jeffrey Dobin: Hello, data peeps. Welcome to the one, two, three, four, five, six, seven, this is the eighth episode of the Data Democratization Podcast, where we bring you the best stories on data and privacy. I’m Jeffrey Dobin, lawyer and data protection guru from Duality. I’m joined by my co-host, Alexandra Ebert, the Chief Trust Officer at synthetic data company MOSTLY AI. Alexandra, good to be back with you. Can you introduce today’s guest?
Alexandra Ebert: Sure, Jeff. Today, we’ll be hearing from one of Europe’s most respected IT and data protection lawyers, Axel von dem Bussche. He’s a partner at global law firm Taylor Wessing, and not only does he have a deep understanding of all things GDPR and data privacy, but he’s also an expert on synthetic data. I’m very much looking forward to our conversation.
Jeffrey: Me too. It sounds like he must be pretty passionate about data privacy. If I remember correctly, didn’t Axel also write a book about GDPR?
Alexandra: Yes, indeed. That’s correct, but we covered much more than GDPR. We also spoke about trends, the recent proposal for the European AI regulation, and how the privacy landscape is evolving globally.
Jeffrey: Awesome. I’m sure we’ll learn a lot from this episode. Let’s jump right into things.
Alexandra: Axel, it’s great to have you on the show today. You’re a specialist in IT law, and you’re one of the experts when it comes to GDPR and data protection. You are even an author of a GDPR book. Where does your passion for data privacy and data protection come from? Can you share a little bit about your background?
Axel: Thank you very much, Alexandra, for inviting me to your podcast. Really nice to have this conversation. Why GDPR? Actually, it was a coincidence, to be honest. When I started my professional life way back in 1999, that was just in the midst of the so-called new economy. I don’t know if you remember the new economy. That was a really interesting time, in Germany between ’98 and 2000, when the internet came up. You had Bill Gates and Jeff Bezos, who had just started their businesses in the mid-90s, and Silicon Valley was already, let’s say, very eager on that.
It went over to Europe, the new economy. Suddenly in Europe, we understood the potential of the internet too, and then we had a two-year period where lots of investment went into the market. It was a crazy time. Lots of startups. Then suddenly, in May 2000: dead. Then we had 20 years of silence. I just mention this because I think it’s interesting historical context for what we discuss later. That was when I started as a lawyer. In our law firm, no one was an IT lawyer. That wasn’t even a name or a brand for a lawyer. I branded myself. I said, “Yes, I’m an IT lawyer.” The older partners didn’t really know what to make of it. “What are you doing?” I said, “Yes, wait, e-commerce, IT, it’s a fine thing.” I just started because it was new, and I was new, and then we grew together. That was–
Alexandra: The beginning of 2000?
Axel: Two decades ago, yes.
Alexandra: Wonderful. Wonderful. Especially in the past few months, we’ve seen so much regulatory development going on in Europe, in the United States, in other parts of the world. What’s your perspective on that? Especially if you look at Europe compared to the United States and maybe also to China, what’s their strategy and approach to regulating digital transformation?
Axel: Thank you for that question. That’s a good question. That’s a broad question. It’s even a geostrategic question. I think it’s good that you mentioned these three areas, the US, the EU, and China, because they have different approaches towards digitalization and towards its regulation. Let us start with the first movers. That’s definitely the US. They were the earliest to really understand the power behind the internet, maybe because the market is so large, so you have scaling possibilities which are far more attractive than in Europe, where you have these smaller jurisdictions.
Anyway, whereas our new economy suffered a sudden death back in May 2000, the US went on. I remember Google as a small, innovative search engine, and everyone was using Google. It was just small and minor; no one was talking about it. No one had foreseen the power stemming from that. Same with Amazon, just selling books over the internet. Somehow, whereas Europe went back to sleep in terms of business, the old economy in Europe struck back and said, “Oh, this is all rubbish. Internet, we don’t need that. We have machines.” In America, in the US, it was very different. They just went ahead, and I must say, without any regulation. They just did it.
In the typical approach in the US, you first do things, and then, if it doesn’t really work or when it hurts, you regulate. Regulation comes in a second step. Whereas in Europe, we always think we have to regulate upfront, and only then do we allow business into that regulatory frame. Completely different attitude. However, looking at who’s actually doing the business: it was the US first, and then the EU went back to sleep. Then somehow China, at an amazing speed, was and is bypassing Europe in terms of the implementation of digitalization. That’s how the business worked.
Now, the approach to regulation, and I think this is very interesting: the US, in the beginning, no regulation. The businesses even said, “I’m a global business. I do not accept regulation.” You remember Mark Zuckerberg, who said, “Move fast and break things.” We see this as lawyers here in Europe when we take over work from the US. The idea is you first go in, and maybe– In the beginning, actually, our regulators in Europe didn’t even understand the business, so it was unregulated. Now they’re waking up, and what happens is re-regulation. That’s the US approach.
However, new laws are popping up. The CCPA, a data protection law in California, pops up. Obviously, in the US too, there’s what they call a techlash, an idea that, “Oops, the business is too mighty, we need to regulate it, re-regulate it somehow.” It’s a different approach. It’s not really organized top-down. It’s more that some states, like California, come with their own laws trying to regulate everything. That’s the US.
Now, the one who’s at, let’s say, second speed, overtaking and bypassing: China. They don’t regulate at all. It’s a completely different approach. It’s more like this George Orwell scenario, where you have social scoring and everyone’s data can be used by the government without any restriction, and is used. There is massive exploitation of the possibilities. However, the downside is that the government is taking advantage of all that, and you have these George Orwell scenarios that are unacceptable in Europe.
Alexandra: Yes, the fundamental rights are not being respected.
Axel: No, there are no rights. In the US, you have the tension between big companies and the people. In China, you have it between the government and the people. That’s, let’s say, where the tension lies.
Now, we have Europe. Europe, as I said, is lagging behind. I don’t know why, but that could be a separate podcast. Somehow we lost the speed, I would say, between 2000 and 2012, 2015, and then we woke up. That’s a good thing. It’s now fully realized that we have to do something, that we are lagging behind, and that we also need to speed up in all respects: in the administration, with the businesses, we need new innovation, startups, and so on. But from a legal standpoint, the interesting part is that we do this with heavy regulation. I can’t find another word for it. That’s the approach, which we’ll discuss today. I say, wow, it’s good for lawyers, probably. I shouldn’t complain too much, but I don’t know whether society will be able to digest what is ahead of us.
Alexandra: This would actually be my question to you as a lawyer. You mentioned that Europe is lagging behind and that we probably should gain some speed. So is regulation the best approach to do that?
Axel: Well, what do you think? Being a business, you would rather say probably not, or probably we need it in the end. We need a good balance, to be honest. I don’t think that no regulation is a good idea. I don’t want to live in China, under those conditions where you have big government watching you all the time and you’re totally transparent, and if you, I don’t know, do something wrong, then you get a minus on your social score. That’s not a world Europeans would like to live in.
Same as in the States. I would feel a little uncomfortable with the way very big companies know so much about us that we ourselves don’t know, for the time being. I hope it’s not misused. For now it’s mostly used for direct advertising and other things, but who knows? Once the government has access to all this information, you could suddenly have a situation similar to the one in China, quicker than we think. I don’t know, you travel to the States and suddenly at the border you’re stopped and you don’t know why; they know so much, you know nothing. There’s, let’s say, an information deficit on the side of the people involved, facing these large entities, which is also not very comfortable.
Alexandra: Absolutely. I would say, with the things that are at stake here, it’s probably not only about speed but also about doing it right, and I think there regulation definitely has its place.
Axel: It has its place in the sense that we are transferring the analog world into the digital world, and obviously, we have rules in the analog world. Take car traffic. In the beginning, when cars started driving around, maybe it was a bit of a messy situation. There were no rules, and accidents were happening all the time, and then people said, well, maybe we need traffic lights to avoid people bumping into each other. And this applies to all areas of life and business.
The same applies in the digital space. We certainly need some, I don’t know, left and right: a framework within which we can develop safely. I think it’s fair to say that the digital world also needs some structure and reliability for those acting in it. The “if”, I would say, is a yes. The “how” is the question: how are we doing this? How is Europe doing this? Are we following the right approach?
Alexandra: I think it’s always about finding this balance of enabling innovation while at the same time protecting citizens. I hope that we will manage to find a good balance so that Europe can speed up and become the global leader it set out to become. Let’s come back to some of the regulations we’ve seen put in place, or at least proposed, in the past few weeks and months, like the AI regulation, or also some European Court of Justice rulings. What’s currently keeping IT lawyers and their enterprise clients busy in 2021?
Axel: Wow. What keeps us busy? So many things, to be honest. When I look back at the last 20 years, the speed of new issues and new laws popping up in, I would say, just the last four or five years is tremendous, incomparable. I remember back in 2008 or 2009 there was an amendment of the data protection law. That was a big thing. Everyone was excited. “Wow. Data Protection Law Amendments.” We had half a year’s time to prepare client seminars on that innovation.
Now, on almost a monthly basis, we have new laws: whistleblowing hotlines, which need to be implemented under EU legislation by the end of the year; the AI regulation you mentioned, which is ahead of us and very, very complex. We have the GDPR, in force since May 2018, which keeps us extremely busy. We have, even fostered by the pandemic, growing cyber threats; we have a lot of cyber breaches.
We have to deal with various aspects around that, data protection law, but also insurance and all of that, keeping me extremely busy. And as you said, AI is a big component, because we already have AI providers starting to sell their products, but the regulation is far behind. For example, in the health sector there’s so much innovation, and such a need for innovation, on the one hand; you have the ideas and you have the solutions, but then you have a regulation which doesn’t fit because it’s 20 years old. Our clients ask us, “We would like to implement these tools. How can we do this?”
Then we need to look into the regulatory landscape, and all we can say is: this is really very complex. Our advice these days is not simply finding the right law, pressing a button, and saying, yes, this is how it works. It’s always: I see where you want to go, and I see the regulatory framework, which is outdated. Now we need to do a risk assessment, and assess how a judge would look at the matter if it pops up and comes to court, taking into account the need for innovation, and also taking into account that the law we have in place is somehow outdated and probably doesn’t really fit that new way of doing business. That’s keeping us busy.
Alexandra: Sounds like plenty of things that are keeping you busy.
Axel: Yes, and platforms, e-commerce, of course. Usually, we would say it’s three sectors. The first is what we call agreements and outsourcing: everything that goes into the cloud, mostly contractual work. Then we have e-commerce and platforms, business transactions, keeping us very busy. And then data protection and cybersecurity, but also other new regulation popping up.
Alexandra: Absolutely. You actually mentioned two interesting points before we continue. One is the complexity, which to me doesn’t sound like a super innovation-friendly environment, and probably something we can improve on a European level to make this easier. And then you mentioned outdated regulations.
I’d be curious to get your take on this, because I’ve heard some experts say that, now that the AI regulation has been proposed, there actually is no need for an AI regulation, because we have plenty of existing regulations; we should rather think about how we can enforce those. Others say that AI is currently deployed in a legal vacuum and that we desperately need a regulation. What’s your take on it? Do we need it, or don’t we? Do we need to update the existing ones? What’s the best approach?
Axel: That’s a really good question, and before I answer it, I’ll take a step back to put the AI regulation into a broader regulatory context, because it’s not the only thing we have. In Europe, people have been, let’s say, thoughtful. As I said in the beginning, we have these three areas, the US, the EU, and China, and the people at the EU Commission sat together and asked: what is our way? They said, okay, somehow we need to find a good balance.
As you rightfully said, between innovation and protection. And there is a strategy called the Digital Single Market that came up in 2015. Many experts sat together and said: we need a Digital Single Market, and it must be strong enough to create a uniform and large enough market where companies can act competitively with the US and China. Certainly, they were aware that it doesn’t make sense if we have all these separate jurisdictions; it needs to be EU law in the first place. We need uniform law throughout the various European jurisdictions, as with the GDPR. That’s the right approach, I would say, in terms of strategic thinking about regulation.
The Digital Single Market strategy, I would say, yes, that’s good. But now we are already in the implementation phase. This Digital Single Market idea distinguished between three regulatory pillars. The first is how to enable consumers’ access to the digital market in a safe way. It’s more like consumer-protection-related laws; various laws fall under this pillar.
The second pillar is how to regulate networks, communication, and also data protection. This is where the GDPR has its home. The third pillar, with various regulations, is where we need to place that AI regulation. This is all about how we can make sure that Europe can work in an innovative environment: how we can enable business to benefit from digitalization, to be modern, fast, and up to speed, but at the same time do it in a fashion where we don’t have a capitalist approach where anything goes.
These are the three pillars. Under these three regulatory pillars, I don’t want to bore you and mention all the different laws. I would say we have seven or eight separate acts dealing with this, very complex acts, and we know one of them: the GDPR. Imagine how complex the GDPR is, how much you see it in the news, how many experts are involved with the GDPR alone. My forecast is that, at the end of the road, we will maybe have 20 GDPRs, or one GDPR and 19 regulations of the same or similar complexity.
Alexandra: Whoa, so how to keep up with that complexity?
Axel: It’s impossible for one person to keep up with that. In my law firm, Taylor Wessing, we have 50 lawyers who are experts, real experts, in that area, what we call technology and data. That’s already 50 people who are doing that every day, often far too long into the evening. How can companies digest all this and get it on board? It’s a huge challenge, and one of these 19 additional regulations popping up is AI.
I’m now coming back to your question, and we can take the AI regulation as an example. I would say that the general concept and the general problem we have with the AI regulation apply to the other ones as well, like the whistleblowing law or the e-privacy regulation, which should have been in place together with the GDPR back in 2018 and is lagging behind.
In that respect, you asked me whether this is a good idea, how they plan to do it. I would say yes and no, or, as lawyers always say, it depends. It depends.
I think the general idea is right: we shouldn’t simply let it go and ignore it. That would not be a good idea, because the potential for misuse is far too high. You have AI applications with low risk, medium risk, and high risk, and this is actually the differentiation in the AI regulation. They say, there’s good AI and there’s dangerous AI, and you have to regulate these different types of AI in different ways.
For good AI, which is not dangerous for people, you can have soft regulation, and the other way around. But here the problems start. I can tell you, the differentiation between dangerous and not-dangerous AI is going to be very, very complex. You won’t be able to put different providers into the various boxes very easily. You always have to go in again and make a complex analysis, maybe a legal opinion, to find out where you stand.
Alexandra: I definitely see the challenges that come with this complexity. On the other hand, I think it’s good that we have this risk-based approach in the AI regulation and its proposal, because I wouldn’t want an environment where there’s only light regulation for high-risk systems. And, of course, I don’t think it would be beneficial for the European Union to have strict regulations on every type of AI application, because that would definitely not help us speed up and become a global leader in the domain of AI.

When we look at all these different pillars that you outlined, and all the regulations you forecast are yet to come, with similar complexity to the GDPR and the proposed AI regulation, how could we reduce this complexity? What’s the reason for it? Is it the many different member states collaborating on this? Is it that we’re trying to regulate something abstractly and make it future-proof, since we’re not yet sure how, for example, AI is going to evolve in the coming five to ten years? What could we do better to address this complexity?
Axel: That’s also hard to answer. I think the law we are working with is not abstract enough. To give you an example, the German civil code is from 1900 and it still works. There have been amendments, but I must say light amendments; the general principles of the German civil code still apply over a hundred years later, which I find quite amazing. 120 years later, it still works, but only because the idea was: I have very abstract, general rules, which allow me to constantly fit the development of life and business under these general concepts. There’s an element of flexibility in that.
The same applies, to some extent, to the GDPR, which as a piece of law is, I would say, quite easy to read and digest. The general principles of the GDPR are quite transparent and clear, although in practice it still presents lots of difficulties in its implementation. Maybe that will go away in a couple of years. However, other laws lack this level of abstraction. They’re too specific, and once you’re too specific, the problem is that the moment the law is out, it’s already outdated.
You see with all the new laws coming up that there is, let’s say, too much fine-tuning in the regulation. You may ask why we have so much fine-tuning. I think it’s because we are in too much of a haste. A piece of law that is very abstract, on a high level, needs lots of upfront consultation and thinking. Going back to the example of the German civil code: I think 20-plus years of very smart folks worked on that legislation, because only when you understand everything are you able to find general rules.
Here, we just dive into the digital world and try to understand machine-to-machine communication, this and that, and then all the experts sit down drafting the law. Maybe they are far and deep into the subject matter, and without a practical test, these laws come into force, and that creates all the complexity. What I think is, you need a practical filter when you have a new law and you say, that’s what it is.
You need to go to, let’s say, a group of smart people who haven’t seen it before, and say, “Please read and digest this. Is this manageable?” When they all fail, when they all say, “Come on, I need a year to study this law,” the law needs to be revised again to reach that abstract level. That’s, in my eyes, a problem in drafting new laws, not only in the digital area; actually, it applies to all kinds of areas these days.
Alexandra: I think that’s a very good suggestion, because, of course, I understand that if you take your time and wait to see how things evolve, you’re in a better position to make generalized rules and regulations. On the other hand, my concern, especially given the speed we see with artificial intelligence, is that if we let this go unregulated for five or ten years, we might move in a direction that would be hard to revert from. I think it’s a really tricky situation and definitely not an easy task that the European Commission and Union are currently facing.
Axel: You’re totally right. We have that situation. What you are illustrating with AI, we have seen already in other areas, because, in practice, for the last 20 years we had this, let’s say, internet-based online business in Europe too. It just happened, and it was unregulated, because no one really understood or cared what was going on. None of these business models were on the radar. Take cookies, for example, which are everywhere in the advertising market and by now almost an instrument of the past; cookies have existed for two decades already.
In the first, let’s say, first ten years, nobody, with the exception of the businesses, knew that cookies were being used. Then it took another ten years to really bring the whole matter to the surface, and then to the courts, then to the European Court of Justice, back to the national courts, and now, with Planet49, we have in Germany the first high court decision on how to handle cookies, after 20 years. It’s a long, long time. And now cookies are on their way out again, which I find quite bizarre.
Take the big GAFA businesses: Google, Amazon, Facebook, Apple. We lost track. We don’t have platform businesses in a similar way as in the States, and those are already subject to antitrust law, because they are so big that even the US government is thinking about how to break them up. I find it quite amazing. At the same time, that’s a bit like going back to where we started this discussion.
At the same time as Europe needs to wake up, in the US the business is already so mature that they’re starting to break it up again. What have we done in the last 20 years? The only good thing, now circling back to AI: we have been quite good in what we call Industry 4.0, the whole, let’s say, engineering-based business. Here Europe has, I don’t want to say a first-mover advantage, but we are in good shape compared to the purely internet-based business, where we really lagged and somehow lost 20 years of development.
Alexandra: When we come back to AI, I think Europe is also doing quite a good job in giving research grants and fostering research. But I think we struggle with then finding enough investment, so we see many companies or smart people moving to the United States or other parts of the world to really turn these innovations into products and businesses. That’s definitely also something the European Union wants to tackle moving forward.
In general, you mentioned that this risk-based approach of the AI regulation will be difficult in practice, but what else do you see as obstacles when it comes to implementing the newly proposed regulations? Are there other areas of concern that you’re aware of?
Axel: I haven’t done a deep dive into the AI regulation yet because, as I said, we have so many things [crosstalk] going on. I usually really start to work with the various laws when they are in force. However, we are dealing at the moment with quite a lot of AI business, on both sides actually. On the one hand, we have these innovative providers, whom we help in their sales processes, in negotiations with larger companies who would like to implement those solutions. But, and I would say it’s 50/50, we also have our domestic clients, large entities who are buying these solutions, and they ask us, “Can we do this?”
It’s always the same: you have the purchasing department saying, “Okay, I accept this. It’s a fine thing.” You have top management saying, “We really need that.” You have the research and development folks, or whoever is asking for that solution, saying, “Great.” Then you want to monetize your data. Let’s take health data as an example. You are a hospital, sitting on a treasure of data, patient data. This is the gold standard when we talk about personal data.
Alexandra: Because of the research potential that is in there.
Axel: It’s amazing potential. Around this treasure of personal data, which is unused at the moment by most of the hospitals I know, you have lots of providers who would love to have access to that data. They approach the hospital saying, “Look, I have a super good solution. We’ll help you monetize your data. All we need is access to your data.” They offer it for free.
I have been in various rounds of negotiations where I have been amazed by the effort the provider side puts in to get access to that data. They promise everything. “I’ll do it for free, just give me access.” The whole idea is to have access to the data and then let AI learn on it. The whole business environment is in need of fresh, good data so that they can enhance their products. That’s where we come into play. That’s when our clients ask, “We would love to do that. Please, Axel, can we do this?” Going back to the hospital example.
On the one hand, we have the AI regulation, which should cover this, too. What I see as a downside is that we produce new law without taking into account old and existing law. I see this a lot in Germany and at the EU level: we always have fantastic strategies. I was reading the new data strategy of the German government, and it has nice words everywhere. “We need this and we want to be innovative and we propose this.” Then they say, “Well, certainly we need to take into account data protection law,” for example, but they’re not actually looking into data protection law. They say, “We need that,” but they don’t take it on board.
Then we have situations like with our Corona app, or with our digital health initiatives, or with the patient file, where every person would have all their data in one place, accessible from everywhere. That’s a big project which doesn’t move ahead, and now we have the obstacle of data protection law. I sometimes wonder how it can be that our health ministry develops a new strategy and probably a new law, and then, six months before that law should be launched, someone says, “Oops, we didn’t consider existing data protection law.” This is because of the complexity.
The government, which on the one hand is, I would say, a stakeholder of the GDPR, of data protection law, creates new innovative law without taking into account its own law. The same is happening with AI at the moment. Going back to the example with the hospitals: you are a hospital, you have lots of patient data, and then AI providers come and say, “Please outsource this to me.” The normal thing, in a classic scenario, would be that you allow access to that data. The data is probably hosted on, whatever, Amazon’s cloud, Microsoft’s Azure cloud, or Google Cloud, maybe even in the States, because it’s a US provider, or at least with access from the States. Then the AI is learning. That all sounds good. But I must say, if you look at existing law, no, it doesn’t work. Why? Because in Germany alone we have, for each German Bundesland, local hospital laws, Landeskrankenhausgesetze, and those are partly from the year 2000.
They are very restrictive, and they are based on the idea that patient data is very, very vulnerable, and that in order to keep it safe, you should keep it in your own building. I think this is very naive. It’s a little like asking, “Where should I put my money?” “Oh, I’ll just put it under my blanket at home, because then it’s safe.” That’s the thinking of 20 years ago. It means that under 50% of the 16 German local hospital laws, even mere outsourcing is not allowed or is restricted, even within hospital structures that span more than one German Bundesland.
That’s the first step. That’s only outsourcing to a data processor who’s probably doing the hosting. Even hosting by someone else is restricted. Now we come to AI, and AI is more than hosting. You give away the data; that’s the outsourcing aspect. Now the AI is on top, learning. The question is, what is this from a regulatory standpoint? Is this only data processing, or is it already data controlling, in the sense that you have your own new purpose for using that patient data, which goes beyond hosting?
I don’t want to dive into legal details, but if that happens, then the AI provider is not a data processor, who, like a machine without its own intelligence, just takes care of your data; it’s a data controller, and you have a joint controllership. Then you look again into the law and ask: do these 16 German local laws allow joint controllership? I must say no, they clearly don’t allow this.
Alexandra: What you described is a really high level of complexity, then outdated laws, and not enough experts on a European level who really understand these laws in detail.
Alexandra: What does this mean for legal experts? Does the education of today’s lawyers equip them for that? Is there enough education in tech law, or is there room for improvement? How do lawyers prepare themselves for this environment?
Axel: Let’s have two views on that. First of all, from my view as an entrepreneur, it’s a feast. The only ones who are really benefiting from all that are some specialists.
Alexandra: Lawyers and IT providers?
Axel: IT providers, lawyers. Definitely lawyers, so our area is booming. We have a shortage of experts in our industry, among our clients and among law firms. There's really a war for talent. What we've witnessed, of course, is that when we have new lawyers on board, they often get very interesting job offers from clients, which is okay because then we still stay in touch, but there's higher fluctuation among these resources because we have a shortage of experts. My forecast here is that as this regulation pops up on all ends, this problem will grow.
You need people who understand that complexity and who are able to somehow navigate, or translate, the complexity of the law into the companies in, I would say, a pragmatic way. If you employ someone who tells you, "No, it doesn't work," that is of no help. We see that sometimes. You need someone who is not only able to understand what's going on; you also need someone who is mature enough to do a risk assessment because, as I said, we are entering a new greenfield, which is partly regulated by outdated law and partly, like the AI regulation, by new law that is not implemented yet. We are somewhere in between that old law and that new law which doesn't exist yet.
There you need experts to digest this, and we don't have them; there aren't enough experts. For everyone listening to the podcast who is not really sure what to do in the future and who's interested in, let's say, digital law and tech law, my strong recommendation is to just dive into that, because for the next 20 years, we are just at the beginning of our development, I must say. Everything we had so far, even in the data protection area, was a little like a prologue, and now, bam, we have that explosion with a strong need for legal navigators in the digital room. This is what we're looking for.
Alexandra: I completely agree with that. One thing, we just have a few minutes left, but talking about all these different dynamics, in one of our previous conversations you also mentioned that Schrems II is currently one of the big challenges for organizations, since we not only have European legislation but also US legislation and legislation from other countries. What are the big challenges here for organizations? Why do they face them, and what would help against this?
Axel: Schrems II? Enough to talk about for eight hours, I must say. Again, allow me to step back just to put Schrems II into the context of where we are. As I said in the beginning, we have this digital single market as a strategy. We have these three pillars. Under each of these three pillars, you have, let's say, five laws, just to give you a number. Under pillar number two, we have the GDPR, one of five laws, the only one which is in force yet. And within the GDPR, you have one aspect of the law, and this is the transfer of data outside of the EU. Schrems II is, let's say, a very complex situation on that transfer outside.
It's just one little problem in one little law in that whole big area of new laws we haven't even digested yet. And this little thing like Schrems II, which is case law from the European Court of Justice of last year, caused amazing problems. It's good that you mention Schrems II at the end of the podcast, because we circle back to the beginning: Schrems II has a strong strategic aspect. For those who don't know what Schrems II means, I'll try to explain it in a nutshell.
All EU personal data which crosses the border into a so-called third country outside of the EU, the data we extract about ourselves which goes abroad, our law travels with that data. With all the personal data we extract every day, our law travels abroad with it, for example, into the US. With our strict GDPR, with respect to territorial aspects, we say it should also apply abroad, because our personal data, which from a personality rights view is something belonging to us, needs to be protected outside of the EU as well.
The company, for example the employer who collects employee data, HR data, and sends this abroad, our law says you, as the responsible company, as the controller, have to make sure that the data of your employees you send abroad is safe abroad. Now Schrems II says you need to examine this if you would like to send the data abroad. If the data is exported into a country where we do not know about the standard of data protection, you as a company, as an exporter, have to make sure that everywhere you have sent your data to, there's an adequate level of data protection.
Somehow you need to make sure that the data you send out is safe abroad. Otherwise, you're not allowed to send out the data. What does that mean? There's a step plan you have to follow. First of all, you have to map all your data transfers. We have a client where we are currently mapping thousands of transfers. It's amazing. You have to map your transfers. Then you have to ask: the country I send the data to, how is the law there? Is it safe? Then you have to make a legal assessment of the local law abroad. The ECJ ruling in Schrems II made that assessment for US law, and it came to the result that no, it's not adequate. The data is not safe.
The consequence of that is not yet that it's prohibited to send out the data. The consequence is that you have to take additional measures to make sure that the data is safe also in those countries where you do not have adequate laws. Somehow you need to build a fence around the data so that it's protected. These additional measures cause, let's say, headaches for the companies. The European data protection authorities said, "Come on, guys, we'll help you with nice guidance and explain to you how to understand the Schrems II ruling."
The guidance, let's say, is quite structured. I think the guidance is good in the sense that you understand what it says. Unfortunately, what we get from our authorities is that 95% of existing data transfers are illegal following their interpretation of Schrems II, and only 5% survive. I would like to highlight those 5% because, Alex, we have been working together on your product, which revolves around anonymization. I must say, for those who are listening, that's really good news, because the authorities in Europe said if you send out anonymized data, that's no problem.
The moment data is anonymized, it falls out of the scope of the GDPR from the very beginning, with the consequence that a ruling like Schrems II doesn't apply. Anonymized data, that's one option where they say it's lawful. The second option is pseudonymized data, where the key remains in Europe with the controller. You can pseudonymize your data and send it abroad. It's important that the data importer cannot look into the data. Then you can put your AI on top, do something, and then you can send back the data, and the controller is allowed to unlock the data.
These are the only two options that are fine following the understanding of the authorities, but that's just 5% of all the data transfers we know. For most data transfers, it's inevitable that you have access to clear data abroad. We are in a dilemma with Schrems II. We have no solution. Everyone tries to gain time. What companies do is a little bit of window dressing. Everyone is very active so that they're able to demonstrate they are implementing the new rules. Everyone knows, including the authorities, that we need a solution, but there's no solution.
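The pseudonymization workflow Axel describes, where the key stays in Europe with the controller and the data importer abroad only ever sees tokens, can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the episode; the class, field, and function names are invented for the sketch:

```python
import secrets

# Hypothetical sketch: the controller (in the EU) keeps the token key;
# the importer abroad works on tokens only and never sees identities.

class Controller:
    def __init__(self):
        self._key = {}  # token -> real identity; never leaves the EU

    def pseudonymize(self, records):
        """Replace names with random tokens before export."""
        out = []
        for rec in records:
            token = secrets.token_hex(8)
            self._key[token] = rec["name"]
            out.append({"id": token, "diagnosis": rec["diagnosis"]})
        return out

    def reidentify(self, results):
        """Only the controller, holding the key, can unlock the data again."""
        return [
            {"name": self._key[r["id"]],
             **{k: v for k, v in r.items() if k != "id"}}
            for r in results
        ]

def importer_process(pseudonymized):
    # Happens abroad: the importer can analyze, but cannot look into identities.
    return [dict(rec, flagged=(rec["diagnosis"] == "X")) for rec in pseudonymized]

controller = Controller()
exported = controller.pseudonymize([{"name": "Alice", "diagnosis": "X"}])
results = importer_process(exported)    # abroad: no clear identities visible
final = controller.reidentify(results)  # back in the EU; the key never left
```

The point of the design is that the exported records carry no direct identifiers, so the importer's processing never touches clear personal data, while the controller can still re-link the results afterwards.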
Alexandra: That's definitely a challenge. To summarize, with Schrems II, data sharing for sure got more difficult. Would you say that, with your understanding of synthetic data and anonymized data, synthetic data is one of the solutions to work around these regulatory burdens and make data sharing easier again?
Axel: I see a great future for all aspects of synthetic data or anonymized data, because from the very beginning you fall out of the scope of quite a lot of those regulations we have been talking about. The GDPR, the General Data Protection Regulation, regulates personal data. Now, what is personal data? Personal data requires that there is a connection between an individual person and information about that person. You always need the link between the individual and the information.
Without that link between person and information, so if you cut this, then the person is not identifiable anymore. Hence it's not personal data. If it's not personal data, no GDPR applies. For anonymized data, the whole set of rules of the GDPR: nothing. It's like you can do what you like. Certainly, there are other laws, for example the EU ePrivacy Regulation, which is being discussed at the moment and which focuses on, let's say, the secrecy of communication.
It covers all data, and even machine-to-machine communication and communication between companies could be subject to that law if it falls into the specific industry area. What I want to say is that with anonymized data, you're not totally free of all those regulations I have been mentioning, but it's a big step forward to facilitate business with respect to the monetization and export of data.
Alexandra: Absolutely. We have some clients that have their headquarters in the United States, for example, and that share synthetic data with data scientists in the European Union, because it's just so much easier, and the data is so granular that they can still perform all of the advanced analytics on it. There we really see that it helps clients with the current regulatory environment.
I think we can continue talking about both Schrems II as well as AI regulations and the general approach to regulation in different parts of the world for hours to come. Unfortunately, we need to come to the end of our episode. Are there any final remarks that you have for our listeners? What’s important for them in 2021 and beyond? What do you want to leave our audience with?
Axel: Maybe I was a little bit too apocalyptic and negative in some respects. My final remark here would be: take the bull by the horns, or, the other way around, don't put your head in the sand and hope this is a bad dream which will go away. It won't go away. From a legal perspective, we're just at the beginning of entering a new digital world. Obviously, when you start doing something, there are hiccups and you need to find your way. Maybe you do something wrong, and then you have to rework and reschedule.
What we see is that those clients of ours who take a proactive approach, and that's my point, take a proactive approach and say: it's very complex, we need to invest money.
Hey, come on, we take it as an advantage. We take the good sides out of it. We make sure that we know what we're doing. Even if we sometimes, let's say, take a little risk because the law is not clear enough, we accept that, we know what we're doing, we assess the risk, and we still move forward. Instead of either doing nothing, and then you fall out of business, or saying move fast, break things, and ignoring everything, which is very dangerous because then your business comes to a sudden end. Be positive, be proactive, take it on board, open your mind to these new areas of law. There's no way to avoid it.
Alexandra: Yes, and make it your competitive advantage.
Axel: Yes, make it your competitive advantage. Definitely.
Alexandra: These are great last words. Thank you so much for your time, Axel. I think it was a super, super insightful episode for all of us. I really look forward to our next conversation. Thanks a lot for all you shared today.
Axel: Alex, thanks for taking the time. Sorry if I'm too enthusiastic.
Alexandra: I enjoyed your answers.
Axel: I enjoyed our talk a lot. Thanks for the invitation.
Alexandra: Thank you very much, Axel.
Jeffrey: That’s it. I am a fan of Axel and also the GDPR. Who would have thunk it?
Alexandra: I knew that as a lawyer you would really appreciate the points he made about legislation, and also about the wider European data strategy. I'm glad you're a fan.
Jeffrey: Me too. There are many takeaways here. Let’s highlight just a few for our listeners.
Alexandra: Sure. Axel thinks that regulation is necessary in the tech sector. I really liked the car analogy. When cars were first manufactured, there were no rules for how to drive them but we eventually ended up with a framework that governs traffic really well. The same framework needs to be developed for the digital world.
Jeffrey: Yes. AI regulation is a bit like that since products are already out on the market but there is no regulation to govern the technology, and old regulations don’t fit new advancements in tech. Axel’s advice for businesses is to conduct risk assessments based on the regulatory framework.
Alexandra: Indeed. Axel forecasts that we'll eventually have as many as 20 GDPR-like regulations and that this will be a huge challenge for companies. The solution is to make regulation less complex. GDPR is a great piece of legislation due to its flexibility and abstract nature. However, the implementation is not always clear. Companies need people who understand the complexity and are mature enough to conduct a risk assessment, instead of saying no to ideas and innovative projects.
Jeffrey: Regarding Schrems II, we heard that following the European Court of Justice ruling, as much as 95% of data transfers are now considered illegal. Companies need to map all of their cross-border data transfers and assess their compliance. Modern approaches to anonymization, such as synthetic data generation, can solve this issue and make data transfers compliant after Schrems II.
Alexandra: Exactly. Finally, Axel's advice to businesses regarding data privacy is to take a proactive approach and treat compliance as an advantage. Know what you're doing, decide when to take risks, and make data privacy your competitive advantage.
Jeffrey: I love that. I think that's an excellent piece of advice. I'm really glad Axel is such a huge fan of emerging privacy-enhancing technologies. It goes to show that these technologies really make life easier for businesses and also safer for consumers.
Alexandra: Indeed. Thank you everyone for listening. We’ll be back in two weeks’ time with the next episode. Until then, please subscribe to the Data Democratization Podcast and write us a review. See you soon.
Jeffrey: See you next time. Bye.
Alexandra: The Data Democratization Podcast was hosted by Alexandra Ebert and Jeffrey Dobin. It’s produced, edited, and engineered by Agnes Fekete and it’s sponsored by MOSTLY AI, the world’s leading synthetic data company.