
We sat down with Maximilian Gahntz, Mozilla Foundation's AI Policy Lead, who works on questions around the regulation and governance of AI around the world. Previously, he also led work on data governance and platform accountability.
Before he joined Mozilla, he was a fellow of the Mercator Fellowship on International Affairs, working on the EU’s AI Act at the European Commission.
We talked about current trends in AI policy development, the issues with media representations and political interpretations of AI, and more.
Transcript of the episode:
00:00:11 Domen Savič / Citizen D
It’s the 9th of January 2025, but you’re listening to this episode of Citizen D Podcast on the 15th of January of the same year. With us today is Maximilian Gahntz from the Mozilla Foundation; he’s the AI policy lead, working on questions around the regulation and governance of AI around the world.
Previously, he also led work on data governance and platform accountability, and before that he was a Fellow of the Mercator Fellowship on International Affairs, working on the EU AI Act at the European Commission. Happy New Year, welcome and so good of you to drop by.
00:00:48 Maximilian Gahntz / Mozilla Foundation
Happy new year, thanks so much for having me.
00:00:51 Domen Savič / Citizen D
Well, to jump right into the gist of it, we’re obviously going to talk about artificial intelligence, the AI act, the regulatory frameworks and maybe for an opening question is: How will the EU AI Act help or prevent the big tech capture of the AI field that we are facing today?
00:01:19 Maximilian Gahntz / Mozilla Foundation
It’s a good question and I think it’s important to also take a step back and think about what the AI Act actually is and what it isn’t, because in the end it’s a product safety law, not competition law. So the primary intention of the legislators here was never to make sure that there is no concentration of power or to, you know, curb the power of big tech.
The intention was to essentially make AI products in the EU safer and prevent harm first and foremost, so I think that’s just important context here, because obviously there are some implications. There are some tiered rules that might apply to bigger companies but not to smaller companies, and there are some exceptions, for example when it comes to open source.
But ultimately, it’s a product safety law, so I think if we want to talk about concentration of power and market power here, we need to look at this in a broader context, because there are other levers we can pull, and that regulators in the EU, in the UK and in the US are trying to pull as well. And in general, when it comes to the success or failure of the AI Act, I think it’s also important to say that it’s too early to really know, because there are so many vague provisions in the AI Act and so much implementation work to be done: standardization, secondary legislation, and a big code of practice process currently underway for developers of large language models.
So, in the end, what the rules that the AI Act puts forward will look like, we’ll only know in a few months or a few years, and then how these rules are enforced is actually the long game here. So it’s too soon to judge how well the Act is working out; we’ll actually have to wait a few years and then also just be, as activists and civil society organizations, very vigilant about how these rules are implemented, how oversight authorities are enforcing the law and then what companies are doing in the market.
00:03:39 Domen Savič / Citizen D
So currently the broad perspective of, let’s say, the AI landscape, if you look at it through the eyes of a regular digital user, is that you have a couple of big companies battling for market dominance and then some localized or individual tools that are being used by smaller actors.
Would you say that assessment of the AI landscape is correct or are there other players who we as activists should pay attention to when we’re looking at this situation?
00:04:31 Maximilian Gahntz / Mozilla Foundation
Yeah, I think naturally a lot of the attention right now is on the big AI models and their providers. If you look at what the press is writing about and what people are talking about, it’s ChatGPT and Gemini, and what those tools can and cannot do and how they’re implemented into different services. But I think it’s important to keep in mind that this is just the tip of the iceberg of the AI value chain, which is pretty long and complex. If we’re talking about the distribution and concentration of power here, it’s important to go up and down the value chain and think about who holds dominance in these different markets that are really important for AI.
Because in the end, obviously, what end users, consumers, people are going to be using or going to be affected by is the applications and how these tools are put into practice and deployed, whether it’s in a consumer setting or even by governments, so naturally a lot of the attention is there.
But if you think about what you need to build these applications, there’s a lot that goes in there. You have the big model providers, Google, OpenAI, Anthropic, and they train large language models, if we’re talking about generative AI here, using a ton of data and a ton of computing power and raw processing power. And to do that, they need a lot of data. So, who has the benefit here?
The companies that already collect a lot of data from other services, for example, or that have the capacity to enter into licensing deals with publishers or other rights holders, or who have the capacity to build a web crawler that can crawl large parts of the web, download the data and feed it into the model.
The other part is computing power, and there, if you look at who the big cloud providers are that provide that computing power, it’s roughly the same companies, or many of the same companies, that you also see being very active in the AI industry. It’s AWS from Amazon, it’s Google… So they actually have a lot of market power in the cloud market as well. Then there are the chip providers, where, when it comes to specialized AI chips, NVIDIA controls large parts of the market; I think it’s somewhere in the 80 to 90% market share range when it comes to AI chips.
So, the further up you go, you still see market concentration, and you can’t really look at any one of those levels in isolation; you need to look at each one of them in combination, because only that way will you actually see, if you want to put it in a poignant way, who holds power over AI.
00:07:48 Domen Savič / Citizen D
You previously worked on data governance and platform regulation… How does the area of AI, and maybe more specifically generative AI, differ from the situation we’ve had with digital platforms, with social media, with trying to regulate them for the past few years? Listening to you naming all of these actors, it would seem to me that it’s just a little bit of history repeating, right?
00:08:28 Maximilian Gahntz / Mozilla Foundation
I think in parts yes and in parts no. Obviously, the problem space is still somewhat different, because, for example, if you’re talking about safety and bias in AI, it’s a bit different with many AI products compared to online platforms and social media platforms, which have their own problems of bias that are deeply enmeshed in content moderation and recommendation algorithms.
And obviously this is a very salient topic right now, again, because two days ago Meta announced that they’re going to change their content moderation practices. So I think some of it is similar, but obviously, if we’re talking about content moderation, there’s been a lot of debate in the past 10 years around the limits and, you know, liberties of free expression and the value of preserving free expression, and that’s been politicized in different ways.
It’s probably something that will come up in the AI debate as well at some point, and we’ve already had some discussion around, like, do chatbots display political biases, for example, but that’s not the core of the issue.
So I think there are definitely lessons to be learned, and we should look at the different experiences we’ve made in different digital policy and other policy fields in the past ten, twenty, thirty years to inform new debates on AI. But it’s not going to be, you know, the same thing all over again, because there are just different equities and different, changing political circumstances, obviously.
What the European Commission and the new US government and the new UK government are going to be talking about in the coming years is going to be somewhat different from how political focus areas have been defined in past years.
So, I’d say there’s a shift in the broader political context here as well.
00:10:38 Domen Savič / Citizen D
Yeah. And is that going to, let’s say, positively or… I know I’m a bit naive… positively or negatively influence the way we’re addressing all of these… you’ve already mentioned some of them, you know, dangers or pitfalls of regulating this field?
00:11:00 Maximilian Gahntz / Mozilla Foundation
I would say yes. Umm, if you listen to policymakers in Brussels right now, for example, there is a lot of emphasis on European competitiveness. If you’re talking to people in the UK, there is a lot of focus on enhancing productivity, if you’re speaking to people in the US, there is a lot of focus on national security and AI, and especially in relation to China.
So, I do think that has changed significantly… and if you’re an advocate, you kind of need to adapt to that, because fundamental rights and consumer protection often aren’t the primary concerns of the people who are emphasizing those agendas. The challenge for us here is to figure out how we can still insert those concerns and make sure that, despite a broader competitiveness agenda, which is, you know, OK to have, fundamental rights aren’t kicked to the curb but are instead integrated as a fundamental part of that. And the same is true in other countries and other contexts as well.
But I do think it will drastically change how we’re having these policy debates in the coming years, because in the past European Commission mandate, for example, curbing platform power and the discretionary decisions that a lot of platform companies were able to make was very important to decision makers.
I think it still is, as we’re currently seeing play out in debates around, you know, Elon Musk and Mark Zuckerberg and how they’re inserting their voices in the EU. But overall, the debates are going to be different, and there’s going to be a lot more talk about competitiveness, national security, productivity, etcetera.
00:13:14 Domen Savič / Citizen D
OK. And how would you, let’s say, rate the impact of AI on these topics: on national security, on competitiveness, on the labor force? Because on one side you have these public conversations where the AI camp is saying, “OK, AI is here, it will solve all of our problems, increase productivity…”, on the other hand you have skeptics who are saying, “Oh no, the current state of the AI market isn’t sending very strong signals that these tools can actually do the jobs they’re being highlighted for”, and then you have the pessimists who are saying, “OK, they’re going to take our jobs and we’ll be left with nothing”.
Of these three camps, which would you see yourself in?
00:14:24 Maximilian Gahntz / Mozilla Foundation
It’s a good question; it probably oscillates a little bit as well. Historically, I’ve probably been a bit closer to the skeptics’ corner, but I think it’s also important to disentangle a couple of different things here, right?
There is a lot of hype around AI. Is it a bubble? Maybe yes, but I think we’re also seeing that some things might stick. If you’re talking to people in the creative industries, you’ll hear that, you know, AI isn’t going to replace everything, but it might replace some things. I just listened to a podcast with a composer who writes jingles for advertising, you know, generic background music.
That’s something that AI can do reasonably well by now, and I think there are more and more areas like that where we’re starting to figure out what it might be good at. There are many areas where it’s not good, where it’s made out to be this magic bullet, and eventually we’ll find that maybe it’s not that magic bullet.
So, I think over time we’ll also just have to figure out what the nuances here are, and stop using AI as a blank canvas that we can project anything we want onto. If you look at party manifestos in Germany, we’re voting in a couple of weeks and parties have just released draft manifestos, everyone’s mentioning AI in very vague terms; it’s supposed to solve a lot of challenges.
Will it solve those challenges? Probably not, at least not the way that people think it will.
So, I think overall, as the debate matures and the hype maybe dies down, we’ll be able to have a slightly more grounded conversation about where AI is really going to change things. And obviously, with that, also about the benefits and the risks of AI, because it’s not necessarily a net positive in every respect if AI is really good at a job, if we’re talking about applications used for surveillance or used at the European borders in the migration control context.
There are things where AI might actually be quite effective, and we still might not necessarily want it deployed at scale in those contexts, which I think is also important.
00:17:14 Domen Savič / Citizen D
How do we direct the public debate, or, you’ve mentioned political campaigning before the elections, right… so how do we direct the public debate around the pros and cons of implementing AI in particular fields, as opposed to just going all out and saying, “OK, just put AI on it, it’ll work”?
Who should contribute to that debate? Or how do you see different actors meshing together on this topic, like the public framing of the power, or lack thereof, of artificial intelligence solutions?
00:18:04 Maximilian Gahntz / Mozilla Foundation
I mean, the big caveat here is that I’m not a campaigner. I’m a policy wonk, and I’m constantly struggling with the messaging aspect of doing this work. How do you actually change a dominant frame in a policy debate? And I would say often it’s not up to you… It’s just something that happens and that suddenly drastically changes how debates are being had in Brussels or DC or London or Berlin or wherever.
Case in point: working on AI policy, and working, for example, on the AI Act, was a very different job before the release of ChatGPT and after the release, because that really kicked off the frenzy, right?
That really changed the discourse and changed the emphasis of a lot of policy debates. Still, I think it’s important to educate decision makers about how this technology works, because there is a lot of talk of AI as this sort of mystical thing, when it’s computing. It’s non-deterministic, which is why the unpredictable stuff happens, but in the end it’s hardware plus software plus data.
So I think demystifying AI is important, and also, you know, being mindful of who the hype merchants, if you want to call them that, are: who the players are who have a vested commercial interest in playing up this technology and talking about future scenarios such as “AI is going to upend everything,” because they might stand to benefit from it, and because it’s good marketing.
And then also, making sure that there is space for people to think and talk and debate about where we want AI and where we don’t want it, and then how, because this is going to be, I do think, relatively pervasive in different areas of life, and sometimes it might just replace human decision making.
And then it’s a really important question: in what domains? What decisions? What’s at stake there for people? Especially given that there is often a tendency to experiment with those technologies on marginalized, vulnerable, disenfranchised communities. And we need to make sure that even when the zeitgeist currently dictates that the EU needs to watch out for its competitiveness and have a strong position vis-à-vis other geopolitical actors, that doesn’t void the need to talk about risks and fundamental rights, because ultimately that’s the baseline for a flourishing society, right? That people retain their rights and liberties and that technology does not interfere with that, because it’s not a deterministic development that we’re seeing here.
And ultimately, it’s up to us to help shape how this technology is being developed, how it’s commercialized, how it’s deployed. That’s not something for a few techies to decide; it’s something for a broader debate within society in the long run.
00:21:51 Domen Savič / Citizen D
You’ve mentioned policy development and we’ve been talking about the EU, but what are your thoughts if you, let’s say, compare the US, the EU and China in terms of regulatory frameworks or policy developments focusing on artificial intelligence? What are, in your opinion, the main differences and what are some similarities? Is there a common ground that literally the whole world agrees on?
00:22:35 Maximilian Gahntz / Mozilla Foundation
I’ll have to say I’m not an expert on China; there are people who know much, much more about tech policy and tech in general in China, so take what I’m saying with a grain of salt.
I think over the past maybe five or six years, one very common narrative we’ve heard is that of the three different pathways in tech and tech policy, right? There’s the US, which is completely laissez-faire and deregulatory: we’ll just let our companies do whatever they want. There’s China, which is putting its foot down and being very prescriptive. And then Europe is the middle ground, where we’ll have innovation but also preserve European values.
There’s probably still some merit to that, but I also think it’s a bit of an oversimplification, because if you look at the US, at least for the past couple of years, it hasn’t been completely laissez-faire.
Obviously, there’s always been a lot of gridlock in Congress, so Congress doesn’t really pass a lot of new laws, also on AI, even though a lot of bills on AI were introduced in the past year or two. But there are many different states trying to regulate AI, so if we were talking about the EU, that would be like a lot of member state regulation in the absence of EU-wide regulation; in the US, it’s state regulation in the absence of federal regulation.
In California and, I think, Colorado, there has been a lot of debate and some laws on AI have already been adopted, and there was a big executive order by the Biden administration in late 2023 that put a lot of rules on, for example, government use of AI… So, I would say the idea that nothing is happening in the US is slightly outdated.
Obviously, less has happened in the US than in the EU, for example, and then we’re going to see what happens with the new administration. The Biden executive order on AI is probably going to be repealed, but it’s kind of hard to foresee right now what the Trump administration will do on AI, because there are also competing camps.
Obviously, there are a lot of venture capitalists who supported his campaign, and there is also Elon Musk, who’s a big AI safety proponent and talks a lot about existential risks from AI. So it’s going to be interesting to see that play out, but overall we can probably expect a bigger shift of focus away from AI ethics and civil rights toward national security and American competitiveness.
And then on China: when people in the EU have been saying, you know, “We’re the first to regulate AI”, that’s also only semi-true, because China put forward rules and regulations for platform recommender systems, for example, but also for large language models a while ago. I can’t really speak to the exact nature of those rules and how effective they’ve been, but legislators and policy makers are very active on AI around the world, not just in the EU.
And that’s just talking about the US, the EU and China, right? That hasn’t even mentioned a lot of what’s going on at the international, multilateral level, because the G7 has adopted its code of conduct on AI, and the OECD and the UN are increasingly inserting themselves into policy debates and building out that capacity. So much more will happen at the international or transnational level as well in the coming years.
00:26:50 Domen Savič / Citizen D
And speaking of this happening on many levels, is there a way to sort of dissect the actual chamber of power, where decisions are being put together and implemented?
So again, listening to public debates, listening to political speeches, listening to discussions from the industry, it seems that they all, these different sectors, all have their perception of where the real decision is being made, right?
The industry is saying, “Oh no, you know, the regulators will just come and mop up everything that we’ve been doing for years”, the regulators are saying, “No, no, no, these decisions on regulatory frameworks are being written by us and we are the smart ones”… So how would you see this flow from innovation toward regulation actually working out for the general population?
00:27:59 Maximilian Gahntz / Mozilla Foundation
I think, at least for us in Europe, it’s also just democracy at work, with all its flaws and benefits and opacities. Because obviously, when we learn about our political systems in high school or at university, if you studied political science back in the day, you get a very formalized process that you can draw on a sheet of paper, and nominally that’s still how it works.
But obviously, the reality is much messier, and there’s a lot of lobbying, which I would also use as a value-neutral term, because there are a lot of people lobbying for important and good things as well. There’s a lot of backroom dealing, and there’s a lot of public back and forth, depending on how politicized an issue is.
And I don’t think that necessarily differs with AI from other policy fields… Obviously, that’s the one I’ve been following the most, but I think the old blame game, about how regulators are over-regulating, how US companies are profit-driven and thus trying to preserve their interests, and then wherever civil society and other actors fit into this, is largely the same.
Obviously, if we’re looking at the AI Act, for example, or other tech policy files in the EU and elsewhere, I do think it tends to be an uphill battle for civil society, and everyone who works in civil society and has tried to engage on tech policy files will know this, because there are other actors with far more resources to put into this and big public policy and government relations teams and agencies, which just gives them a leg up.
So, the challenge of coming together as civil society and, despite maybe having fewer resources, making your voice heard, that’s one that won’t change; we just have to come up with strategies and scheme together.
00:30:18 Domen Savič / Citizen D
Speaking of this exact issue, working as an activist or as an advocate, do you think it’s better to focus your attention or your energy on the local level, on the national parliaments and governments, or is this exclusively a Brussels game?
00:30:43 Maximilian Gahntz / Mozilla Foundation
Definitely not exclusively a Brussels game. I think it really depends on the issue and on the political context. Obviously, if you live somewhere where your local politician is the rapporteur for a big legislative proposal in the European Parliament, that gives you different leverage than if someone from a very different EU country is responsible for that file.
I think there’s always value in engaging locally, but when it comes to tech policy, I will say a lot happens in Brussels, and Brussels has its own logic and structures, and I say that as someone who’s doing this job from Berlin.
There are people who are much more in the center of it, and I’m more of a tourist who every once in a while goes to Brussels. I think it really varies and it really depends, but it just requires you to become a little bit familiar with how the process works and who the people are that you want to talk to in order to effect change.
00:32:03 Domen Savič / Citizen D
And looking forward, and slowly wrapping up our conversation… What are some of the things you are paying attention to as the year progresses? We’re doing this at the beginning of January, so what are some of the policy developments or interesting topics that you are focusing on in the field of artificial intelligence?
00:32:31 Maximilian Gahntz / Mozilla Foundation
I mean, there’s the legislative side of things, because we’re still waiting to see what the new European Commission will do. For AI, there’s debate about an AI liability directive, which would basically set rules for civil liability for AI developers, and we’ll see where that is going, or whether it’s going anywhere.
The UK is working on an AI bill, and I think on Monday they will release an “AI opportunities plan”. It’s currently unclear what exactly that means, but it’s more about the investment side and less about the regulation side.
We’ll have a lot of debate about copyright rules, for example, and how those might or might not need to change, and what other plumbing there is in our copyright and rights enforcement system to make sure there’s some balancing of interests between tech companies and creatives and rights holders and journalists, for example. So, I think that’s the nitty-gritty policy side.
Obviously, there is an overarching development that we’ll all be watching, which is how the new US administration is going to work. How are they engaging with, for example, the EU in general, but also on issues like tech? Because this has the potential to be very contentious, and we’re already seeing some posturing right now around, for example, the EU imposing rules on US tech companies, which the Trump administration may not like, and there are a lot of US tech executives who are also starting to play into that narrative a bit more.
So, we’ll have to see, you know, whether EU tech policy is suddenly going to become part of trade policy debates, because if the EU starts sanctioning American companies, that may or may not trigger a response from the US… So, I think at the geopolitical level things are going to get much more complicated and probably a little messy, and I think it’s important to get a better understanding of how that might pan out and then what different actors, both at the government level and at the civil society level, can do here.
00:35:07 Domen Savič / Citizen D
And before we wrap up, just one more question… AI went through several periods in its history, going back to the 1970s. Looking at the current situation, the players, the regulatory frameworks, the situation in the industry, can you sort of describe what the next AI winter would look like?
00:35:42 Maximilian Gahntz / Mozilla Foundation
I think you’d have to ask someone else. I think that whoever gives you an answer to that, it’s a creative exercise. It’s hard to predict how hype might die down or not, or what technical developments will look like… It’s good to remind yourself of what the market forces are here and to think about some of the trends in AI, because what’s safe to assume is that this debate and this landscape are going to continue to change.
There are a lot of people who talk about acceleration, and some people who talk about a bubble bursting, but I think you have to look a bit closer at, you know, what might happen within those scenarios.
There are so many different scenarios for how this might pan out. AI might just become a commodity that at some point gets much more efficient than it is right now, much less energy intensive. We won’t be talking about big models as much anymore because we’ll all have a small model on our phone, and then it’s just, you know, regular computing. It’s a new technique, and it’ll be good for some things and bad for other things.
That’s one scenario. Obviously, we might also continue on the “bigger is better” trajectory that we’ve been on for the past couple of years, where companies just keep throwing a lot of money and capital into this to build ever bigger and more powerful models. And then we really have to talk about resource constraints, both economic resources and natural resources, because we know that developing AI models, and actually using them, is super energy intensive and has real environmental impacts, and it’s no coincidence that a lot of big American technology companies are currently talking about, or already, bringing nuclear power plants back onto the grid or building their own power plants.
So, everything might get bigger, everything might get smaller, or land somewhere in between. And then what? To be honest, we’ll just have to wait and watch and be vigilant, because as someone who’s not on the technological front line but rather an observer of the technical ecosystem, trying to think about what the societal and political consequences are, you’ve just got to make sure you understand what’s going on in order to be able to act and then, ideally, shape where things are going.
00:39:05 Domen Savič / Citizen D
And I would assume that goes for self-aware AI as well. I would love to hear a few more thoughts on this, because it constantly pops up; every time the industry, it seems, needs a PR boost, Altman and others start talking about self-awareness.
So, is this a feasible or realistic path, or is it just something that the PR department needs to put out to ramp up the groupies?
00:39:46 Maximilian Gahntz / Mozilla Foundation
I mean, personally, I have a lot of skepticism about this, but ultimately I don’t know, right? I think on average humanity is probably not super great at predicting the future exactly, so I’d have my doubts that, you know, Altman, or some tech executive, or, you know, some civil society advocate is actually going to predict what’s going to happen in 2040.
I do think there’s a lot of hype, and I do think there are a lot of narratives being pushed because someone has an interest in pushing them. I think what’s also clear is that the technical community very much doesn’t agree on what’s happening there, though maybe some voices get a bit more airtime because they’re, you know, powerful CEOs or famous computer scientists who’ve won a Turing Award. I think these are all positions that have merit, but we also need to look at them as predictions of what might happen rather than as someone actually knowing what’s going to happen.
I think there’s been a bit too much of that in the press, and I think we all just have to learn how to deal with uncertainty, to be honest, because in AI it’s a bit crazy right now and everyone’s imagination is running wild. In the end, we’ll just have to see which person, by pure coincidence, happens to be right. But that only applies if we’re talking about the very long term… if we’re talking about what’s going to happen in two years, that might be different. But I’m never going to make a prediction about 2050.
00:41:35 Domen Savič / Citizen D
…unless the year is 2048, right?
00:41:39 Maximilian Gahntz / Mozilla Foundation
Yes, we can meet again in 2048.
00:41:44 Domen Savič / Citizen D
I’ll take you up on that. Thank you, Max, for dropping by. This was the first 2025 episode of Citizen D; we publish an episode every month and focus on a range of topics, from digital policy and regulation to human rights advocacy. See you next month!
Citizen D advice:
- Pay attention and listen to arguments, not wishful thinking
- Political arguments stem from media representations
- AI needs a wide discussion on different social impacts
More information:
- Mozilla’s research: Unlocking AI for everyone, not just Big Tech – article
- Online life is Real life – podcast series
- Redirecting Europe’s AI industrial policy – analysis
- Accelerating Progress Toward Trustworthy AI – whitepaper
- Generative AI and Labor: Power, Hype, and Value at Work – primer
- Critical Dependencies: power consolidation of digital infrastructures – report
About the podcast:
Podcast Citizen D gives you a reason for being a productive citizen. Citizen D features talks by experts in different fields focusing on the pressing topics in the field of information society and media. We can do it. Full steam ahead!