089 Philip Di Salvo and the question of media in surveillance capitalism

We sat down with Philip Di Salvo, a researcher in the fields of whistle-blowing, investigative journalism, internet surveillance and the relationship between journalism and hacking, to discuss technology regulation, keeping artificial intelligence under human control, the media framing of these topics and possible ways forward.

He is a postdoctoral researcher at the Institute for Media and Communications Management at the University of St. Gallen, Switzerland.

As a freelance journalist, Philip contributes to the Italian edition of Wired, where he writes about media, the Internet, technology and culture. His fields of interest include digital whistleblowing, media censorship, digital journalism and the impact of new technologies on information.

Transcript of the episode:


00:00:10 Domen Savič / Citizen D
Welcome everybody. It’s the 17th of May 2023, but you’re listening to this episode of the Citizen D podcast on the 15th of June 2023. With us today is Philip Di Salvo, a researcher in the fields of whistle-blowing, investigative journalism, Internet surveillance and the relationship between journalism and hacking. Welcome.

00:00:33 Philip Di Salvo
Thanks a lot for having me. It’s a pleasure to be with you today.

00:00:37 Domen Savič / Citizen D
Before we start with surveillance capitalism, mass media, whistle-blowers and all the other topics we’ll address in this episode of the Citizen D podcast, I’d like to hear your opening statement about the role of journalism in the information society and in surveillance capitalism – how do you see it differing from, let’s say, the analogue, offline societies we used to live in in the past?

00:01:10 Philip Di Salvo
Well, I think the role of the media, and the role of journalism at large, in this discussion is about power. The role of journalism in all these debates – in this overall context of digitization, datafication and the growing application of digital technologies in society – is still that of trying to re-balance power, and in this sense I don’t think it is particularly different from how it used to be before the Internet or before anything digital.

So in those regards, I think the role is still that of holding those in power accountable and of shedding light on how power operates and how it is applied in society. And digitalization has paradoxically made power even more insidious, and sometimes even more invisible.

So in this sense, I believe that the role of journalism is so much more important now than it probably used to be before.

00:02:26 Domen Savič / Citizen D
Now, when you talk about power – I remember going back, let’s say, twenty years to the early 2000s, when we used to hear these, I’m going to say fairy tales, about the democratic potential of technologies, about the participatory nature of the Internet, about, I’ll go back and say, web blogs, but other publishing platforms as well…

Who has the power in today’s information society, and why do we constantly fail to deliver this notion of democratic technology that will enable everybody to have the power or to be in power?

00:03:10 Philip Di Salvo
Well, if we look at what the Internet is today, I’m afraid most of those things we used to believe in the early 2000s, and even before that in the 90s, didn’t materialize. They didn’t materialize, and not because the technology is somehow bad or because the technology is used in bad ways – they didn’t materialize because of a specific economic and political arrangement that has become mostly hegemonic in how the Internet functions today. I know we will discuss surveillance capitalism later in our conversation, but what we have today is the precise outcome of a series of decisions, and I would say it was decided to be as such. The Internet didn’t evolve into being such a centralized, commercialized place because it followed some natural events or some natural, inevitable directions.

It became this because the drivers of that arrangement have won. The Internet wasn’t necessarily supposed to be a place of deep commercialization; it wasn’t supposed to be a place where surveillance is the dominant business model and the dominant power balance online. It became this because commercial drivers, commercial companies and money-making players have exploited the Internet in a direction that was effective – and we cannot say that the Internet doesn’t work effectively today, it definitely does, but with consequences that have become increasingly visible and impactful on the lives of everyone who is on the Internet. I think what we should do today, in 2023, is try to revitalize some of these discussions and see what the spaces and the potential alternatives are for building something different. There are days when I’m particularly pessimistic and think that this is no longer possible, that it is almost totally utopian to imagine alternatives to the status quo we are in.

But today is actually one of the good days, so I am a bit optimistic, and I think that before we start thinking about how to do that, we really need to start thinking about how to make space for different ideas, and then about how to build them. That is a different story, and it’s even more complicated, but what we need to do from the very beginning, from scratch, is try to make space, which means getting out of the mindset that sometimes makes it very difficult for our imagination to even picture something different.

There is a sort of realism here, I would say, quoting Mark Fisher, who referred to this as capitalist realism. There is a form of realism also in how we think about how the Internet functions today, and that’s precisely what we should start to challenge, because once you open space for imagining an alternative, you make space for it, and then you can really start a discussion – technical, practical and political – on how to build alternatives.

00:07:25 Domen Savič / Citizen D
You’ve mentioned that we are constantly having these debates, that we are almost spinning in a circle when we discuss the democratic potential of technologies, the openness of the space and so forth. But do you feel that with every circle around the sun, when we debate these things, the circles are getting smaller and more, let’s say, inevitable – more without any possible alternative outcome that could, as you said, open spaces and create these alternative realities?

00:08:12 Philip Di Salvo
I think we recently saw that something different is possible in Italy, my home country, because – well, it has become an international case, so I’m pretty sure the listeners will be aware of it – Italy has been the first country to try to regulate how artificial intelligence operates. That happened because the Italian Data Protection Authority opened an investigation into OpenAI and how ChatGPT is functioning.

The investigation looked at how ChatGPT has been developed and how data about Italians has been gathered in order to build the machine. That has been really the first attempt by a political authority of any sort to intervene in how artificial intelligence is being developed today, and it’s interesting on many levels.

It’s interesting on the level of data justice, because it opened a discussion about how data is treated when it comes to the creation of artificial intelligence, and at the same time it has been a direct intervention into how a highly impactful technology is being developed today.

I have nothing against OpenAI – I think they’re building something great, I think they’re doing excellent work – but we cannot allow what happened with the Internet at large ten years ago to happen again, so we cannot allow companies to drive this discussion completely.

We cannot allow them to make their own rules, live by them and force everybody else to adopt them. Not because this is eminently wrong, but because this approach doesn’t work, and we have seen it with social media. We have seen it with other technologies and other actors in this discussion, so I think we now have a decade-long experience in how not to drive these discussions and these debates.

And the interesting thing is that it was the Italian Data Protection Authority – Italy is not one of the countries really at the core of this, I mean, it’s not the US, it’s not the UK, it’s Italy.

But the fact is that someone raised a hand and said: well, wait a second, you’re building something interesting, no one is denying this, this is potentially great, but how are you doing it and how are you applying it to people?

So I think that was an absolutely good sign, and it is potentially also a sign that it is possible to have a say in this discussion, that it is possible to decide what we want to do and what we don’t want to do. And I see this also in what happened last week with the Artificial Intelligence Act at the European level, which is trying at least to set some limits and some clear rules on what cannot be done with artificial intelligence. If I take these two things together, I still think – probably also because this is one of the good days, when I’m optimistic – that something can be done.

So I think the important thing is not to think that nothing can be done. Something can always be done, and I really think that what happened in Italy with OpenAI is a sign of the way forward, a sign that states have a say in these discussions. Like it or not, we have to rely on these systems, we have to rely on regulation, and I think it’s healthy that this has happened at a moment when ChatGPT is being developed but is not yet dominant, in the sense that you cannot live without it.

So it’s very interesting that the intervention has happened now, because it can be followed by others, and potentially this can be a new driver of this debate and of how artificial intelligence will be developed from now on. That, I think, is a direction for a different imagination around these technologies.

00:13:17 Domen Savič / Citizen D
And when you were talking about, let’s say, state regulation or regulatory models that address not just ChatGPT but artificial intelligence and, let’s say, global technology – how do you see the difference between regulatory attempts by states within the European Union, or even more precisely in the western part of the European Union, and the same or similar attempts that are now being put in place in, let’s say, the more eastern part of the world, like China and elsewhere?

I have an anecdote I usually bring up when we’re discussing regulatory models and, let’s say, self-regulation of the industry. I was attending an international conference in Kotor – this was back in 2016 – and we were discussing this issue of Facebook and self-regulation versus regulation. The participants coming from, let’s say, the more eastern part of Europe were absolutely shocked at the suggestion that the state should or could regulate these systems, because back then, in those countries, Facebook was still a platform of expression that wasn’t controlled by the government, while the participants from, let’s say, Italy, France, Germany or the UK were absolutely focused on regulatory models, saying that self-regulation doesn’t work, Mark Zuckerberg doesn’t know what he’s doing, and the state should really put its foot down.

So I would like to hear your thoughts on these different perceptions of regulatory models, and maybe to counterpoint them with the current state in which these big tech giants or intermediaries find themselves, where you can see they’ve lost quite a lot of the goodwill they had with the people, the regulators and the politicians five or ten years ago, and are now basically the bad guys – the baddest of guys – in this debate.

00:15:52 Philip Di Salvo
Well, I’m not a regulation expert, but I think we can tell that self-regulation doesn’t work when it’s applied to technology, and we see it clearly if we look back at everything that happened with social media in recent years. 2016 was an interesting year for many reasons – in politics, in international affairs and also in technology – and we saw there, more clearly than in other instances, how leaving companies to make the rules and decide how to apply them is not a good way of addressing these issues. So I believe there is a need for clear rules within which companies can operate, as we have rules for many other fields.

I mean, basically every other economic field operates under rules set up and decided by public bodies, so the same should of course apply to technology, because otherwise we are simply relying on the assumption that companies are eminently good, that they know what innovation is and that they know how to deliver it, which I think is fundamentally wrong – and visibly so, I would say.

So yes, we need rules decided by public bodies, and at the same time we need clear rules for avoiding problems, and we also need to… I mean, I don’t like the term, but it works in the context of the sentence: we need to make sure that innovation can still somehow be created.

So again, I think what the European Parliament, as the leader so far when it comes to regulating artificial intelligence, has done is good, because it starts from a good principle. I mean, the AI Act is based on a risk-scenarios approach, and consequently technologies are labelled on the basis of their risks, which I think is a good way of putting down in black and white what the dangers are when it comes to artificial intelligence, while at the same time leaving space for other, good things to be developed. So generally speaking, I think that’s a good approach, and I’m glad to see that Europe is going in that direction.

But of course we need to see how the draft of the regulation will be discussed and what its final outcome will be. Still, I think it’s a good approach, and we are seeing, for instance, a clear position against facial recognition, a clear position against the most abusive uses of artificial intelligence in public spaces, which I think is brilliant so far.

Then we will see what the outcome is… so that would be my general comment on how regulation should be put in place. But at the same time, we shouldn’t forget that it’s really a case-by-case discussion here, and again, I’m not an expert, but I doubt there are principles and ways of regulating technology that apply equally to each and every technology we can discuss.

So I think that when it comes to digital technologies that have impacts on society, like AI, this risk-based approach is definitely a good idea, but I cannot answer your question when it comes to other contexts, because I’m not completely familiar with them.

00:20:00 Domen Savič / Citizen D
But looking at the role of an individual user, or a citizen, or a consumer – take a step back and look at, let’s say, the GDPR legal framework and the power, or the importance, it gives to an individual user who takes care of their rights and data, and at other instances in the information society.

How would you interpret or explain the change? Apparently the AI Act doesn’t pay particular attention to the individual, but turns its regulatory flashlight towards technology and towards the AI systems that are in place.

Do you see that as a learning moment for the regulators – did they realize that we cannot put too much emphasis on the end user but have to think more broadly? How do you see that difference?

00:21:22 Philip Di Salvo
Generally speaking, I think that for too long we had these discussions based on individual decisions. Around 2013, when the Snowden revelations were published, I remember that one of the leading frames in which the discussion was constructed was pointing fingers at the users, saying: well, you have given all this information to Facebook and the others, so you cannot complain, etc.

We have been feeding the machine for years, but that’s an extremely limited way of framing the discussion. It’s public knowledge that what we willingly and explicitly tell social media about ourselves is just the tip of the iceberg of what these companies can know about us, simply by getting data from data brokers or by acquiring information from the offline world and merging it with the online one.

The way in which the digital economy works is far more rhizomatic than how it is usually described. So yes, I post stuff on Instagram about myself, but the information that Meta can have about me comes from so many sources… It has been documented, for instance, how Facebook is capable of profiling non-users on the basis of what the company can get from its own users.

So I think what we have come to learn about all these dynamics is definitely a sign that discussing this only on the individual level doesn’t work. Of course there are steps that each and every one of us can take to mitigate the consequences of this state of things, but I think we cannot, and this goes beyond regulation.

We cannot look at this only in this way, because I think all these arguments are based on an understanding of digital technology that is… stuck in time. It’s really about how we used to think about the digital at its beginning… Now that is so difficult, because the digital is so powerful, so interconnected with basically everything we do, that we inevitably need to look at it in a more systemic way.

So again, I don’t know how to put this into regulatory frames, but when it comes to discussion and awareness about what is happening, I think it is important to constantly focus on these issues in a collective way. We see this clearly, for instance, with privacy, which is definitely one of the issues of most concern in these discussions.

But to most people, I think, privacy is something obscure, something that is really thought of as regarding their inner selves and not as a collective justice issue. That’s why, for instance, I really like the data justice concept, which is a good way of framing these problems in terms that are, I think, less obscure and more connected to the lives of people.

In practical terms: we have social justice outside the Internet, in the way in which we are treated, in the ways in which we play a role in society or have a say in society, and the same basis should apply on the Internet as well. We should definitely look at these issues as justice issues.

So to go back to my initial example: when the Data Protection Authority in Italy addressed OpenAI and ChatGPT, I think that was a quintessential data justice discussion. Of course, it was based on privacy, it was based on how personal data had been used, but it’s a justice issue at large – it is about deciding what others can do with ourselves, it is about putting a limit on what companies can do with our identities, with our expressions, with what we have been putting on the Internet so far. So it is a justice issue, and as such it is absolutely a collective issue, and I think the evolution of these debates over the last ten years – if I look back again at the Snowden revelations and at where we are now – has definitely been positive.

So I won’t be surprised if all we have just discussed is also implemented in the policies and regulations that are now being drafted.

00:27:06 Domen Savič / Citizen D
Since you mentioned Snowden, I’d like to move on to the next topic: the role of, let’s say, not the journalistic sector but journalists themselves in surveillance capitalism, and their need to protect their sources, their own identities and their work, and so on.

There are instances of journalists being hacked, being targeted by spyware and other types of malware, which sort of put the whole personal-responsibility angle in question. As you already mentioned, Facebook, Meta and other tech giants are gathering information about us from many different places, and they’re using it themselves or selling it to the highest bidder.

And in that regard, this notion of, let’s say, personal responsibility, of digital literacy, of digital skills, of everything that the European Union, too, is now pushing to the front line of digitalization, can be put under question: is this enough, or are we again shifting the responsibility to an individual user, to a journalist?

Are we still prolonging this total avoidance of responsibility by the industry, by the actors in this field who are actually doing the, yeah, spying?

00:29:09 Philip Di Salvo
Well, I think what is crucial here is the notion of the black box. In most instances digital technologies are black boxes, in the sense that they are obscure and we know very little about them and about how they work – we see this with algorithms, we see this with artificial intelligence, and again we see it with the surveillance market.

So when you face something like that – and this is why at the beginning I mentioned power being obscure in the digital era – you clearly see that there is a power issue there. If something is allowed to operate without being accountable, I think it’s an abuse of power in general terms, and in those regards journalism is needed in order to shed light on these systems and to open the black boxes, as is often said. Whistle-blowers and other sources have been crucial for this, because when you don’t have access to these systems, when you don’t have ways of gaining access, you need to rely on others who have access and decide to share information with a journalist or with the public. So when it comes to investigating technology, I think whistle-blowers have been playing a crucial part, especially when it comes to big tech companies – one of my recent articles is about whistle-blowers coming from big tech companies and the kinds of invisibility they help to shed light on.

But as you were mentioning, relying on whistle-blowers – if you are an investigative reporter in this context, in this digital scenario where surveillance is rampant – is extremely dangerous, because the surveillance practices out there, available to governments but increasingly also to other actors, are numerous.

In 2021, the Pegasus investigation really shed light on the ultimate nightmare scenario in these discussions, because spyware, and the most advanced instances of spyware technology, really jeopardizes any discussion around information security, any discussion around encryption. I don’t want to go into the technical side of things, but it basically makes any technical effort in security obsolete

if spyware is capable of infecting my device without me making any mistake, simply through a remote action. So this is where we are: we are in a situation where journalists can be put under surveillance with absolutely no scrutiny and absolutely no way of it being found out. And it is also partly connected to how the Internet functions – the Internet is full of bugs, full of vulnerabilities.

It’s full of loopholes that can be exploited, and the means by which this can happen are frequently in the hands of those who are capable of using surveillance against actors like journalists, activists, human rights defenders and the public. What journalists can do in this situation is only mitigate the risks.

And, of course, not every journalist is exposed to the same risks as someone doing national-security-level investigations, but all people, including journalists, are exposed to some of these risks – it’s just that the threat model is different; in principle, the risks are the same.

And this is really a scenario where the individual is left alone, because the surveillance market, even more insidious than other black boxes, is absolutely not regulated in a proper way. You can shed light on one company, as happened with NSO and Pegasus, but that’s just one piece of a much broader puzzle.

I think regulation on surveillance is limited. It should really be systemic, and it should really – I mean, that’s my view – state that certain technologies shouldn’t be developed in the first place, because there is no way of using them without creating harm, and spyware is definitely in this area. Most people will argue that we need it for conducting investigations, that we need it for law enforcement.

And I can agree on a certain level that this is true – it can make a difference in certain scenarios, in certain investigations – but I’m inclined to be in favor of this use only if clear regulation, a clear set of rules, is in place, because otherwise, as we have seen way too often, it is a slippery slope: you start by allowing the use of these things in certain situations and then they become normalized,

just because the technology is there. So if we want to go in that direction, we need a broad, solid, clear set of rules about what cannot be done. But again, here is where I’m particularly pessimistic: even if you have rules, it’s very difficult to control the market, and the producers of surveillance technologies have been very effective in avoiding regulations and in finding ways of doing business anyway. So we are really in an open sea where journalists and others are left alone, and it’s an open battle.

Progress has been made, but we are still far from a situation where we can feel safe, and this cannot be solved with technology alone. I mean, we cannot continue to say that journalists need to protect themselves with encryption, full stop – that’s not enough.

00:36:13 Domen Savič / Citizen D
And speaking of journalists, encryption and privacy in surveillance capitalism, there’s one more issue, and it pertains to Italy as well, so we might find some new connections – the geopolitical debate around Chinese tech making its way into Europe and into the US. There have been quite a few debates, arguments, even legal propositions on both sides of the Atlantic Ocean.

These concern banning Huawei, ZTE, Hikvision and other Chinese giants, and at the same time they are underlined by the realization that you literally cannot survive without Chinese tech being present on different levels in the West and in the US. So on one side you have the cold, hard silicon of Chinese technology being installed in the West, and on the other hand you have these legal proposals, the regulation that tries to ban it or stave off its influence in our parts of the world. How do you see that developing in the future? And do you think banning or limiting it – almost creating an alternate splinternet – is an effective solution to the problem?

00:37:59 Philip Di Salvo
It’s an interesting question. Well, I think that overall, when you can refer to a technology by saying it’s Chinese or it’s Russian, that’s always an easy argument for pretending that the problem exists only in China or in countries where democracy is not protected or non-existent, and that’s a very limited view.

We’ve seen it with TikTok as well – the US authorities are very concerned with TikTok because TikTok is Chinese, but they don’t have the same level of preoccupation when it comes to other companies, US-based or Western, that run pretty much the same business model.

That is one starting point of the discussion. And then, of course, buying surveillance technology from a surveillance state like China is concerning in two ways: for the impacts that the technology can have outside of China, and for the kind of economic and political implications embedded in the systems themselves.

Again, I’m not entirely sure that completely banning these technologies on the basis of where they are developed is the right way, because yes, you can buy surveillance cameras from China, but you can buy them from various other countries too: you can buy them in Western countries, you can buy them in Israel, you can buy them from all over the world, you can buy them from Italy – Italy is an excellent producer of surveillance technologies, as we know.

So I don’t think we should frame the discussion only in geopolitical terms, because that also limits the understanding of the implications of the whole situation. I think we should really focus on the technologies themselves, on the specifications they carry and on the uses they can be put to.

I think that is definitely the way forward. And I agree that the geopolitical traits of certain producers are more concerning than others, but when it comes to the technology itself, I don’t see that huge a difference – take, for example, the way these systems were applied in my home city of Como.

In Italy, in Como, there was an implementation of facial recognition systems – they were pushed by Huawei itself, because Huawei was producing the cameras and approaching the municipality, trying to get its attention. And as for the implications that facial recognition has for people,

I think that whether the camera is Chinese or Italian doesn’t really make a difference in the end, although the geopolitical implications and the political and economic elements of a Chinese surveillance camera are clear, I think.

When it comes to the adoption, the use and the harms that can be created, there is, in the end, not such a huge difference between a Chinese camera and an Italian one.

00:41:49 Domen Savič / Citizen D
Yes, I would agree, although – and this is sort of the final topic for today – the issue of getting a political response to particular instances or issues within surveillance capitalism might in some cases, at least in my opinion, be easier if there is an almost nationalistic approach added to the issue, right?

Nobody worries about surveillance capitalism as such. Everybody worries about surveillance – or rather, politicians usually worry about surveillance capitalism coming from, let’s say, China or some other tricky country, to say the least.

So how do you, as a journalist, when addressing these issues in your articles, in your investigative pieces, go about tagging the relevant parties in a discussion or when addressing a certain problem? Is there a blueprint, or something that helps you get past the current hype around an issue and really take a deep dive into a problem without being too, I’m going to say, sensationalistic, but also without being too cheap or too superficial?

00:43:26 Philip Di Salvo
Well, I think what is crucial is to always frame these discussions as things that relate to the real world, because too frequently when we discuss digital technologies or artificial intelligence and the impacts they can have, the discussion is conducted as if all these things operate outside of the world.

So artificial intelligence is artificial, the Internet is digital, it doesn’t have an impact on reality, it doesn’t have an impact on society, it’s not walking among people – which is incredibly misleading and incredibly dangerous, because for years it has obscured any potential critical discussion.

So the strategy for journalists, I think, is to always connect struggles around digital issues with other social or political struggles that are happening. That’s why, again, data justice is such a powerful framework, such a powerful way of addressing privacy, surveillance and algorithmic discrimination, for instance.

I think that for way too long, technology journalism, if you want to call it that, has been interpreted as a technical, consumer-electronics-oriented field of reporting, while it was political at its core.

So I think that’s the golden rule we should live by, and that’s the approach journalists need to take even when they address the most abstract, the most distant issues we can think of – even quantum computing is a political issue to me; algorithmic profiling is a political issue; surveillance, of course, is a political issue; social media regulation is a political issue; and so on.

And we should also constantly look at the most direct ways in which humans are involved in these topics. That’s why, among all the great reporting I’ve read about OpenAI, the Billy Perrigo story on the Kenyan workers employed as moderators to help the machine learn – on how they have been exploited and the horrible working conditions forced upon them – was, I think, one of the most interesting and revealing pieces of reporting I’ve read about ChatGPT, because otherwise all these issues are left unreported, unnoticed and easily forgotten.

So considering technologies as political, considering the real, physical world as the place where technology happens, is the starting point, because if you move from there you can have real social and political discussions around these things. And secondly, look at how humans are involved, because when we talk about facial recognition, yes, we are discussing computer vision, but we are also discussing discrimination applied to certain humans.

And if we discuss artificial intelligence and how it works, we also need to take into account that humans are involved – moderators are, of course, the most exposed group out there – and you can continue this way with every technology we are discussing. There is also an environmental issue, for instance.

So we should always frame these topics in this way, in order to make them more connected to the lives of people, to social justice, and to the political and profoundly human stories involved.

00:47:46 Domen Savič / Citizen D
A perfect time to end this episode of Citizen D. Thank you, Philip, for dropping by, and best of luck with your future endeavors.

00:47:55 Philip Di Salvo
Thanks a lot for the invitation. It’s been a pleasure to talk with you today.

Citizen D advice:

  • Rethink technology through the lens of human rights
  • Media framing of issues matters
  • Surveillance capitalism surpasses east-west politics

 More information:

  • “Leaking Black Boxes: Whistleblowing and Big Tech Invisibility” – paper
  • “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media” – report
  • “‘We Have to act Like our Devices are Already Infected’: Investigative Journalists and Internet Surveillance” – paper
  • Recent interview with Meredith Whittaker at the Wired festival in Italy (video, in English) – interview

About the podcast:

Podcast Citizen D gives you a reason to be a productive citizen. Citizen D features talks with experts in different fields, focusing on pressing topics in the information society and the media. We can do it. Full steam ahead!

