093 Sarah Lamdan and data cartels

We sat down with Sarah Lamdan, Professor of Law with a Master’s Degree in Library Science and Legal Information Management.

Professor Lamdan works with immigration groups on government surveillance issues, with library advocacy organizations on open access and researcher privacy projects, and with open government advocates on federal records preservation and access initiatives.

She recently wrote a book called Data Cartels: The Companies That Control and Monopolize Our Information, which focuses on the issues of internet feudalism and data monopolies, and on solutions to these issues. You guessed it – this will be the topic of our conversation.

Transcript of the episode:


00:00:10 Domen Savič / Citizen D

Welcome everybody. It is the 25th of September 2023, but you are listening to this podcast of Citizen D on the 15th of November 2023. With us today is Sarah Lamdan, professor of law with a master’s degree in library science and legal information management.

Professor Lamdan works with immigration groups on government surveillance issues, with library advocacy organizations on open access and researcher privacy projects, and with open government advocates on federal records preservation and access initiatives.

She recently authored a book called Data Cartels: The Companies That Control and Monopolize Our Information, and, you guessed it, this will be the topic of our conversation. Professor Lamdan, welcome. Welcome to the show.

00:00:56 Sarah Lamdan

Hi, it’s great to be here.

00:00:59 Domen Savič / Citizen D

It’s such an interesting read and it’s such a great topic to discuss, because you start with the issue of open data, of data cartels, of companies that control and monopolize our information from a very, I should even say, personal experience. So to get us going, I would like to know: what was the reason you developed an interest in this topic, and how did you go about researching it?

00:01:28 Sarah Lamdan

Those are two really good questions. So to answer the first question, I kind of fell into this topic by chance. In a way, I’m kind of the ideal person to write about it, cause I didn’t come in with any sort of agenda. I was a law professor and also I’m a librarian, so I deal a lot with information access and informational resources.

And I actually saw in the news that a lot of our research providers in the United States were vying to work with our Immigration and Customs Enforcement agency, ICE. And I’m not sure how it’s viewed across the world, but in the United States, especially in 2017, when I first started digging into this issue, ICE was problematic.

Immigration and Customs Enforcement in the United States was known for committing human rights abuses: separating families at the border, putting children in cage-like enclosures and just doing all sorts of really icky, gross things. So working with ICE wasn’t positive, right? It wasn’t about helping the immigration agency reunite families or helping people ascertain citizenship, it was really about human rights issues.

So I became very interested in why our research providers were working with ICE and what they were giving ICE, and I started asking. I asked our research vendors at the school where I worked: what is your company doing with ICE? And I asked in my professional organization, the American Association of Law Libraries: how exactly are Lexis and Westlaw, our main research providers, working with ICE?

And instead of getting answers, I got kind of censored. The American Association of Law Libraries wouldn’t allow me to ask that question on their web pages, and at my law school, my vendors became very agitated when I asked them about their work with ICE.

I wasn’t getting answers, and that’s really what started the research process behind Data Cartels and kind of all the research I’ve uncovered since then. And really, what came after that, even though I’m an academic, the research process for digging up those connections that I describe in the book was almost journalistic. It was almost more of an investigative-reporting type of research, where I was trying to look at corporate filings and advertising and the work that other journalists had done, to connect the dots between our research products and government data brokering and surveillance, which is what I uncovered ultimately.

00:04:44 Domen Savič / Citizen D

Do you have any reasons… you mentioned in the book, and just mentioned now, the journalistic way of researching the topic. It seems that on one side there’s, let’s say, a fair amount of information on these systems in the public, you just have to find a way to get it; at the same time, or on the opposite side, people, as you’ve mentioned just now, are being very obtuse about it, in terms of, you know, not giving you answers and stuff. How do you reason about those two extremes? Why do you think there’s such a code of silence related to these systems in academia and in broader society?

00:05:31 Sarah Lamdan

That’s a good question. So I think most of it is obtuse for kind of public relations reasons. I don’t think that Reed Elsevier, LexisNexis or Thomson Reuters or any of the other entities that are doing government surveillance… I don’t think… let me turn the negative into a positive. I do think that these companies recognize that the public generally doesn’t like to be surveilled by the government, and that the work they’re doing with government agencies probably doesn’t have a good public relations case. It doesn’t look good.

So the companies themselves are purposely obtuse about the work that they’re doing with the government. In fact, I work with a bunch of organizations that do legal work around these issues, and they’ve actually found that American agencies have non-disclosure provisions in their contracts with LexisNexis that prohibit the government agencies from discussing those contracts, so they recognize it.

They know that the public doesn’t think it’s great that they’re working with ICE, or that they’re sharing information with the FBI or, you know, other government agencies. It doesn’t look good. And I think that in academia we feel a lot of discomfort around this topic… I think there are two reasons. The first is that we recognize how much we depend on Elsevier, ScienceDirect and, you know, all of the other products that these companies provide us. We need them, right?

We need to have good working relationships with Elsevier in order to get contracts, in order to make sure that our academics get their work in the right journals, right? So it’s important for us, and for all librarians and for our administrative staff, to work with these companies.

And the other reason is that we don’t know a lot about what is going on in these companies, right? More information about that is starting to come out. And when it does, people like me, or others who do work in this area, tend to get a lot of push-back from the companies. We get nasty letters from them in the mail. They call our deans and people in our academic institutions to tell them that we are spreading rumors or false information.

So it can be kind of scary for an academic to stand up to these companies without a lot of backing from their institutions or their professional groups.

00:08:37 Domen Savič / Citizen D

So in your book, and in the debate around data cartels… you have, let’s call them, two sides, right? You have government agencies, government institutions, public institutions that are buying or using these data sets, that are working with these private companies, essentially, which are putting together these data sets. So who do you think got, should we say, the ball rolling?

Was it the government coming to individual or independent companies, saying, OK, we need that, can you provide that? Or was it the other way around? Was it the private companies putting together these data sets and pitching them to the government, saying, “I’m sure you could find a way to use this?”

00:09:32 Sarah Lamdan

That’s also a good question. I’m going to say that after every question, because they’re all good questions, but that question just got an entire book answering it. One of my favorite journalists who writes about data analytics companies, McKenzie Funk, has a book coming out, I think in the next few weeks. It’s called The Hank Show, and it’s actually about the man who created these data analytics systems, and it’s a really fascinating story. I urge everybody to go out and get The Hank Show; it is from St. Martin’s Press, and it’s coming out, I believe, at the end of this month or maybe next month, so keep your eye out for it.

But in the book he describes how the data analytics systems were actually created in the private sector. They were created in Florida by this man, Hank Asher, who created a company called Seisint, and Seisint built the Matrix system. After September 11th, Asher decided to market terrorism prediction products, products that would predict who is more likely to commit a “terrorist act”. He took that product to the White House, to the government, and sold it to them, so I think that that is kind of the system these companies have been working on ever since.

They create these predictive policing systems, these predictive fraud systems, these predictive… you know, they predict all sorts of different things: who might default on a loan, who might be a bad tenant, who might be a good employee, who might be an insurance risk. So they create all these systems and then they market them to health insurance providers, to auto insurance providers, to ICE, to the FBI, to, you know, any government agency, the Social Security Administration, the IRS.

All of these are examples of agencies that use predictive systems that companies like these built. So I think that these systems are created in the private sector and then marketed to the public sector as easy solutions, and they’re very appetizing to a public sector that can use algorithmic solutions instead of hiring more investigators and more people, which is more expensive and slower. You know, it takes more time for a human to do investigatory work than for an IRS agent to just press a button and pull up a hot list of, you know, the ten most likely fraudsters or what have you.

“Fraudsters” is a term that LexisNexis uses, so I like to use it, because I think it’s just like… they find fraudsters. But the government finds it appealing to use these, you know, easy, algorithmic, digital, seemingly miraculous solutions instead of hiring more agents and, you know, doing more expensive and more time-consuming things.

00:12:53 Domen Savič / Citizen D

If we go a step back and leave the issue of the government buying these systems, how will the government effectively regulate them? That’s a spoiler for later on, but let’s start at the beginning. So why are data cartels bad for consumers or citizens? Because usually, when you ask the government or you ask these companies, they go exactly the way you went.

So they’re talking about, you know, effectiveness, improving the work process, finding, you know, a needle in a haystack. They don’t talk about the negative things, while your book is basically just, you know, negative, and I would like to know why. Why is there such a discrepancy, and what are some of the bad consequences that derive from these data cartels that are all over the place?

00:13:53 Sarah Lamdan

Right. Yeah. It’s funny, people assume that I’m, you know, a Luddite, or that I hate technology, based on the things I describe in Data Cartels. But I don’t. I wish that these products were as efficient and miraculous as they’re reported to be. Unfortunately, in their current form they are not, and that’s what I describe in my book.

So a lot of the problems are problems around, you know, algorithmic bias and biased data sets, but I’m not an algorithm expert. I’m not a technologist, I’m a librarian, so I always direct people to other authors and other books about algorithmic integrity and algorithmic biases. You could read any work by Ruha Benjamin, by Safiya Umoja Noble, by Cathy O’Neil… there’s a lot of really great work and research out there that describes why it is problematic for government entities to rely on algorithmic solutions for ranking people and sorting how risky people are, right?

And there are also a lot of books out there about the problems with our data sets. You know, Virginia Eubanks wrote a book, I think it’s called Automating Inequality… I’m looking back at my bookshelf trying to find it, but it’s about, you know, how law enforcement data sets tend to disproportionately have certain types of demographic data in them, and certain communities are over-represented in law enforcement data sets, right? So both algorithms and data sets tend to be biased.

What my book focuses on, and what I feel safe discussing as a librarian without falling into unknown territory for myself, is the fact that there are a few companies that have swallowed up, and now control, a lot of our informational resources, right? So what I focus on is why, or how, one company, LexisNexis, for example, which is part of the company Reed Elsevier, came to dominate so many of our informational markets, and why that’s a problem.

And I use Reed Elsevier/LexisNexis as my example not to pick on the company, but because they are so uniquely massive. So usually if I’m doing a presentation, I have this slide where I show that Reed Elsevier/LexisNexis dominates the legal information market in the United States, right?

In the United States, if you want to do legal research or you want to look at legal information, you have to subscribe to either Thomson Reuters’ Westlaw or LexisNexis’s Lexis research platform. Those are really the only two competitive games in town. Those are the gold standard of legal research in the United States.

If you’re an academic and you want to publish or do research, you have to have access to Elsevier and to Elsevier’s ScienceDirect platform, right? And if you want to assess how you’re doing as a researcher, how impactful you are, you have to use Elsevier’s Scopus, or your data somehow has to feed through Elsevier’s academic data analytics systems, or you have to use Clarivate, which is really the only major competitor to Elsevier in the academic data analytics sector. LexisNexis also boasts one of the largest news archives in the world, so media and news information is another market where Reed Elsevier/LexisNexis dominates.

Financial information, so information pulled from the public sector, like SEC filings, financial filings that companies are required to submit to the government, and also news about financial institutions… LexisNexis is part of an oligopoly of companies that control that information.

And I feel like I’m forgetting one of the markets here… oh, and also, this is the creepiest one, right? It turns out that LexisNexis is also one of the biggest personal information providers, one of the biggest personal information holders and personal data brokers in the world.

Right, so they have all of our personal data too, and the company doesn’t just stay in those separate sectors: academic information, legal information, financial information, personal data. LexisNexis has found ways to use all of those different datasets and combine them to make new informational assets, new “data analytics products” that mix and match those types of data: our personal data with data about our academic success, our personal data with news information, our personal data with legal information. And all of that data is kind of mixed and mashed and “crunched”. It’s put through data analytics systems, machine learning systems, you know, various algorithms, to create new, unique information types that LexisNexis can then sell for even more money, right?

That’s where our predictive policing products, our predictive insurance products, and our legal analytics and academic analytics products come from.

00:19:51 Domen Savič / Citizen D

Hmm. But, you know, speaking as a devil’s advocate, I would say: if you did nothing wrong, if you have nothing to hide, then these data sets that are bought up by ICE, by police, by other agencies, they don’t concern you, right? So what’s wrong with police having all of this data at their fingertips to, sort of, you know, fight crime and other illegal activities?

00:20:23 Sarah Lamdan

Absolutely. That’s a really good question, right? And in an ideal world where there is no algorithmic or data error or bias, I think that’s a really good question to ask. Like, how amazing would it be to have a digital, data-fueled product that would make policing perfect, and that would maybe eliminate bias in academia by really using proven data to determine which types of research are the most important and which types of research should be funded? In a perfect world, these products might really be miraculous, right? In fact, one of the things that McKenzie Funk writes about at the beginning of The Hank Show is how Hank Asher’s Matrix product actually identified five of the people who planned the September 11th attacks before law enforcement even knew who they were. That’s miraculous. That is amazing: in a moment where there has been some sort of horrible crime committed, to know immediately who committed the crime.

Fantastic. The problem, unfortunately, is that in our current system we still have the algorithmic bias problems and the data bias problems that Virginia Eubanks and Safiya Umoja Noble and Ruha Benjamin and all of the other critics of our current AI systems have unearthed. Whether we want to acknowledge them or not, those problems are present. Right now, our algorithms are imperfect. We don’t exactly know how they work, and we do know that they tend to be biased. And one of the things that my book focuses on a lot, because, like I said, I’m not an algorithmic expert, but I do know a lot about, you know, information assets and information…

A lot of the information that LexisNexis and companies like it use is erroneous right now, right? One of the articles that I point to a lot is an exposé in Newsweek, I think it’s called something like “When LexisNexis Makes a Mistake, It Hurts You”… and the article is basically a bunch of interviews with people who have been harmed because LexisNexis has incorrect data about them in its systems.

There is an example of a woman who gets locked out of her own bank account because her sister is having credit problems, but in LexisNexis’s data set her sister’s data is combined with her data, and so she gets locked out of her bank account. There’s a story of a man who has the same name as a completely different person; the other person has insurance issues, this man doesn’t, but because this man’s data is conflated with the other man’s in LexisNexis, he can’t get auto insurance.

So you run into problems like this again and again and again. When an insurance company uses the wrong data about you, or has erroneous data about you, that’s annoying, right? You can’t get the insurance you want.

But when your landlord’s tenant screening product is LexisNexis, you might not be able to get housing, right? There’s a whole exposé, I think in the Texas Observer, about how people get blacklisted from getting an apartment, from getting a roof over their head, because LexisNexis has erroneous data about them, right?

And then that becomes even more harmful when law enforcement uses that data, right? You can get arrested because your name is the same as somebody else’s name, or because LexisNexis has another person’s driver’s license conflated with yours in its system, and that becomes really problematic. So until we can ensure that the data about us in these systems is correct, and that the algorithms in these systems aren’t biased, I still have a lot of concern about us using them.

00:24:47 Domen Savič / Citizen D

Yeah, and do you see that happening in the future? Like, usually when you debate these topics and you have representatives from the industry or decision-makers, they go: “Well, yeah, you know, no system is perfect. These are glitches. We’re ironing them out.” Do you see that happening, or do you see us as a species, to be a bit dramatic, coming into a situation where we have, you know, the perfect data set, the perfect algorithm, the perfect automated decision-making system and the perfect result in the end?

00:25:23 Sarah Lamdan

That’s a good question, and before I even answer that hypothetical, I will also point out that humans are innately biased, right?

The reason that there’s so much data bias in law enforcement data is because, for over a century, our law enforcement agencies have been doing things that are racist and otherwise, you know, biased and prejudiced, right? So humans have their own… I don’t want to pretend that the perfect system is already in place and that it is the human system.

There are problems with the human systems as well, so I just wanted to put that in the world, because that’s an important thing to recognize: our systems are also problematic. And yes, if we treated data systems with the care that they warrant, with the care that they deserve, I could envision a future where we have these systems in place and they aren’t biased. But there are a couple of things we’re missing right now that we’d have to put into place… OK, so one of the main things we would have to put into place is more transparency.

Right now, I can’t see what my data dossier looks like in LexisNexis, even though I live in New York State. So even if New York State and my local law enforcement agency are using LexisNexis to determine whether I might commit a crime, or whether I have committed a crime, I can’t see what data they’re using about me.

I can’t know if it’s correct or if it’s erroneous, and I can’t correct it, right? Like, I can’t say “Oh, I never lived at that address” or “Actually, that wasn’t me who was at that place on that night” or “I don’t drive that car”. I can’t say those things; I have no power. So I think, in order to move to a place where maybe we could use these systems, and use them well and effectively, things would have to be a lot more transparent for consumers and for the public. We’d have to be able to view our own dossiers, and we’d have to be able to correct them, right?

We’d have to be able to feel comfortable that when the NYPD, or, you know, any other law enforcement or other type of agency, was using the data, what they were using about me was information that I feel OK sharing and that I know is correct. I think that would have to happen, and that’s a big undertaking.

I don’t think it’s impossible, and I don’t even necessarily think that it’s a bad idea, but right now that type of transparency and that type of corrective measure isn’t in place. So I think, first and foremost, those are two really basic things that would have to happen.

00:28:28 Domen Savič / Citizen D

Would you say lack of transparency is also one of, or maybe the, reason that data cartels are so hard to regulate… you can’t even see them, or you can’t even see what you’re regulating, and this is something that, yeah, impacts the regulatory frameworks. Or would you say something else is going on, that these data systems are left almost, you know, untouched and free to do what they do?

00:28:59 Sarah Lamdan

Yeah, I think what you’re getting at is absolutely correct. The lack of transparency is a huge problem, right? The public lacks the transparency to even know that the government is using these types of systems, or that their insurance agent or their landlord or any other entity making decisions about their lives is using them.

Usually the public doesn’t even know that these algorithmic systems are being used behind the scenes, right? You don’t know when you get hired that your job application was run through some sort of employment screening system that was algorithmically powered, right? So that type of transparency is absent.

And on a deeper level, let’s say you do know; let’s say you live in a place where you’re required to get a notice that your job application has been run through one of these systems. You still don’t know what the data about you that they ran through the system includes. You don’t know what the data inside the system that your application is being checked against includes. And, this is key, you don’t know what the algorithm is assessing, and you don’t know how the algorithm is working to make that assessment. The major problem is with these algorithms that are being developed at LexisNexis and at other data analytics firms.

We don’t know how the algorithms work, and most of the people who design the algorithms don’t know how they work, because the algorithms work so quickly and do so many things with so many data sets that algorithmic transparency is not only unavailable, it might be impossible in the current system.

There are a lot of entities and advocates that are trying to demand algorithmic transparency, which is an important and noble aspiration, but a lot of the time even the people who developed the algorithm at, say, a LexisNexis or an Experian or another company… they don’t know how the algorithm they developed works, so how are we supposed to understand how it works?

00:31:10 Domen Savič / Citizen D

You mentioned that in the conclusion of your book, and I love it, because it’s such a refreshing take if you compare it to these calls for the, you know, active citizen who just needs to be educated about certain topics, saying everything will be alright, because we’ll know what’s going on and we’ll know which buttons to push to get what we want.

And you basically called for the right blend of governance, oversight and support to resolve the issues of privatized data collections, and for treating essential information as a public resource. So let’s take a walk down memory lane and look at the history of the net.

How feasible do you think your call is, that’s A… and B, where did we go so wrong that we ended up in this privatized, two-companies-own-the-world type of digital sphere, when it started off as a space of free expression, like a hippie commune, right?

00:32:22 Sarah Lamdan

OK, I’ll admit that I tend to gravitate towards the hippie commune ideal, so… Yeah. OK. First and foremost, yes, I think right now we are in kind of a hyper-capitalist system when it comes to tech especially, and I’m not sure if that’s because it’s so Silicon Valley-based, and that’s very, like, American, and we just love our uber-capitalists, you know… I want to call it fantasy, but really we are playing it out in reality: our uber-capitalist tech reality.

And yes, I’ll admit I do put my ideals kind of over at the other extreme, but really, in my ideal… it’s not about choosing between capitalism and some sort of hippie co-op for data, it’s about putting the onus where it belongs. I think the problem with our current hyper-capitalist system is that we’ve let the companies make consumers think that it’s their fault and their problem to solve, like: “Oh, I’m sorry, do you feel like data companies are too invasive? You need to change the privacy settings on your phone. Or you need to not use social media.”

We put it all on the individual, to try to keep their data out of these systems, or even to request their own data dossiers, right? Like, now our kind of new ideal is passing these laws that allow individuals to get their own data dossiers. But what’s the benefit of that? Let’s say I live in California.

I’m legally entitled to my own data dossier, so I request my data dossier from LexisNexis. Great. Now I have a PDF full of data about myself. Some of it is right, some of it is wrong. That doesn’t really empower me to do much, right? I could try to call all the companies that I think have incorrect data about me and beg them to fix it, but that would take hundreds of hours that I do not have, right?

So right now, even in our current best-case legal framework, where we can see our own data dossiers, the responsibility is placed on individuals, and individuals have very little power. I am urging us to rethink this system, so that instead of the onus being on the individuals, the way it is now, it’s on the companies. The companies who create these systems should be responsible for making sure the data is correct and for giving members of the public easy recourse to correct their data or to erase their data, right?

That should be the responsibility of the companies who are making billions of dollars building these systems. And really those companies are the only entities that have the real power to make our digital lives better.

So what I’m trying to do, in a way in which I’m not sure anybody could succeed, is take that boulder of responsibility and just, like, roll it across the ground, from individuals back to the powerful companies that could actually make the data world better. Because, like I said, I’m not a complete Luddite. It’s not even a question of whether these companies should have a right to exist. They exist. This is reality. This is the system. Some of the ideas that the companies have are very, very cool; the question is how they can execute those ideas in a way that doesn’t harm us or put us at risk.

00:36:23 Domen Savič / Citizen D

Before we wrap up, I just want to mention something. You apologized, sort of in advance, several times, saying that you don’t hate technology. I have the same feeling working on issues like privacy and security and open access… you always have this need to apologize in advance if you’re not totally on board with, you know, the complete version of surveillance capitalism.

And I wonder, do you think the issue starts with the basic language structures around these issues? Like, saying “I’m not sure this is the best way to go” equals, you know, “We need to tear everything down and, you know, burn it all to the ground”. So do you think we’re sort of cultivated into this techno-solutionist approach that basically prevents any normal discussion from happening… not, you know, starting with solutions, but just having honest conversations about these issues?

00:37:39 Sarah Lamdan

Absolutely. I think that the discussion around these issues is so polarized that if you say anything less than “Ohh, I love it!” or “I love TikTok, it’s my favorite thing in the whole world!”… if you say anything less than that, then you hate technology and you want to throw your computer in the river, right?

And I think we really haven’t been able to have nuanced discussions about practical solutions. I’m not sure why that is. I do know that, since my book has been published, people who don’t like or don’t agree with what I have to say, which is completely fair, and I expect that, like, yes, please, let’s have these discussions… people who don’t like it really do paint me as somebody who just hates technology or hates data analytics. And it’s a shame, because I think that really diminishes the ability to have real, fruitful discussions, right?

And I do wonder if that’s because of the way it’s being framed commercially and politically, because one of the things that happens is that whenever the government, any government, works to incrementally place responsibilities on the companies that develop these types of tools… whenever, especially in the US or in the EU, we look to clamp down on the data collection that, say, a Facebook or a LexisNexis is doing… there are lobbyists who visit the politicians’ offices and say “This would destroy the Internet, we can’t do this!”, and now also pundits who come out on the Internet and on talk shows and say, you know, “These people just wanna ruin the Internet”, or “They don’t care about law enforcement”, right?

But I think it’s possible to care about law enforcement, to not be an absolutist about law enforcement, and still be concerned about law enforcement using, you know, facial recognition apps or predictive policing apps.

I feel like sometimes it becomes kind of a scare tactic used by tech companies to make critics look silly or unprofessional, you know. So it diminishes detractors and critics, and it also kind of silences people by making them think that if they critique tech platforms and tech companies, they might destroy or ruin something that they like or enjoy. Because I do think there’s a way for us to, say, buy something on Amazon and get the helpful “More like this” option, because I’ll admit… if I read a book and I enjoy it, I want to see more books like that book. That is a really cool tool, and it’s a data-driven tool, driven by my data and the data of other readers and users.

Yeah, that’s a data-driven tool, but is there a way that we can implement that and also protect my data privacy and be transparent about how those recommendations are being developed, right? I think that would put a lot more expense and work into Amazon’s products. They would have to do a lot more work and engage in a lot more transparency, and I think that a lot of companies don’t want to pay those extra costs or do that extra work.

So it’s a push-and-pull game where tech companies have a PR advantage, and they can make us look like we are foolish, or like we’re not real experts, or like we don’t really have real solutions.

00:41:56 Domen Savič / Citizen D

And just one more question before we wrap up… Let’s compare notes on tactics for countering that narrative. What do you think works in persuading people that, yes, you’re not throwing laptops into the river and smashing light bulbs and, yeah, writing your stuff with pen and ink?

00:42:19 Sarah Lamdan

Right. Yeah. No, that’s a good question. One thing that is surprising: a few months ago, I was asked if I’d be willing to be on a panel at a conference with somebody from LexisNexis. And my answer is yes, absolutely.

I would be really glad and excited for the opportunity to talk about, you know, what our concerns are as academics and as consumers, and then have that discussion about what some real solutions are. Like, if we sat down and thought of three things that we could work on together, what would those three things be? I think that would be really, really cool. But notice, I’m a willing panelist; it’s not me who’s unwilling to show up on that panel. I guess that’s what I’ll say, right?

So I think just show a continued willingness to have that discussion, and also don’t be cowed by the push-back you get from these companies, because I’ve gotten push-back from representatives at LexisNexis and from people who work at other data analytics firms, right? They’ve panned my book, they’ve said really nasty things about my motives, right?

But I’m completely willing to have a discussion about why do you feel that way, and is there a place where we agree, right? Because I do think that there are places where we agree, we just need to find those places together, and that has to be a discussion that both sides are willing to have. I want to say sides, but really it’s everybody in the community creating this work and then using this work who has to be willing to have it.

00:44:20 Domen Savič / Citizen D

Professor Lamdan, thank you so much for dropping by. This has been the Citizen D podcast. We publish an episode every month, so see you next time. Thank you again and best of luck.

00:44:31 Sarah Lamdan

Thank you.


Citizen D advice:

  • Algorithm and data transparency is the first step towards corporate responsibility
  • Personal responsibility will only get you so far
  • Techno-solutionist language is affecting the debate around the social impact of technology

More information:

  • McKenzie Funk, The Hank Show: How a House-Painting, Drug-Running DEA Informant Built the Machine that Rules Our Lives – book
  • Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement – book
  • Who’s Behind ICE? The Tech and Data Companies Fueling Deportation – analysis [PDF]
  • Alice Holbrook, When LexisNexis Makes a Mistake, You Pay For It (Newsweek Magazine, 2019) – article

About the podcast:

Podcast Citizen D gives you a reason for being a productive citizen. Citizen D features talks by experts in different fields focusing on the pressing topics in the field of information society and media. We can do it. Full steam ahead!
