
Netzfest EPISODE 04: Democracy and Civil Society with Dr. Martin Moore

The OXFAM Podcast for Fair Digitalisation

2022-09-12 49 min Season 1 Episode 4 Oxfam Deutschland | Katrin Steglich and Landeszentrale f. pol. Bildung BW

Description & Show Notes

In this episode, we explore the interrelationship between digitalisation and democracy. 

Transcript

KI Linda
00:00:00
Digital is now everywhere. The promises that it would make everything better have not been confirmed for everyone. In this podcast series, we explore the question of how digitalisation can contribute to a more livable world, e.g. by reducing inequality and strengthening equality, sustainability and civil society.
Katrin Steglich
00:00:28
Welcome to the Oxfam Podcast, which is dedicated to the topic of “fair digitalisation” and which is produced in cooperation with the Federal Agency for Civic Education Baden-Württemberg. My name is Kay and today I am talking to Dr. Martin Moore. He teaches political economy at King's College London and is the Director of the Centre for the Study of Media, Communication and Power. His research interests include political communication, technology platforms, digital dominance, and the future of democracy. Thank you for taking the time for this interview!
Martin Moore
00:01:06
Thank you very much indeed for inviting me.
Katrin Steglich
00:01:09
Martin, in 2018 you wrote a book titled “Democracy Hacked – How Technology Is Destabilizing Democracy”. On the cover it reads: “We stand on the precipice of an era where switching your mobile platform will have more impact on your life than switching your government. Where freedom and privacy are seen as incompatible with social well-being and transparency. Where your attention is sold to the highest bidder.” You also noted that politicians don’t understand what is happening. Has your impression changed – after all, almost four years have passed?
Martin Moore
00:01:37
Certainly, a lot has changed. I suppose the biggest thing that has changed is that suddenly, across the world, governments have woken up to the power of these very large tech platforms. And what we're seeing now is a swathe of attempts to tame not just the tech platforms but the whole "wild west" of the internet, as it is often called. And we are not just talking about democracies there either: we see it across the world, in China; we've seen it in Europe with very large bills, the Digital Services Act and the Digital Markets Act; we're seeing it here in the UK with the Online Safety Bill; and we're seeing attempts at it in the US with a series of different pieces of legislation. Some of that I absolutely think is very welcome and indeed somewhat belated. Other aspects - perhaps we can talk about them later - I'm more concerned about, in the sense that I think we are going through a really fundamental transformation of our media and communications environment. And that has implications across politics, but also across society and really the way we'll live over the next century. And I worry that in the rush to tame the internet and these very large platforms, we're going to in some ways make some of the problems that I talked about in the book worse.
Katrin Steglich
00:03:06
I'd like to ask you to get to the root of the problem and to explain the structure of the internet in a nutshell, i.e. how it has historically grown and what consequences this has had on the perception of its infrastructure, as well as on how it is used, especially regarding interaction with other people and groups online - since some of the problems that we have probably stem from this perception.
Martin Moore
00:03:32
Sure, I'll do my best. It's a very big question. So, one of the things I was trying to do in the book - it came from a genuine curiosity of my own, I suppose from a lack of understanding, my own ignorance - was to figure out why it was that, as I was writing, this new digital space, which was clearly becoming so integral to our lives, was so easy for people to manipulate, to use for fraudulent and nefarious purposes, to distort political outcomes. I ended up going right back to try to understand how the internet evolved, and in particular to understand some of the principles that underlay it in its very early stages. So obviously, as many listeners will know, the original internet emerged during the Cold War. In its very early stages it was essentially a panicked reaction by the US when it thought that Russia, the Soviet Union, was taking a lead in terms of technology - when it launched Sputnik in 1957 - and in terms of military hardware. And so, in its first manifestation, the idea was: how do we make sure that a single strike from a nuclear missile fired from the Soviet Union will not disable all defense and communications systems within the US? How do we create a decentralized network to avoid that sort of situation? And that was the clever original idea, if you like, and the nuts and bolts of it.
But quite soon, and particularly after America felt in many ways it had caught up - especially after it had landed a man on the moon - those who took over this new network were hackers, in the sense of computer nerds, who were just fascinated by the potentialities of this new system of communicating with one another, of developing collaboratively, of working together. And these pioneers, if you like, were, I think, informed by certain philosophies and principles, and we see that in the seventies, the eighties and the early nineties. We see it particularly, for example, at this amazing conference that happened in 1984 - the irony being that it happened in 1984, the year of George Orwell's famous novel - the first ever Hackers Conference. It happened in California, near San Francisco, and it was set up by a remarkable man called Stewart Brand, who is this amazing figure who started out in the sixties as a counter-cultural figure and then became fascinated by technology, and ended up actually setting up not only one of the first online bulletin boards, “The WELL”, but also the Bible of Silicon Valley and the West Coast: “Wired Magazine”. But in 1984 he set up this conference to bring all these nerds, these hackers, together, both to encourage collaboration and to understand what direction they were going in and where this whole internet thing was going. And one of the people that was there was a journalist, Steven Levy, and he recorded it, talked to a lot of the hackers there, and wrote down what he felt informed their work - the principles that underlay what they were doing. And one of the key principles, which Stewart Brand himself actually voiced at the conference, was that information wants to be free. Information should be free.
At least, this idea that information should be free - and there was a deliberate ambiguity there: free in the sense of free to travel where it wanted, free for people to use and collaborate on, but also - at least this is how people interpreted it - free in the sense of not costing anything, not being expensive. And that was very much the ethos: that if you put any blocks on people being able to collaborate, that would be a restraint on progress or a restraint on growth. The second principle that I think really informed the early web was developed by someone who wasn't at that conference but knew Stewart Brand well and became an evangelist for a certain understanding of the web. And that was John Perry Barlow, who was actually a lyricist for the Grateful Dead - the famous band - but became completely entranced by the early internet and by the bulletin boards in the late eighties and early nineties. He saw it as this other space that was free from the governments of (as he saw it) the terrestrial world, the real-life world, and a space where people could do things differently - a kind of frontier space, similar to the frontier that some of the first pioneers in the US in the seventeenth, eighteenth and nineteenth centuries had explored. And he was adamant that this was a frontier that the people should be able to populate, that the people should be able to build, that the people should be able to inhabit free from constraints - by which he meant government, by which he meant laws, by which he meant some of the norms of the real world. And he set that down in a manifesto in 1996, his “Declaration of the Independence of Cyberspace”, which has become one of the founding catechisms of the early web.
So these two ideas - the idea that information wants to be free, and the idea that the people should be sovereign on the internet, in the sense of free from government control - were, I think, crucial to the formation of the early web. They really informed an awful lot of those who were building the tools in the 1990s and early 2000s, when we saw the formation of these behemoths - the Googles and Facebooks and Twitters of this world - and also informed many of those who were starting to see the potentialities of the web for political and social purposes.
Katrin Steglich
00:09:35
We've also seen some toxic culture come out of this development. And the question is: is this original understanding of the web as free and unregulated still viable for today's reality, in which this infrastructure is almost as important as physical structures - the terrestrial world, as you said?
Martin Moore
00:09:56
I think you're absolutely right. The implicit question is that this is not sustainable: an entirely free world is free not only for people to do what they like, but also for people to commit fraud, to abuse others, to harass, to mob, to doxx, to brigade others. Clearly, it's a world which is uninhabitable unless it has some rules, unless it has some norms and some structure. What we found is that an awful lot of that structure was developed by what were nascent platforms in the early 2000s but have since become huge: Meta and Alphabet, Google and Amazon and Microsoft. They have created, if you like, their own sovereign empires, in the sense that when you are participating within their realm, you to a certain extent obey their rules. Certainly, anyone who has been on the wrong side of their rules will know that if you get locked out it's very hard to get back in, and they, to some extent, are judge and jury within those realms. I think one of the problems has been that, as people realized that this space needs some governance, those that filled that space were private enterprises. And I think it was only in the late 2010s, particularly subsequent to Brexit and Trump in 2016, that people started to realize that if you let these organizations control and govern this space - yes, they might have good intentions - but if they're for-profit companies, and if their business models rely on advertising, as some of them do, then they will necessarily structure these environments in a certain way. And if they structure them in that way, that can have really quite negative consequences.
We might come on to talk about this, but I think one of the most toxic and negative consequences has been the development of the ad-tech model, this new model of advertising, which drives much of the information we see today, and which certainly makes up most of the revenues of some of the largest platforms, Meta and Google in particular. I think it is in many ways responsible for many of the problems that we see. Until we really understand the mechanics of the political economy, if you like, of these platforms, and why these business models are so troublesome, we won't get to the root of some of the issues you talked about.
Katrin Steglich
00:12:30
Looking on the positive side: this kind of technology, available to approximately half of humankind, allows us to personally broadcast information to the world. And it also frees us from possibly limited narratives from mainstream media, at least in the West. What meaning does that have for society?
Martin Moore
00:12:46
That's a really good point. I think we've moved from a society characterized by information scarcity to a society characterized by information abundance - or indeed even obesity. We have certainly gone from a world where the amount of mediated information you could access was relatively small to one where, in some ways, we effectively have access to infinite information. That's kind of a simple point, but it's going to take us decades to come to terms with it, not least because what it means is that you have more of everything. As you say, there are more opportunities for all of us to express ourselves, to collaborate, to overcome constraints of distance and speak to people across the world whenever we want, virtually for free. All of which is a remarkable thing. But in the same way as it allows us to produce more information, to publish more information, to access more information, it necessarily means there'll be a lot more bad information out there as well as a lot more good information. So, it's no surprise that we've seen an explosion of misinformation and disinformation and malinformation, because there's just a huge amount more information. Similarly, I think it's no surprise that if you go online, not only are you able to access public figures and many communities you weren't previously able to access; there are also masses of people out there who maybe don't have good intentions, or indeed who aren't people at all because they are automated - they're bots - or people who are speaking, perhaps for a very good reason and perhaps not, under pseudonyms, or who have created so-called sock puppets or catfish accounts, etc. So, I think what we're starting to realize is that an information environment characterized by abundance is fundamentally different from an information environment characterized by scarcity.
And we have to think about it in different ways. We have to recognize that in that environment we are susceptible to things like selection bias - more likely to gravitate to those we agree with and those we like, and therefore less exposed to the people and information we are less inclined to like - and that then feeds into what we believe and what we choose not to believe. So, I think it does feel like a very febrile time. We feel very vulnerable, I think, in this period. But that's an inevitability of the change in the information environment. And what I worry about is those who want to recreate, or regress to, if you like, an environment of information scarcity. I worry that they're not taking into account all that we would lose if we tried to reduce who can speak, reduce the amount of information available to us, and crack down on it. Yes, there's lots of misinformation out there, but there's also an awful lot of information which is maybe misguided but actually quite helpful to be exposed to, particularly if it comes from a very different place than the one we are used to. So, I think it's going to be a rocky period.
Katrin Steglich
00:16:07
It seems to me we do need coping mechanisms for this mass of information, to be able to actually digest it on a personal level, and we do need filters. But maybe we should be in the position to define - well, no - that would contradict what you just said.
Martin Moore
00:16:24
I certainly think we're learning a lot. We're learning more about what's harmful, where I think we're still far away from figuring out what's helpful. What we're realizing is harmful is the ad-tech model and things like programmatic advertising, where I think one can see some of the negative effects it has. But equally, I think we're starting to realize - and you see this particularly on platforms like Twitter - that adjustments to the design, format and functionality of sites can have a pretty profound effect on how people engage with one another. So, for example, inserting a degree of friction in how we communicate. I don't know if you've experienced this, but if you try to retweet or share a tweet that has a link to a news article or another website that you haven't read or haven't clicked on, it will at least delay you and say: “Are you sure you want to send this, forward this or retweet this without having read it first?” So, just by putting in that small hint - and equally by making it clearer who you're sharing information with, because this idea that by default we should be sharing information with the world I think is quite odd. For most people, the idea of just putting information out there to the world doesn't make any sense. You speak to people either directly or within your community; you don't just tell the world. Why would the world be interested? So, I think we're starting to see how you can change the functionality, the design, the format of sites and services, and that does help in terms of at least trying to reduce some of the vast quantity of purely mistaken or overheated information that goes out there.
But in the longer term, it seems to me we do have to recognize that unless we grapple with the scale of these vast platforms, unless we create a more mixed information and media environment, unless we start to recognize the influence of business models and ownership on these platforms and on the information they serve us, we won't get to the root of the problem.
Katrin Steglich
00:18:39
One thing that I noticed with the volume of information is that people tend to handle topics very superficially, and they don't think deeply about problems, because there's so much to be taken in and shared and so on. One doesn't have the time to actually delve deep.
Martin Moore
00:18:56
Well, certainly I think there's a sense - and I certainly feel this when I'm online, on one of the social media platforms - that it's a flow, almost like a fast-moving river. And in order to engage, you sort of have to jump in, but as soon as you jump, all you can see, obviously, is exactly what's immediately around you, and at the same time you're moving all the time and things are changing around you all the time. And so it's very distracting, it's very confusing, it's very difficult, and if you want to be heard, you necessarily have to shout very loudly or make yourself very visible. Which I think goes to your point: attention has become such a premium commodity, so fought over and competed for in this environment, that it is extremely easy to be constantly distracted. And therefore I think you're right - in a place like Twitter there is a kind of feeling of endless movement and endless superficiality, of flitting from one thing to the next. At the same time, the weird thing about this world is that it's almost like the middle has disappeared. Because if you do look, if you stop - if you like, you get out of the river - and you look at specific things, you realize there's a remarkable degree of depth around particular issues. Depth both in the sense of access to people who are really, really knowledgeable about them, and, as I say, because of this abundance of information, access to huge amounts of material on almost any issue in the world you want to become more knowledgeable about. But you obviously have to, in a way, step away from the stream, away from the river, in order to focus on that, because otherwise you find yourself swept away.
Katrin Steglich
00:20:43
I want to talk about the power of platforms a little bit more. As colleagues of yours like Jillian York have shown, not only governments have the power to censor on the web; companies in the US also have legal cover to censor people and web spaces. Has your research been able to show why censorship of content on US platforms often aligns with political dominance and less with human rights principles?
Martin Moore
00:21:09
Certainly, it's true. So, going back to the development of the web: the absolutely critical development from the perspective of the platforms was Section 230 of the 1996 Communications Decency Act. One American lawyer has called it the 26 words that created the internet, because those 26 words essentially gave the platforms an exemption from liability - to distill it down, or to oversimplify - such that they don't really have to take responsibility. Not only do they not have to take responsibility for whatever appears on their platform, in the US at least (it's slightly different in Europe), but also, to your point, they can absolutely remove whatever they would like from their platforms according to their own community guidelines. Which is, as I was saying earlier, where they become judge and jury of what's on the platform. One of the difficulties, I think - and again we come slightly back to scale here - is that these platforms are of such a scale that we are necessarily guided by what we can see and what we are aware of and what we hear about them. And that can be based on a whole number of different factors, but it's unlikely to be based on a big data analysis of the platforms themselves, because they hold their own data incredibly close, and they give very little access to it. So, for example, in the US, the right - the Republicans - have become absolutely convinced that the social media platforms are systematically favoring the Democrats and the left of the political spectrum over the right, and that they are censoring right-wing content.
Now, they can point to specific examples of this, but I am yet to see any large-scale analysis that demonstrates it. If anything, the large-scale analysis shows that, because of fears of over-censoring the right, in some cases some of the platforms, particularly Facebook, have gone the other way and tried to censor more on the left and centre-left. So, I think there is a real problem, which is that these platforms have become equivalent to the public sphere - there's a growing consensus that they have in many ways become equivalent to the public sphere - and yet they're still private companies, and therefore they can pick and choose what they keep up and what they take down. You have a real problem, because you have a situation where, if they deny access to someone or reduce the visibility of a political perspective, that can have really profound effects on politics and society, and often they can do it without you even knowing that they are doing it. Which is, of course, one of the motivations behind the Digital Services Act, this big new piece of legislation from the EU. One of the things the Digital Services Act seeks to do is at least make transparent some of the decisions these companies are making, and give people the ability to figure out whether information has disappeared, what information has disappeared and why it disappeared, so that you have some recourse. It's better, it's an improvement, but I think we have to recognize that until we grapple with the fact that we have, to some extent, by default handed off the public sphere to these vast American companies, that in itself is a problem.
I think we're going to continue to have issues, and in some ways I think some of these issues have become worse, in the sense that as regulation to a certain extent consolidates the power of these companies, people become more and more concerned that they are hiding things, that they are censoring things, that they are collaborating with governments to prevent certain things from coming into the public domain. And, probably quite wrongly, there'll be lots of conspiracy theories about why that happened. I think that's a real danger over the next ten years in terms of where we're going.
Katrin Steglich
00:25:15
With the rise of big data - and, as you said, the tech companies don't like to share the big data that they have - and with the possibility to analyze these volumes of data in combination with their platform technology: what has changed with respect to lobbying for specific political interests, and what impact do you see on democratic structures and processes?
Martin Moore
00:25:36
I suppose I've focused some of my research on the role that these platforms play during elections and election campaigns, and the role that data plays during election campaigns. Because, as you say - similar to the transformation of the media and communications environment from one of scarcity to one of abundance - we've seen a real shift from the analog currencies, if you like, of paper money etc. to more digital currencies, and the digital currencies include almost an alternative currency of data. People talk about data being the new oil, but actually, in a way, I think data is more equivalent to currency - in some sense it is the money - than it is to oil. And what I mean by that is: if you are a political party, or a plutocrat, or a candidate, and you have access to significant amounts of voter data, that gives you a similar advantage - in how you can plan your campaign, who you target, how you prioritize your time - to the advantage that money used to give, and still does give, to campaigns. We have certainly seen that a lot in America over the last decade or so, and we are increasingly seeing it, I think, in Europe. And yet our electoral systems and processes are not geared to dealing with data. Certainly here in the UK, and I think in a number of other European countries, our electoral protections, our electoral rules, are based very much around spending. They are based around what candidates and parties can and cannot spend, and how much they can spend in particular local areas, in order both to stop elections from being bought and to create a level playing field between different candidates and parties. And the difficulty with data is that it is not regulated because it is not accounted for.
Once one candidate or one party has much better data than another and is much more capable of using that data to target communications and to prioritize its attention, then, in a way, it doesn't destroy but certainly supersedes many of those rules put in place to try to create a level playing field and stop elections from being bought. I think that's the problem: we're moving into an era where data becomes incredibly important in democratic politics, and yet it's not yet properly accounted for.
Katrin Steglich
00:28:14
Another topic that I find belongs in this context: you wrote that people are more susceptible to political campaigning if it happens through social relationships, offline and online. Maybe you can elaborate a bit more on that, with regard to tech platforms as well.
Martin Moore
00:28:34
Yes, absolutely - I'll jump back in history again, but then by training I'm a historian, so I look for the roots of these things. I was looking back at research that was done into understanding why people vote, how people vote and what influences their vote. There was a series of - in many ways the first - quantitative studies of voters, of what makes people vote, in the 1940s, led by a man called Paul Lazarsfeld, an Austrian scientist who emigrated to the United States. He did this series of studies with his colleagues in the 1940s, trying to understand why people vote, and in particular the effect that different factors had on their vote (the media and party commitments and that sort of thing). And to his and his team's surprise, they discovered that maybe the most important thing in determining a vote, over and above your family background, was your social network - the people around you and what they said to you. In particular, what they generally saw was that within each of the social networks there were one or two people who tended to have a much greater influence, a much more overbearing influence, than others within the network, mainly because they were interested in politics, and they read a lot and talked about it a lot. And so they had an outsized influence. So he pointed to what he called a two-step flow, where you have the media, and you have politicians making commitments, which tend to be read or consumed by a relatively small number of individuals. But then those individuals would be incredibly influential when it came to communicating those - obviously from their perspective and in their opinion - to their particular social network or group.
Now, in many ways that didn't disappear, but it became less important in the broadcast era, in the later twentieth century, when TV and radio and mass media became very dominant; that kind of two-step flow and that kind of social network became slightly less important. Of course, it was still a factor, but most people were getting more of their information from TV, radio and elsewhere. What we've seen in the last fifteen years or so, though, is that as social networks have become more and more important, so the two-step flow has in many ways increased; it has been enhanced in terms of how we make our decisions when it comes to politics and voting. And we see this really clearly in a place like India. If you look at India and the way its politics now works, in the last couple of elections, or certainly the last elections, the role of WhatsApp was crucial. What's happened in India - which I think is WhatsApp's biggest market; WhatsApp is huge in India, everyone is a member of multiple WhatsApp groups, particularly family and community groups - is that parties like the BJP (Narendra Modi's party) have very deliberately sought either to get themselves into those family and community groups or to find someone within those groups who is very sympathetic to the BJP, as their social influencer, their political influencer, within that group. And it has been remarkably effective. But we also see that, of course, on other platforms as well: on Facebook itself, on Instagram, on Snapchat, and I'm sure - although I'm less familiar with TikTok - we're increasingly seeing it on TikTok. So, I think this is a phenomenon that has grown and will continue to grow as social media becomes such a critical platform for political communication.
Katrin Steglich
00:32:02
And seeing all the advantages one can use on tech platforms obviously has a huge impact on influencing people. I'd like to ask you about the role of journalism in the future. What do you think? How does journalism contribute to democracy today, given that the role of journalism has changed a lot with the rise of social media?
Martin Moore
00:32:29
It has changed a lot. I think we are moving into a slightly different stage with journalism. There was certainly a period of a couple of decades when there was a sort of existential angst amongst many journalists about what their role was and is. Because if anyone can be a journalist, if anyone can record footage, take a picture, write an article and publish it online, then the distinction between a professional journalist and someone who is, let's say, an accidental journalist, or an occasional journalist, or simply an expert voicing their opinion, becomes more and more blurred, and it becomes more difficult to distinguish the two. I think we are starting to see that people are recognizing that there is a very significant function for professional journalism. The difficulty we have now is that there seems to be a divergence. For a long time it looked as though not only was it unclear what role journalism was going to play, but it was very unclear whether journalism would be able to afford to play that role, because the tech giants were taking all the advertising income that used to flow to news organizations. But we've seen how the largest news organizations - by which I mean the New York Times, the Times in the UK, the very big national news organizations, if you like - have figured out how to overcome the funding crisis, and most of them have done so by subscription or versions of subscription. The Guardian has a particularly unusual model, with membership and contributions, but they have figured out ways to become financially sustainable.
Katrin Steglich
00:35:57
I think accountability is one topic here for politicians at a local level, as journalists also look into matters at that level.
Martin Moore
00:36:07
Absolutely. If you can't fund journalism, especially investigative journalism at a local level, then you will never be able to do that kind of accountability journalism.
Katrin Steglich
00:36:17
I’d like to shift to another topic. With all the tracking and data trading going on, big tech platforms may also come to dominate other fields, for example the health or insurance market, and digital identity management is a new market too. So, I’d like to know from you: should tech companies be allowed to be digital identity management platforms, given that this type of service used to be a classical responsibility of governments? What is your stance on this?
Martin Moore
00:36:44
Yes, I absolutely agree that this is a field that they have moved into substantially. And I think it is a key area where one needs to constrain their powers, because as a consequence of their collection, organization, analysis and use of personal data, they have been able to move, as you say, into all these other fields, all these other areas, including areas of public life, whether it is healthcare, or education, or transport, or energy. When I looked at it, what seemed to me to be their vision - and I can't speak for them because they don't really talk about this explicitly - is that they see themselves increasingly as the gateway to these services. So, take something like healthcare: as is increasingly happening, we rely more and more on our personal data to inform healthcare. If you have an iPhone, it is also considered to be a medical device, because you're able to track your movements and record various health aspects through the iPhone, and then use that data to inform health decisions or health interventions. That can come from your doctor, but equally, increasingly, via an app which may or may not be designed and built by Apple or Google or one of the others. And so, one can see a future in which these companies act as if they are stores or trustees for all our personal data. And then, when we or someone else want to use that data to inform decisions on our behalf, be it education decisions or health decisions, they have to go to that platform for the data. That seems to me to be a problem on many fronts. First of all, a lot of that data feels like our data, not necessarily the company's data. Why should they - if you like - have ownership of it, or the ability to control it and control where it goes? But equally, if they do control it, then, as you say, they are starting to take on some of the responsibilities we normally associate with states and with governments.
And actually, if all the decisions about our healthcare or transportation or housing or our education are made as a consequence of the data that is held on these platforms, then what role does government have over and above that of monitoring or overseeing these aspects? So, I think there is a really serious issue in terms of how much control they should be allowed, how much of our data they should be able to collect and hold on to. Clearly there have been moves to try to constrain this, to try to figure out how to dilute some of their power, but a lot of these initiatives haven't gone that far yet. I edited a book with my colleague Damian Tambini, which came out at the end of last year, called “Regulating Big Tech” - a series of essays by scholars and policy makers from around the world about how we might regulate these platforms. One of the essays talks about putting obligations on these platforms as public data trustees, giving them many more responsibilities for what they are and are not allowed to do with this data. Others have gone further and said: “Actually, what we need to do is set up new institutions, set up data trusts, which are either overseen by the government or by independent authorities” - which would hold our data on our behalf, and we would empower them to make particular decisions. That takes the data away from these very large platforms, so they can't use it both for competitive advantage and for increasing control over our lives. So, I think there are a whole series of ways in which one can rethink the collection and use of data. But I think we need to move faster, because these companies are clearly a long, long way ahead when it comes to collecting and using data at the moment.
Katrin Steglich
00:40:42
The EU has brought out two new laws, namely the Digital Services Act and the Digital Markets Act, which you already mentioned earlier on. They require tech platforms to be more transparent, which would, I suppose, solve some of the problems we discussed regarding ads, because platforms have to be more transparent about recommender systems and online advertising. The laws also require them to respect fundamental rights in their terms of service and to cooperate with authorities. And they define rules for so-called gatekeepers, which you just mentioned. Do you think that these laws can tackle the problems that we've been discussing?
Martin Moore
00:41:22
I certainly think it’s a big step forward. Of all the laws that I've seen, I think the EU Digital Services Act and Digital Markets Act do take bigger steps forward, and as you say, they are very much geared to better understanding what these platforms are doing, which is a huge and important step towards figuring out how to address some of the problems. Because until we get more transparency, until we understand what they're doing, until we understand better why we've got these - what economists call negative externalities - these problems as a consequence of what they're doing, we won't be able to bring in new laws and regulations to resolve them. But we should also understand the limits of these laws, in the same way as we understand the limits of transparency itself - transparency only gets you so far. For example, people often talk about the Google algorithm or the Facebook algorithm; it's not an algorithm, it's a vast amount of code, a vast number of algorithms that are changing all the time, and to understand the interconnections between them, and their impact on individuals, let alone on society as a whole, is going to be a very, very complex thing. To take ad tech, or digital advertising, as an example: it works at such a vast scale and so quickly, in nanoseconds, that you could have huge amounts of transparency and still not be able to figure out exactly what is happening and why. So, I think that transparency certainly gets us a step forward, but it doesn't solve the problems, if you like. And then there's a problem beyond that, which I hope the Digital Markets Act will go further towards resolving. The problem beyond that is: as we regulate these very large online platforms, as they're called in the Digital Services Act (DSA), we do absolutely risk consolidating their power, consolidating their business models and entrenching them as our public sphere.
And so, it seems to me that at the same time as one absolutely must interrogate what they're doing and put obligations and responsibilities on them to avoid harm, one has to - in parallel - introduce ways to reduce their power and to enable other platforms and digital organizations to emerge and compete. Because actually, certainly in my view, what we need to be moving towards is a much more mixed ecology. I don't want to live in a world in 10 or 20 years’ time which is still dominated by half a dozen West Coast American tech platforms.
Katrin Steglich
00:43:54
I think one can sum it up by saying that one fact that remains valid with technology is that it can be put to use in different ways. So, it depends on the context and the legal framework whether it brings about good or harm. I wanted to ask you if you have any suggestions or proposals for how democratic structures and practices can be protected, adapted and strengthened in the digital sphere?
Martin Moore
00:44:17
Well yes, certainly. As I say, one of the reasons why my colleague Damian Tambini and I edited this book that just came out was that we think what we needed to do - and still need to do - is to recognize that these companies are quite new beasts, and that the digital sphere, whilst in its third decade or so, is still very young compared to lots of other media and communications technologies. So, we should still be creative about how to address some of the problems that have emerged. And what we tried to do in that book was to produce what we call a kind of digital toolkit for policymakers and others, to at least provoke some thinking about ways in which you could deal with this. Because one of the problems at the moment, certainly in this country, in the UK, is almost a kind of feeling that you just need one piece of legislation - the Online Safety Bill, which I referred to earlier - like the One Ring in J. R. R. Tolkien's Lord of the Rings, the one law to rule them all. When it was originally introduced as a consultation paper in 2019, it was certainly talked about as the way in which the internet was going to be tamed within the UK; this law would make the UK the safest place to be on the internet. Now, I think the problem there is that if you try to solve all these issues within one law, you’ll inevitably make that law extremely vague - and indeed it has become huge, but also very vague, and it doesn't address specific problems. I think one has to identify the problem you’re trying to solve, e.g., let’s say with election campaigns, or with ad tech and programmatic advertising. And then you have to address it with the right tool for that particular problem. The right tool might be some form of restrictive legislation, but it might not - it might be more competition in that particular instance.
So, I think there are a whole range of different things that one can do, but I think one has to be quite specific about what one is trying to achieve, whilst at the same time having an idea as to where one is going, where one wants to end up.
Katrin Steglich
00:46:21
To conclude, I would like to know: which three points, based on your research, belong on our agenda as political actors, but also as private individuals?
Martin Moore
00:46:30
Well, I suppose it follows from that last point I just made. The first thing - which I have seen, but not nearly enough of, particularly from political parties or from civil society groups or others - is a clear vision of where we want to go, where we want to end up. I mean, do we want, in ten years’ time, to be in a world which is still dominated by six American tech platforms? Or do we want to be in a world where there are multiple search engines, multiple social media platforms, and those different platforms have different business models; so, some of them are co-operative, some of them are non-profit, some of them are for-profit but have a different model from the surveillance capitalism model that Google and Facebook have? So, I'd love to see more vision, if you like - or visions, rather - of where we want to end up. A second thing I would point to is the need for positive as well as restrictive or negative interventions. A lot of the legislation and regulation that is coming in now, whilst much of it is necessary, is restrictive in the sense of trying to restrain and block and censor and tame the internet and these very large tech platforms. I'd really love to see more positive interventions. Take a hundred years ago: we had the biggest, if you like, positive intervention by any government in terms of mass media in this country with the formation of the BBC, and the BBC has been a hugely important and influential organization and still is. I want to see more thought and ambition put into the types of positive interventions we can make now. And the third thing I would say is that I just worry that we are still not focused enough on how to reduce the scale of these platforms. I mean, these platforms are too big. If these platforms aren't too big, then one wonders what on earth is.
I mean, they are vast - in terms of number of users, in terms of their revenues, in terms of their scale - and many of the problems I think we have in society and in our politics are a consequence simply of their vastness, of their scale. I think we need to be more focused on how we reduce that scale.
Katrin Steglich
00:48:37
It's been great talking to you, Martin. Thanks a lot for this interview!
Martin Moore
00:48:41
Thank you so much, Kay, thank you!
