
Netzfest Episode 05: Inequality + Discrimination with Lorena Jaume-Palasi

Netzfest. The OXFAM Podcast for Fair Digitalisation

2023-02-18 | 44 min | Season 1, Episode 5 | Oxfam Deutschland | Katrin Steglich and Landeszentrale für politische Bildung BW

Description & Show Notes

In this episode, we explore the relationship between biometrics, technology, inequality and discrimination.

Transcript

KI Linda
00:00:00
Digital is now everywhere. The promises that it would make everything better have not come true for everyone. In this podcast series, we explore the question of how digitalisation can contribute to a more livable world, for example by reducing inequality and strengthening equality, sustainability and civil society.
Katrin Steglich
00:00:28
Welcome to the Oxfam podcast series dedicated to the topic of fair digitalisation, produced in cooperation with the state agency for civic education in Baden-Württemberg. My name is Kay, and today I'm talking to Lorena Jaume-Palasi. Her research focuses on ethics and technology, and she has founded more than one NGO, most recently the Ethical Tech Society, which studies automation and digitalisation processes and their societal relevance. She also advises the Spanish government, among others, on artificial intelligence. Thank you for taking the time for this interview, Lorena!
Lorena Jaume-Palasi
00:01:08
Thanks for having me.
Katrin Steglich
00:01:10
So biometric data used to identify people can be based on different aspects: for example, retina or iris scans, fingerprints, voice patterns, even scans of hands or of the veins in the arms, the face, or ear geometry, among others. Ever more systems are entering the market, with face scanning being the dominant one, claiming to be able to identify all of humanity in real time by the end of this year. So showing your face, which you cannot really hide, makes you identifiable in real time, wherever you might be. What relevance might this have for John Doe's everyday life? Taking into account the way facial recognition is deployed all over the world - what different potential futures are there?
Lorena Jaume-Palasi
00:01:55
Thank you for the question. I think this question is actually quite old; it goes back to the 19th century. What we are seeing with this new increase in biometric usage encompasses a very vast range of technologies. It's not one technology. As you already mentioned, there is a list of biometric or biological data that can be gathered, and it starts with the face but extends to biological data about the voice, about gait, about body fat, about keystrokes, for instance, and many other things as well. It is a vast range of technologies that are being used. But the central idea behind it, as you were saying, has a lot to do with what Nikolas Rose called the securitization of identity - an idea that started in the 19th century with people like William Herschel, a colonial administrator, or Alphonse Bertillon, a detective in fin-de-siècle France who, together with people like the eugenicist Francis Galton, was imagining a utopia in which identity could be fixed. The basic idea of fixing identity, during colonial times, was to look at bodies and create metrics around the body in order to measure and identify people individually, but also to correlate those body measurements with specific traits. The biometrics we are using nowadays are very much related to the methodologies developed in those days. So it is not only the specific idea of basing security on biometrics and monitoring what people do as they move through society; the methodologies themselves are that old, 200 years old, and the consequences are clear.
If we go back to the history of when it was conceived, the idea of identifying bodies was birthed within a context of trying to control the body. It was built on the idea of trying to understand how individual bodies might commit fraud, become criminals or be involved in a crime. So this whole idea of biometrics was birthed within a context of criminal thinking and of control and submission of the body. Alphonse Bertillon, for instance, was very worried about people being on the streets without being identifiable, because he assumed that was the perfect situation for crime. But what we need to ask is whether we want to think about society from a place of security and control. Because if we look at the world and at society, most people on the street are not committing a crime; only a small percentage of people will commit crimes. When you base general rules on behaviours that are not common but are exceptions to the general behaviour in our society, this is when repression happens. This is what the thinking of authoritarian regimes centres on, and what dictatorships put at the heart of things. Thinking from that place makes this a highly problematic technology, because if we look at our societies nowadays, most transactions and most of our mobility are not based on us being identifiable. Most of the things we do across the day do not require us to identify ourselves; they are based on other criteria of trust and cooperation. Through the reintroduction of these theories and methodologies, what we are doing is practically changing all these social interactions that were based on other premises to a premise of distrust and a premise of control of the body.
And that's problematic, certainly.
Katrin Steglich
00:06:25
Even if the idea is this old and a lot of time has gone by, how well does this technology actually work? Is it accurate in its proposed allocation of personal data? How likely are limitations in biometric identification due to, say, bias? And is there a relevance, again, for John Doe or Muhammad Ali?
Lorena Jaume-Palasi
00:06:49
If we look at how this technology - these technologies, in the plural - work, what's important to stress is that they are based on statistics. Statistics is never about certainty but about probability. There is no system, no technology that is 100% accurate, because that is not in the nature of statistics. And the interesting thing about statistics and biometrics is the tension, the inner contradiction, of the whole purpose. On the one side, the purpose is to identify each body and, to a certain extent, each person - with some types of biometrics; it is not the purpose of all biometrics. There are other forms of biometric usage that are not about identity but about control; perhaps we can talk about that later. But when you use biometrics for the purpose of identifying specific individuals, there is a methodological conflict, because statistics is not about understanding individuals; it is about creating a coded gaze, a way of looking at people. It is about creating specific categories through which you try to categorize people. And what that creates is not a contextual view of someone but a standardized view of how to look at people, which is exactly what the detective Alphonse Bertillon wanted: that sort of systematic methodology of how to view individuals. And when you create a systematization of how to look at people, you are also restricting the ways you can look at people and excluding other methodologies - and, because it is statistics and it is about standardization, about specific standards and categories you have to use, you are also excluding the capability of contextualization. Contextualization is what you do once the system has spat out its probability and tells you that this person is, with 96% probability, individual XY: you need to contextualize whether that computed probability is a good approximation or not. So, on the one side, you are creating standardized ways of looking at people and thereby removing the possibility of contextualization; on the other side, contextualization is key to really being able to identify someone. And this is how we humans use our brains to look, because looking and observing is not a task that happens in your eyes but one that takes place in your brain. Based on what you have learned in your own society and your own culture, you learn to contextualize what you are seeing. For instance, if you look in a book and you see a red circle, you know it could be a decoration, or an element of a flag, or the sunset; depending on where that red circle is placed, you will give a different explanation for what it is. And you do this with many other things: depending on what you have experienced in life and how your nervous system has developed, you will look at things differently. So context is key, and context is not what the system provides. There is an inner tension in expecting this technology to do what we are asking or projecting onto it. Those systems do not see; they just compute, in a very specific, very systematic way.
It is about stripping everything away from its context, putting it into a row of categories and then computing on all the categories and standards that have been created, while we, on the other side, as humans, are expecting the systems to see, which is something very human and is very much about applying the opposite method. So, from the perspective of expectations, what those systems do is a different thing from what we expect them to be, and that is one of the main problems - especially when those systems are deployed by people who are not technicians and do not really understand what is happening. That is the first level at which insecurity, bias or other problems can arise. But then, of course, you have all the other problems you have in programming: the way the systems are programmed, the people who are programming them, and the assumptions those people hold about the world. There are a lot of examples right now with biometric technologies that assume there is a specific range for eyes open or eyes closed, and we have seen examples of people of Asian descent who were unable to have a biometric picture taken because the system kept saying their eyes were closed, although they weren't. So there is also the level of problems that can happen within the programming, depending on what assumptions you make about what a face means, what ranges of skin thickness or how many skin colours you can insert into a system, and so on - which also shows the impossibility of doing that. We can perhaps talk later about the impossibility of programming everything about people's bodies and faces, because they all stand on a continuum. So that is the level of mistakes that can happen. And then, of course, there are many other problems you can have with those systems, because they are usually not placed in a vacuum but are integrated with other systems, so there are also many collisions that can happen across different technologies that have to talk to each other and pass input from one databank to another. There are many different levels of problems you can have when using these technologies. No, they are not accurate: first, because statistics is not about giving 100% certainty but about probability; second, because of the inner methodological tension I was referring to before; and third, because these technologies are programmed and deployed by humans, and that is a dimension that factors in bias and subjective expectations about other people and the world.
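To make the probability point concrete: a biometric comparison typically boils down to a similarity score between two feature vectors and a threshold decision. The following is a minimal sketch under stated assumptions - the embeddings, the cosine-similarity measure, the 0.96 cutoff and the function names are illustrative, not any vendor's actual pipeline - showing that the output is a probabilistic score that still needs human contextualization, never a certainty.

```python
# Minimal sketch (illustrative assumptions, not a real product's pipeline):
# a biometric "match" is a similarity score plus a threshold decision.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1] between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.96) -> bool:
    """Declare a 'match' if the score clears the threshold.

    The score is probability-like, never certainty: lowering the threshold
    produces more false matches, raising it produces more false non-matches.
    The system cannot contextualize a borderline score; a human has to.
    """
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical enrolled template and two probes (random stand-ins for real embeddings).
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.05, size=128)   # small within-person variation
different_person = rng.normal(size=128)

print(cosine_similarity(same_person, enrolled))   # high, but not 1.0
print(match(same_person, enrolled))               # likely True
print(match(different_person, enrolled))          # likely False, but not guaranteed
```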
Katrin Steglich
00:14:01
And as far as I understood you, the contextualization basically has to be added by the people implementing this technology.
Lorena Jaume-Palasi
00:14:11
You cannot contextualize with that technology. That is what you need to do with the results of the technology, manually, so to say - what you need to do as a person evaluating what the system tells you, because the system cannot. For instance, if you train a system to identify cats and suddenly show it a sofa with a cat tapestry, the system will think it is a cat, but a human being would say, well, that's a mistake.
Katrin Steglich
00:14:39
Exactly. But that was what I was saying: who implements the system might have an impact on how the data is interpreted, and that is not transparent. So there might be something within the system that is not transparent to everybody else, only to the people implementing the system. And that might not be fair in treating all people equally.
Lorena Jaume-Palasi
00:15:04
That's a great question. I think it is important to contextualize the whole data parity debate we are having: having the same data about different profiles of people does not imply that those profiles or people are going to be treated equally. So having more data about African Americans or Black Americans does not necessarily imply that they are going to be treated equally to white people, because how this technology is used depends, in the very end, on the social, economic and political context. In a society where mass incarceration mostly happens to people of colour, Black people and Indigenous people, having more data about those populations certainly does not make these technologies work in a fair way. Quite the contrary: it will probably exacerbate and magnify the unfair and disparate discrimination that is already happening to those populations in contrast to the white population. That is the first thing that is important to think of. But this also goes back to the reference I wanted to make before: there are things these technologies cannot do. Take, for instance, skin colour. You could decide, like Google, to add ten skin colours in order to identify people. But skin colour is a continuum. You could add 50 or 60 or 200 categories and still not encompass all the ranges of skin colour in the world. And from a technological perspective, even technicians - Google's technicians, for instance - would tell you: let's reduce complexity, because if you put in so many categories, the system becomes unmanageable and you will not be able to understand what it does. That is a specific paradox we are talking about here: the more complex you make a system, the more difficult it is to make it secure and to keep control over it. But skin colour is a complex phenomenon, so reducing its complexity means reducing precisely what is necessary to understand, to identify and to see people. What you are doing when you reduce that complexity is creating discrimination inherently, because what you are creating with those systems is a set of standards that will only be useful for specific parts of the population - those who happen to be on the favourable side of the socio-economic power imbalance in a society, or who happen to have the traits that are standardized in the system. All the other people who are not seen by the system, or who are in a socially and economically vulnerable position in society, will not necessarily benefit from the system. Quite the contrary.
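As a concrete illustration of the continuum problem described above, the short sketch below bins a continuous skin-tone value into a fixed number of categories. The 0-to-1 scale, the bin counts and the function name are hypothetical, chosen only to show that any finite set of categories collapses distinct tones into the same label and forces out-of-range values into an edge category.

```python
# Minimal sketch of the categorization problem: skin tone is a continuum,
# but a system must bin it into a fixed set of categories. However many bins
# you add, distinct tones collapse into the same label. The scale and values
# here are illustrative, not any real product's scheme.

def to_category(tone: float, n_categories: int = 10) -> int:
    """Map a continuous tone in [0.0, 1.0] to one of n_categories discrete bins."""
    tone = min(max(tone, 0.0), 1.0)          # clamp: out-of-range values are forced into an edge bin
    return min(int(tone * n_categories), n_categories - 1)

# Three clearly different tones that a 10-bin scheme treats as identical:
print(to_category(0.51), to_category(0.55), to_category(0.59))  # -> 5 5 5

# Adding more bins narrows, but never removes, the collapse:
print(to_category(0.551, n_categories=200), to_category(0.554, n_categories=200))  # -> 110 110
```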
Katrin Steglich
00:18:33
Now we have the situation that a few companies have proprietary databases containing biometric data, gathered without our consent. The first question is: how was it possible for this to happen, that all of our data is in the hands of a few companies? And the second one is: some governments have demanded the deletion of this data for their citizens, but others have not yet demanded this. Do you know why this is the situation?
Lorena Jaume-Palasi
00:19:15
Well, certainly when we talk about biometric data, it is a complex situation, because it depends on what you mean by that. Do you mean pictures of faces? If we just focus on pictures of faces: of course, there are companies such as social media companies and the big search engine companies, which also offer other services, that have a lot of pictures. But then the question is how well structured those databanks are, how good the quality of those pictures is, and how those pictures are being used. Pictures on their own are not of much use. Most of the work you do with a biometric system is about structuring those pictures: you need to classify them, you need to annotate them, you need to train the specific systems for the specific question you want to ask, for a specific point of optimization. This means that having a lot of data and a lot of pictures does not necessarily equate with having a lot of knowledge, and does not necessarily equate with being able to understand what populations do. Because, as with every other statistics-based technology, what you need, beyond gathering information, is a good interpretation, and that only happens with sectoral knowledge of the data you have. This is why, for instance, you don't take your gynaecologist's statistics to your bank account officer and ask him or her to interpret the gynaecological data: you know they cannot interpret it, because they lack the context and the sectoral knowledge for that interpretation. And the same applies to those companies. I am more concerned about governments, because they have the monopoly on violence, right? They have specific duties, but also specific privileges, as such entities. In the very end, the carceral state starts with the state, and the potentially problematic consequences start with states extracting information from those big companies, or forcing them to hand over data, rather than the other way around. We have seen a lot of policies promoted by states demanding deletion on the one side, but on the other side demanding retention of that data for justice and criminal prosecution, or for terrorism prevention strategies and the like. So, in the very end, what I find more and more terrifying is to see how once-democratic states are increasingly implementing policies that foster and enable the usage of biometric data for purposes that initially were not meant to require individual identification or governance of bodies. It was really interesting to see how, during the pandemic, tracing apps arrived, apps tracking mobile transactions arrived, and it was about understanding and classifying where people were going. In the very end, technologies were created that at first sight looked very good and very anonymous, here in the European Union and also in some other countries. But when you look at the infrastructures, many of the infrastructures and technologies created during the pandemic were technologies of control, and they were based on a mix of mobility data and biometric data. And look, for instance, at the European Union with the CIR, which is the biggest biometric database here on the continent, with hundreds of millions of biometric records of everyone crossing the borders of the European Union.
It is a huge databank that is being created by the European Union. What worries me is to see the European Union, for instance, fostering the usage of biometric data through different types of regulation. For payment platforms we now have the possibility of using biometric data to make payment transactions and to identify people for the purpose of making those transactions. Now we are discussing, as are the US and China, the creation of digital identities that are going to be tied not only to banking transactions, but also to medical transactions, or transactions of medical information, and also to mobility and to many other things, in spaces we usually did not enter by having to identify ourselves - and to start with such a type of infrastructure at an international scale. What we have right now are technologies that are being implemented by the private sector but required by states. We have a hybrid situation here: the private sector has the manpower, the skills and the knowledge to create computational infrastructures at mass scale, to identify people and to gather and process that data, while on the other side states and international organisations like the European Union, but also the United Nations and the like, are more and more requiring, from a policy or regulatory perspective, the usage of those technologies. And that combination is sort of changing the fibre of democracies, which until now based mobility in the public space not on the identification of every single individual.
Katrin Steglich
00:25:36
You mentioned the United Nations just now. It is a question that I also have, because what one sees in the NGO field is that biometric data is also used for identifying people. Now, technology is often used and tested at wide scale on people who lack defence mechanisms, either due to a lack of knowledge or a lack of resources, before being implemented in other parts of the world or on different groups in society. What is your stance on the use of biometric identification within global aid programmes that require recipients of goods to identify themselves via biometric data and thereby also become trackable?
Lorena Jaume-Palasi
00:26:18
Absolutely, yes. This is indeed what Virginia Eubanks called the digital poorhouse. She wrote a brilliant book about how technologies are tested on the most vulnerable and then introduced into the middle of society, to the higher classes, once those technologies have been tested on people who, as you rightly said, do not have the resources to resist or are simply coerced into using those technologies because they want food - and if they want food, they need to get their iris scanned. I think those technologies are highly problematic, and those contexts of usage replicate the central idea of the carceral state. They are centred on the assumption that fraud is going to happen, and they are not interested in preventing fraud but in controlling individuals, in the belief that this is a proxy for prevention. But that is a very short-sighted form of fraud prevention, because it is only interested in penalizing the person committing fraud, without asking why those people committed fraud. It could be a child being coerced into committing fraud by a grown-up; it could be a woman coerced by a violent group or family or another person. I don't want to put stereotypes here, but it could be anyone being coerced through a specific power imbalance within a refugee context. It is a short-sighted idea of prevention if you just put the person who committed the fraud at the punitive centre, without understanding why this is happening and how you can prevent it from happening. For that reason, I think it is a very lazy idea to think you can control and manage a situation of distribution of goods by just focusing on fraud from that end. So I don't see the legitimization for doing that; there are other methods you could try to use. It is also a highly privatized space, where a lot of NGOs are using technologies from private entities they know nothing about. And those private entities are offering these technologies in that space because they are testing and optimizing them until they can offer them in other spaces. Again, this is Virginia Eubanks' argument: this is a way of testing technologies of control that are not about protecting people or offering services to people, but have a primary interest in something else. I think this discussion should lead us not to the question of how we can create better technologies of control, but to rethinking the idea of humanitarian aid, rethinking the idea of protection of refugees, for instance, or protection of children in an international context, and the power imbalances and the coloniality of the ways aid is being provided, which are not legitimate. That could perhaps lead us to the development of other types of technologies and other types of processes that really provide protection and are really about a form of care towards people in a vulnerable situation. But again, I think this is not only a question about technology when we talk about biometrics being used in the humanitarian aid context. It is a question about how aid is deployed around an idea of controlling masses and controlling populations, to avoid mass immigration or other phenomena - tweaking, so to say, the last link of a chain.
So, thinking only about the last link of a chain of processes that are about controlling populations, controlling refugees, controlling children or controlling the parents of those children - thinking only of that last link, which is the technology you deploy to execute all the political decisions and all the administrative processes that lead to a specific usage of a technology in a specific way, within a specific administrative process - that is only a cosmetic discussion or debate, because in the very end you are not questioning how the whole process in itself is deeply problematic, deeply colonialist and deeply based on an idea of control, in a situation which amounts to a sort of double punishment of people who are already being punished by the fact of having to escape war. I think humanitarian organizations should rethink the way they provide aid, and in the course of that this will probably lead to the usage of other technologies. And no, I don't think that biometrics is going to help them make their humanitarian aid processes more effective, or that fraud will happen less because of the usage of biometrics.
Katrin Steglich
00:32:12
So, as we are incapable of foreseeing the consequences for societies and individuals in the long run, there have been calls to categorize biometric technology, alongside artificial intelligence, warfare technology or genetic and genomic engineering, as emerging experimental technology with a serious impact on people. Would you agree with that? And that we need to find more ethical ways to do the tasks we were speaking about in the humanitarian sector?
Lorena Jaume-Palasi
00:32:41
That's a great question. I would like to stress the adjective you were using: experimental. When we create biometric technologies, we are creating a form of infrastructure. It requires a specific computational infrastructure, meaning clouds; it requires databanks; it requires specific processes; and it requires the long term, right? Because biometrics is about creating big data. The technologies we are using right now require at least five years of structuring data, at the very least. Big data means a huge amount of data, and you need servers as well. It is really a huge material and energy infrastructure that you create when you deploy this technology. So when you create those infrastructures, there are a few things that are important to consider. When you create infrastructure, even if you don't use it, it is there - in the same way that if you build streets and don't use them, they are still there and can be used for other things. So there is a responsibility in creating infrastructures, because once you have created them, you cannot make them disappear that quickly. It is important to think first about whether we need them, and what they are for if we build technologies we don't need. If we create infrastructures as a form of experiment, without knowing or understanding what they could be used for, that becomes a fundamental problem. Usually you create infrastructure when you want to enable something specific. Infrastructure is always about mass scale, and infrastructure is always about providing a stable channel for a form of mobility. Infrastructure can be the streets you build to provide a stable structure for mobility, or the platforms you build for social media. Because you are providing stability to things that are in a dynamic, fluid mode, you try to build them in a resilient way, because you know that things that are evolving and moving will change over time. Mobility changes over time, language changes over time, the way we create and exchange money changes over time. That means those stable channels you have created for those fluid things need to be resilient, because when you create them you don't know what technology, or what types of fluidity, you are going to have once they are finished. Take, for instance, the trains. When the rails were being laid, we assumed that trains needed to turn around in order to go back in the other direction. And by the time the rails were finished in Germany, for instance, new train technology had arrived for which those turns were no longer necessary. This happens with many other technologies: you are creating an infrastructure knowing that it will only be finished five years from now. This happens with biometrics as well. We need years until it is finished, but in five years we will probably be using fairly different software and very different algorithmic modelling for biometrics than we are using now. Creating infrastructure like that for experimental reasons is not resilient. It is a very costly way of creating infrastructure that quickly becomes obsolete; it is not a sustainable way of creating infrastructure.
It cannot justify the energy waste and the money waste that those infrastructures require. From an ethical perspective, it is absolutely unethical to waste that much money and those resources and that energy on something whose usage is unclear - and which is precisely based on principles that run against many of the principles we have agreed upon in democracies.
Katrin Steglich
00:37:22
What possible positive applications do you see for biometric technology that might be helpful or useful to society?
Lorena Jaume-Palasi
Honestly, I don't think there is one, because of the requirements it implies. Biometrics is basically a technology that requires the essentialization of human beings. Depending on the technology, it requires that you try to essentialize how many skin types exist in the world, and you cannot do that, because it is a continuum; skin colour is a continuum. You simply cannot create 50 or 40 categories and think that with that you will be able to identify everyone. It is also a technology that is centred around controlling and measuring bodies, and the question is why you need to do that. In most cases, you don't need to identify or control a body to prevent harm. There are other methods - or let's put it this way: in most cases where biometrics is being used, other approaches that are not about controlling the body, but about crime prevention or about understanding how to create spaces of cooperation, would lead to fairly different forms of technology and fairly different forms of managing masses or societies. So I don't think there is a positive space for a technology that was initially conceived from a context of distrust of the citizen and of trying to control colonized bodies in the former colonies of the British Empire or the German Empire and so on. I think it is important to remember where these technologies come from, and to remember that they are new from a historical perspective: they are only 200 years old, but we have had societies for thousands of years, and those societies were not in a permanent state of anarchy. There were other forms of organization and control that were less centred on punishment, criminalization and the securitisation of identity in the way biometrics is conceived. So I think it is much more important to ask ourselves: do we think that the public sphere and societies need to be thought of from a perspective of securitisation, or do we rather think that we need to think of our societies from a perspective of fundamental rights, which runs contrary to that idea?
Katrin Steglich
00:40:10
I would like to conclude by asking you: which three points in connection with biometric technology, based on your research, belong on our agenda as political actors and also as private individuals?
Lorena Jaume-Palasi
00:40:23
I think the question of biometrics is a good occasion to rethink our systems of care and control. Every system of care and control that is based on identification and on tying identity to traits like gait, on control of the body, needs rethinking - because it was actually the concession of rights over their own bodies that freed women in the 19th century from many of the patriarchal oppressions women faced in Europe. It was thinking of the body of women as equal to the male body that got women out of that situation of patriarchal control over women's bodies, which is still an effort and still an ongoing process of conflict and resistance. So I think that when we talk about biometrics, we need to enlarge the discussion to a wider view of the idea of control and identification, and ask why we turn all those processes - in contexts of care and consent, for instance, or in contexts of mobility - into a question of control instead of a question of care and rights. Secondly, when we talk about biometrics, I think it is important to talk about the material side of these technologies. We are not talking enough about their materiality: it costs a lot of water to cool the servers behind these technologies, it costs a lot of electricity to run these systems, and it costs a lot of rare materials to create the hardware needed to run them - hardware that, by the way, becomes obsolete much more quickly than if you used less complex systems. So when we create these types of infrastructures, we need to ask whether we can afford the energy and material resources these technologies require; from a climate change perspective, I think we need to reconsider that, too. And thirdly, on our agendas, when we talk about biometrics, we need to understand that once those infrastructures have been put in place for vulnerable people, it is not going to stop there, because by then it is too late: those are test runs for using this type of technology on the whole of society. That is what we have seen with many other technologies before. And I think this also opens the conversation for questions like: what type of society do we want to live in? A society based on full security, which can never be 100% provided, or a society centred rather around care and dignity? The latter would lead to other types of technologies, because it does not start from the assumption that fraud will happen; it assumes that people who commit fraud do so because they are not being cared for, because there are power asymmetries, and it puts the centre of prevention at a very specific part of the process that goes beyond the control of individuals and looks at power asymmetries in society.
Katrin Steglich
00:44:06
Thanks so much to you, Lorena. It was very enriching, I found, to see your perspective on this topic. Powerful words. Thanks a lot, Lorena.
Lorena Jaume-Palasi
00:44:16
Thank you so much.
