Artificial intelligence can predict crises and get help to migrants who need it – but the dangers are serious
Climate change and other disasters are displacing ever more people. Could artificial intelligence help predict impending crises and where humanitarian aid will be needed? Could algorithms be used to match refugees to regions where they will have the best chance of thriving? And what happens when you take human judgement out of the process, or if data is used to exclude some migrants unjustly?
Hilary Evans Cameron (Toronto Metropolitan University) starts off the discussion with a refugee case to show that human decision-making, itself, can be dangerously unreliable. Then host Maggie Perzyna speaks with experts Ana Beduschi (Exeter University) and Tuba Bircan (Vrije Universiteit Brussel), who walk us through what AI is, how it works, and its risks, pitfalls and potential for good.
Maggie is a researcher with the Canada Excellence Research Chair (CERC) in Migration & Integration program at Toronto Metropolitan University and this new podcast is Borders & Belonging. Maggie will talk to leading experts from around the world and people with on-the-ground experience to explore the individual experiences of migrants: the difficult decisions and many challenges they face on their journeys.
She and her guests will also think through the global dimensions of migrants’ movement: the national policies, international agreements, trends of war, climate change, employment and more.
Borders & Belonging brings together hard evidence with stories of human experience to kindle new thinking in advocacy, policy and research.
Top researchers contribute articles that complement each podcast with a deeper dive into the themes discussed.
Borders & Belonging is a co-production between the Canada Excellence Research Chair in Migration & Integration at Toronto Metropolitan University and openDemocracy. The podcast was produced by LEAD Podcasting, Toronto, Ontario.
Below, you will find links to all of the research referenced by our guests, as well as other resources you may find useful.
‘A helping hand from outer space: Doctors Without Borders utilise satellite data for humanitarian missions’, by Reliefweb (5 October 2020)
‘A Robot Lawyer Is Officially Assisting With Refugee Applications’ by Dom Galeon, Futurism (3 December 2017)
‘Germany to use voice recognition to identify migrant origins’ by BBC (17 March 2017)
‘How artificial intelligence is changing asylum seekers’ lives for the worse’ by Nicholas Keung, Toronto Star (9 November 2020)
‘Jordan: Is the UN’s biometric registration for Syrian refugees a threat to their privacy?’ by Zoe H. Robbin, Middle East Eye (23 October 2022)
‘Racial discrimination in face recognition technology’ by Alex Najibi, Harvard University (24 October 2020)
‘Refugees in Jordan are buying groceries with eye scans’ by Euronews (4 December 2019)
‘Who is making sure the A.I. machines aren’t racist?’ by Cade Metz, New York Times (15 March 2021)
‘AI-enabled identification management of the German Federal Office for Migration and Refugees (BAMF)’, Migration Data Portal
‘GeoMatch: Using artificial intelligence to improve refugee integration through data-driven assignments’, Migration Data Portal
‘Nadine Project’, European Union’s Horizon 2020 research and innovation programme
‘Project Jetson’, UNHCR
‘“Project Jetson”: An experiment for predicting movements of displaced people in Somalia using machine learning’, Migration Data Portal
‘The use of digitalisation and artificial intelligence in migration management’, European Migration Network
‘Latvia’s free self-check e-tool for citizenship applicants’ by Jānis Reiniks, Republic of Latvia (2022)
‘How AI can help us better prepare for climate migration’ by Injy Elhabrouk, World Economic Forum (10 November 2022)
Agrawal, A., Gans, J., & Goldfarb, A. (2018). ‘Prediction machines: the simple economics of artificial intelligence’. Harvard Business Press.
Salah, A. A., Korkmaz, E. E., & Bircan, T. (Eds.). (Forthcoming 2022). ‘Data Science for Migration and Mobility Studies’. Oxford University Press.
Cameron, H. E. (2018). ‘Refugee law’s fact-finding crisis: Truth, risk, and the wrong mistake’. Cambridge University Press.
Earney, C., & Moreno Jimenez, R. (2019). ‘Pioneering predictive analytics for decision-making in forced displacement contexts’. In Guide to mobile data analytics in refugee scenarios. Springer.
Beduschi, A., & McAuliffe, M. (2022). ‘Artificial Intelligence, migration and mobility: Implications for policy and practice’. World Migration Report.
Beduschi, A. (2019). ‘Digital identity: Contemporary challenges for data protection, privacy and non-discrimination rights’. Big Data & Society.
Beduschi, A. (2022). ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’. International Review of the Red Cross.
Beduschi, A. (2021). ‘International migration management in the age of artificial intelligence’. Migration Studies.
Beduschi, A. (2017). ‘The big data of international migration: Opportunities and challenges for states under international human rights law’. Georgetown Journal of International Law.
Bircan, T., & Korkmaz, E. E. (2021). ‘Big data for whose sake? Governing migration through artificial intelligence’. Humanities and Social Sciences Communications.
Cameron, H. E. (2008). ‘Risk theory and ‘subjective fear’: The role of risk perception, assessment, and management in refugee status determinations’. International Journal of Refugee Law.
Cameron, H. E., Goldfarb, A., & Morris, L. (2022). ‘Artificial intelligence for a reduction of false denials in refugee claims’. Journal of Refugee Studies.
Welcome to Borders & Belonging, a podcast that explores issues in global migration and aims to debunk myths about migration based on current research. This series is produced by CERC Migration and openDemocracy. I’m Maggie Perzyna, a researcher with the Canada Excellence Research Chair in Migration and Integration program at Toronto Metropolitan University. Today’s episode explores the burgeoning use of artificial intelligence, or AI, as a tool in managing migration and asylum. Two leading researchers will help us understand the risks and opportunities posed by this emerging technology. They’ll discuss how AI might affect the civil liberties of migrants, international data flows and more. But first, a former litigator will tell us about her experiences defending refugee claimants and how, theoretically, AI could be used as a force for good in asylum claims. In her 10 years of working as a litigator in Canada, there’s one case that Professor Hilary Evans Cameron will never forget.
Hilary Evans Cameron
I had a young Colombian client in a hearing who’d received threatening letters from a guerrilla group. And the board member was being really hard on her in the hearing because her response was, well, I just try not to think about them. I sort of tried to carry on.
To Professor Cameron, it was clear that the board member (that is, a member of the Immigration and Refugee Board of Canada, or IRB) was not familiar with the cultural context the claimant was coming from, nor was he familiar with current research on risk response.
Hilary Evans Cameron
One of the key findings of studies of risk response is that the more familiar a risk is, the easier it is to push it to the back of your mind. So, we talk about car accident risk as a classic familiarized risk. Everybody knows that car accidents happen, that driving a car is dangerous, but it’s not hard to put that thought to the back of your mind and get in a car. At the time of my client’s hearing, there were as many people kidnapped for ransom in Colombia in a given year as died in car crashes in Canada. So that risk of being kidnapped, for Colombians at that time, was background noise. It was just something where everyone knew someone, or knew someone who knew someone, that it had happened to. So, you push it to the back of your mind, and you try to keep going. This was essentially her testimony: yeah, of course I was scared, but you know, what are you going to do?
But the board member clearly did not understand the Colombian socio-political landscape enough to understand the claimant’s response. And as it turns out, he was not the only one to misinterpret a refugee claimant’s actions.
Hilary Evans Cameron
As a refugee lawyer, I was disturbed by some of the assumptions that I saw board members making. There were assumptions that I thought were probably not very solid in light of the social science. So, for example, board members would assume that if you were really at risk, you would have fled as soon as the danger arose. And we have, at this point, decades’ worth of studies of people in disaster areas facing earthquake warnings, floods, fires, all kinds of different dangers that come up. Why is it that people say, “well, you know what, I think I can ride this out”, or “I’m gonna stay a little while longer and see if it gets better”? We have a deeper understanding of what might explain why somebody doesn’t just up and flee at the first opportunity. So as a lawyer, I was pulling together some of this research and submitting it in my clients’ hearings, trying to convince board members to rethink those assumptions.
Eventually, Professor Cameron realized that she wanted to go beyond trying to convince board members while on the job, so she got her PhD and began to deepen her research on the process of decision-making and some of the dangers that can come when decisions about refugees are made without enough information.
Hilary Evans Cameron
So, refugee hearings are this paradigm example of decision-making under profound uncertainty. These decision-makers are not only interviewing somebody from a different culture, with a very different cultural context, language issues, trauma, and claimants often testifying through interpreters. I mean, there’s just a whole mess of reasons why there’s all kinds of potential here for miscommunication and misunderstanding. But in addition, and beyond that, at the end of the day, what the board member is being asked to do is really predict the future. They’re being asked to say: when this person goes home, what’s waiting for them there?
To describe the conditions under which AI could help, Professor Cameron first lays out some of the problems with the current system. The first is that decision-makers need to sift through a large package of information to help them understand the situation on the ground. This can run to hundreds of pages.
Hilary Evans Cameron
And so, a board member doing that is going to bring their best resources to that game. And they may have plenty of good skills at their disposal, but they’re still human, and that’s still a very, very difficult task. And one thing we know about this kind of decision-making is that decision-makers tend to be overly confident in the conclusions that they reach. And the data in this case is often very poor data. So, a decision-maker is looking at this weak, sparse data and making a confident, and likely poor, prediction. What an AI would do is look at that same poor, sparse data and be able to make a prediction, but along with that prediction would come an explicit statement of how unreliable it is, because the AI is able to factor in the fact that this is not good data.
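The explicit statement of unreliability described above can be made concrete with a minimal sketch. This is not any real system: the data and the normal-approximation 95% interval are invented purely for illustration of how a prediction can carry a warning about its own evidence base.

```python
import math

def risk_estimate(outcomes):
    """Estimate a risk probability from sparse historical outcomes
    (1 = harm occurred, 0 = it did not), and report how unreliable
    that estimate is.

    With only a handful of observations the interval is wide: the
    point estimate arrives together with a statement that the data
    cannot support a confident conclusion."""
    n = len(outcomes)
    p = sum(outcomes) / n                 # point estimate of risk
    se = math.sqrt(p * (1 - p) / n)       # standard error shrinks as n grows
    half_width = 1.96 * se                # ~95% normal-approximation interval
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Sparse data: 3 harms in 10 similar past cases -> very wide interval
p, lo, hi = risk_estimate([1, 0, 0, 1, 0, 0, 1, 0, 0, 0])

# Same rate over 1,000 cases -> same point estimate, far narrower interval
p2, lo2, hi2 = risk_estimate([1, 0, 0, 1, 0, 0, 1, 0, 0, 0] * 100)
```

The point of the sketch is that both calls return the same 30% risk estimate, but only the second one licenses confidence; a human reviewing the sparse case would see a warning, not a verdict.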
Professor Cameron is very clear to note that this is all theoretical, and that she doubts the proper legal systems would ever be in place to allow something like this to come to fruition. Still, in theory, the ideas she developed with Avi Goldfarb on how AI could be used in refugee claims do provide a bit of hope.
Hilary Evans Cameron
This would be a very helpful system because, at the heart of this refugee status decision-making process, should be the idea that it is better by orders of magnitude to give protection to someone who doesn’t need it than to withhold protection from someone who does. And so, if a decision-maker has that legal framework, that normative ethical framework, in place, and the AI tells them, look, this is really uncertain (in other words, the points that you’re asking us to decide are points that we don’t think you can reliably know), then that should allow more people who need protection to get it.
Hilary Evans Cameron is an assistant professor at Toronto Metropolitan University’s Lincoln Alexander School of Law. Many thanks to her for sharing her expertise with us. Now, let’s take a deeper look at the role artificial intelligence plays in managing migration. Joining me are Ana Beduschi, professor of law at Exeter University in the UK, and Tuba Bircan, research professor in the Department of Sociology and research coordinator of the Interface Demography research group at Vrije Universiteit Brussel. Thanks to you both for joining me!
AI is often thought of as robots and self-driving cars, but the truth is it’s embedded in our everyday lives. What exactly is AI?
Ana Beduschi
That’s a very good question. And I would say there is no single straightforward answer, because currently there is no internationally agreed definition of AI. But AI can be broadly understood as a collection of technologies that combine data, algorithms and computing power. If we follow the definition given by the European Commission, for example, these technologies consist of software, but also hardware systems, that are designed by humans (and that’s an important point, that they are designed by humans) and that, given a complex goal, can act in the physical or digital dimension, perceive their environment through data acquisition, interpret the collected data, which can be structured or unstructured, and, deriving from this data, decide on the best courses of action to achieve the given goal. Broadly speaking, these are systems designed by humans, and they are technologies we have in our daily lives: embedded in smart assistants, in our mobile phones, or in our virtual home systems.
So, Tuba, just building on what Ana said, when we’re talking about AI, are we talking about computers understanding?
Tuba Bircan
A very, very important question. As Ana said, what we have to remember about AI is that it’s a kind of ecosystem including both software and hardware, but the overall goal is to simulate or mimic humans and our actions. So, we can think of them as small machines programmed to think like us; however, they are based solely on data, because they are trained on existing data. Which brings us to the important point of how they differ from predictive analysis. They do predict; they can forecast things. AI can tell us about the ‘what’, but it is very poor at explaining the ‘why’, because it does not have the larger social or theoretical power to explain causality. So, for understanding, they still have a way to go, but they can be used for pattern recognition, particularly [to ask]: what do we see in this large dataset? At least we have to acknowledge the shortcomings around understanding.
So, when talking about AI, the term ‘big data’ comes up a lot. What exactly is big data and how does it tie in with AI?
Ana Beduschi
From all my research, what I noticed is that there is indeed this linkage between AI and big data, as Tuba rightly pointed out. AI systems are based on data; they need to be trained on these datasets. And some of these datasets might be what we would qualify as big data: large, complex, variable datasets that need computing power to be analyzed because of their size, their complexity, their multi-dimensional aspects, their variability. A human being would not be able to analyze such vast datasets; we need a computer to do so. And that’s probably where we see more and more this relationship between big data and AI.
Tuba Bircan
I can add the very common definition of big data by [Andrea] De Mauro. We are talking about four specific characteristics of data, so not every dataset can be big data. We usually call them “the 4 Vs”: volume, velocity, variety and veracity. But I would like to use layman’s terms. We’re talking about a dataset which has a lot of data, really vast data, but which is constantly generated and changing, and which can be expressed in various ways. More importantly, I think veracity is an important characteristic: big data has a lot of errors. So big data is not perfect data; it is a massive amount of data which cannot be analyzed on the conventional desktops that we have, and which requires more complex systems. And I think the linkage is that AI, or big data analytics, is a way, a tool, to process big data. But on the other hand, the development of AI algorithms is highly reliant on big data, because it is used for training the algorithm, the black box part.
So, large quantities of data, but not necessarily a uniform quality of data. What are some of the ways that AI is used in migration management?
Ana Beduschi
Well, that’s a very important question, because we’re seeing AI being increasingly used in migration management. If we take the migration pathway and break it into four points, we have (1) pre-departure, (2) entry, (3) stay, and (4) return, and we have AI systems that can be used at all of these stages. At pre-departure, for example, it’s more and more common to have visa application systems that use AI through their e-platforms; several countries have started using that. The same with automated profiling and security checks that can be done pre-departure, before an individual has migrated. At the point of entry, at borders, we have more and more automated identity verification using AI systems. For example, we have more and more of these automated gates, or smart gates, right? Those who have electronic passports, and perhaps electronic visas, can pass through these gates, scan the passport and perhaps have facial recognition technologies used to verify their identity before being allowed into the territory of a country. That’s increasingly common at airports or train stations in Europe, for example. We also have many uses of AI during stay, when the person is in the country: for example, immigration information via chatbots, which are more and more used. And, to a lesser extent, for return, though there are more and more ways of using AI in return decision-making, notably if you think about machine learning. Just to give you a few examples: Germany, the United States, Australia and Canada have been using AI to confirm identity based on biometric data, also in a move to detect fraud, fraudulent documents for example. Hungary and Lithuania use AI-based facial recognition to verify the identity of non-nationals at borders.
Chatbots are also used routinely in Australia and Canada: AI-powered chatbots that respond to online migration questions. So, there are many uses of AI, in many different contexts. In Germany, for example, there is a study on the feasibility of using AI to support the prediction of movements of populations. We see that there are a lot of uses of AI, and it’s only increasing. It’s not at all science fiction, and it may be part of the experiences of many people.
Can you give us some real-world examples of how AI is currently being used to manage migrant populations?
Ana Beduschi
We have the UNHCR, for example. The UN Refugee Agency has been developing a project called Jetson that uses artificial intelligence and other technologies to collect and analyze a number of data sources in order to predict the movement of populations in Somalia. They will use different kinds of data, ranging from environmental data (whether there will be dry weather in a certain region) to information about conflict, for example whether there are tensions in the region, and use this model to try to predict whether there will be movement of population in that area. Also data on the cost of things in that area: the price of livestock, for example, how much animals will fetch at the market. All of this can give insights to humanitarian organizations such as the UNHCR to better prepare, right? If we know there will be movement of population in a certain area, because all this data, with the support of AI systems, is pointing in that direction, it’s better to allocate resources to the teams operating in that field. So that’s an example of how AI could be used in that context. But there is also the context of economic migration, for example, where visa applications could use AI systems to do an initial triage of applications. So you have this traffic light system, with red, amber and green: the application comes in and is sorted, and that is something states are using more and more.
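The “traffic light” triage described above can be sketched in a few lines. The cutoff values, and the idea of a model score between 0 and 1, are assumptions made for illustration, not details of any state’s actual system:

```python
def triage(score: float, green_cutoff: float = 0.8, red_cutoff: float = 0.3) -> str:
    """Toy traffic-light triage of an application, given a model score in [0, 1].

    'green'  = routine case, can be fast-tracked
    'red'    = flagged for detailed scrutiny
    'amber'  = everything in between, standard review

    The cutoffs are invented; and crucially, in a rights-respecting design
    every colour should still end in a human decision, not an automated
    refusal."""
    if score >= green_cutoff:
        return "green"
    if score <= red_cutoff:
        return "red"
    return "amber"

# Example: three hypothetical applications with different model scores
colours = [triage(s) for s in (0.9, 0.5, 0.1)]
```

The design question the guests raise is not the sorting itself, which is trivial, but who sets the cutoffs, on what data the score is trained, and what happens to the cases the system colours red.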
Tuba Bircan
I’m going to give you examples from Germany and Latvia, where they use similar AI systems focused on language and dialect identification, for two different reasons. In Germany, they use this AI tool in the asylum procedure to recognize different dialects, which can give some indication of the country of origin. So what they’re trying to do, based on people’s speech, because claimants are often lacking documents, is to assess whether they’re telling the truth and which region they are from. A very similar speech recognition tool is used in Latvia in a different context: they use it for citizenship applications, to verify the knowledge and language proficiency of people applying for citizenship. As part of the citizenship test, the applicants, I think, give written or spoken answers, and this automatic speech recognition tool decides on their [language] proficiency level and whether they comply with the requirements. Another, very famous, example is from Jordan, where the UNHCR uses iris scan systems in refugee camps, because refugee camps are part of migration management too. They use these iris scans to distribute food and financial assistance to people. If I remember correctly, more than 33,000 refugees in the camp are part of this system.
We see that AI is being used quite frequently in many aspects of migration. What are some of the pitfalls and risks of using AI as part of migration policies?
Ana Beduschi
Well, I would put to you that there are three main types of risks or pitfalls in this area. The first relates to the quality of the data that is the basis for training these AI algorithms. The second is the existence of bias in AI. And the third concerns surveillance technologies, which constantly raise concerns about the protection of migrants’ rights. On the first: there are significant problems relating to the quality of the data used in the design and development of AI systems. They might be using data of poor quality, especially if we’re thinking about big data, and it’s generally known that poor data quality used to train AI algorithms will invariably lead to equally poor outcomes, in any field. This is also true in the context of migration, where errors in datasets remain, surprisingly or not, quite a common issue. For example, data on migration estimates might be conflated with actual border-crossing figures: say, counting the same person crossing a border several times as if several people had crossed. It’s also challenging, we can imagine, to collect good-quality data on population displacement, especially during conflict or natural disasters; think about the number of armed conflicts that are ongoing. So it’s hard to get data of good quality. Consequently, errors in the datasets can be reproduced and cascaded forward, and they can compromise the accuracy of the AI technologies’ outputs. The second point is that there are significant issues arising from the fact that AI systems can reflect the biases of their human designers and developers, and those biases will permeate the AI system. By bias here I’m referring to the human viewpoints, prejudices and stereotypes that we all have as humans, which then permeate the design and development of these technologies.
Importantly, they could potentially lead to unfair outcomes and discrimination. For example, studies have demonstrated that several commercially available facial recognition software packages were less accurate when analyzing the faces of women with darker skin tones. That can be due to a lack of representation in the datasets used to train these AI systems, and it could lead to a situation in which an individual is misidentified. So perhaps people from certain backgrounds would have less chance of having the software work appropriately on their faces, and that can create a lot of problems if you can’t be admitted into a country, or can’t get a visa, because this type of bias in the system has caused it to misidentify you. And thirdly, we have an increased use of digital technologies, and AI in particular, for surveillance. So, we have more and more sensors, AI-powered drones for border monitoring, for example. That raises concerns about migrants’ rights, especially the right to privacy, but also their ability to obtain reparations in case of violations of their rights. We see a sort of power imbalance between the state and migrants, and it’s becoming increasingly important with the extent of the surveillance technologies that we’re seeing (notably AI technologies); this power imbalance is widened by the increased use of surveillance technology, including AI, in this field.
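The disparity those facial recognition studies describe is only visible when accuracy is broken down by group; a single overall figure can hide it. A minimal sketch of that kind of audit, using invented numbers rather than results from any real system:

```python
def accuracy_by_group(records):
    """Disaggregate a matcher's accuracy by demographic group.

    Each record is a (group, was_correct) pair. Reporting one overall
    accuracy number can mask large per-group disparities; computing the
    rate per group makes them visible."""
    totals, correct = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit data: overall accuracy is (95 + 70) / 200 = 82.5%,
# which hides the gap between 95% for group A and 70% for group B.
records = ([("A", True)] * 95 + [("A", False)] * 5
           + [("B", True)] * 70 + [("B", False)] * 30)
rates = accuracy_by_group(records)
```

The same disaggregation logic applies to any automated check used at a border: the question is not only “how accurate is it?” but “accurate for whom?”.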
Tuba Bircan
I can add a little about the role of big technology companies, starting with where Ana left off on surveillance. I’ll give you a very brief example from Poland. In 2021, the border between Poland and Belarus was fully equipped with sensors and cameras. It cost 350 million euros, and the entire goal was to control border movements, which is also related to security. In the EU overall, this market is expected to grow beyond 130 billion euros. So, in short, these tech companies are the major service providers when we are talking not only about borders but also about smart controls at airports. And it is really important to mention that not only countries but also international organizations, like UN agencies and several NGOs, are potential target groups for the big tech companies, because they generate a lot of money for them. In short, the issues surrounding privacy, data protection and confidentiality continue to pose serious risks and challenges to migrant communities, because the companies are seen as mere service providers and are not held accountable or responsible for the services they provide to international organizations and states. So yes, a lot of pitfalls, but I think it is important to underline the role of the big tech companies too.
Are countries being transparent about their use of AI in border management and migrant screening?
Ana Beduschi
Well, some countries are more transparent than others, let’s put it this way. That is, for instance, the case of Germany, which often publishes reports about how it’s using, or even how it’s planning to use, AI and other digital technologies in migration, and I think that’s very good. Other countries are less good at doing so, let’s say it like that. I would urge countries to be more transparent and to disclose information about the programs they have, or their intention to use AI in migration management in the future. It would be very good if they could do so, also for the sake of living in democratic, rule-of-law-abiding societies. It’s always good to know.
Tuba Bircan
Countries or international organizations being transparent does not 100% satisfy the ethical responsibility around the use of AI, in the sense that AI itself is a black box.
So just to clarify: when you talk about the black box problem, basically what you’re saying is that AI algorithms sift through so much data that even those who wrote the code can’t really explain how a certain decision or prediction was made?
Tuba Bircan
Consider the cases where, for instance, automated decision-making is being used on residence permit decisions or asylum decisions: even the policymakers or the end users of the tool cannot really know what’s going on inside that algorithm. So it is important to remember that, besides the biases and pitfalls we mentioned, AI systems are not easy to explain and understand because of their black box characteristics. Yes, I think more countries should be more transparent about their use. But on the other hand, the purpose of use is questioned more often than the [actual] use of it. To give a very brief example, after the EU-Libya deal, when satellite data was being used for detecting boats [crossing from Libya through the Mediterranean], it was clearly stated that Frontex was using [the technology] to see where the boats were and where they were approaching. But a lot of civil society organizations and members of the public claimed that this tool was being used to send the boats back and prevent them from entering Italian waters. So, in short: yes, it is important to know what types of tools are being utilized, but it is also important to know more about the intentions behind them and what the tools contain.
We all benefit from AI in some way as part of our everyday lives. Does AI provide any benefits or opportunities for migrants?
Ana Beduschi
Well, I think it can, depending on how the systems are designed and developed, and especially how they’re used in practice. For example, AI technologies have the potential to help humanitarian organizations anticipate humanitarian responses by supporting the prediction of patterns of displacement. They could give these organizations crucial insights and information about migration and displacement, so that they could act proactively and faster to help migrants in the field. A lot also depends on how states decide to use these technologies. On the one hand, states may want to use them in a good way: having advance knowledge of migration influxes could lead to better planning and better preparation of reception capabilities, which would help a state give better conditions to migrants. But on the other hand, we see that states may use these technologies to reinforce existing non-entry policies: using these AI capabilities not to support the reception of migrants, but rather to send them back to places where they may fear for their lives or their freedom, which would be contrary to international refugee law. It all depends on how states implement these technologies.
Tuba Bircan
As Ana said, the use of AI by humanitarian agencies and civil society is very common, and it is proving really effective. To give a particular example, Doctors Without Borders has been working with academics from Austria, as well as other scientists, to use remote sensing, earth observation data, to help internally displaced people after specific disasters. So they know where help is needed, when it’s needed, and what the size of the required resources will be. They can really use real-time data to facilitate humanitarian aid to people who are truly in need. Or Sea Watch, another civil society organization, which also has a lot of academics and scholars on board: they are trying to help refugee boats reach the coast safely by using big data and AI technologies. At a higher level, I can give two examples that are very much linked to migrant integration, so to the post-arrival, stay period of migration. One of them is the Nadine project, which tries [to discover] how to utilize big data and AI to support migrant integration. It is, I think, I’m not sure, influenced by a study at Stanford called GeoMatch, which uses previous data about immigrants to find the best location or locations for them, locations that will make their integration smooth and easy. And it was pretty successful in specific countries. [For example] in Switzerland, with refugees and their regional allocation, it worked pretty well. Of course, they considered education, language skills and previous labor market experience, and they also use information from different regions and areas in the country. So, they match migrants to places where they can integrate better.
So, there are some very useful applications of different AI tools, but, as Ana said, it is more about the intentions, and about what more could be done with what we already have in hand in terms of these emerging technologies.
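For readers curious about the mechanics, the GeoMatch-style allocation described above can be sketched, in very simplified form, as a capacity-constrained matching: a model predicts an integration score for each migrant-region pair, and an assignment step places each person where the predicted score is highest while respecting regional capacity. This is a hypothetical illustration, not the actual GeoMatch implementation; all names, scores and capacities below are invented.

```python
# Hypothetical sketch of GeoMatch-style allocation. In practice the scores
# would come from a model trained on historical integration outcomes
# (education, language skills, labor market experience, regional data).

def allocate(scores, capacity):
    """scores: {migrant: {region: predicted_score}}; capacity: {region: slots}.

    Greedily assigns each migrant to their best remaining region,
    processing (migrant, region) pairs from highest score downward.
    """
    remaining = dict(capacity)
    assignment = {}
    pairs = sorted(
        ((s, m, r) for m, regions in scores.items() for r, s in regions.items()),
        reverse=True,
    )
    for score, migrant, region in pairs:
        if migrant not in assignment and remaining.get(region, 0) > 0:
            assignment[migrant] = region
            remaining[region] -= 1
    return assignment

# Invented example: two migrants, two regions with one slot each.
scores = {
    "A": {"Zurich": 0.8, "Geneva": 0.6},
    "B": {"Zurich": 0.7, "Geneva": 0.5},
}
print(allocate(scores, {"Zurich": 1, "Geneva": 1}))
# → {'A': 'Zurich', 'B': 'Geneva'}
```

A greedy pass is used here only for brevity; a production system would more likely solve the assignment as an optimization problem so that overall predicted integration, not just each individual's best match, is maximized.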
Looking into the future, what do you think needs to be addressed as AI evolves?
Well, I think AI can be used to support human decision-making, and there are a number of examples we have talked about [where] AI could be helpful in supporting human decision-making, to streamline repetitive processes, notably those that are largely focused on reviewing paperwork. But, and I will make this a strong point, AI should not replace decision-making in migration. And even when it is supporting humans in their decision-making, I think it's essential to bear in mind that we humans also have what we call automation bias: the tendency to trust the solutions or outputs offered by machines, even when we have a feeling that the solution proposed by the machine is not correct. Have you ever trusted the GPS in your car and then found yourself driving down a very narrow road where cars should not be allowed? That's automation bias. You had the feeling that it was not right, but you said, oh, maybe the GPS is right, because it knows better. It's important that we take that into consideration. So, we might have AI to support decision-making, but we must not place too much faith in these systems, because they can have bias, and we also have our own automation bias. And on top of everything, migration is a very complex phenomenon that is not easily managed, and the decisions made in the context of migration have a very direct impact on the lives of individuals. These are decisions about whether someone gets a visa to a different country, which is a completely life-changing experience, or whether someone gets refugee status, which is immensely important for that individual and really life-changing. So, I think it's important that we raise awareness about the risks and the opportunities of using AI in migration, and that organizations and also states are more transparent about the way they use these systems in the context of migration management. That's something I would see for the future.
When we are talking about the direction of AI, we can of course discuss all the ethical concerns that we have, or all the potential challenges that we need to tackle. But it is also important to acknowledge that there is no way out; we cannot avoid AI. It is already here, and digitalization is getting further into our lives every other day. So, there is no way to escape. Which brings us to the fact that we should also talk about our role, as a public, as human beings, as migrants, in shaping, or having a say in, the things that are being done. What I'm trying to say is that when you talk about AI on the street with regular people, what they say at the very beginning, as you said, is that they are thinking about self-driving cars or very complex robots. But AI is everywhere in our daily lives, and it is produced mostly by computer scientists and developed by data scientists. So, I think it is really important to have more social scientists involved in the development and implementation of AI in different areas, not only migration, because AI is mimicking us humans, so the human involvement cannot be overlooked in its development and implementation. I think we will need more social scientists, but also different stakeholders at the table, because we are, in fact, a bit scared, maybe, of all these potential misuses, or the biases, or the ethical concerns, but we will be the ones who solve them. So, in short, I'll say that we need more stakeholder involvement in AI evolution and use, and I think social scientists should have more to say.
Thanks to Professor Beduschi and Professor Bircan for joining me today, and thank you for listening. This is a CERC Migration and openDemocracy podcast produced in collaboration with LEAD Podcasting. If you enjoyed the episode, subscribe to Borders & Belonging on Apple, Spotify, or wherever you get your podcasts. For more information on the use of AI in migration management, please visit the show notes. I'm Maggie Perzyna. Thanks for listening!
This article is published under a Creative Commons Attribution-NonCommercial 4.0 International licence. If you have any queries about republishing please contact us. Please check individual images for licensing details.