This blog has been co-authored with Alina Glaubitz. More information about Alina is available at the end of this blog.

Covid-19 has brought immediate changes to the rights of free movement in the European Union as each Member State has struggled to control the spread of the virus. Temporary travel restrictions were introduced and, Brexit or not, this affects us here in the UK.
Though the European Commission now encourages Member States to lift the restrictions imposed in response to the pandemic, it continues to recognise that some measures must remain, such as 14-day quarantine rules, social distancing and limitations on social gatherings. Our Foreign Office also still advises against all but essential travel, though this is slowly changing for countries such as Greece and Italy.
Above all, it is clear that on either side of the Channel, identifying and tracking visitors as effectively as possible is now essential. In this blog we look at some of the data protection and fundamental rights issues this monitoring process has thrown up.
How has the EU been planning to make this happen? The short answer is through using as much clever technology as possible. The longer answer is by using a system called iBorderCtrl, which has already been trialled in Hungary, Latvia, and Greece.
This blog considers the legal issues that arise when monitoring is undertaken through AI systems at borders.
Smart Borders
Even before the pandemic the EU had developed a plan to have a system of “Smart Borders”.
Smart Borders aim to make border-crossing quick and easy for those who are welcome, and difficult and time-consuming for those who are not.
The EU’s Smart Border proposal is the European Travel Information and Authorisation System (ETIAS).[1] This broad policy proposal is intended to introduce a largely automated IT system which identifies risks (ranging from security to epidemic risks) posed by visa-exempt visitors travelling to the Schengen States.
iBorderCtrl, which stands for Intelligent Portable Border Control System, is the name of the technology being explored as a means of implementing the ETIAS.
There is much to like about the proposal for ETIAS and the EU’s intentions in developing iBorderCtrl. Business needs to move goods as fast and as cheaply as possible. Workers want the shortest commuting time. Holiday makers want no delay. Human traffickers need to be identified and stopped. Technology ought to be able to help achieve all these aims.
The question, however, is whether the iBorderCtrl system has any downsides: would it be lawful and effective if, as seems likely, it is placed at the heart of the ETIAS?
As it will use Artificial Intelligence (AI), Machine Learning (ML) and probably Automated Decision Making (ADM),[2] this question goes to the heart of the kinds of problem with which the European Commission will have to grapple in deciding whether and, if so how, AI should be further regulated.[3]
Tell me more about iBorderCtrl…
Strictly speaking, iBorderCtrl is still only a project. The project has been designed to monitor and control the Schengen zone’s land borders by facilitating thorough checks for third country nationals intending to cross EU borders. That will almost certainly mean UK citizens. The project’s stated objectives are to improve the efficacy, accuracy, and speed of border control, while reducing costs.
How will this be done? The iBorderCtrl website says –
iBorderCtrl solution has been designed as a highly customised system which offers a wide range of capabilities such as:
- New two-stage procedure for border crossing (pre-registration and check at the Border Crossing Point (BCP))
- Provide early-warning capabilities due to risk assessment
- Easily expandable with new technologies or functionalities
- Adaptability to the requirements of each client
- Three different interfaces, one for each type of user of the system (Travellers, Border Guards, Border Managers)
- Document validation capabilities
- Face recognition capabilities
- Portable unit with Commercial Off-The-Shelf technologies
- Hidden human detection capabilities
- Data Analytics and statistics
A key part of this system is its two-stage procedure. The aim is to get the checking process started before reaching the border to be crossed. Last year journalists looked at the way this would work. Their findings were –
Prior to your arrival at the airport, using your own computer, you log on to a website, upload an image of your passport, and are greeted by an avatar of a brown-haired man wearing a navy blue uniform. “What is your surname?” he asks. “What is your citizenship and the purpose of your trip?” You provide your answers verbally to those and other questions, and the virtual policeman uses your webcam to scan your face and eye movements for signs of lying. At the end of the interview, the system provides you with a QR code that you have to show to a guard when you arrive at the border. The guard scans the code using a handheld tablet device, takes your fingerprints, and reviews the facial image captured by the avatar to check if it corresponds with your passport. The guard’s tablet displays a score out of 100, telling him whether the machine has judged you to be truthful or not. A person judged to have tried to deceive the system is categorized as “high risk” or “medium risk,” dependent on the number of questions they are found to have falsely answered. Our reporter — the first journalist to test the system before crossing the Serbian-Hungarian border earlier this year — provided honest responses to all questions but was deemed to be a liar by the machine, with four false answers out of 16 and a score of 48. The Hungarian policeman who assessed our reporter’s lie detector results said the system suggested that she should be subject to further checks, though these were not carried out. Travelers who are deemed dangerous can be denied entry, though in most cases they would never know if the avatar test had contributed to such a decision. The results of the test are not usually disclosed to the traveler.
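On the basis of that account alone, the decision logic appears to reduce to a score out of 100 and a risk band driven by the number of answers the machine judges false. The sketch below is purely illustrative: the real scoring method has never been disclosed, and the function name, scoring rule and band thresholds are all our assumptions.

```python
# Purely illustrative sketch: the actual iBorderCtrl scoring method is not
# public. The proportional scoring rule and the risk-band thresholds below
# are assumptions inferred from the journalists' account.

def categorise_traveller(false_answers: int, total_questions: int):
    """Return a hypothetical (score out of 100, risk band) pair."""
    truthful = total_questions - false_answers
    score = round(100 * truthful / total_questions)  # assumed: simple proportion
    if false_answers == 0:
        band = "low risk"
    elif false_answers <= 3:  # assumed cut-off between medium and high
        band = "medium risk"
    else:
        band = "high risk"
    return score, band

# The reporter's reported result: 4 answers out of 16 judged false.
print(categorise_traveller(4, 16))  # -> (75, 'high risk')
```

Notably, a simple proportional rule like this would give 4 false answers out of 16 a score of 75, not the 48 the reporter actually received, which suggests the real system weights answers in some undisclosed way. That gap is itself an example of the transparency problem discussed below.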
Should we be concerned?
Using AI lie-detectors raises concerns as to whether an algorithm can account for trauma, mental health conditions, disability, or cultural differences in communication. To discrimination lawyers the obvious concern is that the AI system may discriminate in so far as factors such as appearance, voice and presentation are taken into account. Certainly, there is plenty of evidence in relation to other AI systems that black people, women and disabled people can be disadvantaged where biometric data such as appearance and voice are assessed by a machine.[4]
Not only are there question marks over the impact which Smart Borders will have on people from the perspective of the principle of non-discrimination; there is also a concern about accuracy.
The journalists who, in July 2019, tested the AI system used to evaluate whether a prospective traveller is telling the truth through analysis of facial expressions, gaze, and posture, found a 25% error rate. Perhaps this can be reduced, but it seems likely that this system will have a significant error rate for the foreseeable future. If the system gives rise to prima facie discrimination and it is inaccurate, any justification defence is likely to fail.
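A rough, back-of-the-envelope calculation shows why an error rate of that order matters at scale. The daily traveller volume below is an assumed figure for illustration, not a reported one:

```python
# Illustrative arithmetic only: the daily traveller volume is assumed.
error_rate = 0.25            # error rate found in the journalists' test
travellers_per_day = 10_000  # hypothetical volume at a busy crossing

# If errors fell on honest travellers at that rate, the number wrongly
# flagged each day would be:
wrongly_flagged = int(error_rate * travellers_per_day)
print(f"{wrongly_flagged:,} travellers a day facing unwarranted extra checks")
# -> 2,500 travellers a day facing unwarranted extra checks
```

Even a substantially reduced error rate would still mis-score large absolute numbers of travellers at busy crossings.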
Moreover, an inaccurate system which processes sensitive personal data gives rise to really significant data protection and fundamental rights issues.
Would the use of iBorderCtrl be lawful under the GDPR?
iBorderCtrl must of course also comply with GDPR provisions, and this raises some important issues. The GDPR requires that informed consent be freely given by travellers before iBorderCtrl is used to process their data (see Article 7 and Article 9 GDPR), but will this really be the case if using this system becomes a precondition of entry? To ensure that consent is freely given, it seems likely that travellers should have the right to opt for an evaluation by a human border officer rather than by a machine. In any event, travellers have a right to rectification (Article 16 GDPR), and for that right to be meaningful there will again need to be a system of fair human reassessment at the border in the event of a “rejection” by iBorderCtrl.
Moreover, iBorderCtrl must be constructed in a way which is compatible with the right not to be subjected to automated individual decision-making, including profiling, where the decision produces legal effects concerning the traveller or similarly significantly affects them (Article 22 GDPR).
Is iBorderCtrl being challenged?
Patrick Breyer MEP is a committed activist for digital freedoms and for the protection of fundamental rights. He already has a series of European Court cases to his name. Breyer argues against lie detector bots saying, “… for stressed, nervous or tired people, such a suspicion-generator can easily become a nightmare. In Germany lie detectors are not admissible as evidence in court precisely because they do not work.”[5]
The EU’s Research Executive Agency (REA) has so far refused to disclose detailed information about the iBorderCtrl system, essentially on the basis that it is commercially sensitive and that disclosure would have an impact on competitiveness.[6]
On 15 March 2019, Patrick Breyer brought an action against the European Commission before the Court of Justice of the European Union (CJEU), on the basis that the Commission was responsible for the REA,[7] seeking disclosure of all the documents concerning the approval and execution of the iBorderCtrl project: Case T-158/19 Breyer v. Commission.
Conclusion
Should the CJEU rule in favour of the REA and maintain that iBorderCtrl’s documents remain undisclosed, norms of algorithmic transparency will be undermined. Without access to documents detailing the application of AI to travellers entering the Schengen zone, it will not be possible, or will at least be much more difficult, to challenge automated decision-making and for travellers to maintain their rights under the GDPR, which include a right to transparency.
Conversely, should the Court rule in favour of Mr Breyer, this might have a chilling effect on the development of bot-based border control systems which may alleviate the administrative and personnel burden of border security.
That said, it seems likely that the documents requested by Mr Breyer could well help ensure the transparency of AI systems and uphold the protection of the rights of individuals entering the Schengen zone. Only through disclosure can travellers appeal or request redress for harms caused by automated border control systems, particularly if these lead to discriminatory outcomes.
In short, getting Smart Borders right is really important. Any system must fully engage with the GDPR regulatory regime, and it must not operate in a discriminatory way that undermines the principle of equal treatment.
13 July 2020

Alina is pursuing a joint Bachelor and Master’s Degree in Political Science at Yale University, after which she aims to attend Law School in the US and pursue a career at the intersection of law and Artificial Intelligence. Aside from her studies, Alina volunteers as a paralegal at a law firm and a legal aid association in New Haven, and served as Editor-in-Chief of the Yale Human Rights Journal. Professionally, she has interned at international courts (ICC and ICTY), Allen & Overy, and a legal clinic in London (ATLEU). Alina has conducted research on the application of AI across migration, criminal justice, leadership, and electoral settings, and is writing her Master’s thesis on how the law could remedy harms caused by AI, looking into different biases in training datasets.
Alina can be reached at alina.glaubitz@yale.edu
[1] https://ec.europa.eu/home-affairs/what-we-do/policies/borders-and-visas/smart-borders/etias_en
[2] We refer to these generically as AI in this blog.
[3] A guide to the proposals currently being considered by the European Commission is available here: https://ai-lawhub.com/response-to-the-european-commission-papers-on-ai/
[4] See the blogs on our website www.ai-lawhub.com: “Covid-19: Facial Recognition Technology in the workspace: The answer to social distancing or discriminatory?” and “AI in Recruitment”.
[5] https://www.patrick-breyer.de/?p=590076&lang=en
[6] https://www.asktheeu.org/en/request/6091/response/19436/attach/html/2/Reply%20to%20initial%20application.pdf.html
[7] The case originally named the European Commission as the Respondent but by order of the General Court of 12 November 2019 the Research Executive Agency was substituted as Respondent.