Abstracts of keynote talks and posters


Manuella van der Put: Artificial Intelligence in the process of judicial decision-making.

The study I conducted for my PhD dissertation focuses on the question: to what extent can artificial intelligence play a role in the process of judicial decision-making, and what requirements apply to this use? The study comprised a literature study, a sociological-empirical study and a practical study. The results of these studies were brought together and resulted in conclusions, in several design criteria that AI systems in the administration of justice should meet, and in questions for further discussion within the field of the administration of justice with judges and data science experts.

The main conclusion is that AI can play a role in various parts of the process. For certain cases the computer can make decisions autonomously, and for almost all cases it can play an important supporting role in terms of efficiency and quality. The study also shows in which respects the computer is not (yet) able to take over the work of judges. There will remain a clear role and added value for the individual judge, but at the same time AI can play an important role, for example in meeting, or continuing to meet, the desired timeliness and quality. It is therefore the responsibility of the administration of justice to embrace AI.

Giovanni Sartor: Predictive Justice.

Suzanne Flynn, Predicting crimes or presuming guilt? The potential impact of AI in predictive policing in the context of criminal trials in the EU and on the rights of the accused (poster)

Predictive policing has become a common tactic of law enforcement to gain insight into potential future offenders and their paths towards crime. AI software such as ProKid, POL-INTEL, PET-INTEL and RADAR-iTE has been used in three EU Member States for this purpose. When subjects of these predictive policing measures later find themselves as defendants in criminal trials, questions arise regarding the use, at trial stage, of knowledge produced by AI-based predictive policing software. Although predictive policing software may be beneficial in supporting judicial decision-making, the associated risks, regarding defence rights in particular, should also be considered. This poster looks at the following examples of predictive policing AI software: ProKid, POL-INTEL, PET-INTEL and RADAR-iTE, and at how the use of this AI-generated insight at trial stage could potentially impact criminal trials and the rights of the accused. In addition, the recent IMCO-LIBE report (on behalf of the European Parliament) on the AI Act, which favours prohibiting predictive policing against individuals, will be discussed.

Saar Hoek, Explainability and the GDPR (poster)

Over the last couple of years, terms like ‘transparency’, ‘interpretability’ and ‘explainability’ have become ubiquitous in artificial intelligence (AI) research. This can partly be attributed to growing concerns, scandals and media coverage about bias and trust in algorithms, and perhaps also partly to the implementation of the General Data Protection Regulation (GDPR) in the European Union (EU). Since its entry into force, there has been much discussion about the consequences of the GDPR for AI. In particular art. 22, which grants data subjects the right not to be subject to automated decision-making, and arts. 13-15, which provide that a data subject has the right to ‘meaningful information about the logic involved’ when automated decision-making occurs, could have far-reaching consequences for developers and users alike. Whether the intent of these articles is to provide a ‘right to an explanation’ to a data subject and, if so, how such an explanation should be defined, is less than clear. The purpose of this paper is to examine the nature of explanations in the context of AI and the GDPR, as well as to analyse potential implementations of explanations and how well they adhere to both the teleological and concrete nature of the GDPR.

Daphne Odekerken, Transparent human-in-the-loop classification of fraudulent web shops (poster)

Every year, the Dutch police receives thousands of complaints about online trade fraud, many of which concern web shops that do not deliver goods. However, in many cases the customer fell victim to a malfunctioning delivery service rather than to fraud. The Dutch police has a national centre for counteracting online trade fraud, where analysts manually check suspicious web shops. This is a combination of routine work (that could be automated) and more detailed investigation (that should be done by humans). Given the high number of suspicious web shops and the necessity to act quickly, the police is experimenting with artificial intelligence (AI) to speed up the process. Naturally, this should happen in a responsible way: the AI techniques should be able to adapt to a dynamic environment, keep the human analysts in the loop and explain decisions. To account for this, we introduce an agent architecture for web shop classification that relies on static and dynamic algorithms for both rule-based and case-based reasoning. By combining dynamic argumentation with legal case-based reasoning, we create an agent that is able to explain its decisions at various levels of detail, adapts to new situations and keeps the analyst in the loop.