Abstracts of keynote talks


Nadia Purtova: When cognitive legal dreams and syntactic information machines collide

Information is at the heart of any human interaction. Therefore, the law has regulated information for as long as it has regulated human behaviour. Today, algorithms, and specifically AI, are in the spotlight as both a blessing and a curse. Regulatory agendas have marked AI and algorithms as a high priority, and we face a new wave of laws that regulate data, digital information, and the machines that process it. The trouble is that the law still tries to control the impact of information processed by machines using rules based on human-centric ideas of information: how humans deal with information, how humans think. Regulating information machines based on how humans deal with information is a misdirected and futile effort. The talk will illustrate this with the example of the data protection right to explanation of automated decisions and the recent relevant case law of the EU Court of Justice.

Kate Vredenburg: What should we explain with explainable AI?

Explaining opaque models is important for serving various practical and moral ends. XAI - both the practice and the philosophy thereof - seems to be out of step with best practice in science communication and evidence-based policy. While these fields aim to explain some aspect of the world using scientific models, debates on XAI assume that the model itself is, first and foremost, the correct target of the explanation. In this talk, we examine that assumption, using the example of explanations of lending decisions. Consumer credit is a fruitful case study for the moral importance of explanation because it has some of the best-regulated explanatory requirements of any domain, and the ways in which those explanations fall short are instructive. We argue that, in the domain of consumer lending, XAI is correct to focus on explanations of how the model works, due to the instrumental value of credit. The argument for this conclusion sets up an overlooked issue about the right to explanation: its distributional consequences. The right to explanation seems to be objectionably inegalitarian, at least in the case of consumer credit.

Wijnand van Woerkom: Assessing recidivism risk prediction with case-based reasoning

Case-based reasoning is a central theme of AI and law research, providing formal models to assess case-base consistency. I present four theorems describing how statistical modelling techniques, feature modifications, and data binning affect case-base consistency. Most importantly, I demonstrate that generalized linear models necessarily yield consistent decisions. I further show that adding input features increases consistency, while removing features decreases it; similarly, output binning increases consistency, whereas input binning decreases it. Each of these theorems is illustrated through a consistency analysis of the COMPAS program - a widely used recidivism risk prediction tool that has been at the center of debates on fairness and interpretability for many years.
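
To make the notion of case-base consistency concrete, the sketch below implements a simplified a fortiori consistency check: a case base counts as inconsistent when one case dominates another on the facts yet receives a strictly weaker outcome. The dominance criterion, the feature orientation, and the toy data are illustrative assumptions, not the formal model or the COMPAS analysis from the talk.

```python
from itertools import combinations

# Minimal sketch of a simplified a fortiori consistency check for a case base.
# Assumptions (illustrative only): each case is a pair (facts, outcome), every
# feature is oriented so that a higher value favours the positive outcome
# (1 = predicted high risk), and the toy data below is invented, not COMPAS.

def at_least_as_strong(a, b):
    """Fact situation `a` favours the positive outcome at least as much as `b`
    when it dominates `b` component-wise."""
    return all(x >= y for x, y in zip(a, b))

def is_consistent(case_base):
    """A case base is inconsistent when one case dominates another on the
    facts yet received a strictly weaker outcome."""
    for (facts_a, out_a), (facts_b, out_b) in combinations(case_base, 2):
        if at_least_as_strong(facts_a, facts_b) and out_a < out_b:
            return False
        if at_least_as_strong(facts_b, facts_a) and out_b < out_a:
            return False
    return True

# Toy cases: (prior_offences, age_risk_score) -> outcome.
cases = [((3, 0.7), 1), ((5, 0.9), 1), ((1, 0.2), 0)]
print(is_consistent(cases))   # True

cases.append(((6, 0.9), 0))   # dominates a positively decided case, labelled 0
print(is_consistent(cases))   # False
```

On this simplified reading, the theorems described above can be pictured as statements about which modelling choices (for instance, fitting a generalized linear model, or binning outputs rather than inputs) can or cannot introduce such conflicting pairs into the case base derived from a model's decisions.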