
AI in Court

A UNA Europa Digitalized! Master class, Krakow (Poland), 29 August - 2 September


Lecturer

Henry Prakken, Department of Information and Computing Sciences, Utrecht University and Faculty of Law, University of Groningen and Department of Law, European University Institute, Florence.

Master Class Topic

Driven by the spectacular progress in AI in recent years, especially in machine learning and natural-language processing, there has been increasing attention to the question of how artificial intelligence (AI) can support judicial decision-making. In particular, algorithmic case-outcome predictors have received much attention. Some hope that supporting judges with such algorithms can increase the efficiency, predictability and consistency of judicial decision-making. It has even been suggested that such algorithms could be used to automate decision-making in routine cases, so that judges have more time for complex cases.

Others argue that the claimed benefits of such algorithms rest on misunderstandings about their nature, and that supporting or automating judicial decision-making requires a very different kind of AI system, namely, knowledge-based algorithms that can apply legal knowledge to justify legal decisions. A more general concern about AI support for judges, whether data-driven or knowledge-based, is that it would force a mechanical application of the law, leaving no room for individual justice or for creative interpretation of the law.

Another type of predictive algorithm is the so-called algorithmic expert, which informs judges about a matter of fact relevant to a decision. For instance, algorithms that predict the probability of recidivism are already in use to inform decisions about requests for bail or early release from prison. While some claim that the use of algorithmic experts can lead to more accurate decisions of fact, others fear that such algorithms may be biased against minorities.

More generally, data-driven approaches have led to increasing concern about the fairness and explainability of algorithmic (support for) decision-making. Data collection may be biased, data may be outdated, incomplete, incorrect or sensitive, and the models learned from the data may be hard to inspect and explain.

The aim of this master class is to discuss these issues from a legal-theoretical and philosophical perspective.

Master Class setup

On Day 1, after a welcome and getting-to-know-each-other session, I will give a two-hour introduction to AI & Law (including a 20-minute break), followed by a plenary discussion.

On Days 2-4 there will be a two-hour lecture by me (including a 20-minute break) before lunch, followed by an interactive two-hour session (plus a 20-minute break) after lunch with discussion of the morning lecture (though I will allow for discussion in the morning too, so the division between these two parts may be somewhat blurry). For each of Days 1-4 there are two or three papers of primary reading and a few papers of secondary reading. I expect the students to have read at least the primary reading in advance. The secondary reading can serve as inspiration for the student presentations on Day 5. The reading lists may still be extended before and during the master class.

Day 5 (Friday, 2 September) will consist of student presentations (in pairs), plus discussion, on a topic of their choice related to the master class. During the week the students will therefore have to work in pairs on their presentation.

Programme


Monday, 29 August: Introduction to master class and AI & Law

12:00-13:00: Lunch

13:00-13:30: Welcome, getting to know each other

13:30-14:20 and 14:40-15:30: Lecture on Introduction to AI & Law.

15:30-16:00: Discussion.

Primary reading:

Secondary reading:

Powerpoint slides (see also the notes).

19:00-23:00: Master class dinner (details to be provided).


Tuesday, 30 August: Knowledge-based approaches


10:30-11:20 and 11:40-12:30: Lecture on Knowledge-based approaches

12:30-13:30: Lunch

13:30-14:20 and 14:40-15:30: Discussion.

Primary reading:

Secondary reading:
  • S. Zouridis, M. van Eck & M. Bovens, Automated discretion. In P. Hupe & T. Evans (eds): Discretion and the Quest for Controlled Freedom, pp. 313-329. Palgrave Macmillan, London 2020.

Powerpoint slides (see also the notes).


Wednesday, 31 August: Machine-learning approaches


10:30-11:20 and 11:40-12:30: Lecture on Machine-learning approaches

12:30-13:30: Lunch

13:30-14:20 and 14:40-15:30: Discussion.

Primary reading:

Secondary reading:
  • T.J.M. Bench-Capon, The need for good old fashioned AI and law. In W. Hötzendorfer, C. Tschohl & F. Kummer (eds): International Trends in Legal Informatics: A Festschrift for Erich Schweighofer, pp. 23-36. Editions Weblaw, Bern.

Powerpoint slides (see also the notes).


Thursday, 1 September: Responsible and explainable AI & Law


10:30-11:20: Lecture on algorithmic bias
11:40-12:30: Discussion
12:30-13:30: Lunch

13:30-14:20: Lecture on explaining algorithmic legal decision-making
14:40-15:30: Discussion

Primary reading:

Secondary reading:

Powerpoint slides (see also the notes).


Friday, 2 September: Student presentations


10:30-11:20: Team 1
11:40-12:30: Team 2

12:30-13:30: Lunch

13:30-14:20: Team 3
14:40-15:30: Concluding discussion