Aligning designed and learning systems for responsible hybrid intelligence

This project is defined within the scope of the Hybrid Intelligence: Augmenting Human Intellect project. It is focused on Responsible Hybrid Intelligence and is a collaboration between partners from Utrecht University and the University of Groningen.

What is our aim?

A core puzzle in today’s artificial intelligence is how knowledge, reasoning and data are connected. To what extent can knowledge used in reasoning be recovered from data with implicit structure, and can it be recovered correctly? Conversely, to what extent does knowledge determine the structure of the data that results from reasoning with it? Can knowledge be selected such that the output data generated by reasoning with it has desired properties?

By investigating the relations between knowledge, reasoning and data, we aim to develop mechanisms for the verification and evaluation of hybrid systems that combine manual knowledge-based design and learning from data. The focus will be on structures used in reasoning and decision-making, in particular logical and probabilistic relations (as in Bayesian networks), and reasons pro and con with their exceptions (as in argument-based decision making).
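To make the second kind of structure concrete, here is a toy sketch in Python of an argument-based decision rule with reasons pro and con and an exception. The feature names and the simple counting-based weighing scheme are hypothetical illustrations, not the project's actual formalism.

```python
# Illustrative sketch only: a toy argument-based decision rule with
# reasons pro and con and a defeating exception. All feature names
# (meets_requirements, fraud_detected, ...) are hypothetical.

def decide(case: dict) -> bool:
    """Grant a claim when the reasons pro outweigh the reasons con,
    unless an exception applies."""
    # Reasons pro and con the decision, read off from case features.
    pro = [case["meets_requirements"], case["long_standing_resident"]]
    con = [case["incomplete_application"]]

    # Exception: detected fraud defeats all reasons pro, whatever the balance.
    if case["fraud_detected"]:
        return False

    return sum(pro) > sum(con)

# A case where the reasons pro prevail.
print(decide({"meets_requirements": True, "long_standing_resident": True,
              "incomplete_application": True, "fraud_detected": False}))  # True

# The same case with the exception triggered.
print(decide({"meets_requirements": True, "long_standing_resident": True,
              "incomplete_application": True, "fraud_detected": True}))   # False
```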

Why is this important?

In Responsible HI, there is a need to ensure that the behaviour of HI systems is aligned with legal and moral considerations: the input and output of a system must meet a given set of principles. Verification, evaluation and interaction mechanisms are needed that ensure such alignment, show to what extent alignment has been achieved, and help improve the alignment.
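As one possible shape for such a verification mechanism, the following minimal sketch checks a system's decisions against a set of principles expressed as predicates over input/output pairs. The interfaces and the example principle are our own hypothetical assumptions, not the project's actual design.

```python
# A minimal sketch of one possible verification mechanism, assuming the
# HI system is wrapped as a function from cases to boolean decisions and
# each principle as a predicate over (input, output) pairs. All names
# here are hypothetical.

from typing import Callable, Iterable

Case = dict
Principle = Callable[[Case, bool], bool]

def verify(system: Callable[[Case], bool],
           principles: list[Principle],
           test_cases: Iterable[Case]) -> list[tuple[Case, int]]:
    """Return (case, principle index) pairs where the system's output
    violates a principle; an empty list means full alignment on the
    tested cases."""
    violations = []
    for case in test_cases:
        decision = system(case)
        for i, principle in enumerate(principles):
            if not principle(case, decision):
                violations.append((case, i))
    return violations

# Example principle: a case with detected fraud must never be granted.
no_grant_on_fraud = lambda case, decision: not (case["fraud_detected"] and decision)
```

A report of violations, rather than a single pass/fail verdict, fits the stated goal of showing to what extent alignment has been achieved and where it can be improved.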

How will we approach this?

In a series of experiments, we aim to investigate to what extent the structure in artificial data sets with known structure can be correctly recovered automatically. Here we build on initial research showing that structure can indeed be recovered to some extent, but that the good performance of a learning system can also rest on an unsound rationale, i.e., on an alternative structure that happens to yield the same good performance [Steging et al., 2019]. Different kinds of structures will be investigated: logical and probabilistic relations, and reasons (pro and con) and exceptions. Building on the results of these experiments, we aim to develop interactive alignment protocols that allow the human user to guide the system towards correct knowledge grounded in data, and towards output data with desired properties (e.g., for handling biased data). Case studies, initially in the legal domain, will serve as a testing ground.
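To make this experimental setup concrete, the following minimal sketch, in the spirit of such experiments, generates an artificial data set from a known rule, trains an off-the-shelf learner, and probes it on cases that separate the true structure from an unsound shortcut. The rule, the feature names, and the use of scikit-learn are our own illustrative assumptions.

```python
# A minimal sketch: artificial data with known structure, a learner
# trained on it, and probe cases that distinguish the true rule from
# an unsound shortcut. The rule y = (a AND b) OR c is hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Artificial data set with known structure: label follows (a AND b) OR c.
X = rng.integers(0, 2, size=(1000, 3))   # columns: a, b, c
y = (X[:, 0] & X[:, 1]) | X[:, 2]

model = DecisionTreeClassifier().fit(X, y)

# Probe cases separating the true rule from the shortcut "y = c":
# (a=1, b=1, c=0) is positive under the true rule but negative under
# the shortcut; (a=1, b=0, c=0) is negative under both.
probes = np.array([[1, 1, 0], [1, 0, 0]])
print(model.predict(probes))             # the true rule predicts [1, 0]
```

A learner that answers the probes incorrectly may still score well on randomly drawn test data, which is precisely the kind of unsound rationale the experiments are designed to expose.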