XLoKR 2021

Explainable Logic-Based Knowledge Representation

News

XLoKR recordings

The recordings of XLoKR 2021 are available on YouTube. Enjoy!

Invited Speaker Change

Due to illness, Sheila McIlraith will not be able to deliver her talk. Pietro Baroni has kindly agreed to step in at very short notice to talk about "Explanations from Gods of Olympus to black-box models: a layman perspective".

Preliminary Program available

The latest version of the programme can be found here.

Accepted papers online

A list of all accepted papers can be found below.

Keynote speakers confirmed

Sheila McIlraith and Joe Halpern will deliver the keynotes at XLoKR21!

About

XLoKR 2021 is the second workshop in the XLoKR series. It is co-located with the 18th International Conference on Principles of Knowledge Representation and Reasoning, which will take place virtually in Hanoi, Vietnam.

Motivation and Topic

Embedded or cyber-physical systems that interact autonomously with the real world, or with users they are supposed to support, must continuously make decisions based on sensor data, user input, knowledge acquired at runtime, and knowledge provided at design time. To make the behavior of such systems comprehensible, they need to be able to explain their decisions to the user or, after something has gone wrong, to an accident investigator.

While systems that use Machine Learning (ML) to interpret sensor data are very fast and usually quite accurate, their decisions are notoriously hard to explain, though considerable efforts are currently being made to overcome this problem. In contrast, decisions made by reasoning about symbolically represented knowledge are in principle easy to explain. For example, if the knowledge is represented in (some fragment of) first-order logic, and a decision is made based on the result of a first-order reasoning process, then one can in principle use a formal proof in an appropriate calculus to explain a positive reasoning result, and a countermodel to explain a negative one. In practice, however, things are not so easy in the symbolic KR setting either. For example, proofs and countermodels may be very large, and it may thus be hard to comprehend why they demonstrate a positive or negative reasoning result, in particular for users who are not experts in logic. To leverage explainability as an advantage of symbolic KR over ML-based approaches, one therefore needs to ensure that explanations can really be given in a way that is comprehensible to different classes of users (from knowledge engineers to laypersons).
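To make this contrast concrete, here is a minimal, purely illustrative sketch (not part of the workshop material; all names are hypothetical) of how a symbolic reasoner can produce explanations as a by-product. A tiny forward chainer over propositional Horn rules records which rule derived each fact, so a positive answer comes with a proof tree, while a negative answer is witnessed by the set of all derivable facts, which forms a countermodel in the Horn setting.

  # Illustrative sketch: a tiny propositional Horn-clause reasoner
  # whose answers come with explanations.

  def forward_chain(facts, rules):
      """Derive all consequences; for each derived atom, remember
      the rule (body, head) that produced it."""
      derived = {f: None for f in facts}  # None marks a given fact
      changed = True
      while changed:
          changed = False
          for body, head in rules:
              if head not in derived and all(b in derived for b in body):
                  derived[head] = (body, head)
                  changed = True
      return derived

  def explain(atom, derived, indent=0):
      """Print a proof tree for a derivable atom."""
      pad = "  " * indent
      rule = derived[atom]
      if rule is None:
          print(pad + atom + "  (given)")
      else:
          body, head = rule
          print(pad + atom + "  (from rule " + ", ".join(body) + " -> " + head + ")")
          for b in body:
              explain(b, derived, indent + 1)

  facts = {"A"}
  rules = [({"A"}, "B"), ({"B"}, "C"), ({"C", "D"}, "E")]
  derived = forward_chain(facts, rules)

  if "C" in derived:
      explain("C", derived)   # positive answer: a proof tree
  print("E" in derived)       # False: the least model {A, B, C}
                              # is a countermodel for E

Real KR systems of course work with far richer logics, but the principle carries over: a positive result yields a proof that can be presented stepwise, and a negative result yields a model in which the candidate consequence fails. The difficulty addressed by this workshop is that, at realistic scale, neither artifact is automatically comprehensible.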

The problem of explaining why a consequence does or does not follow from a given set of axioms has been considered in full first-order theorem proving for at least 40 years, though usually with mathematicians as the intended users. In knowledge representation and reasoning, efforts in this direction are more recent and have usually been restricted to sub-areas of KR such as AI planning and description logics. The purpose of this workshop is to bring together researchers from different sub-areas of KR and automated deduction who are working on explainability in their respective fields, with the goal of exchanging experiences and approaches. A non-exhaustive list of areas to be covered by the workshop is the following:

  • AI planning
  • Answer set programming
  • Argumentation frameworks
  • Automated reasoning
  • Causal reasoning
  • Constraint programming
  • Description logics
  • Non-monotonic reasoning
  • Probabilistic representation and reasoning

Accepted Papers

Important dates


Paper registration (EasyChair): July 8, 2021 (extended from July 2, 2021)

Paper submission: July 12, 2021 (extended from July 2, 2021)

Notification: August 13, 2021 (extended from August 6, 2021)

Camera-ready papers: September 6, 2021

Workshop dates: November 3-5, 2021 (exact date TBD)

Early-bird registration deadline: TBD

Paper Submission

Submit your challenges or solutions!

Researchers interested in participating in the workshop should submit extended abstracts of 2-5 pages (excluding references) on topics related to explanation in logic-based KR. Papers should be formatted in Springer LNCS style and must be submitted via our EasyChair submission page. The workshop will have informal proceedings; hence, in addition to new work, papers covering results that have recently been published or will be published at other venues are also welcome.

Submissions will mainly be evaluated on the contribution they bring to the workshop, e.g., on their potential to evoke interesting discussions. In addition to more traditional solution papers presenting methods for explainable knowledge representation, we strongly encourage authors to submit papers describing concrete challenges for the community.

Organization

Organizing Committee

Program Committee

  • Franz Baader
  • Sander Beckers
  • Bart Bogaerts
  • Alex Borgida
  • Stefan Borgwardt
  • Gerhard Brewka
  • Tathagata Chakraborti
  • Jorge Fandinno
  • Sarah Alice Gaggl
  • Ruth Hoffmann
  • Joerg Hoffmann
  • Thomas Lukasiewicz
  • Pierre Marquis
  • Cristian Molinaro
  • Rafael Peñaloza
  • Nico Potyka
  • Francesco Ricca
  • Zeynep G. Saribatur
  • Stefan Schlobach
  • Mohan Sridharan
  • Francesca Toni