TAL Journal: Explainability of NLP models (64-3)

Neural models build language representations without direct supervision and have contributed to important advances in language and speech processing systems in recent years. The representations built by pre-trained language models (such as BERT) make it possible to develop systems for many languages and domains. However, model decisions are not interpretable: it is often impossible to know why a system makes a specific decision, and the reasons behind the good performance of state-of-the-art models remain largely unknown. A growing number of studies attempt to answer these questions by addressing the explainability of NLP systems, exploring questions related to:

  • model analysis and interpretability, aimed at identifying the information encoded in neural representations [1];
  • the development of methods aimed at describing and justifying the reasoning steps that led a system to a specific answer, for example with chain-of-thought prompting (model explainability) [2];
  • the development of "safe" systems, capable of self-justification (model accountability).


This special issue of the TAL journal aims to address explanation and analysis methods that have been proposed for NLP systems, as well as our current understanding of the linguistic capabilities of neural models and their limitations.

We welcome submissions on the following topics (non-exhaustive list):

  • Methods aimed at explaining the decisions of neural networks (identification of important input elements, explanations in textual form, …), and in particular methods that establish causal relations between a prediction and the input (or part of the input);
  • Probing methods aimed at identifying linguistic and world knowledge encoded in neural representations;
  • Methods that distinguish the information encoded by neural networks from the information actually used by the models to make a decision (correlation versus causation);
  • Methods inspired by analysis methodology from related domains (experimental linguistics, computer vision, psychology, …);
  • Bias identification in neural language models;
  • Prompting methods that generate explanations as text [2] or as structured representations [4];
  • Study of artificial languages or of linguistically motivated examples;
  • Evaluation of explanation methods, and in particular methods aimed at assessing the faithfulness of explanations [3].

[1] Interpretability and Analysis in Neural NLP (Belinkov et al., ACL 2020)
[2] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., NeurIPS 2022)
[3] Towards Faithful Model Explanation in NLP: A Survey (Lyu et al., arXiv 2023)
[4] Causal Reasoning of Entities and Events in Procedural Texts (Zhang et al., EACL Findings 2023)

IMPORTANT DATES

  • Submission deadline: 15 November 2023 (extended from 15 October 2023)
  • Notification to the authors after the first review: January 2024
  • Notification to the authors after the second review: April 2024
  • Publication: September 2024

THE JOURNAL

TAL (Traitement Automatique des Langues / Natural Language Processing) is an international journal published since 1959 by ATALA (the French Association for Natural Language Processing, http://www.atala.org) with the support of the CNRS (French National Centre for Scientific Research). The journal is now published electronically, with print on demand.
