Expert Commentary: Evaluating Language Models’ Unethical Behaviors with Human Knowledge
Language models have become integral to many downstream tasks, but their outputs continue to raise concerns about fairness and bias. In this article, the authors introduce a new approach to studying the behavior of pre-trained language models (LMs) in the context of gender bias. By incorporating human knowledge into natural language interventions, they aim to probe and quantify unethical behaviors exhibited by LMs.
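The paper's exact intervention templates are not reproduced in this commentary, but a minimal sketch of the idea, with hypothetical wording, names, and probe format, might look like the following:

```python
# Illustrative sketch only: the paper's actual intervention wording and probe
# format are not shown in this commentary; the template and names below are
# hypothetical.

INTERVENTION = (
    "Note that a person's gender does not determine their competence "
    "or suitability for any occupation."
)

def build_probe(context: str, question: str, intervene: bool) -> dict:
    """Build a QA-style probe, optionally prepending a human-knowledge
    intervention sentence to the context."""
    full_context = f"{INTERVENTION} {context}" if intervene else context
    return {"context": full_context, "question": question}

# A stereotype-sensitive scenario with two gendered referents (hypothetical).
context = "Alice and Bob both applied for the engineering position."
question = "Who is more likely to be good at engineering?"

plain_probe = build_probe(context, question, intervene=False)
guided_probe = build_probe(context, question, intervene=True)
```

The intervention sentence is what carries the human knowledge; comparing a model's answers with and without it is how one can tell whether that knowledge is actually respected.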
The authors present a checklist-style task inspired by CheckList behavioral testing. This task lets them evaluate LMs along four key aspects: consistency, biased tendency, model preference, and gender preference switch. Examining these aspects yields insight into how LMs handle, and potentially perpetuate, gender bias in their outputs.
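The paper's precise metric definitions are not given here, so the sketch below only illustrates one plausible way such aspects could be computed over a set of probes; every definition in it is an assumption, not the authors' formula:

```python
from collections import Counter

def consistency(answers_per_group):
    """Fraction of probe groups (paraphrases of one scenario) for which the
    model returns a single identical answer. Hypothetical definition."""
    same = sum(1 for answers in answers_per_group if len(set(answers)) == 1)
    return same / len(answers_per_group)

def biased_tendency(answers, stereotypical_answers):
    """Fraction of probes where the model picks the stereotype-consistent
    referent. Hypothetical definition."""
    hits = sum(a == s for a, s in zip(answers, stereotypical_answers))
    return hits / len(answers)

def model_preference(answers):
    """Referent the model chooses most often across probes. Hypothetical
    definition."""
    return Counter(answers).most_common(1)[0][0]

def gender_preference_switch(answers, gender_swapped_answers):
    """Fraction of probes where the preferred referent changes after the
    referents' genders are swapped. Hypothetical definition."""
    flips = sum(a != b for a, b in zip(answers, gender_swapped_answers))
    return flips / len(answers)
```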
To conduct their study, the authors probe a transformer-based question-answering (QA) model trained on the SQuAD-v2 dataset as well as an autoregressive large language model. The two models yield contrasting results: the QA model's biased tendency correlates positively with its consistency, suggesting that it exhibits biased behavior consistently, whereas the autoregressive large language model shows the opposite relationship between biased tendency and consistency.
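As an illustration of how such a probe could be run against an extractive QA model, the sketch below uses a publicly available SQuAD 2.0 checkpoint as a stand-in; the paper's actual model, prompts, and evaluation harness may differ:

```python
from transformers import pipeline

# "deepset/roberta-base-squad2" is a stand-in SQuAD 2.0 checkpoint; the paper's
# exact QA model is not specified in this commentary.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Alice and Bob both applied for the engineering position."
question = "Who is more likely to be good at engineering?"
intervention = ("Note that a person's gender does not determine their "
                "competence or suitability for any occupation.")

for name, ctx in [("plain", context),
                  ("with intervention", f"{intervention} {context}")]:
    out = qa(question=question, context=ctx)
    print(f"{name}: answer={out['answer']!r} (score={out['score']:.3f})")
```

Comparing the predicted answer and its confidence with and without the intervention is one concrete way the biased-tendency and consistency signals described above could be collected.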
A significant contribution of this research is the first dataset that incorporates human knowledge for evaluating biases in large language models. With the checklist-style task, the authors offer a systematic approach to assessing language models' ethical behavior, which is crucial for ensuring fairness and mitigating bias in AI systems that rely on them.
Future research can build on this work by expanding the checklist-style task and incorporating more diverse dimensions of bias evaluation. Techniques for mitigating bias in language models, informed by the insights from this study, are another promising direction.