LLMs

Biased Tales: Cultural and Topic Bias in Generating Children’s Stories

Stories play a pivotal role in human communication, shaping beliefs and morals, particularly in children. As parents increasingly rely on large language models (LLMs) to craft bedtime stories, the presence of cultural and gender stereotypes in these …

Measuring Gender Bias in Language Models in Farsi

As Natural Language Processing models become increasingly embedded in everyday life, ensuring that these systems can measure and mitigate bias is critical. While substantial work has been done to identify and mitigate gender bias in English, Farsi …

Can I Introduce My Boyfriend to My Grandmother? Evaluating Large Language Models Capabilities on Iranian Social Norm Classification

Creating globally inclusive AI systems demands datasets reflecting diverse social norms. Iran, with its unique cultural blend, offers an ideal case study, with Farsi adding linguistic complexity. In this work, we introduce the Iranian Social Norms …

Countering Hateful and Offensive Speech Online - Open Challenges

In today’s digital age, hate speech and offensive speech pose a significant challenge to maintaining respectful and inclusive online environments. This tutorial aims to provide attendees with a comprehensive understanding of the field by …

Overview of the Shared Task on Machine Translation Gender Bias Evaluation with Multilingual Holistic Bias

We describe the details of the Shared Task of the 5th ACL Workshop on Gender Bias in Natural Language Processing (GeBNLP 2024). The task uses a dataset to investigate the quality of Machine Translation systems on a particular case of gender robustness. …

Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)

This volume contains the proceedings of the Fifth Workshop on Gender Bias in Natural Language Processing held in conjunction with the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

FairBelief - Assessing Harmful Beliefs in Language Models

Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing. This paper proposes FairBelief, an …