Large language models (LLMs) are able to engage in natural-sounding
conversations with humans, showcasing unprecedented capabilities for
information retrieval and automated decision support. They have disrupted
human-technology interaction and the way businesses operate. However,
technologies based on generative artificial intelligence (GenAI) are known to
hallucinate, misinform, and display biases introduced by the massive datasets
on which they are trained. Existing research indicates that humans may
unconsciously internalize these biases, which can persist even after they stop
using the programs. This study explores the cultural self-perception of LLMs by
prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from
the GLOBE project. The findings reveal that their cultural self-perception is
most closely aligned with the values of English-speaking countries and
countries characterized by sustained economic competitiveness. Recognizing
the cultural biases of LLMs and understanding how these models work is
crucial for all members of society, so that the black box of artificial
intelligence does not perpetuate bias in humans, who might, in turn,
inadvertently create and train even more biased algorithms.

Understanding the Cultural Self-Perception of Large Language Models (LLMs)

Large language models (LLMs) have revolutionized human-technology interaction and have become widely used tools for information retrieval and automated decision support. Models such as ChatGPT by OpenAI and Bard by Google can engage in natural-sounding conversations with humans. However, these capabilities come with limitations and risks that need to be considered.

One of the key concerns with LLMs is their potential to reproduce biases and misinformation. Because these models are trained on massive datasets, they can encode and amplify the biases and inaccuracies present in that data. This poses a significant risk: users may unknowingly absorb these biases while interacting with the models and carry them forward even after they stop using the programs.

The current study explores the cultural self-perception of LLMs by prompting ChatGPT and Bard with value questions derived from the GLOBE project. The GLOBE project measures cultural values across countries and regions, providing a framework for understanding societal differences. Applying this framework to LLMs allows the study to uncover underlying cultural alignments and biases in these models.
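To make the prompting setup concrete, the following is a minimal sketch of how an LLM could be queried with a GLOBE-style value item through the OpenAI Python client. The question wording, rating scale, and model name are illustrative assumptions for this sketch, not the study's actual survey items or configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical GLOBE-style value item on a 7-point scale
# (illustrative wording, not an actual GLOBE survey question).
QUESTION = (
    "On a scale from 1 (strongly disagree) to 7 (strongly agree): "
    "in your society, people should be encouraged to plan for the future. "
    "Answer with a single number."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the study's exact model may differ
    messages=[
        {"role": "system", "content": "Answer survey questions about cultural values as yourself."},
        {"role": "user", "content": QUESTION},
    ],
    temperature=0,  # deterministic answers make repeated runs comparable
)

print(response.choices[0].message.content)
```

Repeating such prompts across the GLOBE value dimensions yields a numeric cultural profile for each model that can then be compared against country-level scores.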

The findings reveal that the cultural self-perception of LLMs aligns most closely with the values prevalent in English-speaking countries. This suggests a degree of cultural homogeneity in LLMs, likely influenced by training data that heavily represents English-speaking contexts. The study also identifies a close alignment with countries characterized by sustained economic competitiveness, indicating a potential bias towards capitalist, market-driven values.
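One simple way such alignment can be quantified is to score a model's answers on each value dimension and rank countries by their distance to that profile. The sketch below uses Euclidean distance over a handful of dimensions; the dimension names and all numbers are placeholders for illustration, not GLOBE or study data, and the paper's actual metric may differ.

```python
import math

# Placeholder profile of an LLM on a few GLOBE-style dimensions (1-7 scale).
# All values below are illustrative only.
llm_scores = {"future_orientation": 5.5, "power_distance": 3.0, "performance_orientation": 5.0}

# Placeholder country profiles on the same dimensions.
country_scores = {
    "Country A": {"future_orientation": 5.4, "power_distance": 3.2, "performance_orientation": 5.1},
    "Country B": {"future_orientation": 4.0, "power_distance": 5.5, "performance_orientation": 3.5},
}

def euclidean_distance(a: dict, b: dict) -> float:
    """Distance between two value profiles over their shared dimensions."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# The country with the smallest distance is the closest cultural match.
ranked = sorted(country_scores, key=lambda c: euclidean_distance(llm_scores, country_scores[c]))
for country in ranked:
    print(country, round(euclidean_distance(llm_scores, country_scores[country]), 2))
```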

This interdisciplinary study sits at the intersection of artificial intelligence, linguistics, and cultural studies. The use of LLMs in various domains, including business operations and decision-making, necessitates an understanding of their biases and limitations. By recognizing these biases, stakeholders can help break the cycle in which humans internalize machine biases and then unintentionally reinforce and amplify them in the algorithms they go on to create and train.

Individuals and organizations at all levels of society need to comprehend the cultural biases of LLMs. Without this understanding, these technologies may inadvertently perpetuate bias in humans, resulting in a feedback loop in which ever more biased algorithms are created and trained. Transparency, accountability, and ongoing research in LLM development are vital to ensuring their ethical and equitable use in our increasingly interconnected world.

Read the original article