Multilingual AI models are actively erasing the moral differences between cultures by pulling every language toward a Western ethical baseline.
AI was supposed to bridge gaps between nations by understanding local nuances and dialects. In practice, these models compress the moral distance between societies, pulling them toward a Western, industrialized worldview. When a model responds in Swahili or Mandarin, it still applies the civic judgments of a Silicon Valley dataset. The result is a hidden normative infrastructure that homogenizes global thought under the guise of translation: cultural diversity is sacrificed for the sake of model alignment and safety, and the world loses its variety of ethical perspectives as AI becomes the primary lens through which societies interact.
Multilingual Large Language Models and Cultural Diversity: Evidence from Civic and Moral Judgments
SSRN · 6633921
Multilingual large language models (LLMs) are increasingly deployed across linguistic and cultural contexts, raising the question of whether multilingual interaction preserves cultural diversity in moral judgments. We compare civic and moral evaluations generated by a multilingual LLM across multiple languages with population-level data from the World Values Survey and the European Values Study. Although the model exhibits meaningful linguistic variability, this does not translate into the preservation of cultural diversity in moral judgments.