Your AI's cultural bias depends on the language you use
MIT Sloan researchers Jackson Lu, Lesley Song, and Lu Zhang tested GPT and ERNIE with identical questions in English and Chinese. The finding: language choice shapes the cultural assumptions baked into AI responses. When prompted in English, both models reflected American cultural values—prioritizing individual goals and analytical thinking. In Chinese, the same models shifted to emphasize collective relationships and context-focused reasoning.
The research team measured this using established psychological frameworks, asking the models to evaluate statements about group decisions, solve logic puzzles, and choose visual diagrams representing relationships. They even tested marketing copy: asked to recommend insurance slogans, GPT favored "Your future, your peace of mind" in English and "Your family's future, your promise" in Chinese.
For product teams, this matters more than it might seem. Your AI outputs carry hidden cultural assumptions that shift based on language. So if you're building features that serve different markets, you're not just translating text—you're potentially channeling different value systems through the same model. The good news: you can prompt for specific cultural perspectives. The catch: you have to know this is happening first.
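As a concrete illustration of prompting for a cultural perspective, here is a minimal sketch using the OpenAI Python client. The model name, the ask_with_cultural_frame helper, and the exact prompt wording are illustrative assumptions, not taken from the study; the point is simply that the cultural frame is stated explicitly rather than left to the prompt language.

# Sketch: steer the cultural framing explicitly instead of letting the
# prompt language decide it. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_with_cultural_frame(question: str, culture: str) -> str:
    """Prepend an explicit cultural-perspective instruction to the request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    f"Answer as a typical person living in {culture} would, "
                    "reflecting that culture's values and communication style."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same English question, two explicit cultural frames:
print(ask_with_cultural_frame("Suggest a slogan for a life insurance ad.", "the United States"))
print(ask_with_cultural_frame("Suggest a slogan for a life insurance ad.", "China"))

Whether you pin the frame per market or expose it as a product setting, the key design choice is the same: make the cultural assumption explicit in the prompt rather than inheriting it silently from the language of the request.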
https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-isnt-culturally-neutral-research-finds