AI trained on novels tracks how racist and sexist biases have evolved

Questioning a chatbot that has been trained on bestselling books from a particular decade can give researchers a measure of the social biases of that era

By Matthew Sparkes

20 February 2025

Books can document the cultural biases of the era when they were published

Ann Taylor/Alamy

Artificial intelligences picking up sexist and racist biases is a well-known and persistent problem, but researchers are now turning this to their advantage to analyse social attitudes through history. Training AI models on novels from a certain decade can instil them with the prejudices of that era, offering a new way to study how cultural biases have evolved over time.

Large language models (LLMs) such as ChatGPT learn by analysing large collections of text. They tend to inherit the biases found within their training data:…
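The mechanism behind this inheritance can be illustrated with a toy example. The sketch below is not the researchers' method; it uses two tiny hypothetical corpora standing in for novels from different decades and simply counts pronoun–occupation co-occurrences, a crude proxy for the statistical associations an LLM absorbs from its training text.

```python
from collections import Counter

# Hypothetical sentences standing in for novels from two decades
# (purely illustrative, not real training data).
corpus_1950s = [
    "the nurse said she was tired",
    "the doctor said he was busy",
    "the secretary said she would type",
]
corpus_2010s = [
    "the nurse said he was tired",
    "the doctor said she was busy",
    "the engineer said she would code",
]

def gender_skew(corpus, occupation):
    """Count how often an occupation co-occurs with 'he' vs 'she'
    in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    counts[pronoun] += 1
    return counts

print(gender_skew(corpus_1950s, "doctor"))  # Counter({'he': 1})
print(gender_skew(corpus_2010s, "doctor"))  # Counter({'she': 1})
```

A model trained on each corpus would internalise these skewed co-occurrence statistics, which is why probing a decade-specific model can reveal the biases of that decade's texts.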
