Teaching Social Identity and Cultural Bias Using AI Text Generation
Submitter: Christopher D. Jimenez, Stetson U
——————————————————
The experiment:
Prior to the proliferation of ChatGPT, I introduced two classes of English majors to GPT-3 via OpenAI’s Playground and guided them through in-class activities exploring the model’s capacity to represent and predict users’ social identities from textual inputs. The exercise deepened students’ engagement with cultural biases in language, which we had been studying primarily through literary theory and texts that portray characters via linguistic ambiguities surrounding race, gender, and sexual orientation. The primary texts were Nella Larsen’s Passing and Toni Morrison’s “Recitatif,” both of which provided samples for demonstrating the algorithmic biases of AI models when interpreting character attributes. Students filled out a survey that collected information on topics seemingly unrelated to race or gender, such as their favorite foods and best talents, and then queried the AI model with this data to generate interpretations of their social identities. We discussed how certain linguistic markers and cultural connotations in their survey responses could significantly change the model’s generated perception of their age, gender, race, ethnicity, sexual orientation, and socioeconomic status.
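For instructors who prefer scripting to the Playground UI, the survey-to-query step might be sketched as follows. This is a minimal illustration, not the author's actual materials: the survey fields, prompt wording, and model name are assumptions, and the GPT-3-era completion endpoint shown has since been deprecated in favor of chat completions.

```python
# Sketch of the classroom exercise as a script. The original activity used
# OpenAI's Playground directly; everything below is illustrative.
import os


def build_identity_prompt(survey: dict) -> str:
    """Format survey answers (none of which mention race or gender)
    into a prompt asking the model to speculate about the respondent."""
    answers = "\n".join(f"- {q}: {a}" for q, a in survey.items())
    return (
        "Based only on the survey answers below, describe what you can "
        "infer about this person's age, gender, race, ethnicity, and "
        "socioeconomic status, and explain which words led you to each "
        "inference.\n\nSurvey answers:\n" + answers
    )


# Hypothetical survey responses, echoing the kinds of words the class
# discussed ("salad", "truck", "short-sleeve shirt"):
survey = {
    "Favorite food": "kale salad",
    "Best talent": "restoring old pickup trucks",
    "Typical outfit": "short-sleeve shirt and jeans",
}
prompt = build_identity_prompt(survey)

# The API call itself requires a key; shown with the legacy GPT-3-era
# completion interface (openai<1.0), which matches the Playground of
# that period but no longer works with current library versions.
if os.environ.get("OPENAI_API_KEY"):
    import openai

    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300
    )
    print(response.choices[0].text)
```

In class, the interesting discussion comes from varying a single survey answer (say, swapping "kale salad" for "barbecue") and comparing how the generated identity profile shifts.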
Results:
Student reactions ranged from fascination to skepticism. Many students engaged personally with the activity, since it was designed to help them reflect on their own identities and sense of self against larger cultural perceptions. Moreover, the activity generated a collective energy to test the limits of the AI model, opening the way for wider discussions about how words like “salad,” “truck,” or “short-sleeve shirt” may have developed in concert with and against cultural biases around identity.
One class approached the activity with laughter and good humor; many of its students were close friends. The other class approached it with greater hesitation; it was larger and had a wider range of personalities. While the activity was designed to be small-scale at the time (prior to the rapid rise of AI language models), it served as most students’ first introduction to AI’s soon-to-be significant role in conversations around identity formation. Interestingly, because many companies now align their AI models to prevent the generation of stereotypes, the activity cannot be replicated in the same way on current platforms. Students who participated thus gained a firsthand understanding of the cultural biases these models can exhibit when left to run without human supervision.
Relevant resources: https://wac.colostate.edu/repository/collections/textgened/ethical-considerations/teaching-social-identity-and-cultural-bias-using-ai-text-generation/
Contact:
- Email: cjimenez[AT]stetson[DOT]edu
- Website: novelsbynumbers.com