B1397
Title: Gender bias in text: Automated detection and mitigation system
Authors: Jad Doughman - American University of Beirut (Lebanon) [presenting]
Abstract: Given that language is the primary tool used to convey our perceptions, any form of biased misrepresentation has the potential to change how an entity is portrayed in our minds. The source of bias in language can be traced to an androcentric worldview that was prevalent among 18th-century grammarians and was centered on the belief that ``human beings were to be considered male unless proven otherwise''. Given the clear evidence of gender bias in most languages and its direct contribution to reinforcing and socializing sexist thinking, there is a need to detect and highlight these manifestations in the ever-growing repertoire of textual content on the internet, alongside printed writings such as educational textbooks. Previously, most proposed solutions for detecting gender bias in text were based on the frequency of gendered words and pronouns; in contrast, our feature-based approach focuses on capturing contextual and semantic cues in its classification process. The underlying motivation is to enable the technical community to combat gender bias in text and halt its propagation using ML and NLP techniques.
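The frequency-based baselines the abstract contrasts against can be sketched as follows. This is a minimal illustration, not the authors' system: the word lists and function name are assumptions, and real systems use curated lexicons rather than these short sets.

```python
import re
from collections import Counter

# Illustrative word lists (assumed for this sketch, not from the paper).
MASCULINE = {"he", "him", "his", "man", "men", "himself"}
FEMININE = {"she", "her", "hers", "woman", "women", "herself"}

def gendered_word_counts(text):
    """Frequency-based baseline: count gendered words and pronouns.

    Returns (masculine_count, feminine_count). A large imbalance is a
    crude signal of skewed gender representation, but this approach
    ignores context and semantics, which is the limitation a
    feature-based classifier aims to address.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    masc = sum(counts[w] for w in MASCULINE)
    fem = sum(counts[w] for w in FEMININE)
    return masc, fem

masc, fem = gendered_word_counts("He said his team and the men he leads agreed; she disagreed.")
# Here masc=4 (he x2, his, men) and fem=1 (she).
```

Such counts say nothing about how each gender is portrayed; a sentence can be balanced in pronoun frequency yet biased in the roles it assigns, which motivates contextual and semantic features.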