Zeerak Waseem

“It ain’t all good”: Machinic Abuse Detection and Marginalisation in Machine Learning

Thinking critically about the problems we study, and about ourselves, can help us understand those problems better.

Over the past few years, as people increasingly communicate on online platforms, online abusive language and toxicity have gained prominence as a societal problem in academic, regulatory, and social spheres. Aiming to address these issues, the NLP community has proposed a number of datasets and models. Many of these approaches and resources have been rightfully criticised for participating in the very marginalisation they seek to address, by disproportionately moderating content from marginalised communities. In this talk, I critically revisit the task of abuse detection, examining the politics of the notion of “toxicity” and the classifiers constructed to address it. I then focus on how we can rethink the modelling of online abuse to both include and exclude contexts. Finally, I turn to how we, as NLP practitioners and researchers, are complicit in reproducing and obscuring hegemonic ideals in the modelling pipeline.


Zeerak is a PhD student at the University of Sheffield and an incoming postdoctoral researcher at the Digital Democracies Institute at Simon Fraser University. Zeerak’s work deals with online abuse detection from computational and social-scientific perspectives, and with how processes of marginalisation are embedded into different aspects of machine learning.

Presentation Materials

Talk Video
Paper