“It ain’t all good”: Machinic Abuse Detection and Marginalisation in Machine Learning
Thinking critically about a problem, and about ourselves, can help us understand it.
Online abusive language and toxicity have gained increasing academic, regulatory, and social prominence as societal problems over the past few years, as people increasingly communicate on online platforms. Aiming to address these issues, the NLP community has proposed a range of datasets and models. Many of these approaches and resources have been rightfully criticised for participating in the very marginalisation they seek to address, by disproportionately moderating content from marginalised communities. In this talk, I critically revisit the task of abuse detection, examining the politics of the notion of “toxicity” and the classifiers constructed around it. I then focus on how we can rethink the modelling of online abuse to both include and exclude contexts. Finally, I turn to how we, as NLP practitioners and researchers, are complicit in reproducing and obscuring hegemonic ideals in the modelling pipeline.