Saliha Muradoğlu

Do transformer models do phonology like a linguist?

Testing the linguistic understanding of transformer generalizations

Neural sequence-to-sequence models have been very successful at tasks in phonology and morphology that seemingly require a capacity for intricate linguistic generalisations. Despite their success, the opaque nature of neural models makes it difficult to analyse and evaluate the generalisations they produce. To compare the generalisations these models arrive at with those of the linguistic tradition, we experiment with phonological processes on a constructed language. We establish that the models are capable of learning 29 different phonological processes of varying complexity. We explore whether the models generalise over linguistic categories such as vowels and consonants, whether they learn a representation of internal word structure, and, finally, whether they capture more complex phonological phenomena such as rule ordering.
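As a rough illustration of this kind of setup (not the authors' actual code, rules, or constructed language): a single phonological process can be cast as a string-to-string transduction task, with underlying forms as inputs and surface forms as targets for a sequence-to-sequence model. The toy rule, alphabet, and word list below are invented for illustration.

```python
# Hypothetical sketch: framing one phonological process as a
# seq2seq transduction task (underlying form -> surface form).
# The rule and word list are invented; they are not the
# constructed language or data used in the talk.

DEVOICE = {"b": "p", "d": "t", "g": "k"}  # toy word-final devoicing rule

def apply_final_devoicing(underlying: str) -> str:
    """Devoice a word-final voiced stop (toy /b d g/ -> [p t k] / _#)."""
    if underlying and underlying[-1] in DEVOICE:
        return underlying[:-1] + DEVOICE[underlying[-1]]
    return underlying

# Toy (input, target) pairs a seq2seq model could be trained on.
underlying_forms = ["tab", "nod", "sag", "tap", "lem"]
pairs = [(u, apply_final_devoicing(u)) for u in underlying_forms]

for u, s in pairs:
    print(f"{u} -> {s}")
# tab -> tap, nod -> not, sag -> sak, tap -> tap, lem -> lem
```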


Saliha is a Linguistics PhD student at the Australian National University. Her research focuses on using neural networks in aid of language documentation, with a particular focus on low-resource settings.
