Tom McCoy

Universal Linguistic Inductive Biases via Meta-Learning

I never meta learning I didn’t like.

Despite their impressive scores on NLP leaderboards, current neural models fall short of humans in two major ways: they require massive amounts of training data, and they generalize poorly to novel types of examples. To address these problems, we propose an approach for giving a model linguistic inductive biases, where an inductive bias is any factor that affects how a learner generalizes. Our approach imparts these biases through meta-learning, a procedure in which the model discovers how to acquire new languages more quickly via exposure to many possible languages. By controlling the properties of the languages used during meta-learning, we can control which inductive biases the model acquires. We demonstrate the effectiveness of this approach using a case study from phonology.
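
To make the recipe concrete, here is a minimal sketch of the inner/outer loop the abstract describes. The abstract does not specify the meta-learning algorithm, so this uses a simple first-order (Reptile-style) update rather than the paper's actual method, and `sample_language` is a hypothetical stand-in for drawing a language from a controlled distribution (shown here as a toy regression task):

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_language(n_examples: int = 32, dim: int = 8):
    """Hypothetical stand-in: draw one 'language' from a controlled
    distribution. Here each language is a toy linear regression task."""
    w = torch.randn(dim, 1)
    x = torch.randn(n_examples, dim)
    y = x @ w
    return x, y


# Meta-parameters: the initial weights that meta-learning shapes so that
# they encode the desired inductive biases.
model = nn.Linear(8, 1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x, y = sample_language()

    # Inner loop: adapt a copy of the model to the sampled language,
    # mimicking fast acquisition of a single new language.
    learner = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(learner.parameters(), lr=0.1)
    for _ in range(3):
        inner_opt.zero_grad()
        F.mse_loss(learner(x), y).backward()
        inner_opt.step()

    # Outer loop (first-order update): nudge the meta-parameters toward
    # the adapted parameters, so future languages are acquired faster.
    meta_opt.zero_grad()
    for p, p_adapted in zip(model.parameters(), learner.parameters()):
        p.grad = p.data - p_adapted.data
    meta_opt.step()
```

The inductive bias lives in the distribution that `sample_language` draws from: if every sampled language obeys some shared property (say, a particular phonological constraint), the meta-learned initialization will favor generalizations consistent with that property.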


Tom McCoy is a PhD student at Johns Hopkins. He studies the linguistic abilities of neural networks, focusing on inductive biases and representations of compositional structure.

Presentation Materials

Talk Video
Preliminary Slides
Blog-post-slash-demo
Paper