Gail Weiss

Thinking Like Transformers

Transformer encoders are kind of like a block of code. Admittedly, the code is in a specific horrible language. But that's on you for using attention.

Transformers, the purely attention-based neural network architecture, have emerged as a powerful tool in sequence processing. But how does a transformer think? When we discuss the computational power of RNNs, or consider a problem they have solved, it is easy to think in terms of automata and their variants (such as counter machines and pushdown automata). But for transformers, no such intuitive model is available.

In this talk I will present a programming language, RASP (Restricted Access Sequence Processing), which we hope will serve the same purpose for transformers as finite state machines do for RNNs. In particular, we will identify the base computations of a transformer and abstract them into a small number of primitives that compose into a small programming language. We will walk through some example programs in the language and discuss how a given RASP program relates to the transformer architecture.


Gail Weiss is a PhD student at the Technion in Israel, advised by Professors Eran Yahav and Yoav Goldberg. Her research focuses on applying formal language theory to deep learning techniques, particularly those used in NLP.

Presentation Materials

Talk Video