All past recordings: YouTube
Thinking about problems and ourselves can help us understand the problem.
Everyone is trying to improve language models by having them look at more words; we show that we can improve them by giving them fewer words.
You think you understood your model, but you really didn’t :)
Learning goes faster when you’re curious!
Instead of learning to talk by reading about others talking, why not just try talking?
Talking about the importance of text simplification using complex academic language.
Ever heard of inter-research-group agreement? Neither have I.
Explaining hard tasks is hard
Exact search: “Are we there yet?” Beam search: “Let’s take the scenic route!”
ELECTRA annoyed BERT because she understood implicit causality bias.
Humans perceive speech while listening, but how about machines?
The pretraining elephant in the room.
Is my sentence simple? Well, that’s a difficult question.
Through the lens of our taxonomy, you can see mountains of technical progress, but also thousands of languages on the ground without a way to climb them.
A classifier and a language model are put head-to-head in a ferocious face-off, distributional robustness ensues.
Talking with sociolinguists can help us avoid marginalizing people.
If you want to understand how machines work, ask how they learn.
Explicitly collecting implicit questions
Commonsense Knowledge is crucial for AI systems. But wait, what is commonsense exactly?
“Everyone can build a model for an African language, no one can evaluate it like Masakhane can!” – Jade Abbott
A clarifying way to think about (and improve) probing neural networks for linguistic properties
A story of anger-driven development: yes, you can compare perplexities, no, not like that.
A graph is worth a thousand numbers.
Automated hate speech detection backfires, but people are awful online so maybe we should just cancel the internet.
I never meta learning I didn’t like.
What happens when relationships between Alice and Bob have gone too far.