This is really interesting for many reasons, but one I'm especially curious about is just how central language is to the OOD problem. Like, to humans, and also to other animals known to engage in creative games to learn about the world. (There are quite a lot of them, really. Various birds and mammals tend to sit at different points on a wide spectrum of creativity and challenging the bounds of their own abilities...) If language is fundamentally necessary for this, it certainly explains why humans are as good at open-ended tasks as we are. And it works to some extent with other species too: we keep discovering that more and more animals have some sort of language, that there's rhyme and reason to the sounds they share, and that this can go quite a bit beyond, like, basic warnings of danger, aggression, or affection.

However, it's hard to see how far this reaches. Do octopuses have a language? They certainly are incredibly smart, creative, resourceful, and adaptable. Whatever they have obviously isn't gonna be a verbal language, but that's presumably not a real requirement: you chose a human-intelligible language built as a simplified fragment of English because you're familiar with it and it's an easy starting point. A grammar of light pulses, or of any other sense, might work just as well. This is no criticism, by the way. I'm just genuinely curious about it.

One answer may simply be that, no, language is not fundamentally necessary, and there are other approaches out there that might work just as well or better for OOD sampling, but this one happens to do the trick for now. It would be cool if there were a way to handle much broader object classes and a much larger space of potential tasks, something where you could then hook up a state-of-the-art language model and have it describe what's going on.
Seeing how much that kind of approach has helped with image recognition and generation (CLIP / DALL-E 2 and other recent multimodal work), I'd imagine it could boost what an agent like this learns by a *lot.*

The agent would presumably also have to learn what's impossible to accomplish, though, because a language model like that isn't necessarily gonna care about what's actually realizable.

Either way, I love open-ended learning and this is really cool work! Looking forward to more of this in the future.