Genie is the first generative interactive environment model trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie lets users act in the generated environments on a frame-by-frame basis despite being trained without any ground-truth action labels or the other domain-specific requirements typically found in the world-model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path to training the generalist agents of the future.
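To make the three named components concrete, here is a minimal, purely illustrative sketch of how frame-by-frame interactive generation could be wired together: a video tokenizer encodes frames to discrete tokens, a user picks a discrete latent action each step, and an autoregressive dynamics model predicts the next frame's tokens. Every class and function below is a hypothetical stand-in, not the actual Genie architecture or API.

```python
# Hedged sketch of Genie-style frame-by-frame inference. All names
# (VideoTokenizer, DynamicsModel, generate) are illustrative assumptions.

class VideoTokenizer:
    """Stand-in: maps a frame to discrete token ids and back."""
    def encode(self, frame):
        return [hash(frame) % 1024]      # toy: one token per frame
    def decode(self, tokens):
        return f"frame<{tokens[-1]}>"    # toy "reconstruction"

class DynamicsModel:
    """Stand-in: autoregressively predicts next-frame tokens
    from the token history and a chosen latent action id."""
    def predict(self, token_history, latent_action):
        # toy transition mixing the last token with the action
        return [(token_history[-1] + latent_action * 31) % 1024]

def generate(prompt_frame, actions, tokenizer, dynamics):
    """Roll out one generated frame per user-chosen latent action."""
    tokens = tokenizer.encode(prompt_frame)
    frames = []
    for a in actions:                    # user acts frame-by-frame
        tokens += dynamics.predict(tokens, a)
        frames.append(tokenizer.decode(tokens))
    return frames

frames = generate("sketch.png", actions=[0, 3, 1],
                  tokenizer=VideoTokenizer(), dynamics=DynamicsModel())
print(len(frames))  # one generated frame per action
```

The key point this sketch mirrors is that the action space is a small set of *learned* latent ids (here just integers), so no ground-truth action labels are ever needed during training.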
In this video, I cover the following: What can the Genie model do? How is Genie trained? How is Genie used for inference?
For more details, please look at openreview.net... and sites.google.c...
Bruce, Jake, Michael D. Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai et al. "Genie: Generative interactive environments." In Forty-first International Conference on Machine Learning. 2024.