Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents (+Author)

16,105 views

Yannic Kilcher


1 day ago

Comments: 41
@YannicKilcher
@YannicKilcher 2 years ago
OUTLINE:
0:00 - Intro & Overview
2:45 - The VirtualHome environment
6:25 - The problem of plan evaluation
8:40 - Contributions of this paper
16:40 - Start of interview
24:00 - How to use language models with environments?
34:00 - What does model size matter?
40:00 - How to fix the large models' outputs?
55:00 - Possible improvements to the translation procedure
59:00 - Why does Codex perform so well?
1:02:15 - Diving into experimental results
1:14:15 - Future outlook
Paper: arxiv.org/abs/2201.07207
Website: wenlong.page/language-planner/
Code: github.com/huangwl18/language-planner
@DeadtomGCthe2nd
@DeadtomGCthe2nd 2 years ago
These conversations are great. BUT please do the regular paper analysis too. I really value your explanations, drawings, and simplification of the dense technical jargon.
@florianhonicke5448
@florianhonicke5448 2 years ago
I really like the interview format. From my perspective, an interview alone is not a substitute for your traditional videos. But since you put the summary at the start, it makes a great ensemble: a simple on-point description, a deep dive, and the authors' reasoning.
@brll5733
@brll5733 2 years ago
Finally we see someone bridging the gap between language models and embodied agents. The most important next step, imo, will be to make this multi-modal with visual input.
@marilysedevoyault465
@marilysedevoyault465 2 years ago
I am impressed! Can you imagine what it will be like in five years! Thank you Yannic for sharing, and thank you to that team!
@timothy-ul9wp
@timothy-ul9wp 2 years ago
Combining this result with the earlier Decision Transformer work, I'm starting to speculate that language models are somehow well suited for RL / decision-making, given that basically every decoded token is in fact a discrete decision itself. Combined with the current multi-modal trend, I can see the hype coming.
@daniellawson9894
@daniellawson9894 2 years ago
This new format of explanation plus interview is really good. Keep it up!
@changtimwu
@changtimwu 1 year ago
Revisiting this research in the GPT-4/LLaMA moment: it would be interesting to apply the same techniques to today's small LMs (LLaMA, Alpaca, Vicuna).
@forcanadaru
@forcanadaru 2 years ago
That is great, thank you; I hope they will continue.
@robottinkeracademy
@robottinkeracademy 2 years ago
I did something similar but much simpler, with an understanding of context and priority for interjecting new commands. Excellent work Yannic, keep these coming. Understanding why and what considerations were made is part of the journey.
@shengyaozhuang3748
@shengyaozhuang3748 2 years ago
I think this thing can serve as a meta-planner for other classic RL agents. Maybe it's easier to condition other agents on the meta-states generated by this.
@ixion2001kx76
@ixion2001kx76 2 years ago
This raises a lot of possibilities. Basically, language models produce something that interacts with and evaluates text like a human, encoding a large part of human judgement. So this paper really says anything that needs human-like input can be done with a language model. Can court juries be automated by GPT-3, and in so doing be made more fair and consistent? Can copy editing and constructive criticism of writing be used to teach and check the quality of writing? Can it act like a speech writer, turning badly written text, or even a sketch of ideas, into better-quality, more eloquent text? In modern warfare, the slowest part of an air strike is getting approval. Could this give legally accurate automated decisions on fire approval?
@clementdato6328
@clementdato6328 2 years ago
It feels like much of the executability constraint that is repeatedly highlighted is but an artifact of the VirtualHome env. This seems more like a study of how well LLMs do at planning, with the unsatisfactory caveat that the evaluation of the plans is dwarfed by a rather limited env. To put it another way, the bottleneck in this "performance" is largely the "stupidity" of VirtualHome rather than any limitation of the LLMs themselves.
@Niels1234321
@Niels1234321 2 years ago
Instead of the translation model, could we not simply restrict the sampling step to tokens that correspond to an admissible action in the first language model?
@YannicKilcher
@YannicKilcher 2 years ago
We discuss exactly this in the interview. First, it's not really possible with an API like GPT-3's, and second, it really hurts the output quality.
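For a local model where you do have logit access, the restriction the question describes can be sketched as hard-masking every non-admissible token before the softmax. This is an illustrative NumPy sketch of the mechanism, not the paper's actual method (the paper instead translates free-form output after generation):

```python
import numpy as np

def sample_admissible(logits, admissible_ids, temperature=1.0):
    """Restrict sampling to an allow-list of token ids by masking
    all other logits to -inf before the softmax."""
    masked = np.full_like(logits, -np.inf)
    masked[admissible_ids] = logits[admissible_ids] / temperature
    # Softmax over the masked logits; disallowed tokens get probability 0.
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

# Toy vocabulary of 6 tokens; only ids 1 and 4 are admissible actions.
logits = np.array([2.0, 0.5, 3.0, -1.0, 1.5, 0.0])
token = sample_admissible(logits, [1, 4])
assert token in (1, 4)
```

As the reply notes, forcing the distribution like this constrains the model hard at every step, which is one intuition for why it degrades output quality compared to letting the model write freely and translating afterwards.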
@senadkurtisi2930
@senadkurtisi2930 2 years ago
Could there be some subtle error in their translation system that they overlooked? By that I mean the way they translate the language model's output into verbs/objects via embeddings.
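For context, the translation step being questioned maps each free-form generated step to the closest admissible action by cosine similarity in sentence-embedding space. A minimal sketch of that nearest-neighbor lookup, assuming the real system's sentence encoder is replaced by toy 3-d stand-in vectors:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def translate(step_vec, action_vecs):
    """Map an embedded free-form step to the closest admissible action
    by cosine similarity. In the paper this runs over sentence
    embeddings; the vectors here are illustrative stand-ins."""
    names = list(action_vecs.keys())
    sims = [cosine(step_vec, action_vecs[n]) for n in names]
    best = int(np.argmax(sims))
    return names[best], sims[best]

# Toy "embeddings" standing in for sentence encodings of admissible actions.
actions = {
    "walk to fridge": np.array([1.0, 0.1, 0.0]),
    "open fridge":    np.array([0.1, 1.0, 0.0]),
    "grab milk":      np.array([0.0, 0.2, 1.0]),
}
step = np.array([0.9, 0.2, 0.1])  # stand-in for "go to the refrigerator"
best, score = translate(step, actions)
assert best == "walk to fridge"
```

A subtle error of the kind the comment worries about would show up here as a systematically wrong nearest neighbor, e.g. if verb and object similarities are traded off poorly by a single pooled sentence embedding.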
@andrewluo6088
@andrewluo6088 2 years ago
More of these kinds of videos!
@DamianReloaded
@DamianReloaded 2 years ago
They should make the model play Maniac Mansion ^_^
@SimonJackson13
@SimonJackson13 2 years ago
Love long time.
@robottinkeracademy
@robottinkeracademy 2 years ago
Also, I would say that humans will be trained to give the required commands, just as we are today with Alexa, Siri, and Cortana.
@fiNitEarth
@fiNitEarth 2 years ago
Maybe try "find sota" instead of sofa next time 🤓
@norik1616
@norik1616 2 years ago
I miss the full-depth reviews that poke holes in the paper. It feels like you go easy on it when you talk to the authors.
@EricAboussouan
@EricAboussouan 1 year ago
Why not use logit bias rather than this clunky weighted nearest-neighbor search?
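Logit bias, as exposed by e.g. the `logit_bias` parameter of the OpenAI completions API, adds a constant to chosen token logits before sampling, so it soft-nudges (or hard-bans) tokens rather than remapping the finished text. A minimal local sketch of the mechanism, with illustrative token ids and bias values:

```python
import numpy as np

def apply_logit_bias(logits, bias):
    """Add a per-token bias (dict: token id -> bias value) to the logits,
    mimicking the `logit_bias` parameter of common completion APIs.
    Large positive values effectively force a token; large negative
    values effectively ban it."""
    out = logits.copy()
    for tok, b in bias.items():
        out[tok] += b
    return out

logits = np.array([2.0, 0.5, 3.0])
# Ban token 2 and boost token 1.
biased = apply_logit_bias(logits, {2: -100.0, 1: 100.0})
assert int(np.argmax(biased)) == 1
```

One practical wrinkle: admissible actions in VirtualHome are multi-token phrases, so biasing individual tokens does not by itself guarantee a well-formed action string, which may be part of why the authors translate whole sentences instead.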
@McSwey
@McSwey 2 years ago
more +author pls
@laurenpinschannels
@laurenpinschannels 2 years ago
curious if any mturkers who participated in this project will ever see this video
@G12GilbertProduction
@G12GilbertProduction 2 years ago
Wenlong seems a bit stressed during this conceptual talk about the paper; maybe he only half knows the Codex language libraries he discusses in the interview? :)
@teckyify
@teckyify 2 years ago
I hope we find something new soon; deep learning is starting to get old.
@doppelrutsch9540
@doppelrutsch9540 2 years ago
As long as the test loss keeps going down, it isn't getting old yet.
@carlotonydaristotile7420
@carlotonydaristotile7420 2 years ago
Wenlong Huang is going to work for Tesla.
@idrisabdi1397
@idrisabdi1397 2 years ago
Tesla Bot : 👀👀
@oxide9717
@oxide9717 2 years ago
Was thinking the exact same thing. Whatever happened with Elon and OpenAI? I never hear him talk about it.
@oxiigen
@oxiigen 2 years ago
Humans have smaller brains than some other animals, but our brains have a different geometry.
@SimonJackson13
@SimonJackson13 2 years ago
Net spend dollar spunky?
@alexandrsoldiernetizen162
@alexandrsoldiernetizen162 2 years ago
Not to belabor the obvious, but why have a robot smearing lotion, shaving, and getting little cups of milk from the fridge? How about something you really want a robot to do, like turning bolts, welding, changing tires, and digging babies out of slagged-down reactor cores?
@drdca8263
@drdca8263 2 years ago
The corpus GPT-3 was trained on presumably has more material that is informative about tasks like getting milk than about the other things you mention. (Well, I'm not sure a baby in a nuclear reactor would live long enough to be rescued regardless, but still.) Note that they aren't really training new models here?
@alexandrsoldiernetizen162
@alexandrsoldiernetizen162 2 years ago
@@drdca8263 I know its data cutoff was in 2019, so it doesn't know about the Kung Flu or St. Floyd of Fentanyl, but it presumably knows about bolts and tires. The data reflects the bias of Mechanical Turk workers and the state of our effeminized and infantilized society more than anything.
@drdca8263
@drdca8263 2 years ago
@@alexandrsoldiernetizen162 First off: politics is the mindkiller. Now that we've got that out of the way... Obviously it has some info about bolts and such, and I would imagine it probably has a great deal of technical information about them. But what matters is not just whether something was in the training corpus at all, but the relative proportions. And not just "is there enough information about bolts", but "are there as many step-by-step instructions for doing things with bolts, in the style of a person breaking down common tasks, as there are for more common human tasks?". Also, this set of actions in the simulator was, I think, not designed by the authors of this paper. And the Mechanical Turk responses were just used for evaluation; they didn't influence how the model worked. Duh? Oh, maybe you are thinking of the part where the other people who designed the simulator came up with the word list? [edit: ah, you meant the choice of what tasks it is evaluated on, not what tasks it can give plans for] Though, really, the first line of my reply is the only one that is necessary.
@alexandrsoldiernetizen162
@alexandrsoldiernetizen162 2 years ago
@@drdca8263 They said the Mechanical Turk people came up with the tasks, hence they were responsible for the enumerated scenarios. Presumably the model would have worked with other data, had it been given said data. dur. I presume you are more oriented to lubing with lotion than turning a wrench, so let's leave it there.
@drdca8263
@drdca8263 2 years ago
@@alexandrsoldiernetizen162 I care neither for lotions nor wrenches. I prefer abstractions. Wait, you said “lubing” and “orientation”; was that you calling me gay? Heh. There are like, 2 people I’ve called gay, one was someone being super racist on twitter, and the other was someone on youtube who was insisting that the protagonist of “bee movie” was “trans” (from which I correctly inferred that the person making that claim was gay. They confirmed this in their response.) Anyway, you are presumably a bit trigger happy on that particular accusation. It seems you interpreted my objection to you inserting politics into things as disagreement with your politics? No, that isn’t the reason for my response to it. I do the same regardless of the partisanship in question. Partisanship latches on to people’s brains and makes them say... ...well, let’s just say it degrades the quality of what they say. I don’t mean that people shouldn’t have political opinions, or even a preferred political party. But thinking *too* much about opposing parties being bad will melt anyone’s brain, resulting in doing things like bringing it up in a youtube comment section above machine learning.
@444haluk
@444haluk 2 years ago
This is the most stupid method I have ever heard of.