One is for natural language understanding (NLU) and the other is for natural language generation (NLG)
@Ram_jagat (a month ago)
true
@borntodoit8744 (a day ago)
Remember it as INPUT > MODEL > OUTPUT.
MODEL INPUT (NLU): text recognition, vision recognition (image/video), sound recognition (voice).
MODEL OUTPUT (NLG): text generation, image/video generation, sound/voice generation, plus tool integration.
MODEL PROCESSING: basic (classification, summarization, extraction); advanced (reasoning, planning, orchestration).
@VarunTulsian (a year ago)
This is very useful. Just wanted to add that the GPT decoder doesn't have cross-attention in the transformer block.
@Tech_kenya (9 months ago)
What is cross-attention?
@methylphosphatePOET (9 months ago)
@@Tech_kenya It's when the word vectors of one sequence attend to the word vectors of another sequence (e.g. the decoder attending to the encoder's output), as opposed to a sequence just attending to itself.
@imran7TW (7 days ago)
@@methylphosphatePOET So kinda the opposite of self-attention?
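Not the opposite so much as the same operation with different inputs. A minimal NumPy sketch of the distinction (single head, no masking, no learned projections — all shapes here are arbitrary toy values, not anything from a real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # Scaled dot-product attention: each query mixes the values,
    # weighted by query-key similarity.
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    return softmax(scores) @ values

rng = np.random.default_rng(0)
dec = rng.normal(size=(4, 8))  # 4 decoder-side token vectors, dim 8
enc = rng.normal(size=(6, 8))  # 6 encoder-side token vectors, dim 8

# Self-attention: Q, K, V all come from the SAME sequence.
self_attn = attention(dec, dec, dec)

# Cross-attention: Q from one sequence, K and V from ANOTHER
# (this is the block GPT drops, since it has no encoder to attend to).
cross_attn = attention(dec, enc, enc)

print(self_attn.shape)   # (4, 8)
print(cross_attn.shape)  # (4, 8)
```

Either way the output has one row per query token; the only difference is where the keys and values come from.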
@cs101-qm2ud (4 months ago)
Wonderfully put.
@CodeEmporium (4 months ago)
Thanks a lot :)
@maninzn (a year ago)
Great explanation. For example, if I have to read all the client emails, understand their requirements, and auto-create tasks based on that prediction, which model should I go for: BERT or GPT?
@JillRhoads (6 months ago)
I hadn't known that BERT was an acronym and had been wondering why the Swedish LLM was called Bert. I wonder if this is why. Thanks for the info!
@contactdi8426 (3 months ago)
Can you please explain their training processes?
@vladislavkorecky618 (11 months ago)
What if I stack both encoders and decoders? Do I get some BERTGPT hybrid?
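Stacking both is essentially the original encoder-decoder Transformer layout (the family BART and T5 belong to): the encoder self-attends over the source, and the decoder self-attends over the target and then cross-attends into the encoder's output. A toy NumPy sketch of that data flow (single head, no masking or feed-forward layers, made-up dimensions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention.
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

rng = np.random.default_rng(1)
src = rng.normal(size=(6, 8))  # source-sequence embeddings (encoder input)
tgt = rng.normal(size=(4, 8))  # target-sequence embeddings (decoder input)

memory = attention(src, src, src)     # encoder: self-attention over the source
dec = attention(tgt, tgt, tgt)        # decoder: self-attention (causally masked in real models)
out = attention(dec, memory, memory)  # decoder: cross-attention into the encoder output

print(out.shape)  # (4, 8)
```

So the "hybrid" already exists; BERT keeps only the first step and GPT keeps only the (masked) second one.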
@davronsherbaev9133 (10 months ago)
There is also the Whisper model, which has a decoder similar to Facebook's BART, but uses an audio encoder.
@Dr_Larken (a year ago)
Bert also drives a Trans Am! Jokes aside, I do appreciate your videos!
@nicholaszustak6299 (10 months ago)
So BERT doesn't have a decoder? Did I misunderstand?
@saimadhaviyalla5682 (9 months ago)
Transformer models usually process all tokens in parallel, right?
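Within a layer, yes: unlike an RNN, a transformer applies the same weights to every position at once as one matrix multiply, rather than stepping through tokens sequentially. A tiny NumPy sketch of that equivalence (a single illustrative projection matrix, not a full layer):

```python
import numpy as np

rng = np.random.default_rng(2)
tokens = rng.normal(size=(5, 16))  # 5 token embeddings, dim 16
W = rng.normal(size=(16, 16))      # one projection matrix from a layer

# Sequential view: visit each token one at a time (RNN-style loop).
seq = np.stack([t @ W for t in tokens])

# Parallel view: a single matmul handles every position at once,
# which is what makes transformer training fast on GPUs.
par = tokens @ W

print(np.allclose(seq, par))  # True
```

The exception is generation with a decoder like GPT: training is parallel across positions, but sampling still emits one token at a time because each new token depends on the previous ones.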