Large Language Models Are Zero-Shot Reasoners

34,823 views

IBM Technology

1 day ago

Comments: 13
@EvanBoyar 10 months ago
There's such a strange, uncanny-valley feeling watching someone who's been inverted (flipped along the vertical axis, like a mirror image).
@arijitgoswami3652 1 year ago
Thanks for this video! Would love to see the next video on Tree of Thoughts method of prompting.
@Zulu369 1 year ago
Excellent explanation. I suggest that next time you add a little history at the beginning of your video about where the term is coming from (see original publication where the term was first coined).
@manomancan 1 year ago
Thank you for this video! Though wasn't this video already published? I can even remember the beats of the first lines.
@IBMTechnology 1 year ago
You're right. On occasion we have to republish to fix an issue found after publishing. We hope you will enjoy some of our truly new content.
@enthanna 1 year ago
Great video, thank you. Can you make a video about the current state of LLMs in the marketplace? There are lots of claims out there of models as capable as GPT, but it's really hard to separate fact from fiction. Thanks again
@yuchentuan7011 1 year ago
Thanks. I have one question: to do prompt tuning on a foundation model, how should we choose data sets for the general public domain (not for a specific domain), and under which circumstances should we train with few-shot prompts versus zero-shot prompts? Thanks
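For anyone puzzling over the zero-shot vs. few-shot distinction asked about above, here is a minimal sketch (my own illustration, not from the video) of the same question posed both ways. The "Let's think step by step" cue comes from the paper the video's title references; the question text is made up.

```python
# Contrast a zero-shot prompt with a few-shot prompt for the same question.
QUESTION = "If I have 3 apples and buy 2 more, how many apples do I have?"

# Zero-shot: the bare question, plus the "Let's think step by step" cue
# from the "Large Language Models are Zero-Shot Reasoners" paper.
zero_shot = f"Q: {QUESTION}\nA: Let's think step by step."

# Few-shot: one or more worked examples precede the new question, so the
# model can imitate both the reasoning style and the answer format.
few_shot = (
    "Q: If I have 2 pens and buy 3 more, how many pens do I have?\n"
    "A: 2 + 3 = 5. The answer is 5.\n"
    f"Q: {QUESTION}\nA:"
)

print(zero_shot)
print(few_shot)
```

A rough rule of thumb consistent with the video's topic: zero-shot prompting suffices for general-knowledge tasks, while few-shot examples help when the output format or reasoning style matters.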
@michaeldausmann6066 11 months ago
Good video. Is there an established way to provide step-by-step examples to the LLM? E.g., will I get better results if I explicitly number my steps and provide enumerated examples? Can I use arrows to indicate example -> step -> final?
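On the numbered-steps question above: there is no single established format, but a common pattern is to give one worked example with explicitly numbered steps and then leave the new question ending at "Step 1:". A minimal sketch (my own helper name and example text, not from the video):

```python
def build_numbered_prompt(example_question, example_steps, example_answer, question):
    """Format one worked example with numbered steps, then the new question,
    ending at 'Step 1:' so the model continues the numbered pattern."""
    lines = [f"Q: {example_question}"]
    for i, step in enumerate(example_steps, start=1):
        lines.append(f"Step {i}: {step}")
    lines.append(f"Answer: {example_answer}")
    lines.append(f"Q: {question}")
    lines.append("Step 1:")
    return "\n".join(lines)

prompt = build_numbered_prompt(
    "A juggler has 16 balls. Half are golf balls, and half of those are blue. "
    "How many blue golf balls are there?",
    ["16 / 2 = 8 golf balls.", "8 / 2 = 4 blue golf balls."],
    "4",
    "Roger has 5 tennis balls and buys 2 cans of 3 balls each. How many balls does he have now?",
)
print(prompt)
```

Whether numbered steps beat plain prose or arrows varies by model; the safest bet is to keep the example's format identical to the format you want back.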
@fredericc2184 1 year ago
I just tried the same direct prompt right now with GPT-4 and got the correct answer!
@amparoconsuelo9451 1 year ago
Can subsequent SFT and RLHF with different, additional, or reduced content change the character of, improve, or degrade a GPT model? Can you modify a GPT model?
@sirk3v 1 year ago
Has this been re-uploaded, or do I just have a really bad case of déjà vu? I'm 100% sure I watched this video before, and it wasn't anywhere within the past 18 hours.
@cyberpunk2978 1 year ago
It's a hallucination