What is Prompt Tuning?

218,404 views

IBM Technology

Comments: 72
@dominikzmudziak8340 8 months ago
I'm stunned at how efficiently Martin is able to write backwards on this board.
@pradachan 7 months ago
They just mirror the whole recording.
@aidakostikova6889 4 months ago
Haters will say they just mirror the whole recording.
@subusrable 3 months ago
Seems you need that skill if you want to work at IBM.
@dixit-publice 2 months ago
@subusrable Almost right. What Martin is showing here is just the entry level. You actually have to be able to write in any direction; at IBM we call this 360-degree scribbling. And in any color, of course! (Patent pending, but we're considering open-sourcing the technology.)
@dixit-publice 2 months ago
At IBM Research we are even working on writing in n-dimensional space. Stay tuned. Agility and flexibility are key!
@Gordin508 a year ago
Really like these summarization videos on this channel. While they do not go into depth, I appreciate the overarching concepts being outlined and put into context in a clean way without throwing overly specific stuff in the mix.
@johndong4754 a year ago
Which channels would you recommend that go into more depth?
@WeiweiCheng 11 months ago
Awesome content, thanks for uploading. It's great that the video calls out the differences between soft prompting and hard prompting. While soft prompts offer more opportunities for performance tuning, practitioners often face these trade-offs:
- Choosing between hard prompting with a more advanced but closed LLM, versus soft prompting with an open-source LLM that typically performs worse.
- Soft prompting is model-dependent, while hard prompting is less so.
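A rough illustration of that difference (names and sizes below are purely hypothetical, not from the video): a hard prompt is text you could type, while a soft prompt is a block of learned vectors tied to one model's embedding size.

```python
import torch

# Hard prompt: human-readable text, portable across models and APIs.
hard_prompt = "Classify the sentiment of the following review as positive or negative:"

# Soft prompt: 20 learned vectors in the model's 768-dimensional embedding space.
# It is not readable, and it only fits a model with this embedding size,
# which is why soft prompts are model-dependent.
soft_prompt = torch.nn.Parameter(torch.randn(20, 768) * 0.02)

print(len(hard_prompt.split()), "words vs", tuple(soft_prompt.shape), "learned vectors")
```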
@dharamindia563 a year ago
Excellent broad explanation of complex AI topics. One can then deep dive once a basic understanding is achieved! Thank you.
@datagovernor a year ago
More important question: what type of smartboard/whiteboard are you using? I love it!
@IBMTechnology a year ago
See ibm.biz/write-backwards
@SCP-GPT a year ago
You should make a guide on FlowGPT / Poe that delves into operators, delimiters, markdown, formatting, and syntax. I've been experimenting on these sites for a while, and the things they can do with prompts are mind-blowing.
@Atmatan 4 months ago
Can you give some examples? I have yet to be impressed, but I'm notably hard to impress.
@XavierPerales-zm4xx 10 months ago
Excellent job explaining key AI terms!
@maxjesch a year ago
So how do I get to those "soft prompts"? Do you have to use pre-labeled examples for that?
@cyberstorm45 4 months ago
Soft prompt example: I want to make a certain image in Stable Diffusion, but I don't know the exact prompt I need to type to generate that image, so I ask ChatGPT to generate that prompt for me (describing the characteristics of the image to be generated). ChatGPT outputs the prompt, which in this case is my Stable Diffusion soft prompt.
@Asgardinho a year ago
How do you get the AI to generate that tunable soft prompt?
@pensiveintrovert4318 4 months ago
Why isn't this more popular if it actually works? All I see is LoRAs and RL methods.
@marc-oliviergiguere3290 a year ago
Very concise and informative, but tell me, what technology do you use to write backwards so fast? Do you flip the board in post-production?
@IBMTechnology a year ago
Yes, see ibm.biz/write-backwards for details
@apoorvvallabh2976 a year ago
What dataset is used for the supervised learning in prompt tuning?
@RobertoNascimento-kw6gy 7 months ago
Excellent video, good work.
@Tititototo a year ago
Hi, nice talk by the way, but what about some examples of soft tuning? I understand it is not human-readable, but how exactly do you achieve that? By writing some code? Extra tools? Plugins? Thanks a lot for your reply :)
@sheepcraft7555 a year ago
These are learnable parameters added on top of the base language model; the base model stays frozen and only the new parameters are trained. This is called soft prompting, and prefix tuning is one example.
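To make that concrete, here is a minimal PyTorch sketch of the idea (illustrative names only; real implementations such as the Hugging Face PEFT library handle initialization, saving, and batching more carefully). It assumes a base model that accepts precomputed embeddings via an `inputs_embeds` argument.

```python
import torch
import torch.nn as nn

class SoftPromptedModel(nn.Module):
    def __init__(self, base_model: nn.Module, embed_dim: int, num_virtual_tokens: int = 20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # The "soft prompt": a small matrix of learnable embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor):  # (batch, seq_len, embed_dim)
        batch_size = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the learned vectors to the real token embeddings; only
        # self.soft_prompt receives gradient updates during training.
        return self.base_model(inputs_embeds=torch.cat([prompt, input_embeds], dim=1))
```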
@scifithoughts3611 10 months ago
Could you explain the labeling done in fine-tuning and prompt tuning?
@8eck a year ago
So the soft prompt is basically a set of trainable parameters that undergo backpropagation and have their weights updated? Just like the LoRA method, where you attach new trainable parameters to the model and train only those new parameters.
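For comparison: both prompt tuning and LoRA keep the base weights frozen and backpropagate only into a small set of new parameters. Below is a hedged sketch of the LoRA pattern mentioned above (names and hyperparameters are illustrative, not from the video).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer with a trainable low-rank update."""
    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # freeze the original weights
        self.lora_a = nn.Parameter(torch.randn(linear.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, linear.out_features))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen output plus the small trainable low-rank correction.
        return self.linear(x) + (x @ self.lora_a @ self.lora_b) * self.scale
```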
@Abishek_B 4 months ago
I'm doing a project where I need to categorise transaction details from transactional SMS messages and output them as JSON. Can I use prompt tuning, or prompt engineering with a hard prompt?
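For a task like that, a plain hard prompt is often the first thing to try before reaching for prompt tuning. A hypothetical sketch (the field names and wording are made up for illustration):

```python
# A hard prompt asking the model for structured JSON output.
hard_prompt_template = (
    "Extract the transaction from the SMS below and return ONLY valid JSON "
    'with the keys "amount", "currency", "merchant", and "date".\n\n'
    "SMS: {sms_text}"
)

print(hard_prompt_template.format(
    sms_text="Your card was charged $42.50 at ACME STORE on 2024-05-01."
))
```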
@azadehesmaeili4402 11 months ago
Could you please outline the advantages and disadvantages of fine-tuning versus prompting in the context of large language models?
@uniqueavi91 a year ago
Crisp and informative.
@BigBandoonthebeat 5 months ago
How do you make these soft prompts?
@mikegioia9289 a year ago
How do you discover the correct soft prompts?
@tsunghan_yu 5 months ago
Why can't we use a decoder to convert the soft prompt to text so that it's interpretable? I don't quite understand.
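The usual trick is to map each soft-prompt vector to its nearest token embedding, but the result tends to be an incoherent jumble, which is why soft prompts are described as unreadable. A sketch with stand-in tensors (not taken from a real model):

```python
import torch
import torch.nn.functional as F

vocab_embeddings = torch.randn(50_000, 768)  # stand-in for a model's embedding matrix
soft_prompt = torch.randn(20, 768)           # stand-in for 20 learned soft-prompt vectors

# Cosine similarity between every soft-prompt vector and every vocabulary embedding.
sims = F.normalize(soft_prompt, dim=-1) @ F.normalize(vocab_embeddings, dim=-1).T
nearest_token_ids = sims.argmax(dim=-1)      # closest token id for each soft vector

# Decoding these ids with the model's tokenizer usually yields nonsense text,
# because the learned vectors do not have to land near any real token.
print(nearest_token_ids)
```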
@neail5466 a year ago
Could you explain in a little more detail how those strings of numbers are indexed? Are they some sort of abstraction that we fully understand? A very informative lecture, this one... Probably everyone should have a little prompt engineering expertise in the near future.
@Chris-se3nc a year ago
There are embedding models that can take strings of concepts and transform them into embedding vectors (strings of numbers). You can store those in any of a number of vector databases.
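For example, here is a minimal sketch of that step, assuming the sentence-transformers library (the model choice is arbitrary):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["prompt tuning", "prompt engineering", "fine tuning"])

# Each string becomes a fixed-length vector of numbers that a vector
# database can index and search by similarity.
print(vectors.shape)  # e.g. (3, 384) for this model
```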
@ZyboroTown a year ago
What is unfancy design prompt?
@johndevan3505 a year ago
A lot to unpack here. Great job explaining. I have one question about the difference between in-context learning and prompt tuning with hard prompts. Are they synonymous?
@TimProvencio a year ago
Does anyone know how they do these videos where it appears that they are writing on the screen? That is so neat!
@IBMTechnology a year ago
See ibm.biz/write-backwards
@russell_goodman 4 months ago
So is prompt engineering still a viable career (if only because we're in the infancy stage of widespread commercial use of LLMs like ChatGPT)?
@badlaamaurukehu 11 months ago
Nomenclature is its own problem.
@yt-sh a year ago
funny & informative 👏👏👏
@itdataandprocessanalysis3202 a year ago
A joke by ChatGPT: Why did the Large Language Model (LLM) turn down a job as a DJ? Because it thought "Prompt Tuning" meant it would have to constantly change the music!
@arpitqw1 11 months ago
Not fully understood, except: prompt tuning, prompt engineering, hard prompts, soft prompts. :P
@mohslimani5716 a year ago
Thanks for the explanation, but how does someone actually succeed at prompt engineering in practice?
@fredrikt6980 5 months ago
Really like all of Martin's videos, but this one only explains what prompt tuning is not.
@rongarza9488 9 months ago
I learned Python in two months; great language. Then I learned the SQL dialects that Python plays well with. Then it hit me: AI is doing most of this work! So what is there for me and you to do? "My career may be over before it's begun." Yes, indeed, UNLESS we can start using Python for regular business processing, like accounts receivable/payable, inventory management, order processing, etc. In other words, we can't all be doing AI, especially when it is itself doing AI cheaper, faster, and better.
@Atmatan 4 months ago
God no. Please grow up soon so you can comprehend that Python is killing the internet.
@Betty__8a8v 4 months ago
Hello, I have some splendid news that will bring a smile to your face!
@BigBandoonthebeat 5 months ago
Why don't I see this anywhere if it's better than normal prompts?
@manojr4598 a year ago
We are trying to create a chatbot using the OpenAI API, and the responses should be limited to a specific topic; it should not respond to user queries that are unrelated to that topic. What is the best way to achieve this? Prompt engineering or prompt tuning?
@indianmanhere a year ago
Fine tuning
@Atmatan 4 months ago
Use a better LLM.
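One low-effort starting point for that use case is plain prompt engineering with a restrictive system message, before reaching for fine-tuning or prompt tuning. A sketch assuming the openai Python client (v1+); the model name, topic, and wording are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a support assistant for ACME home routers. Answer only "
                "questions about ACME routers. For anything else, reply exactly: "
                "'Sorry, I can only help with ACME router questions.'"
            ),
        },
        {"role": "user", "content": "What's a good pasta recipe?"},
    ],
)

# A system prompt alone is not a hard guarantee; off-topic filtering of the
# input and output is usually layered on top.
print(response.choices[0].message.content)
```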
@brcpimenta 4 months ago
Chuning
@darkashes9953 a year ago
IBM could take the plunge and make a quantum computer with 10 million quantum chips of 1,000 qubits each, connected by optical circuits, instead of just one chip.
@rajucmita a year ago
As a newbie, how can I become a pro at prompt engineering?
@iramkumar78 4 months ago
I agree. AI soft prompts are not readable.
@avinashpradhan5030 8 months ago
🙂
@YT-yt-yt-3 9 months ago
Soft prompting is confusing.
@DK-ox7ze a year ago
This is too abstract. Some concrete examples would have helped.
@samgoodwin89 a year ago
Is he writing backwards?
@IBMTechnology a year ago
See ibm.biz/write-backwards
@kaiskermani3724 7 months ago
"A string of numbers is worth a thousand words" tf does that even mean?
@generichuman_ a year ago
Wow, you managed to make an 8-minute video on prompt tuning without actually talking about what it is or how one would even begin to implement it. All I gleaned from this is that it has something to do with embeddings... Do better, IBM...
@scifithoughts3611 10 months ago
I agree it's a little obscure. I gave this a second watch-through because your comment made me realize that I wasn't clear on it either. Here is what I noted:

First step, model creation: a model is created by training it on tons of data (very expensive to do). Because a model alone doesn't work consistently at this point (racism, errors, hallucinations, toxicity, ...), it needs more work before it is ready for the public. To make it ready, one of three strategies is used: fine-tuning, prompt engineering, or prompt tuning with soft prompts. (All three could be used together as well; I've read papers about such cases.)

Fine-tuning: given a model, you create examples about the domain the LLM will represent. The examples are labeled to help the model know what is going on. This strategy is labor-intensive. (Labeling is a whole other area to read up on.)

Prompt engineering: humans design prompts in a human language that explain to the model how to behave. Example: when I tell you a word in English, you respond with the word in French.

Prompt tuning using soft prompts: the soft prompts are created by the AI itself using fine-tuning data. These prompts are encoded into a vector and are not human-readable.

The above is the first six minutes of the video. Next the lecturer shows these three applications by adding them to the box picture. This is confusing because it seems like he is applying all three strategies, but then he concludes that prompt tuning gets the best results. So I guess he is saying to use prompt tuning.

Since AI/ML is a young field, I think people will be applying many different strategies to get their models to work properly, and this is just scratching the surface. Every few months people will come up with other strategies that improve the situation. Ten years from now a bunch of these strategies will be discarded and there will be other new ones. The field of ML is still defining its design patterns; pattern books will be written as solutions mature. Prompt engineering and prompt tuning are the two patterns he talks about.

I hope that helps. Thinking this through has certainly helped me, so thanks for the prompt. 😊
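If anyone wants to see what the "prompt tuning with soft prompts" strategy looks like in code, the Hugging Face PEFT library wraps it up. A rough sketch (the model choice and hyperparameters are arbitrary, and the API may differ slightly between versions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"
base_model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,                      # length of the soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # warm-start the vectors from real words
    prompt_tuning_init_text="Classify the sentiment of this review:",
    tokenizer_name_or_path=model_name,
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the soft-prompt embeddings are trainable
```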
@chavruta2000 9 months ago
Yes, this is incredibly generic and communicates very little, considering it is supposed to be from a communication theory expert.
@RajatKumar-oy9mw 8 months ago
Totally agree.
@pensiveintrovert4318 4 months ago
I got one useful tidbit: that I have to stick the soft prompt into the embedding layer. How? That is also unclear.
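Mechanically, "sticking it into the embedding layer" usually means bypassing the token lookup and feeding concatenated embeddings to the model yourself. A sketch with Hugging Face Transformers (GPT-2 chosen only because it is small; sizes are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# 8 learnable soft-prompt vectors sized to GPT-2's embedding dimension (768).
soft_prompt = torch.nn.Parameter(torch.randn(8, model.config.n_embd) * 0.02)

input_ids = tokenizer("The movie was great", return_tensors="pt").input_ids
token_embeds = model.get_input_embeddings()(input_ids)           # (1, seq_len, 768)
full_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)

# Pass embeddings directly instead of token ids, so the soft prompt sits
# in front of the real tokens at the embedding layer.
outputs = model(inputs_embeds=full_embeds)
print(outputs.logits.shape)
```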
@DJZG 5 months ago
Shame that not a single real-world example of prompt tuning is provided. I guess this video isn't about that kind of detail?
@bongimusprime7981 a year ago
ChatGPT is not an LLM lol
@fabrzy3784 a year ago
yes it is