what happens if you give claude's system prompt to llama3...

5,205 views

Chris Hay

1 day ago

Comments: 23
@tvwithtiffani · 2 months ago
Thank you for doing this work 🙌🏾 This was the first thing I speculated about when the system prompt was released officially. Now someone should test it on a 13B model.
@davidmills9653 · 2 months ago
Great demo and analogies, thanks!
@chrishayuk · 2 months ago
Glad it was useful
@toadlguy · 2 months ago
I too have been playing with Anthropic's prompt on llama3.1, but much more interesting is Anthropic's Artifact system prompt (liberated by Pliny The Liberator on X), as it shows excellent use of sections, examples, and replies to different kinds of queries. It should be noted that the prompts in these models are correlated both to the user interface (where not all of the response is displayed the same way, or at all, for instance the antThinking or antArtifact tags) and to the fine-tuning, which has many examples of the responses expected from this system prompt. However, the prompts provide an excellent resource for how to approach these problems as well.
@chrishayuk · 2 months ago
agreed, i purposely kept to the publicly released prompt in this video and kept away from antArtifact. i did cover antThinking in one of my other videos. i cover the fine-tuning point towards the end of the video: the models are fine-tuned towards the system prompt, but as shown in the video, that doesn't mean the prompt isn't useful when you bring it to another model
@toadlguy · 2 months ago
@chrishayuk Yes, exactly. I am rather amazed at what you can do with just the system prompt. I expect fine-tuning will provide greater fidelity (and accuracy, as I usually only provide one or two examples in the prompt). It is kind of a grey area discussing the artifact prompt, but it isn't really "hacking" since it was divulged by the model itself. It also demonstrates just good prompt engineering beyond how it uses its special tags. I specifically didn't provide the link, but it is easily found. BTW, great channel. I went back and viewed some of your other videos (and subbed) 👍
@chrishayuk · 2 months ago
tbh, it could be a good test to see how llama does with the artifact prompt, to see how much of it is fine-tuning. could be a good video, thanks. and thanks for the sub, glad you find the channel useful
@eliseulucenabarros3920 · 1 month ago
Your glasses are so well made, aren't they... they're beautiful
@chrishayuk · 27 days ago
thank you, they're from swanwick
@rambapat588 · 14 days ago
Amazing video. Can you make one more video where you try the same user prompts across claude with system prompts (on the website), claude through the api (no system prompts), llama vanilla, and llama with claude's system prompt?
@Alex-rg1rz · 2 months ago
Thanks, that's interesting!
@chrishayuk · 2 months ago
Glad you liked it!
@Yipper64 · 2 months ago
I am curious whether LLMs are *better* or *worse* almost purely because of their system prompt. Obviously, some aspects do require a more powerful model, but the system prompt also seems to play a huge role.
@dimosdennis · 2 months ago
Very good video, thanks for that. That is fast for a local 70b model. What machine are you running it on?
@chrishayuk · 2 months ago
It’s an M3 Max with 128GB of memory
@vertigoz · 2 months ago
Are there no problems regarding the number of tokens in the system prompt?
@roseblack6089 · 2 months ago
does it support the Claude Dev VSCode extension?
@waneyvin · 2 months ago
what kind of computer are you running this on? it seems that llama3-70B is running smoothly!
@chrishayuk · 2 months ago
Yeah, if you want to replicate it, use llama3:8b. I used a larger model as I wanted it to be a bit smarter. My machine is an M3 Max with 128GB of memory
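[Editor's note: the replication the reply above describes — a local llama model given Claude's published system prompt — can be sketched against Ollama's local chat API. This is a minimal sketch, assuming Ollama is serving on its default port (11434) and that the published prompt has been saved locally; the filename `claude_system_prompt.txt` and the `ask` helper are illustrative, not from the video.]

```python
import json
import urllib.request

# Hypothetical local copy of Anthropic's published system prompt.
SYSTEM_PROMPT_FILE = "claude_system_prompt.txt"

def build_chat_payload(model: str, system_prompt: str, user_message: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint with a custom system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # return one complete JSON response instead of a stream
    }

def ask(model: str, system_prompt: str, user_message: str) -> str:
    """Send the chat request to a locally running Ollama server and return the reply text."""
    payload = build_chat_payload(model, system_prompt, user_message)
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example usage (requires `ollama pull llama3:8b` and a running Ollama server):
#   prompt = open(SYSTEM_PROMPT_FILE).read()
#   print(ask("llama3:8b", prompt, "Who are you?"))
```

Swapping `"llama3:8b"` for a 70B tag reproduces the larger-model setup, given enough memory.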
@waneyvin · 2 months ago
@chrishayuk I found that models smaller than 10B are not good at reasoning, including ReAct or function calling. I guess it might be because the neural network is not deep enough; maybe a deeper network is more capable of abstraction than smaller models.
@chrishayuk · 2 months ago
yep, the first small model i've found that's useful for reasoning is the google gemini 9b model, which isn't bad at it
@chrishayuk · 2 months ago
i also have a video on using patterns with ReAct. check that video out, as i use a patterns technique which works really well for getting the mistral models to perform well