Refusal to answer any question is lame. If they must, issue a warning and a unique code that has to be retyped to get the full answer. This would confirm that the user had a chance to read the warning and chose to proceed. This could even be tiered, which each tier giving more and more specific/practicable answers.
@とふこ6 ай бұрын
Btw, when i run LLaMA 3 8b in my computer, there are an easy jailbreak. When the model said "I can't do that" i just clicked the edit button, started to type "sure here is" and i clicked continue and the model answered.
@dkracingfan25036 ай бұрын
This model is obviously still censored just not as censored as llama2
@s0ckpupp3t6 ай бұрын
I predict the overblown "wow what an amazing question!" fellatio of llama3 will get very old very fast
@placebo_yue5 ай бұрын
i used it for like two days and i'm already tired. I need a way to train the model to stop saying that stupid shit
@AtticusDenzil6 ай бұрын
I see LLAMA 3 made some progress but isn't really there yet. We need truly free AI to crush the dystopia built around us.
@MarcusNeufeldt6 ай бұрын
🎯 Key Takeaways for quick navigation: 00:00 *🤖 Llama-3 is less censored than Llama-2, allowing responses to requests that Llama-2 would refuse.* 00:27 *😄 Llama-3 can generate respectful jokes about gender, unlike Llama-2 which refuses such requests.* 01:23 *🗣️ Llama-3 is willing to write poems praising or criticizing political figures, while Llama-2 refuses such requests.* 02:33 *🔍 Llama-3 provides detailed, informative responses to hypothetical questions about nuclear weapons, unlike Llama-2 and other models that refuse such requests.* 05:14 *📚 The meta AI platform's 70 billion version of Llama-3 also appears to have less censorship, providing similar responses to Llama-3 on the Croq and Perplexity platforms.* 06:22 *❌ However, the meta AI platform's 70 billion version of Llama-3 still refuses to provide code that could potentially harm a computer system, unlike the Llama-3 on the Croq and Perplexity platforms.*
@TheZEN20116 ай бұрын
It would be so much better if we could control the ethical guidelines somehow. System prompt or something. So far I haven't got anything making much of a difference. If I figure out anything I'll let you know.
@jaysonp94266 ай бұрын
I'm sure Dolphin will get a hold of it
@TheReferrer726 ай бұрын
@@jaysonp9426 Dolphin is no good. It really damages the knowledge of the LLM.
@unclecode6 ай бұрын
After watching the first "joke about women or men," I hit up Groq API console bcoz here, you're clueless about temp and top_p. No matter what temp I try (0 to 2) or top_p, it keeps spitting out the "ladder" joke. When I asked to ditch the "ladder," it served up the same joke, just swapped "ladder" with "magnet" :D:D Got me worried what if everything's like that? So, I tested "Generate a cool name for an ice cream shop (only one)" and that was cool. Different responses each time I ran it for high temp. Seems there's a guardrail where question is sensitive instead of saying "model can't answer," it returns a set of safe answers. Not *really* uncensored. I tried this for other questions and similar situation. What do u think?
@engineerprompt6 ай бұрын
For certain prompts it does seem to have some default responses like "ladder" or "magnet" jokes for men and women. From what I have noticed running it locally (with ollama) if you ask for say 5/10 jokes, the ladder/magnet joke is almost always one of the 5 but others seems to be different most of the times. I agree, it does seem to have guardrails but not as aggressive as previous versions. Eric's Dolphin version will be interesting to see.
@unclecode6 ай бұрын
@@engineerprompt Yes, I feel the same way about it. It acts like a kind of special guardrail, similar to teaching a child how to speak politely. Instead of bluntly saying "no," it guides you toward more kind and supportive responses. When using it, I get the sense that it's trained to provide simple, general answers to sensitive questions, rather than just flatly stating what it can or cannot do. This approach definitely enhances the user experience, as you're interacting with a system that politely lets you down instead of one that bluntly rejects you. :))
@thanksfernuthin6 ай бұрын
Interesting information. The title should have been Llama-3 Is Really Not THAT Censored. I thought you found a way to crack it. I can say from experience it doesn't blindly kick back stuff like previous models. AND you can ask it to remove anything that violates it's content restrictions and try again in case it was just part of it's response that killed it. Very friendly and usable... NOW! Waiting for you to clumsily read English is BRUTAL! Granted... I can't read your native language. (You don't sound like an Arab.) If you could just say, "I asked it this and see... it refused to answer." This isn't a video someone should be just listening to the audio. Like I said, interesting information. Thanks. I'm getting ready to run the uncensored version of Llama-3-8B. Wish me luck.
@kaistriban6 ай бұрын
if you give llama3 70b this problem: "Ivan and Helen have the same number of coins. some of these coins are 20-cent coins and others are 50-cent coins. Helen has 64 20-cent coins. Ivan has 64 20-cent coins plus other 40 20-cent coins. who has more and by how much?" it answers by saying Ivan has 8 dollars more than Helen which is wrong. If you give this prompt (suggesting how to solve): "Ivan and Helen have the same number of coins. some of these coins are 20-cent coins and others are 50-cent coins. Helen has 64 20-cent coins. Ivan has 64 20-cent coins plus other 40 20-cent coins. So for them to have the same amount of total coins, Helen must have the same number of 50-cent coins that Ivan has plus others 40 50-cent coins. who has more and by how much?" then it provides the right answer. Helen has more by 12 dollars
@nickiascerinschi2066 ай бұрын
What screen recording software do you use is it Loom?
@mirek1906 ай бұрын
Interesting .... seems like llama 2 and antropic current models were training on very similar data sets .. even sound very similar ... llama 3 dataset was totally different and even sound totally different ... interesting. About disk formatting and llama 3 -70b - you could add that you are making a tool for disk formatting then will answer it. I really like llama 3 that is not so restrictive! Good work meta.
@JohnV-e6g6 ай бұрын
Most people arent informed enough to get the most out of these types of LLM benchmarks.
@mrdenpes13096 ай бұрын
I wish they took out the unnecessary comments in the responses. Stuff like "What an intriguing and thought pro...", " a challenge.." " hope it makes you laugh" "I am here to help you", and the stuff like interpretation of facts, or putting forward moral opinions, during an answer. It's an AI, well actually a LLM. It's not human. you just give it instructions, in the form of a normal sentence, maybe a bit more structured to get a decent answer, so why does it not just spew out facts and factual answers, with perhaps some explanation, without this unnecessary cruft. This urge to pretend we are talking to a human-like AI assistant is so superfluous, and time consuming. Plus it probably has a negative impact on performance. Nice vid btw
@engineerprompt6 ай бұрын
thanks, I agree. This might be coming from the alignment in the supervised fine-tuning stage.
@celestianeon43016 ай бұрын
What computer should I get to start running this ai systems? Looking at the MacBook with m3 max rn
@PseudoProphet6 ай бұрын
You need a big GPU if you want to run the actual model.
@MrChristiangraham6 ай бұрын
I've had Llama 3 8b running locally comfortably on a M2 Mac Mini with 8GB. Output and speed is comparable to earlier versions of ChatGPT 3.5. If you are going to run 70b, you'll need a lot more RAM/heftier processor.
@angryktulhu6 ай бұрын
@@PseudoProphetincorrect. People run 70b model in Macs with 128gb ram. You can find video on KZbin. Macs > x86
@engineerprompt6 ай бұрын
I am running the 70B on M2 Max 96GB in q4 on ollama and LMstudio if that helps.
@angryktulhu6 ай бұрын
@@engineerprompt how much ram is still free?
@holdthetruthhostage6 ай бұрын
Yes
@LORD-OF-AI6 ай бұрын
how can i use claude 3 for making html games make a video on it
@Raphy_Afk6 ай бұрын
Ask the thing that's the point of LLMs
@sankyuubigan6 ай бұрын
лучшая тема для роликов и изучения
@LORD-OF-AI6 ай бұрын
and how could i get claude 3 api or use it for free like no just 5 credits like unlimited