Mistral 7B - The Most Powerful 7B Model Yet 🚀🚀

29,711 views

Prompt Engineering

1 day ago

Comments: 33
@s0ckpupp3t · 10 months ago
This thing is super powerful, I'm getting extremely good results.
@gustephens111 · 8 months ago
Would you be willing to copy and paste any examples? (Prompt -> output)
@hamdanuk2 · 10 months ago
I like your videos! No BS, and you jump straight into the core stuff. Great content. Thank you.
@Vermino · 10 months ago
3:29 - oh no, not an uncensored model! That sounds like a feature to me.
@engineerprompt · 10 months ago
:)
@michaelmitchell2213 · 7 months ago
Hi, could you explain the value of having GPT-4 not issue a disclaimer of neutrality?
@Vermino · 7 months ago
@michaelmitchell2213 The value of a neutral disclaimer would be to add context or point to resources that further a user's inquiry. However, an uncensored model isn't about disclaimers; it's about data being suppressed from the user, supposedly for "the greater good". Example: you ask DALL-E to generate an image in the art style of a famous artist (to save on tokens), plus other keywords to make it unique. The response is the LLM lying to you that it can't generate it because of [y] rather than [x], where [x] is the real reason and [y] is the lie.
@BionicAnimations · 6 months ago
Please make a video on OpenHermes Neural 7B Q4. To me, it's even better. 😍
@Nihilvs · 10 months ago
Very nice! Looking forward to using this one!
@girrajjangid4681 · 10 months ago
I tried it out. Results are good.
@giovanith · 10 months ago
I asked the same question (CEO of Twitter), and here is the answer: "The current CEO of Twitter is Jack Dorsey. He has been the CEO of Twitter since 2015, and has also served as the company's CEO from 2006 to 2008. Prior to his time as CEO, Dorsey was the co-founder and CTO of Twitter. He has also been the CEO of Square, a mobile payments company."
@NoidoDev · 10 months ago
I think it could be a problem if you test it with the same old questions, since those might have been used in training and optimized for.
@modolief · 9 months ago
Thanks!!
@debatradas1597 · 10 months ago
Thanks!
@-someone-. · 10 months ago
How long before y'all realised it wasn't bugs and mosquitos on your screen 😅
@s0ckpupp3t · 10 months ago
Totally!
@warsin8641 · 10 months ago
Yipee!!!❤
@jondo7680 · 10 months ago
I'm curious whether other fine-tunes like Vicuna or WizardLM can bring more out of it, or if the "instruct" model already uses everything that's possible. I don't understand why they called it "instruct", because if I understand it right, it's a chat model?
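For context on the "instruct" naming: the instruct checkpoint is fine-tuned to follow instructions and, as I understand it, expects user turns wrapped in [INST] ... [/INST] markers rather than a free-form chat transcript. A minimal sketch of that template (verify the exact markers against the official model card or the tokenizer's chat template before relying on it):

```python
# Sketch of the prompt template Mistral-7B-Instruct reportedly expects.
# Check the official model card / tokenizer chat template for the exact format.

def build_instruct_prompt(user_message: str) -> str:
    """Wrap a single user turn in [INST] ... [/INST] markers."""
    return f"<s>[INST] {user_message.strip()} [/INST]"

print(build_instruct_prompt("Who is the CEO of Twitter?"))
# -> <s>[INST] Who is the CEO of Twitter? [/INST]
```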
@caiyu538 · 10 months ago
Great
@Apps_Guide · 10 months ago
Can this also answer analytics questions from tabular data?
@REplayer001 · 8 months ago
Question: how do they measure the model against the others to show it's better?
@engineerprompt · 8 months ago
Usually people use benchmark datasets. But I think the old benchmarks are not a good way of evaluating these models.
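As a rough illustration of what benchmark-style evaluation looks like, the sketch below scores answers against a small question/answer set by exact match. The `model_generate` callable is a placeholder for whatever inference backend is used (llama.cpp, transformers, an API), and the toy dataset is made up for the example.

```python
# Minimal exact-match benchmark scoring sketch; `model_generate` is a
# placeholder for a real inference call.

def exact_match_score(model_generate, dataset):
    """dataset: list of (question, expected_answer) pairs."""
    hits = 0
    for question, expected in dataset:
        answer = model_generate(question).strip().lower()
        if expected.strip().lower() in answer:
            hits += 1
    return hits / len(dataset)

if __name__ == "__main__":
    toy_set = [
        ("What is the capital of France?", "Paris"),
        ("How many bits are in a byte?", "8"),
    ]
    canned = dict(toy_set)  # dummy "model" so the sketch runs end to end
    print(f"exact match: {exact_match_score(lambda q: canned[q], toy_set):.0%}")
```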
@REplayer001 · 8 months ago
@engineerprompt I tried this model out; it works fast and gives correct info. However, do you know what it is designed for? Whenever I asked anything other than very technical questions, it always gave me "As an AI model I cannot..." and so on. One of the things I asked it for was a joke, and it just couldn't.
@valentinfontanger4962 · 10 months ago
Would that fit on an RTX 3080 with 10 GB?
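A rough back-of-envelope answer, assuming the usual bytes-per-parameter figures and ignoring KV-cache and runtime overhead: the full fp16 weights of a 7B model need about 14 GB, so they won't fit in 10 GB, while a 4-bit quantization is roughly 3.5-4.5 GB and fits comfortably.

```python
# Back-of-envelope VRAM estimate for a 7B-parameter model at common precisions.
# Ignores KV cache and runtime overhead, which add a few extra GB.

PARAMS = 7e9

for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{name:>5}: ~{gb:.1f} GB of weights")

# fp16 : ~14.0 GB -> does not fit on a 10 GB RTX 3080
# 8-bit:  ~7.0 GB -> tight but workable
# 4-bit:  ~3.5 GB -> fits comfortably
```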
@amortalbeing · 10 months ago
Is there any difference between the quantized model and the raw model?
@engineerprompt · 10 months ago
Is the virtual env able to detect the GPU drivers? That might be the issue.
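A quick way to confirm whether the environment actually sees the GPU, assuming PyTorch is what the virtual env uses for inference:

```python
# Check that the Python environment can see the GPU through PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```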
@godned74 · 10 months ago
If the word "push" is displayed on an actual mirror, the reflective surface would serve no purpose for people on the opposite side. In such a case, the logical action would be to push the door. The example should use a window, not a mirror.
@therealsharpie · 10 months ago
Completely agree. The hypothetical itself needs a bit of work.
@user-pk4hn1uz1k · 5 months ago
Total noob here. Could someone explain what a "model" is supposed to produce that is actually useful? I see there is a design pattern called RAG that I guess you would use with a "model"?
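For the RAG question: retrieval-augmented generation just means fetching the most relevant text first and pasting it into the model's prompt, so the model answers from your documents instead of only its weights. Below is a minimal sketch; `embed` and `generate` are placeholders standing in for a real embedding model and LLM call, and the toy documents are made up.

```python
# Minimal retrieval-augmented generation (RAG) sketch. `embed` and `generate`
# are placeholders for a real embedding model and LLM call.
import math

def embed(text):
    # Toy "embedding": letter frequencies, just so the sketch runs end to end.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question, documents, k=1):
    q_vec = embed(question)
    return sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)[:k]

def answer(question, documents, generate):
    context = "\n".join(retrieve(question, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

docs = [
    "Mistral 7B is a 7-billion-parameter open-weight language model.",
    "RAG retrieves documents and adds them to the prompt before generation.",
]
print(answer("What is Mistral 7B?", docs, generate=lambda p: p))  # echoes the prompt
```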
@christiansroy · 10 months ago
How many 7B models do fact checks? How many 13B models do fact checks? How many 34B-parameter models do fact checking? It's my understanding that, by design, no LLM can do fact checks because they don't have access to the Internet. I would think that in order for them to fact-check, they need access to the Internet, which then becomes an actual application that uses the LLM. The model itself is just weights, right?
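That framing matches how fact-checking is usually wired up in practice: the model itself is just weights, and any checking against live sources happens in the application around it. A minimal sketch of that wrapper pattern, with `web_search` and `llm` as placeholders for a real search API and model call:

```python
# Sketch of an application wrapping an LLM with a retrieval step for fact checking.
# `web_search` and `llm` are placeholders for a real search API and model call.

def fact_check(claim, web_search, llm):
    snippets = web_search(claim)  # e.g. top results from a search API
    evidence = "\n".join(snippets)
    prompt = (
        "Given the evidence below, say whether the claim is supported, "
        "refuted, or unverifiable, and cite the snippet you used.\n\n"
        f"Claim: {claim}\n\nEvidence:\n{evidence}"
    )
    return llm(prompt)

verdict = fact_check(
    "Jack Dorsey is the current CEO of Twitter.",
    web_search=lambda q: ["Snippet 1: Twitter/X leadership changed after the 2022 acquisition."],
    llm=lambda p: "Refuted (see Snippet 1).",  # stub so the sketch runs
)
print(verdict)
```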
@a22024 · 10 months ago
How would checking whether (certain) internet sources agree constitute checking a fact?
@RedAnimus · 10 months ago
It's not like there is a database of facts on the internet. Facts can depend on the context of the observer. If we asked the AI whether we revolve around the sun, the model would have to assume the observer's time period and retrieve the relevant fact; if we want the fact based on medieval astronomy, then it changes with context. What about competing views? One side or many might see their viewpoint as "fact" and assume the model is flawed because there are studies or evidence supporting their specific view. Scientific studies and evidence can contradict each other; which side is fact?

Just saying all this to point out that facts are a difficult thing to pin down. Access to the internet solves nothing; it just provides more information, information that is itself often biased by those reporting the results in ways that misrepresent the data. Most use cases likely only need "good enough" reliability. Just look at the shortcuts we take ourselves, using categories, assumptions, and estimates, and making up stories about how things work even when we know nothing about the systems we interact with daily. Human minds are terrible at precision. For precision we rely on mathematics, and even then we have equations in physics and other areas that are sometimes best guesses, and those still work quite well. That doesn't mean they are fact.