Llama 3.1 405b Deep Dive | The Best LLM is now Open Source

29,857 views

MattVidPro AI

1 day ago

Comments: 167
@MattVidPro 2 months ago
Hey folks! Important notice! Toward the end of the video I ran LLaVA Llama 8B, which is a vision-tuned version of Llama 3 8B, NOT the new one. So that test is NOT representative of the new model. My bad!!! I must have run the wrong model.
@notnotandrew 2 months ago
You beat me to this comment :) Also, I may be wrong, but I assume that LM Studio is using a quantized version of the model, which may be lower quality, even at the highest quantization level.
@howard927 2 months ago
Not private, far from it! It looks like a hack job. My GPU was running at 100% and hot. I installed it on my server with 20 cores, 1000 GB of RAM, 48 GB of GPU memory, and 10,000 CUDA cores; my other models were flying, but after installing Llama 3.1 the only one working fast was Llama 3.1. And no, it's not private, no no no, it requires a connection to Meta's servers. Could it be the biggest hack job in the world ever? Why is Mark changing and reinventing his image to convince us all that he wants good for the world? Is he just gathering data from all of us? I uninstalled it and my system continues to be slow, so I'm going to reformat my hard drive 40 times. The truth always comes out.
@MattVidPro 2 months ago
ZUCC does it again! This lizard man just cannot stop releasing quality open source models!
@pigeon_official 2 months ago
Bro, within 1 day of Llama 3.1, Mistral released a new model that's better than Llama 3.1 and 1/3rd the size, so this video is already outdated after 1 day
@MattVidPro 2 months ago
@pigeon_official Bro, I saw that as I was editing
@FusionDeveloper 2 months ago
I have many LLMs downloaded. Once I tried Llama 3 8B, it was all I used locally. Now I'm sure Llama 3.1 8B will be all I use locally.
@ryangreen45 2 months ago
@pigeon_official Mistral Large 2 is better than Llama 3.1 405B?? No wayyyy, source?
@AmazingArends 2 months ago
@pigeon_official If Mistral is better than Llama 3.1, I would like to see a comparison video
@augustuslxiii 2 months ago
Looking forward to the day when this kind of thing can be run locally and quickly on affordable machines.
@6AxisSage 2 months ago
I took a few weeks offline with my laptop and some LLMs; NeuralDaredevil Llama 3 8B was and is so good! Talking to it warmed my room with GPU and PSU heat for a bonus 😊 Too early to tell you how 3.1 compares, but I've been testing a few locally on my laptop too! Not 405B, obviously, or even 70B, but I'll use my local model over GPT-4o
@johnwilson7680 2 months ago
Tried out the 8B and 70B on my RTX 4090 with Ollama and I'm impressed so far. 8B is very fast, and while 70B is slow, about one token per second, it is clearly better and usable if you're not in a hurry.
@countofst.germain6417 2 months ago
1 token per second 😂 at least you tried.
@johnwilson7680 2 months ago
@countofst.germain6417 If you are working on other things, you can just run it in the background. For some use cases, it's not so bad.
@DeadNetStudios 2 months ago
How much RAM do you think is needed for the 405B?
@santosic 2 months ago
Wow, if even an RTX 4090 doesn't run 70B all that well... RIP. Guess it's time to invest in a server farm... lmao 😅
@madcoda 2 months ago
The 70B Q4 model download is 40GB, so you need at least 48GB of VRAM to run it efficiently; an A6000/RTX 6000 maybe
@jeffwads 2 months ago
I tried this model on HF Spaces and wow, amazing stuff. I asked it "I have a device whose time is 6:20am. An incident took place on that device at 5:34am. If my time now is 9:02am, what time did the incident take place in my time?" and it gave a quick and correct answer of 8:16. GPT-4o got it right, but its answer was rambling and way too involved.
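The clock-offset arithmetic in that test prompt can be verified directly (a quick sketch using Python's datetime and the times quoted in the comment):

```python
from datetime import datetime

FMT = "%I:%M%p"
device_now = datetime.strptime("6:20AM", FMT)
incident_on_device = datetime.strptime("5:34AM", FMT)
my_now = datetime.strptime("9:02AM", FMT)

# The device clock lags mine by a fixed offset (2h42m here);
# shift the device-local incident time by that offset.
offset = my_now - device_now
incident_in_my_time = incident_on_device + offset
print(incident_in_my_time.strftime("%H:%M"))  # 08:16
```

So 8:16am is indeed the correct answer the model gave.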
@Maisonier 2 months ago
I'm really starting to get concerned about these models and all the latest developments over the past year. Most people say things like "this is the worst it's going to be," "it's just the beginning, imagine what it will be like in a few years," or "the improvement is exponential," but from everything I see, it looks like this isn't the baseline. It seems like we've already hit the peak of this technology, which is why there isn't much difference between the models. That's why they're looking for new approaches, like using agents or mixture of experts.
@SSLCLIPS-TV 2 months ago
I told you last year they were gonna be the best, and I was 100% right!
@6AxisSage 2 months ago
Make AI content and I'll watch!
@fabiankliebhan 2 months ago
I think you tested the old Llama 3.0 8B model at the end
@fabiankliebhan 2 months ago
Yeah, just looked it up on Hugging Face. This one is 86 days old ;) I liked the tests very much anyway :)
@bloxyman22 2 months ago
I noticed the same. It is a CLIP-based vision model built on the old Llama 3.0
@odrammurks1497 2 months ago
How can that happen?😂 Also, LLaVA is for image recognition. How can you not notice this as a professor of AI?😂
@konkretvirtuel 2 months ago
It's the LLaVA model, NOT Llama. LLaVA has vision capabilities. That's not Llama 3.1
@MattVidPro 2 months ago
My b 😂 I already had that model installed, so I must have run the wrong damn one
@CapsAdmin 2 months ago
You didn't test the new Llama 3.1 8B model locally; you were testing a custom model that combines LLaVA and Llama 3.0 8B. The custom model is on Hugging Face under the name xtuner/llava-llama-3-8b-v1_1-gguf
@Joooooooooooosh 2 months ago
Yeah, this field is producing a lot of uninformed expert vloggers.
@vi6ddarkking 2 months ago
Yes, I have no doubt many long RP sessions will be had with the updated 8 billion parameter model and its many, many fine-tunes.😉
@starblaiz1986 2 months ago
Llama 3.1 8B locally and 405B up on Groq, using RouteLLM to switch between them (8B as the weak model and 405B as the strong model, obviously). THEN combine THAT with an agentic framework. Wild power at our fingertips! 😮
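The weak/strong setup described above is the core idea behind model routing. RouteLLM's actual API differs (it trains a learned router); the concept can be sketched with a toy heuristic, where the threshold and keyword list below are made up purely for illustration:

```python
def route(prompt: str, threshold: int = 120) -> str:
    """Toy router: send hard-looking prompts to the strong model,
    everything else to the cheap local one."""
    hard_keywords = ("prove", "refactor", "analyze", "step by step")
    looks_hard = len(prompt) > threshold or any(
        kw in prompt.lower() for kw in hard_keywords
    )
    return "llama-3.1-405b@groq" if looks_hard else "llama-3.1-8b@local"

print(route("What's the capital of France?"))                  # weak model
print(route("Analyze this repo and refactor the auth layer"))  # strong model
```

In a real deployment the router itself is a classifier trained on preference data, so the routing decision tracks expected answer quality rather than surface features like prompt length.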
@vibhorgupta5442 2 months ago
It's not "open source". It's freeware. They haven't released the code, model architecture, training process, or data.
@MusicalGeniusBar 2 months ago
Matt had AI Matt as a guest 😂😂😂 I couldn't tell the difference 😅😅😢😢
@RealmsOfThePossible 2 months ago
But which AI Matt? There are two.
@jonmichaelgalindo 2 months ago
The awesome thing about Llama Guard being a separate model from the main LLM is that you can update and redeploy just that tiny model as new jailbreaks are found, instead of having to retrain everything. Safety done right!
@Stuck_in_Spacetime 2 months ago
For counting the total number of letters and counting a specific letter in a word, I used this prompt, which works every time:
@rionix88 2 months ago
Downloaded 3.1 8B yesterday, will test today
@industrialpunk1088 2 months ago
Your comparison at the end was with the old Llama 3 (not the new 3.1) model. (The xtuner LLaVA Llama also has vision capabilities, which you can install from LM Studio; it is fun to use. It'd be nice to see a video on LLM vision, it's really powerful!)
@africanowwtv 2 months ago
Good content as usual.
@MattVidPro 2 months ago
Thank you!
@wardehaj 2 months ago
Thanks for the great video! And now there is also Mistral Large 2 available, and according to the benchmarks it should perform similarly to Llama 3.1 405B!
@blisphul8084 2 months ago
The strawberry test works flawlessly with gpt-4o-mini and gpt-4o when done in the playground. It doesn't even need to show its work. Perhaps the ChatGPT system prompt is messing it up. I've been very impressed with 4o-mini, and I think that with the right prompting, you don't really need full 4o most of the time.
@dot1298 2 months ago
When will we have local models with the performance of GPT-5 / Llama 4 405B / Claude 4 Opus?
@dot1298 2 months ago
Before 2030?
@mannynunez1481 2 months ago
🎯 Key points for quick navigation:
00:00 🌐 Meta's Llama 3.1 405B and Open Source Benefits
- Meta releases Llama 3.1 405B, a top-tier open-source model competing with closed-source giants like GPT-4 Omni and Claude 3.5.
- Open-source accessibility allows modification, customization, and unrestricted use without direct cost to Meta.
00:14 🖥️ Model Specifications and Community Impact
- Llama 3.1 models include 405B, 70B, and 8B, each offering significant improvements and versatility for different computing capabilities.
- Enhanced context length and performance benchmarks position Llama 3.1 models competitively against closed-source counterparts.
01:50 🌍 The Significance of Open Source in AI Development
- Open-source models like Llama 3.1 foster innovation by removing access barriers and enabling broad community collaboration.
- Unrestricted access allows for fine-tuning without proprietary restrictions, enhancing model adaptability and utility.
04:34 📊 Model Performance Comparison and Future Applications
- Performance benchmarks show Llama 3.1 405B excelling in metrics like MMLU and code evaluation, competing closely with industry leaders.
- Potential applications include data synthesis and model training, leveraging the model's capabilities to advance AI workflows.
06:52 🚀 Advantages of Smaller Models: Llama 3.1 8B
- Smaller open-source models like Llama 3.1 8B offer competitive performance locally, surpassing other models in their class.
- Accessibility and performance make them ideal for developers and researchers looking to experiment or deploy AI solutions efficiently.
08:56 🛠️ Tools and Platforms Supporting Llama Models
- Various platforms and tools, including LM Studio and VS Code integration, enable easy deployment and utilization of Llama 3.1 models.
- Integration into diverse AI development environments enhances accessibility and usability across different user preferences.
09:50 🌐 Community and Developer Engagement
- Community reactions highlight widespread adoption and creative applications of Llama models across different AI applications.
- Developer community contributions, including jailbreaking initiatives, demonstrate active engagement and exploration of model capabilities.
14:20 🎭 Creative Storytelling with Llama 3.1 405B
- Llama 3.1 405B generates creative and humorous narratives, showcasing its capabilities in natural language generation.
- Comparison with other models underscores its unique storytelling abilities and imaginative outputs.
21:42 🍓 Miscounting of "R"s in words by AI models
- AI models like GPT-4o and Sonnet 3.5 miscount the number of "R"s in words like "strawberry."
- Meta's Llama 3.1 405B correctly identifies these errors, showcasing superior tokenization handling.
23:16 📊 Performance of AI models in real-world knowledge queries
- Meta's Llama 3.1 405B excels in explaining specific real-world concepts and scenarios accurately.
- Legacy AI models like GPT-4 Omni demonstrate reliable performance in factual queries despite being older models.
26:05 💻 Local vs. cloud-based AI model performance
- Comparison between Llama 3 8B running locally and OpenAI's GPT-4o Mini in terms of speed and responsiveness.
- Llama 3 8B offers impressive speed locally, suitable for various computational setups, in contrast with the cloud-based GPT-4o Mini.
28:20 🌊 Handling emotional queries about a lost pet rock
- AI responses from Llama 3 8B and GPT-4o Mini vary in emotional sensitivity and practical advice.
- Llama 3 8B's responses show a more practical, less emotionally attuned approach compared to GPT-4o Mini's more empathetic suggestions.
31:35 🚀 Conclusion on Llama models and their potential
- Llama models, especially Llama 3.1 405B, demonstrate high performance and open-source accessibility.
- These models, like Llama 3 8B, offer versatility and speed locally, positioning them as viable alternatives to cloud-based AI models.
Made with HARPA AI
@Alseki 2 months ago
Impressed with the large Llama 3.1 model demo. However, I've been testing Mistral Nemo Instruct 12B (q8_0 GGUF) for the first time today, running locally, to generate fiction. On my GeForce 3090 it gives output considerably faster than I can read, at a quality which seems to surpass Goliath 120B (which I was previously using for this, hosted externally via RunPod). I had thought limited hardware availability would set back much of the early society-wide impact of these LLM models by some years, as until something like a pair of 48GB VRAM cards was available per output instance, maybe the quality wouldn't be sufficient. Now I'm thinking that if a 12B model can be this good, we basically already have the hardware for mass use. Can't believe they packed such high quality into such a small model.
@Rafifkhalidpermana 2 months ago
Excuse me @MattVidPro, can you tell me the specs of the computer or laptop you used to do this Llama test locally in LM Studio? Anyway, thanks for your great recap of this LLM. Kudos from Gorontalo, Indonesia.
@markwalker8374 2 months ago
Good to see someone in Gorontalo keeping up to date with the latest AI developments. Never been to Gorontalo, but I have been to Menado and the Tangkoko area several times.
@Rafifkhalidpermana 2 months ago
@markwalker8374 Wow, Manado and Tangkoko? Awesome! You should try Danau Limau, Bunaken, and Olele someday! Anyway, what computer did you use to test Llama 3.1 405B in LM Studio? Can you tell me?
@markwalker8374 2 months ago
@Rafifkhalidpermana I'm using an M1 MacBook Pro, but at the moment I am more interested in trying out image and video generation than in using LLMs. I'm a little concerned that LLMs will only end up creating lots of text of questionable accuracy and lots of vacuous e-books. As for wandering around Indonesia birdwatching and meeting people, my favourite area is now Raja Ampat, but alas age has caught up with my knees and I can't travel much at present
@Rafifkhalidpermana 2 months ago
@markwalker8374 Hahahah, oh, so you're using an M1 MacBook Pro? No wonder LM Studio runs so fast. My computer's specs are still below yours, but it's not a big deal. By the way, I'm also using LM Studio as one of the tools for my master's thesis. I'm currently studying Cyber Security and Digital Forensics at Telkom University. I'm researching the ability of Meta Llama 3 to understand SQL Server database logs, especially regarding its accuracy and effectiveness. You know: LM Studio + AnythingLLM + database log files. As for Raja Ampat, you're right, Mike. I think it's one of the paradises for birdwatching enthusiasts worldwide. I hope your knees get better soon and you can get back to exploring all the amazing places around the world. Keep it up, Mike!
@karenreddy 2 months ago
If these models start getting implemented in critical areas of our economy, I sure hope they learn to answer moral questions, as those are the baseline for many answers on a regular basis.
@AmazingArends 2 months ago
I tested an AI on various ethical questions from a psychology book, and it did a pretty good job. However, in areas where morality intersects with politics, the AIs often show an unfortunate political bias.
@markonfilms 2 months ago
For now, Zuckerberg and Yann and team are really the MVPs in this.
@ThomasConover 2 months ago
Zucc is making robot AI lizards from space great again. ❤
@jumpersfilmedinvr 2 months ago
Who would we contact about creating a prompt that allows a private large language model to overcome its mathematical reasoning limits? Are we content with the pet rock LLMs we get for free while corporations share quantum computing among themselves?
@ldelossantos 2 months ago
Releasing free-usage licenses is not open source... I love the Llama releases, but that's not a justification to use and glorify the wrong terms.
@konstantinlozev2272 2 months ago
VS Code is not "Versus Code" but Visual Studio Code
@IconoclastX 2 months ago
I burst out laughing when he said that
@konstantinlozev2272 2 months ago
@IconoclastX To be fair, he and his channel are not coding-focused, so it's ok 👌
@lordfondragon 2 months ago
Great work!!! Please make a video about Mistral Nemo; it's quite good for its size, and I think it's the best model I've been able to run on my laptop!!!
@Arcticwhir 2 months ago
Liking the new lighting/camera upgrade
@MattVidPro 2 months ago
Yo, thanks for noticing!!!
@adatalearner8683 2 months ago
What would the ideal hardware requirements be for running the Llama 3.1 405B model locally?
@JakubHohn 2 months ago
I wanted to ask if I am alone in this, but it feels to me that 4o mini gives better responses. Not only faster, which it obviously is, but better quality.
@smokedoutmotions_ 2 months ago
Crazy bro, hell yeah. Thanks for the jailbreak prompt
@konstantinlozev2272 2 months ago
That Llama 3 you tested is, I think, the 3.0 (old), not the 3.1 (new)
@aresaurelian 2 months ago
Wonderful. Thank you.
@dalecorne3869 2 months ago
Here's something you need to make a quick video about... Kling AI is now available globally.
@anewman1976 2 months ago
Since Ireland serves as the primary data protection office for all the EU countries, we don't have the pleasure of the Meta Llama thingy here yet.... Edit: Yes, I know I can use my VPN, but it's still not officially working here yet.
@GoodBaleadaMusic 2 months ago
To see how well this works, watch my musical takeover of all global markets. I need to do Cambodian and Nepali tonight. I've already finished my black Memphis coffee commercial and my daily dancehall for the album. Punjabi rap album out ON ALL THE THINGS TOMORROW!!!
@jagatkrishna1543 2 months ago
Thanks 🙏
@KimSol90 2 months ago
Ty for another great video!
@Soybreadward 2 months ago
Zoey, Elmo and Rocco are shook
@bengsynthmusic 2 months ago
I tried 8B in LM Studio. It is terrible and prattles on and on until I have to stop it. It chats with itself as both the user and the bot.
@erikjohnson9112 2 months ago
22:18 Counting Rs in STRAWBERRY might be interpreted as a spelling question in the domain of "one R" or "two Rs" at the end of the word. Aside from that, this question would likely need a tool to process accurately. Due to tokenization and vector conversion (embeddings), the strict spelling is pretty much lost. These questions would need the word converted to an array of characters to work from, which takes special care/instructions.
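The character-array approach described above is trivial outside the model; a deterministic letter-counting "tool" (the kind an LLM could call instead of reasoning over tokens) is a few lines of Python:

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences by working on the raw characters, not tokens."""
    return [ch.lower() for ch in word].count(letter.lower())

print(count_letter("STRAWBERRY", "r"))  # 3
print(len("STRAWBERRY"))                # 10 letters total
```

This is exactly why tool use fixes the strawberry test: the tokenizer never sees individual letters, but a plain string operation does.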
@erikjohnson9112 2 months ago
Btw, I tried the 8B version in LM Studio using the community version at 8-bit, and I also noticed it being rather bad right now. No doubt it will improve dramatically once everything gets tuned (both inference parameters and the actual fine-tuning itself). I had the model talking to itself in an infinite loop (or at least heading in that direction when I stopped it).
@vi6ddarkking 2 months ago
Nah, the real reason 'ZUCC' kept Llama open source after Llama 1 leaked is that its popularity showed him people will pay for the GPUs and tools to train open source models. Many of the other companies are looking for gold in the AI gold rush, but companies like Meta and Nvidia are selling the shovels. Still a win-win.
@krystiankrysti1396 2 months ago
The 405B model requires significant storage and computational resources, occupying approximately 750GB of disk space and necessitating two nodes on MP16 for inference.
@makesnosense6304 2 months ago
So where is the SOURCE to reproduce the model, now that you call it open source? How do I reproduce it?
@LegendaryMediaTV 2 months ago
I found that several tools don't support the higher context window, despite having a flag to set it. Also, when you make a request that does increase the context, it really shoots up the memory/processing requirements. Either way, Llama 3.1 doesn't seem to do well with even 16K tokens, let alone 128K, unfortunately. For example, I gave it a 16K-token article (markdown formatting) and asked it to summarize it, and another time asked what it said about something it discussed in the middle. Both times it just hallucinated something, complained about not knowing what it was, and ignored the instructions at the top. It does fine otherwise; I was just looking forward to leveraging that extra context.
@irishninER 2 months ago
VS Code (Visual Studio), but "Versus Code" is a cool take
@whomisac4616 2 months ago
Hi, what model did you use to read it out loud? I have a lot of stories I'd need to listen to if I could find a good reading model.
@billknight7342 2 months ago
So exactly what kind of computing resources do you need to run a model this size? I asked an AI about it, but I don't trust it to be accurate.
@countofst.germain6417 2 months ago
More than you have, lol. Probably 250+ GB of RAM; I don't know the exact specs, but you would need an insane setup. People struggle to run the 70B model with a 4090.
@punk3900 2 months ago
It is an unfair comparison because 4o is super fast, and that's really helpful when you use it for coding. Moreover, 4o is excellent at keeping the context through a long coding and revising process.
@tails_the_god 2 months ago
405B? Just think, if we turned that into a pure roleplaying model, it would be the ultimate open source roleplaying bot currently! XDDDD
@MaxyX 2 months ago
I'm just waiting for the deep dive
@_RobertOnline_ 2 months ago
Did the count-the-Rs-in-strawberry test with Claude, and the result was the same as with Llama
@Alice8000 2 months ago
They trained it with a secret phrase to compromise the entire app stack running on top of it. I only know half of this phrase. If you were the other developer, please reach out to me.
@SharpDesign 2 months ago
Anybody know what happened to the T2I adapter? All the ones on Hugging Face are disabled.
@jaddatir9004 2 months ago
Waiting for your reaction to Udio's new update
@MattVidPro 2 months ago
Soon!
@tombradford7035 2 months ago
"Meta AI isn't available yet in your country" - pathetic.
@henrythegreatamerican8136 2 months ago
The best thing about these open source models is you don't have to adhere to the excessive censorship you get from ChatGPT and Claude. Try getting any of those non-open-source models to type a funny sexual innuendo, for example.
@AmazingArends 2 months ago
They are also quite politically biased.
@EchoYoutube 2 months ago
If it had video chat like the new 4o is supposed to have, I'd be down.
@MattVidPro 2 months ago
This is supposedly arriving in December
@proflicxx 2 months ago
I guess this model will be great, but lots of fine-tuning is required, because when I tested it against other paid models it was not nearly able to produce results of comparable quality.
@GLDTruth 2 months ago
I don't see where to choose the model in the settings in Meta AI. What am I missing?
@primalplasma 2 months ago
Is the 8B version available to download using the Ollama command-line app?
@notme_1128 2 months ago
How did you go from Minecraft to AI?
@iritesh 2 months ago
What do you think about ThinkBuddy AI?
@MusicalGeniusBar 2 months ago
Is Matt going to make an app of AI him reading AI stories?
@Kjh188_61 2 months ago
Your video is great, and btw it's my birthday
@Slaci-vl2io 2 months ago
10:49 You can call it VS Code or Visual Studio Code, but not Versus Code.
@MattVidPro 2 months ago
My bad!! I've seen it many times, but I never knew what it was called
@Slaci-vl2io 2 months ago
@MattVidPro I had really concluded that that part was read by AI too. Now I'm confused.
@atomiste4312 2 months ago
All hail the Zuck!
@bloxyman22 2 months ago
You made a mistake in your local 8B tests. LLaVA Llama has nothing to do with Llama 3.1; it is a visual CLIP-model fine-tune based on the older Llama models.
@MattVidPro 2 months ago
Yeah, I saw this. I accidentally ran the wrong model in LM Studio. My bad
@Khari99 2 months ago
What tool are you using to clone your voice?
@SA-vi2hf 2 months ago
What voice generator is he using?
@DaggaRage 2 months ago
GPT-4o wins in creativity by far 😁
@Pepius_Julius_Magnus_Maximu... 2 months ago
Can't chuck the Zuck
@chandravijayagrawal3440 2 months ago
There is a need for a tiny model of 1 or 2B
@adolphgracius9996 2 months ago
Why can't OpenAI just use chips like Groq's for the GPT-4o voice?
@stardustjazz2935 2 months ago
Is there any way to access Meta AI from the European Union?
@Arcticwhir 2 months ago
Did an extensive coding test at work today; Mistral Large 2 was better than Llama 405B. The test was intermediate to complex, though not very complex.
@mylittleheartscar 2 months ago
omg I'm here at minute 1
@FusionDeveloper 2 months ago
And yet your comment offers no value to anyone.
@mylittleheartscar 2 months ago
@FusionDeveloper I learned from people like yourself, of course
@StarliteCreative 2 months ago
How is he capturing our data and SELLING IT BEHIND OUR BACKS!?!?
@BlackMita 2 months ago
Your thoughts.
@MilkGlue-xg5vj 2 months ago
Ayo, 11 minutes in?
@RealmsOfThePossible 2 months ago
I don't trust OpenAI, but I trust Meta less.
@tuckerbugeater 2 months ago
give in, it's over
@chanpasadopolska 2 months ago
It's open source; it's not a matter of trust because it's transparent
@RealmsOfThePossible 2 months ago
@chanpasadopolska Nothing is 'free'; I'm sure Meta is collecting something from users for other purposes.
@Teddyafro17 2 months ago
You can use it locally
@southcoastinventors6583 2 months ago
Mistral Large 2 is like "what about me, I am new"
@shrodingersman 2 months ago
Yo Zuck Zeee!
@chanpasadopolska 2 months ago
It's old news; I have already heard that Mistral released version 2 of its Large model, and it's even better than Llama 3.1
@MarxOrx 2 months ago
MISTRAL MISTRAL MISTRAL MISTRAL MISTRAL 😂
@cowlevelcrypto2346 2 months ago
If I can't run it on my own machine (which is the case for most of "anyone"), I don't care. :P
@Clandestinemonkey 2 months ago
Versus code?
@andresnrivero 2 months ago
He called it Versus Code. Heh.
@MattVidPro 2 months ago
I've read it many times, but I've never heard anyone else say it 😂 To be fair, I still pronounce Ideogram wrong 😅
@andresnrivero 2 months ago
@MattVidPro People call it V S Code, or they say Visual Studio Code
@khelabhalo2605 2 months ago
Well, it actually isn't good compared to GPT and Claude. Ask a simple question, like what the YouTuber Beluga is famous for: Llama failed to answer it. It's very weird that it doesn't know the information, yet it still replies randomly as if it did
@lefullhouse 2 months ago
Llama is way too woke to be used usefully for anything intelligent.
@drendelous 2 months ago
So either OpenAI is dead, a.k.a. government-oppressed, or they will roll out something out of this solar system
@theafricanrhino 2 months ago
Like a day too late; it has already been beaten..
@drendelous 2 months ago
nvidia.. nvidia..
@Joooooooooooosh 2 months ago
Llama 3.1 is open weights, not open source. The more I listen to this video, the more doubts I have about your knowledge of either open source or language models. You repeatedly and increasingly claim it's fully open, not owned by anyone, etc. All of this is completely untrue.
@howard927 2 months ago
No, it's not private, far from it! It looks like a hack job. I installed it on my server with 20 cores, 1000 GB of RAM, 48 GB of GPU memory, and 10,000 CUDA cores; my other models were flying, but after installing Llama 3.1 the only one working fast was Llama 3.1. And no, it's not private, no no no, it requires a connection to Meta's servers. Could it be the biggest hack job in the world ever? Why is Mark changing and reinventing his image to convince us all that he wants good for the world? Is he just gathering data from all of us? I uninstalled it and my system continues to be slow, so I'm going to reformat my hard drive 20 times. So stop this BS for views. The truth always comes out.