CODE LLAMA - THE BEST CODING MODEL IS HERE!

28,063 views

Prompt Engineering

A day ago

In this video, we are going to explore the newly released coding model from Meta: Code Llama. Code Llama is an AI model built on top of Llama 2, fine-tuned for generating and discussing code. It’s free for research and commercial use.
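If you want to try the model while following along, below is a minimal sketch of loading the base Code Llama model with Hugging Face transformers and letting it complete a function. The repo id codellama/CodeLlama-7b-hf, the prompt, and the generation settings are assumptions for illustration, not steps taken from the video; adapt them to your own setup.

```python
# Minimal sketch: load the base (code-completion) Code Llama model and let it
# continue a function definition. Assumes transformers, accelerate and torch
# are installed and a GPU with enough VRAM for a 7B model in fp16 is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed Hugging Face repo id for the 7B base model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to roughly halve VRAM use
    device_map="auto",          # let accelerate place layers on the available GPU/CPU
)

# The base model is a completion model: give it the start of a function
# and it continues from there.
prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```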
#codellama #llama2 #metaai
▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: / discord
▶️️ Subscribe: www.youtube.com/@engineerprom...
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: calendly.com/engineerprompt/c...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LINKS:
Blogpost: about. news/2023/08/cod...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...

Comments: 45
@engineerprompt (9 months ago)
Want to connect? 💼 Consulting: calendly.com/engineerprompt/consulting-call 🦾 Discord: discord.com/invite/t4eYQRUcXB ☕ Buy me a Coffee: ko-fi.com/promptengineering 🔴 Join Patreon: Patreon.com/PromptEngineering
@baraka99 (9 months ago)
Please make more videos explaining how to use Llama 2 for coding. I'm a designer, not a developer, and that would help me understand its capabilities much better.
@othmanaljbory3649 (9 months ago)
Search on YouTube; the method is shown there in detail.
@engineerprompt (9 months ago)
Check out the latest video, more to come 😀
@harisjaved1379 (9 months ago)
Thank you so much, my friend. One question though: does the model remember, or will it start losing context after a few prompts? Thank you
@adriantang5811 (9 months ago)
Great sharing again, thanks 👍
@00042 (9 months ago)
Regarding the question someone asked in the comment section here, "Does the model remember, or will it start losing context after a few prompts?": in my experience with transformers like the text encoders/decoders in txt2img models (where it tends to become more evident), they retain context (e.g., via the attention mechanism) on a session basis. However, abrupt context switches, like leaving and returning to a webpage when using a generation service, interestingly enough, can cause noticeable changes in output despite identical parameters.
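To make the "does it remember" point concrete: the model weights store nothing between calls; whatever "memory" a chat appears to have comes from the application re-sending earlier turns inside the context window, and older turns drop out once that window is full. A rough, library-agnostic sketch of that pattern (all names here are illustrative):

```python
# Illustrative sketch: a chat "remembers" only because the application keeps
# the history and re-sends it on every turn; the model itself is stateless.
history = []  # (role, text) pairs kept by the application, not by the model

def build_prompt(history, user_msg, max_chars=8000):
    """Concatenate past turns plus the new message, dropping the oldest turns
    once the prompt would exceed a crude character budget (a stand-in for
    real token counting against the model's context window)."""
    turns = history + [("User", user_msg)]
    prompt = ""
    for role, text in reversed(turns):      # walk from newest to oldest
        candidate = f"{role}: {text}\n" + prompt
        if len(candidate) > max_chars:      # older context silently falls away here
            break
        prompt = candidate
    return prompt + "Assistant:"

# Every call sends whatever history still fits; nothing is stored in the model.
prompt = build_prompt(history, "Refactor the function you wrote earlier.")
print(prompt)
```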
@sanketmaheshwari1110 (9 months ago)
Thanks for such great content.
@engineerprompt (9 months ago)
Thanks for watching!
@RickySupriyadi (9 months ago)
Is the programmer's job getting closer and closer to its end, to be replaced by copy-paste programmers?
@loicbaconnier9150 (9 months ago)
Will the 34B instruct version become the best model for general instruction following, not only for coding?
@emmanuelgoldstein3682 (9 months ago)
Imagine: we're only years away from having the LLM interact with the terminal directly. Imagine completely natural speech engines writing extensions for your specific OS, anything you can think of... o.o
@gustavstressemann7817 (9 months ago)
This is so fantastic... all over the world there is only bad news about war, climate change and racism. And Meta drops something so awesome. Thanks for your video, it helps in such hard times :D
@RichardGetzPhotography (9 months ago)
Thanks! Very interested in how to run this locally and on Hugging Face. Which of TheBloke's 841 models do you choose?!
@engineerprompt (9 months ago)
Thanks for the support! Will be creating videos on this soon. I personally like the Wizard-Vicuna variations or even the original Vicuna model. Those seem to get the job done pretty nicely.
@Zivafgin (9 months ago)
Great content!
@engineerprompt (9 months ago)
Thank you
@dharmendra_kumar_yadav (9 months ago)
Can a laptop with 8 GB of RAM, a 4 GB GTX graphics card, and an 11th-gen i5 processor run this 13B model?
@user-jq1gc8lt7s (9 months ago)
GREAT JOB
@jacoballessio5706 (9 months ago)
Whoah, 100,000 tokens is crazy
@MCDlusiv (9 months ago)
Could a laptop with a 12 GB 4080 run these? Could it handle the larger sizes? Great work, btw.
@MultiMojo (9 months ago)
Nope, the 34B model is going to need ~40 GB of VRAM. Based on my experience, a 7B model takes ~14 GB of VRAM and a 13B around ~23 GB (in fp16). A quantized GGML version might work with less GPU memory though; try it out.
@boris--- (9 months ago)
@@MultiMojo Sooo the average gaming Joe with an 8 GB VRAM Nvidia card can't run any "good" model locally?
@michaelmccoubrey4211 (9 months ago)
If you use GPTQ for 4-bit quantization you can comfortably run the 13B model on a 12 GB GPU, and the quality will barely degrade (see the loading sketch at the end of this thread).
@EfeDursun125 (9 months ago)
@@michaelmccoubrey4211 Yes, it runs fine on my RTX 3060.
@Kevin-jc1fx (9 months ago)
@@MultiMojo You can run it on the CPU and use system RAM instead. I run 7B models locally with around 6 GB of RAM via gpt4all.
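As a concrete follow-up to the 12 GB question in this thread, here is a minimal sketch of the 4-bit route using transformers with bitsandbytes (an alternative to GPTQ with a similar memory footprint). The repo id codellama/CodeLlama-13b-Instruct-hf and the rough VRAM figure are assumptions; check the model card and your own hardware.

```python
# Sketch: load the 13B instruct model with 4-bit weights so it fits a 12 GB GPU.
# Assumes transformers, accelerate and bitsandbytes are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "codellama/CodeLlama-13b-Instruct-hf"  # assumed Hugging Face repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,   # do the arithmetic in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers on GPU/CPU automatically
)
# With 4-bit weights the 13B model occupies roughly 7-8 GB of VRAM plus
# activation overhead, which is why it can fit on a 12 GB card.
```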
@my_codingchannel7479 (9 months ago)
How do you access it?
@strikerstrikerson8570 (9 months ago)
Hi, is this LLM only for Python coding?
@engineerprompt (9 months ago)
The Code Llama and Code Llama Instruct versions seem to support multiple languages; Code Llama Python is specific to Python.
@strikerstrikerson8570 (9 months ago)
@@engineerprompt Thx!
@EfeDursun125 (9 months ago)
What's the difference between the normal and instruct versions?
@jirikosek3714 (9 months ago)
The normal (base) model is trained on a large corpus of data where the training objective is predicting the next word in a sentence. The instruct model is then trained on instruction-answer pairs, where the goal is to follow an instruction and produce a helpful response. The base model learns how the language works, and the instruct model learns how to provide helpful information (see the short prompt-format sketch after this thread).
@EfeDursun125 (9 months ago)
@@jirikosek3714 thanks
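To illustrate the base-vs-instruct distinction in practice: the base model is prompted with code to continue, while the instruct model expects a natural-language request wrapped in a chat template. The [INST] format below follows the Llama 2 chat convention, which Code Llama Instruct is reported to use; treat it as an assumption and verify against the model card.

```python
# Base model: plain completion. You hand it the start of some code and it
# continues from there.
base_prompt = "def quicksort(arr):\n    "

# Instruct model: you wrap a natural-language request in the chat template
# (format assumed from the Llama 2 chat convention; verify on the model card).
user_request = "Write a Python function that implements quicksort and add a docstring."
instruct_prompt = f"[INST] {user_request} [/INST]"
```

Either string is then tokenized and passed to generate() exactly as in the loading sketches earlier on this page.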
@marconeves9018 (9 months ago)
Now that OpenAI has announced fine-tuning for GPT-3.5, I wonder what kind of benchmarks it would achieve if somebody fine-tuned it for code like Meta did here. Would it surpass GPT-4? And what about when they release fine-tuning for GPT-4 and it goes through the same code fine-tuning, would it crack 90% on HumanEval? This is getting very interesting.
@engineerprompt (9 months ago)
These are going to be very interesting experiments, and I suspect they will achieve really good results. The interesting part is going to be the dataset you use for fine-tuning. I don't think (I might be wrong) Meta actually released the training data.
@marconeves9018 (9 months ago)
@@engineerprompt I read the Code Llama paper, and they talk about the dataset "recipe" on pg. 5, but the actual dataset is proprietary, sadly.
@Igorltdz (9 months ago)
What are the requirements? Does 7B need 8 GB? Does 13B need 12 GB? Does 34B need 24 GB? What is the minimum for each one?
@noobking5056 (9 months ago)
This is why you need two GPUs to code.
@after1001 (9 months ago)
"Except GPT-4": that's all I need to hear.
@TransactionalBoy (9 months ago)
Thanks for the video, but calling Code Llama the "best coding model" is wrong (it's still GPT-4, by far) and hurts your credibility.
@MrJimthespider (9 months ago)
Except this is free, it's coding-specific, and it's local, so you can use it without internet.
@uwu-dm7vb (9 months ago)
@@MrJimthespider Yeah, but you'll need a lot of VRAM for this, and not everyone has that; it's better to run it in the cloud.
@Kevin-jc1fx (9 months ago)
Maybe he meant the best open-source model. He should specify.
@AldoInza (7 months ago)
Maybe it's an alignment issue, because the weight for open source is overly positive in his internal model.