Whenever I get a idea this guy makes a video about it
@ChristianRayNewMoney6 ай бұрын
Me too 😂
@Linuxovert6 ай бұрын
You are BRILLIANT @umeshlab987
@healthadvice30915 ай бұрын
We are one
@wizurai_engineering4 ай бұрын
That's right!
@omverma53674 ай бұрын
*an idea
@patrickmateus-iq8bi4 ай бұрын
THE 🐐 I became the Python developer I am today because of this channel. From learning Python for my AS level exams in 2020, to an experienced backend developer. From the bottom of my heart, Thank You Tim. I'm watching this video because I have entered a Hackathon that requires something similar. This channel has never failed me.
@JordanCassady5 ай бұрын
The captions with keywords are like built-in notes, thanks for doing that
@modoulaminceesay92116 ай бұрын
Thanks for saving the day. I've been following your channel for four years now.
@srahul8029 күн бұрын
Very useful video, managed to set up a local chatbot with llama3.2:3b on my Mac in 15 minutes!
@rajeshjha2630Ай бұрын
Love your work, bro. I really can't say how much I've gotten to build because of your channel.
@yuvrajkukreja97274 ай бұрын
Awesome, that was "the tutorial of the month" from you, Tim!!! Because you didn't use some sponsored tech stack! They usually are terrible!
@krisztiankoblos19486 ай бұрын
The conversation will fill up the context window very fast. You can store the conversation embeddings with the messages in a vector database and pull the related parts from it.
@Larimuss6 ай бұрын
Yes, but that's a bit beyond this video. I guess he should quickly mention there is a memory limit, though. Storing things in a vector database is a whole other beast I'm looking to get into next with LangChain 😂
@krisztiankoblos19486 ай бұрын
@@Larimuss It is not that hard. I coded it locally and store them in a JSON file. You just store the embedding with each message, then you create the new message's embedding and use cosine distance to grab the 10-20 most matching messages. It is fewer than 100 lines. This is the distance function: np.dot(v1, v2)/(norm(v1)*norm(v2)). I also summarize the memories with an LLM so I can keep them shorter.
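The retrieval step described above can be sketched in a few lines of Python. This is a toy sketch, not the commenter's actual code: the 3-dimensional "embeddings" stand in for real model embeddings, and in practice you would load the (embedding, message) pairs from the JSON file mentioned above.

```python
import numpy as np
from numpy.linalg import norm

def cosine_similarity(v1, v2):
    # The same distance function quoted above: dot product over the product of norms
    return np.dot(v1, v2) / (norm(v1) * norm(v2))

def top_matches(query_vec, memories, k=10):
    # memories: list of (embedding, message) pairs, e.g. loaded from a JSON file
    scored = [(cosine_similarity(query_vec, emb), msg) for emb, msg in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [msg for _, msg in scored[:k]]

# Toy example with 3-dimensional "embeddings" standing in for real ones
memories = [
    (np.array([1.0, 0.0, 0.0]), "user likes pizza"),
    (np.array([0.0, 1.0, 0.0]), "user lives in Rome"),
    (np.array([0.9, 0.1, 0.0]), "user ordered pepperoni"),
]
query = np.array([1.0, 0.05, 0.0])
print(top_matches(query, memories, k=2))
```

The most similar stored messages then get prepended to the prompt so the model "remembers" them.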
@landinolandese82985 ай бұрын
@@krisztiankoblos1948 This would be awesome to learn how to implement. Do you have any recommendations on tutorials for this?
@czombiee5 ай бұрын
@@krisztiankoblos1948 Hi! Do you have a repo to share? Sounds interesting!
@star_admin_57485 ай бұрын
@@krisztiankoblos1948 Brother, you are beautiful.
@arxs_056 ай бұрын
Wow, so cool ! You really nailed the tutorial🎉
@znaz9012Ай бұрын
Best 5 hours of my life right here 😊
@T3ddyPro5 ай бұрын
Thanks to your tutorial I recreated Jarvis with a custom GUI, using the llama3 model. I use it in Italian because I'm Italian, but you can also use it in English and other languages.
@akhilpadmanaban32425 ай бұрын
Are these models completely free?
@leodark_animations20842 ай бұрын
@@akhilpadmanaban3242 With Llama, yes, as they run locally and you are not using APIs. But they are pretty resource-consuming; I tried, and they couldn't run on my machine.
@T3ddyPro2 ай бұрын
@@akhilpadmanaban3242 Yes
@valeriovettori47383 күн бұрын
I'm Italian too... I'm playing around with these LLMs. However, this setup isn't great: it's very repetitive and the answers are poor. Maybe it's the model, but llava and similar ones also go crazy even if you ask them what color Napoleon's white horse was. I'll optimize the prompt; if you feel like sharing some results... I'm working on STT and TTS, all local, all in Italian.
@T3ddyPro3 күн бұрын
@@valeriovettori4738 Cool
@ebrahiemmurphy65062 ай бұрын
Thanks a lot for the beautiful tutorial, Tim. I will be giving this a go. You, my friend, are a brilliant teacher. Thanks for sharing 👍👍👍
@SAK_The_Coder5 ай бұрын
This is what I need, thank you bro ❤
@Larimuss6 ай бұрын
Wow, thanks! This is a really simple, straightforward guide to get me started writing the Python myself rather than just using people's UIs. Love the explanations.
@WhyHighC6 ай бұрын
New to the world of coding. Teaching myself through YT for now, and this guy is clearly S tier. I like his and Programming with Mosh's tutorials. Any other recommendations? I'd prefer more vids like this with actual walkthroughs on my feed.
@SiaAlawieh5 ай бұрын
Idk, but I never understood anything from Programming with Mosh videos. Tim is a way better explainer for me, especially that 9-hour beginner-to-advanced video.
@M.V.CHOWDARI5 ай бұрын
Bro code is GOAT 🐐
@WhyHighC5 ай бұрын
@@M.V.CHOWDARI Appreciate it!
@SuspiciousLookingSlime13 күн бұрын
6:20 DO NOT MISS that he went back in his code and added result as a var!
@hectorgrey95057 күн бұрын
It would be cool if you could give the AI a voice function and a clean-looking GUI. Just a thought. But overall, cool video; you've earned yourself a new subscriber 💯
@timstevens33613 ай бұрын
very helpful video Tim !
@leonschaefer48326 ай бұрын
This just inspired me to save on GPT costs for our SaaS product. Thanks, Tim!
@CashLoaf5 ай бұрын
Hey, I'm into SaaS too. Did you make any project yet?
@kfleming782 ай бұрын
Fantastic explanation - thank you for this
@Zpeaxirious_Official11 күн бұрын
I just made my own Python script that can tell the time and date, with a history manager, filter, TTS, and STT, before even finding this video randomly on my YouTube feed. Also, I'd recommend y'all have a good PC, otherwise it might take a while. Good instruction though.
@proflead5 ай бұрын
Simple and useful! Great content! :)
@burnoutcreations360629 күн бұрын
5:06 I personally find using conda for virtual environments efficient; it even comes with Jupyter, so it's a plus!!
@asharathod97654 ай бұрын
Awesome... I really needed a replica of a chatbot for a project and this worked perfectly... thank you
@specialize.55222 ай бұрын
Very much enjoyed your instruction style - subscribed!
@ShahZ6 ай бұрын
Thanks Tim, ran into a bunch of errors when running the script. Guess who came to my rescue: ChatGPT :)
@carsongutierrez70726 ай бұрын
This is what I need right now!!! Thank you CS online mentor!
@techknightdanny60946 ай бұрын
Timmy! Great explanation, concise and to the point. Keep 'em coming boss =).
@TechyTochi6 ай бұрын
This is very useful content. Keep it up!
@repairstudio49406 ай бұрын
Awesomesauce! Tim make more vids covering LangChain projects please and maybe an in depth tutorial! ❤🎉
@bause61826 ай бұрын
If you combine this with a webview you can make a sort of artifact in your local app.
@joohuynbae50843 ай бұрын
For some Windows users: if none of the commands work for you, try source name/Scripts/activate to activate the venv.
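The tip above comes down to where each platform puts the activation script. A minimal sketch of the usual flow (the environment name venv is arbitrary):

```shell
# Create a virtual environment named "venv" (assumes Python 3 on PATH)
python3 -m venv venv

# macOS/Linux activation:
. venv/bin/activate

# Windows (cmd):       venv\Scripts\activate
# Windows (Git Bash):  source venv/Scripts/activate

# Should print a path inside venv/ once activated
python -c "import sys; print(sys.prefix)"
```

On Windows the scripts live under Scripts/ instead of bin/, which is why the plain `source venv/bin/activate` from the video fails there.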
@siddhubhai25086 ай бұрын
Please, Tim, help me add long-term (in fact, ultra-long) memory to my AI agent using only Ollama and the rich library. Maybe MemGPT would be a nice approach. Please help me!
@birdbeakbeardneck36176 ай бұрын
Not an AI expert, so I could be saying something wrong: you mean the AI remembering things from messages way back in the conversation? If so, that's called the AI's context; it's limited by the training and is also an area of current development. On the other hand, Tim is just making an interface for an already-trained AI.
@siddhubhai25086 ай бұрын
@@birdbeakbeardneck3617 I know that, bro, but I want a custom solution for what I said, like a vector database or Postgres. The fact is I don't know how to use them; the tutorials are not straightforward, unlike Tim's, and the docs don't give me a specific solution. Yes, after reading the docs I would be able to do it, but I have very little time (3 days), and in those days I have to add 7 tools to the AI agent. Meanwhile I'm still trying. ❤️ If you can help me with any article, blog, or email, please do 🙏❤️
@davidtindell9506 ай бұрын
Thx, Tim! Now llama3.1 is available under Ollama. It generates great results and has a large context window!
@siddhubhai25086 ай бұрын
@@davidtindell950 But bro, my project can't depend on the LLM's context memory. Please tell me if you can help me with that!
@davidtindell9506 ай бұрын
@@siddhubhai2508 I have found the FAISS vector store provides an effective, large-capacity "persistent memory" with CUDA GPU support.
@KumR6 ай бұрын
Hi Tim, now we can download Llama 3.1 too... By the way, can you also convert this to a UI using Streamlit?
@MwapeMwelwa-wn9ed6 ай бұрын
Tech With Tim is my favorite.
@WhyHighC6 ай бұрын
Can I ask who is in 2nd and 3rd?
@tech_with_unknown6 ай бұрын
@@WhyHighC 1: tim 2: tim 3: tim
@dimox115x96 ай бұрын
Thank you very much for the video, I'm gonna try that :)
@jagaya36626 ай бұрын
Thanks, super useful and simple! I just wondered, with the new Llama model coming out, how I could best use it, so perfect timing xD. Would have added that Llama is made by Meta, so despite being free, it's comparable to the latest OpenAI models.
@weiguangli5934 ай бұрын
Great video, thank you very much!
@sacv23 ай бұрын
This is great! thanks
@konradriedel48534 ай бұрын
Hey man, thanks a lot. Could you explain how to bring in my own data (PDFs, web sources, etc.) so it can give answers when I need it to have more detailed knowledge of certain internal information, e.g. for possible questions about my use case?
@build.aiagents4 ай бұрын
lol thumbnail had me thinking there was gonna be a custom UI with the script
@cyrilypil5 ай бұрын
How do you get Local LLM to show? I don’t have that in my VS Code
@sunhyungkim57644 ай бұрын
Amazing!
@franxtheman5 ай бұрын
Do you have a video on fine-tuning or prompt engineering? I don't want it to be nameless please.😅
@RevanthK-y1l3 ай бұрын
Could you please tell us how to create a fine-tuned chatbot using our own dataset?
@swankyshivy2 ай бұрын
How can this be moved from running locally to an internal website?
@rhmagalhaes6 ай бұрын
I love how you make it easy for us. After that we need a UI and bingo. Btw, does it keep the answers in memory after we exit? Don't think so, right?
@josho2255 ай бұрын
Based on the code, no; only for a single runtime session.
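Persisting the conversation between runs is a small change: dump the history list to disk on every turn and reload it on startup. A minimal sketch, not from the video (the filename history.json is just an example):

```python
import json
import os

HISTORY_FILE = "history.json"  # example path, not from the video

def load_history():
    # Return previously saved messages if the file exists, else start fresh
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return json.load(f)
    return []

def save_history(history):
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f, indent=2)

history = load_history()
history.append({"role": "user", "content": "hello"})
save_history(history)
print(f"{len(history)} message(s) on disk")
```

In the tutorial's loop you would call save_history after each exchange, so the next run picks up where the last one left off (subject to the context-window limits discussed above).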
@bsick68566 ай бұрын
Thank you so much!!
@alexandresemenov86716 ай бұрын
Hello! Tim, when I run Ollama directly there is no delay in the response, but using the script with LangChain some delay appears. Why is that? How do I solve it?
@neprr182520 күн бұрын
Can you train the robot or give it a prompt? For example, if you want to create a chatbot for a business, can you give it prompts from the business so it can answer questions only based on the information from the given prompts?
@taymalsous58945 ай бұрын
Hello Tim! This video is awesome, but the only problem I have is that the Ollama chatbot is responding very slowly. Do you have any idea how to fix this?
@kinuthiastevie40316 ай бұрын
Nice one
@tengdayz26 ай бұрын
Thank You.
@arunbalakrishnan89786 ай бұрын
Useful. Keep it up.
@sean_vikoren3 ай бұрын
thank you.
@TigerBrownTiger3 ай бұрын
Why does microsoft publisher window keep popping up saying unlicensed product and will not allow it to run?
@AlexTheChaosFox19966 ай бұрын
Will this run on an android tablet?
@toddgattfry54056 ай бұрын
Cool!! Could I get this to summarize my e-library?
@harshiramani727414 күн бұрын
My download stops midway. Why am I not getting it?
@jorgeochoa40325 ай бұрын
Hello, do you know if it's possible to use this model as a "pre-trained" one and add some new, let's say, local information to the model, to use it for a specific task?
@Hrlover2055 ай бұрын
I don't know what is happening: when I run the Python file in cmd, it shows me "hello world" and then the command ends.
@usamaejaz526412 күн бұрын
I implemented it; it responds only after several minutes. Why is it so slow?
@sharanvellore90165 ай бұрын
Hi, I have tried this and it's working, but the model is taking a long time to respond. Is there anything I can do to reduce that?
@praveertiwari35456 ай бұрын
Hi Tim, I recently completed your video on the Django-React project, but I need urgent help: could you make a video on how to deploy a Django-React project to Vercel, Render, or another well-known platform? This would really be helpful, as many users on the Django forum are still confused about deploying a Django-React project to popular hosting sites. Kindly help with this.
@skadi33996 ай бұрын
Great video! Is there any way to connect a personal database to this model, so that the chat can answer questions based on the information in the database? I have a database in Postgres and have already used RAG on it, but I have no idea how to connect the DB and the chat. Any ideas?
@NameRoss4 ай бұрын
Do I need to install LangChain?
@thegamingaristocrat76154 ай бұрын
Is there any way to make a Python script automatically train a locally-run model?
@Money4Jam20115 ай бұрын
Great video, learned a lot. Can you advise the route I would take if I wanted to build a chatbot around a specific niche like comedy, and build an app that I could sell or give away for free? I would need to train the model on that specific niche and that niche only, then host it on a server, I would think. An outline of these steps would be much appreciated.
@pixelmz6 ай бұрын
Hey there, is your VSCode theme public? It's really nice, would love to have it to customize
@m.saksham34096 ай бұрын
I have not implemented it myself, but I have a doubt: you are using LangChain where the model is llama 3.1, and LangChain manages everything here, so what's the use of Ollama?
@gunabaki77556 ай бұрын
LangChain simplifies interactions with LLMs; it doesn't provide the LLM. We use Ollama to run the LLM.
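A rough way to picture that division of labor: Ollama serves the model, while a LangChain prompt template plus chain is essentially string assembly piped into a model call. A pure-Python caricature (no real model involved; fake_llm is a stand-in for the call LangChain would make to the Ollama server):

```python
# Caricature of a prompt template + chain; fake_llm stands in for the real model.
template = "Answer the question below.\n\nHistory: {history}\n\nQuestion: {question}"

def format_prompt(history, question):
    # Equivalent in spirit to filling in a LangChain PromptTemplate
    return template.format(history=history, question=question)

def fake_llm(prompt):
    # Stand-in for the request LangChain would send to Ollama
    return f"(model saw {len(prompt)} chars)"

def chain(history, question):
    # "chain" = template piped into model
    return fake_llm(format_prompt(history, question))

print(chain("", "What is Ollama?"))
```

In the actual tutorial, the fake function is replaced by LangChain's Ollama model class, which sends the formatted prompt to the locally running Ollama server.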
@31-jp6ok5 ай бұрын
If you read my message, thank you for teaching. Would you mind teaching more about fine-tuning? What should I do? (I want TensorFlow.) I also want it to be able to learn the things I can't answer myself. What should I do?
@naveenrohit64295 күн бұрын
Will it work if we host this as a website?
@H4R4K1R1x6 ай бұрын
This is swag. How can we create a custom personality for the llama3 model?
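One common approach, supported by Ollama itself, is to define a custom model with a Modelfile containing a SYSTEM prompt. The persona and parameter values below are just an example:

```
# Modelfile (example persona, adjust to taste)
FROM llama3
PARAMETER temperature 0.8
SYSTEM "You are a sarcastic pirate assistant. Always stay in character."
```

Build and run it with `ollama create pirate -f Modelfile` and then `ollama run pirate` (the model name `pirate` is arbitrary); the script from the video can then reference that model name instead of llama3.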
@bhaveshsinghal64846 ай бұрын
Tim, this Ollama is running on my CPU and hence is really slow. Can I make it run on my GPU somehow?
@davidtindell9506 ай бұрын
Adding a context, of course, generates interesting results. context: "Hot and Humid Summer" --> chain invoke result: "To be honest, I'm struggling to cope with this hot and humid summer. The heat and humidity have been really draining me lately. It feels like every time I step outside, I'm instantly soaked in sweat. I just wish it would cool down a bit! How about you?" ... 🥵
@ruthirockstar28525 ай бұрын
Is it possible to host this on a cloud server, so that I can access my custom bot whenever I want?
@kingkd71794 ай бұрын
It was a great tutorial and I followed it properly, but I am still getting an error: "ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it". I am running this code on my office machine, which has restricted the OpenAI models and AI sites.
@okotjakimgonzalo22706 ай бұрын
Where do you get all this stuff from?
@rushilnagpal6565Ай бұрын
Can I have this code?
@andhika2775 ай бұрын
How much RAM is required to make this program run well? Because I have only 4 GB of RAM.
@ParanoidNotAndroid5 ай бұрын
Does anybody know what type of data the Llama software is exchanging?
@PythonCodeCampOrg26 күн бұрын
If you're using it locally, the data stays on your machine. However, if it's connected to any external service, it might exchange data like model updates, logs, or other configuration-related information.
@mit28746 ай бұрын
do i need vram 4 this ?
@ccKuang-ziqian2 ай бұрын
Should I install Ollama in a virtual env?
@muhammadsikandarsubhani8954Ай бұрын
Doesn't matter; it will always be stored in AppData/Local/ollama.
@PythonCodeCampOrg26 күн бұрын
It's not mandatory, but using a virtual environment is highly recommended. It helps manage dependencies more cleanly and avoids potential conflicts with other projects. However, if you prefer not to, you can install it globally, though it might cause issues later if you work on multiple projects.
@TanujSharma-d9o6 ай бұрын
Can you teach us how to implement it in GUI form? I don't want to run the program from the terminal every time I want help with this type of thing.
@akshajalva2 ай бұрын
Can I use a document as context, so that the chatbot answers user queries only from that document?
@PythonCodeCampOrg26 күн бұрын
Yes, you can just load your PDF file and start asking questions from it. The Mistral 7B model will generate answers based solely on the content of the document, ensuring that responses are relevant to the information you’ve provided.
@silasknapp44503 ай бұрын
Hi. Is there a way to uninstall llama3 again?
@aviralshastri5 ай бұрын
How can we stream the output?
@PaulRamone3566 ай бұрын
PS C:\Windows\system32> ollama pull llama3
Error: could not connect to ollama app, is it running?
What seems to be wrong? (Sorry for the noob question.)
@gunabaki77556 ай бұрын
You need to run the Ollama application first; it usually starts when you boot up your PC.
@PaulRamone3566 ай бұрын
@@gunabaki7755 Will try this, thanks bro!
@antoniosa6 ай бұрын
A dumb question... where is the template used?
@БогданСірський6 ай бұрын
Hey, Tim! Thanks for your tutorial. I have a problem: the bot isn't responding to me. Maybe someone else has the same problem? Give me some feedback, please.
@VatsalyaB5 ай бұрын
thx ;)
@7348335 ай бұрын
Nice
@vivekanandl87986 ай бұрын
Does the response speed of the AI bot depend on the GPU, for models like Llama?
@gunabaki77556 ай бұрын
YES
@GeneKim-g1w5 ай бұрын
How do I activate this on Windows?
@davidtindell9506 ай бұрын
You may find it 'amusing' or 'interesting' that when I (nihilistically) prompted with 'Hello Cruel World!', 'llama3.1:8b' responded: "A nod to the Smiths' classic song, 'How Soon is Now?' (also known as 'Hello, Hello, How are You?')" !?!?!🤣
@khushigupta57986 ай бұрын
Hey, how can I show this as a UI? I want to create a chatbot which can provide programming-related answers, with user authentication via OTP. Please tell me how I can create this using this model and build my UI. I am a full-stack developer, new to ML; please reply.
@AmitErandole6 ай бұрын
Can you show us how to do RAG with llama3?
@乾淨核能6 ай бұрын
what's the minimum hardware requirement? thank you!
@gunabaki77556 ай бұрын
8GB RAM
@乾淨核能6 ай бұрын
@@gunabaki7755 no discrete GPU needed?
@felipemachado83112 ай бұрын
Can I train this model? Give it information beforehand so it can answer me with it?
@REasycodingАй бұрын
Sir, is it necessary to request access to the Llama models? Actually, I am confused about the permission terms. Can you please help regarding that? 😊😊
@PythonCodeCampOrg26 күн бұрын
For using Llama models locally, you generally don't need to request access, as the models are open-source and available for local deployment. However, you should always check the specific licensing and permission terms for the version you're using. Most open-source versions are free to use, but it's always good to review the terms to ensure compliance.