I'm feeling lucky that this video showed up in my suggestions.
@divyaramesh3105 · 6 months ago
Thank you, Krish sir. In "Building RAG from Scratch", Sunny sir also covered Ollama. Both of you give foundational knowledge and updates in GenAI. It was very useful, sir.
@devanshgupta6064 · 6 months ago
Please share Sunny sir's YouTube handle.
@divyaramesh3105 · 6 months ago
@@devanshgupta6064 @sunnysavita10
@sailikitha8502 · 6 months ago
Sunny Savita: @sunnysavita10
@neerajshrivastava5600 · 2 months ago
Krish, fantastic video and great explanation! Keep it up.
@mehdi9771 · 6 months ago
We need long-form videos like before. Thanks for your efforts ❤
@sarimali8853 · a month ago
Very good explanation. I have a question: can I train this model for a specific task, such as feature extraction?
@SomethingSpiritual · 5 months ago
Why is Ollama not using the full GPU? It's only using the CPU. Please guide.
@manjeshtiwari7434 · 6 months ago
Thank you so much for such a great video. I have a query: I am getting very slow responses. Does response speed depend on system configuration? I checked system usage while running, and it isn't using many resources. Can you tell me how to increase response speed?
@rajendarkatravath2207 · 6 months ago
Thanks, Krish, for sharing this knowledge. What an amazing model it is!
@NISHANTKumar-ct3pb · 6 months ago
Thanks, it's a great video. When we say "local", what is the configuration: a CPU- or GPU-based system? Are the models compressed/quantized or the same as the originals? Is there a model-size limitation relative to the local system configuration?
@kenchang3456 · 6 months ago
Hey Krish, thanks for doing this video on Windows.
@Shrieenidhi · 16 days ago
If the model is installed locally, will it take up RAM?
@lionelshaghlil1754 · 6 months ago
Thanks, Krish, the brilliant, innovative master of AI 😊. I have a question about hosting: assuming I'd like to deploy my solution on a server, would I need Ollama and my app in two separate Docker containers that communicate with each other, or could they be implemented in a single container?
@krishnaik06 · 6 months ago
They can be implemented in one Docker container.
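For anyone weighing one container versus two: a common middle ground is two services in a single Compose file, so they deploy together but stay isolated and can reach each other by service name. A minimal sketch, where `my-app` and `OLLAMA_BASE_URL` are placeholders for your own image and configuration variable:

```yaml
# docker-compose.yml — sketch only; "my-app" is a hypothetical image name
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama     # persist downloaded models
    ports:
      - "11434:11434"
  app:
    image: my-app:latest
    environment:
      # inside the Compose network, the service name resolves as hostname
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama_data:
```

Bundling both into one image also works, but you then need a process manager to run `ollama serve` alongside your app, which is why separate services are the more common pattern.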
@ayushmishra5861 · 4 months ago
Did you get clarity on this? Can you please share?
@roshanchandel7929 · 5 months ago
The heroes we need!!
@usingsk · 4 months ago
Thanks for sharing knowledge. Can we fine-tune the downloaded model with company domain content without the data being shared externally? I mean, does it comply with IPR if we use it locally?
@BelhsanMohamed · 5 months ago
As always, thanks for the information.
@jacobashwinmathew3763 · 6 months ago
Can you make a complete video on production-ready open-source LLMs, basically LLMOps?
@deeks_edits · 6 months ago
You are the best! 🤓
@krishnaprasadsheshadri6206 · 6 months ago
Can we get a video about reading tables using Unstructured and similar frameworks?
@sawankumar2088 · a month ago
Can we just download and use it, or do we also require a Meta AI API key?
@ankitshaw2011 · 6 months ago
Thank you so much for these videos.
@tharunps8048 · 6 months ago
Since it's running locally, using this model with an organization's data doesn't expose it, right?
@nagasudha6928 · 6 months ago
Hi Krish, this is Sudha from ISRO Hyderabad. I would like to know how to provide documents to Ollama and get answers from them.
@shashank046 · 3 months ago
Hi, how do I use the GPU with Open WebUI? My model's responses are really slow and it is not using the GPU, even though I used the GPU install command mentioned on the Open WebUI GitHub page.
@susnatakanjilal703 · 6 months ago
Sir, I need to create a custom text dataset from Common Crawl for the Bengali language and train Llama 2 on it. Can you please demonstrate a similar project?
@sanjaynt7434 · 6 months ago
Can this read a document and answer my questions about that document?
@YashDeveloper-rq2yc · 6 months ago
Bro, using these techniques can I build a superb AI assistant? And what capabilities could it have?
@ranemghalion581 · 3 months ago
Thank you.
@VishalTank-vk5ju · 3 months ago
Hello Krish, I am facing an issue with the Ollama service. I have an RTX 4090 GPU with 80 GB of RAM and 24 GB of VRAM. When I run the Llama 3 70B model and ask it a question, it initially loads on the GPU, but after 5-10 seconds it shifts entirely to the CPU, which makes the response time slow. Please suggest a solution. Thanks in advance. Note: GPU load is 6-12% and CPU load is 70%.
@AjayYadav-xi9sj · 6 months ago
Make a video on the Python framework for Ollama. Build an end-to-end project and host it somewhere real people can use it.
@kashishvarshney2225 · 6 months ago
Hello sir, what is the minimum system configuration for Ollama?
@parthwagh3607 · 2 months ago
Thank you so much, Krish. I am having a problem running models downloaded from Hugging Face that come as safetensors files. I have these files in oobabooga/text-generation-webui and want to use them with Ollama. I followed everything, even created a Modelfile with the path to the safetensors directory, but `ollama create model_name -f modelfile` is not working. Please help me.
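On the safetensors import above: `ollama create` can import a Hugging Face-style safetensors checkpoint when the Modelfile's `FROM` points at the model directory (not an individual `.safetensors` file), and only for architectures Ollama supports (e.g. Llama, Mistral, Gemma families). A sketch with placeholder paths:

```
# Modelfile — paths are placeholders for your own layout
FROM /path/to/text-generation-webui/models/your-model-directory
```

Then build and run it with `ollama create mymodel -f Modelfile` and `ollama run mymodel`. If the create step still fails, the error message usually says whether the architecture is unsupported, in which case converting to GGUF first is the usual workaround.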
@rajarshidey424 · 4 months ago
How can we get the code?
@pssab8 · 6 months ago
Excellent videos. I set up the Mistral model locally on Ubuntu 20.04 and found that it takes more than a minute for every response, running in CPU-only mode. Can you suggest how to improve performance?
@amazingedits9298 · 6 months ago
These models run on your computer's hardware, so quicker responses require good hardware, such as a GPU.
@KumR · 6 months ago
Do we need to download the entire 7 GB Llama 2 model locally to use it with Ollama?
@NeelamDevi-z6e · 6 months ago
Great content, Krish. I need these coding files; kindly share them.
@manasjohri2495 · 6 months ago
Can you please tell me how we can run Ollama on the GPU? Right now it is running on the CPU.
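For the recurring GPU questions: Ollama offloads to a CUDA-capable GPU automatically when working drivers are detected, and `ollama ps` shows whether a loaded model is sitting on GPU or CPU. If it stays on CPU, one knob worth trying is the `num_gpu` Modelfile parameter (the number of layers to offload). A sketch, assuming the llama2 base model is already pulled:

```
# Modelfile — sketch; a large value asks Ollama to offload as many
# layers as fit in VRAM
FROM llama2
PARAMETER num_gpu 99
```

Build it with `ollama create llama2-gpu -f Modelfile`. Note that a model larger than available VRAM (e.g. Llama 3 70B on a 24 GB card) will always spill some layers to CPU, which matches the partial-GPU-load symptoms described above.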
@ashishdayal172 · 4 months ago
Hi Krish, I am facing an error creating the Modelfile. Please help.
@hassanahmad1483 · 5 months ago
How do we deploy these custom GPTs?
@naveenkumarmaurya3182 · 6 months ago
Hi Krish, I'm getting this response: "Ollama run codella! 🐰💨 (Note: I'm just an AI, I don't have personal preferences or the ability to run code, but I can certainly help you with any questions or tasks you may have!)"
@VishalKumar-gv6gy · 5 months ago
Does it require a GPU?
@Nagireddy-lw7rl · 3 months ago
Hi Krish sir, I need the Ollama chatbot Python code. Please provide it; I checked your GitHub.
@YashDeveloper-rq2yc · 6 months ago
After installing, will it work offline?
@krishnaik06 · 6 months ago
Yes
@YashDeveloper-rq2yc · 6 months ago
@@krishnaik06 Thanks for sharing quality content.
@rishiraj2548 · 6 months ago
🙏💯👍
@jatinchawla1680 · 5 months ago
llm = ollama(base_url='localhost:11434', model="llama 2") raises TypeError: 'module' object is not callable. Can someone please help with this?
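That `TypeError` happens because the lowercase module `ollama` is being called instead of the `Ollama` class; the base URL also needs the `http://` scheme and the model name has no space (`llama2`). A sketch of two fixes, assuming Ollama is serving on its default port 11434 — the LangChain version is shown in comments, and the hypothetical `ask` helper below talks to the REST API with only the standard library:

```python
import json
import urllib.request

# LangChain fix (requires langchain_community installed):
#   from langchain_community.llms import Ollama
#   llm = Ollama(base_url="http://localhost:11434", model="llama2")
#   llm.invoke("Why is the sky blue?")

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama server (hypothetical helper)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("llama2", "Why is the sky blue?")  # needs `ollama serve` running
```

Either route fixes the error; the stdlib version just avoids the extra dependency.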
@kavitajakkali5030 · 8 days ago
I installed Ollama on my local system, but getting responses takes a very long time. What can I do about that?
@omarnahdi3380 · 6 months ago
Hey sir 😄, please make a video on BioMistral (an LLM trained on medical and scientific data). It would fit your AI Nutritionist perfectly. Thanks for your daily dose of GenAI.
@starkgaming1425 · 6 months ago
Please release a step-by-step guide on how to fine-tune the Gemini API in Python. I tried by referring to the documentation but encountered a lot of errors with the OAuth setup!
@nasiksami2351 · 6 months ago
Great tutorial! Can you please make a video on fine-tuning a model on a custom CSV dataset and integrating it with Ollama? For instance, suppose I have a class-imbalance problem in my dataset. Can I fine-tune a model and then ask it in Ollama to generate more samples of the minority class using the fine-tuned model?
@copilotcoder · 6 months ago
Sir, please create a codebase-understanding model using Ollama and test it on an open-source codebase.
@mohammedalfarsi4361 · 6 months ago
Do these models support the Arabic language?
@AjaySharma-jv6qn · 6 months ago
The content is helpful, thanks for your effort. 🎉
@durgakorde3589 · 6 months ago
Are you a data scientist?
@velugucharan8096 · 6 months ago
Sir, please complete the fine-tuning LLMs playlist as much as possible.
@DeadJDona · 6 months ago
Please finish that Chrome update 😢
@computerauditor · 4 months ago
Really insightful, Krish!
@marcoaerlic2576 · 3 months ago
Thanks for the video.
@haritdey430 · 6 months ago
Nice video, sir.
@MotoEdge40-j3v · 6 months ago
Every time we see a kid we ask him to recite a poem, and now that we have so many LLM models, we still only ask them for a poem on machine learning.