Hello, thank you for your video. Could you please let me know if I can use it on my laptop, which only has an NVIDIA GeForce MX330 and 16GB RAM?
@techCodio 20 days ago
Yes, you can use it, but don't go for bigger models; stick to models with 8B parameters or fewer.
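A rough way to sanity-check that advice: a quantized model needs roughly bytes-per-parameter times parameter count of memory, plus some headroom. The numbers below are a ballpark rule of thumb I'm assuming (4-bit quantization, ~20% overhead), not figures from the video.

```python
def approx_memory_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Very rough memory estimate for running a quantized LLM.

    4-bit quantization is ~0.5 bytes per parameter; the 1.2 factor adds
    ~20% headroom for the KV cache and activations. A ballpark only.
    """
    return params_billion * bytes_per_param * 1.2

# An 8B model at 4-bit needs roughly:
print(f"{approx_memory_gb(8):.1f} GB")  # 4.8 GB, which fits in 16 GB of RAM
```

Larger models (e.g. 70B) blow well past a 16GB laptop by the same arithmetic, which is why the answer says to stay at 8B or below.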
@quanbuiinh604 20 days ago
@@techCodio Thank you very much. And could you please make a demo video of the Llama model using the API?
@techCodio 15 days ago
@@quanbuiinh604 Using the Groq API or any other platform?
@techCodio 15 days ago
@@quanbuiinh604 I recently started a RAG course on this channel using free models. In the advanced RAG videos I will use the Groq API for the Llama model.
@sp.kannan a month ago
Can you suggest a configuration? I understand that an 8GB GPU and 16GB RAM are sufficient, but which CPU and motherboard are best?
@techCodio a month ago
I have no idea about the motherboard, but a Core i5 is enough along with the configuration mentioned above (16GB RAM and an 8GB GPU).
@PraveenM-f2t 2 months ago
Hello bro, how do I make a mental health counseling chatbot? Which Llama source is easiest?
@techCodio 2 months ago
Ollama is easy, or you can get a Llama API key from Amazon Bedrock, bro. 1. For Ollama you need a high-end GPU and CPU. 2. Try creating an account in AWS and getting the Llama 3.1 key; it's not expensive. You can get 1,000 input tokens for $0.0004. The approximate budget for your project is $10. Best option.
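Using the per-token price quoted above ($0.0004 per 1,000 input tokens), it's easy to check what a given budget buys. This is just arithmetic on the figure in the comment; actual Bedrock pricing varies by model and region, so treat the rate as an assumption.

```python
USD_PER_1K_INPUT_TOKENS = 0.0004  # rate quoted in the comment above

def input_cost_usd(tokens: int) -> float:
    """Cost of `tokens` input tokens at the quoted Bedrock rate."""
    return tokens / 1000 * USD_PER_1K_INPUT_TOKENS

# A million input tokens, and how many tokens a $10 budget buys:
print(f"${input_cost_usd(1_000_000):.2f}")                    # $0.40
print(f"{10 / USD_PER_1K_INPUT_TOKENS * 1000:,.0f} tokens")   # 25,000,000 tokens
```

So at that rate, $10 covers tens of millions of input tokens, which is plenty for a student chatbot project (output tokens are billed separately and usually cost more).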
@jayasreevaradarajan9382 a month ago
How can I view this app, which is deployed locally on my own device, on my mobile phone?
@techCodio a month ago
When you deploy the Streamlit application you will see the deployment link; then you can access that link from anywhere.
@jayasreevaradarajan9382 a month ago
@@techCodio When I deploy, Streamlit throws this error: Error invoking LLM: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
@jayasreevaradarajan9382 a month ago
Could you please help me to solve this error?
@techCodio 26 days ago
We cannot deploy a big LLM application on Streamlit; it is meant for lightweight application development.
@techCodio 26 days ago
In order to deploy the app, you need to host it on AWS or Azure services, and that is a big process.
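The connection-refused error above is exactly this problem: on Streamlit's hosted service there is no Ollama server listening on localhost:11434. One common workaround (a sketch, not something shown in the video) is to read the Ollama host from an environment variable, so the same code works locally and against a remote server such as an EC2 box; `OLLAMA_HOST` is my variable-name choice here.

```python
import os

# Default to the local Ollama server; set OLLAMA_HOST in the deployed
# environment so the app talks to a reachable server instead of localhost.
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
GENERATE_URL = f"{OLLAMA_HOST}/api/generate"

print(GENERATE_URL)
```

Locally this resolves to `http://localhost:11434/api/generate` (the endpoint in the traceback); on a cloud deployment you would point `OLLAMA_HOST` at a server you control and make sure its port is reachable.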
@jefnize2444 2 months ago
How can I expose this chatbot on a public IP website, so that my friend can use my Ollama? What do I have to change?
@techCodio 2 months ago
It's a private IP, but yes, you can use Ollama.
@jefnize2444 2 months ago
@@techCodio Can you do a tutorial that explains how to install Ollama on Linux, and then how to share the website with this Ollama chatbot publicly?
@Danny_Bananie
I think they want to be able to use this as a chatbot on their website. How do you go about doing that?
@techCodio 2 months ago
@@Danny_Bananie Downloading on Linux? It just requires 2 commands. I used Ollama on an EC2 Linux server with the commands mentioned above. For a local laptop, we need to download Ollama.
@divakarv7727 2 months ago
I'm getting ModuleNotFoundError: No module named 'llama_index'
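That error usually just means the package isn't installed in the environment running the script; installing it with pip (the PyPI name is `llama-index`, with a hyphen, while the import is `llama_index`, with an underscore) normally fixes it. A small, hedged helper for checking which modules are importable before running an app:

```python
import importlib.util

def missing_modules(names):
    """Return the module names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# "json" ships with Python, so only the made-up name is reported missing.
print(missing_modules(["json", "surely_not_installed_xyz"]))  # ['surely_not_installed_xyz']
```

Running `missing_modules(["llama_index"])` before importing would tell you up front whether the install step is still needed in that particular environment (e.g. a virtualenv different from the one you installed into).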