Comments
@selvaraj001
@selvaraj001 2 days ago
Authentication error.
@DataEdge01
@DataEdge01 2 days ago
Within Hugging Face, you need to request access to the chosen model. When access is granted, you can go ahead and use the model.
@bilaljamal-e1t
@bilaljamal-e1t 3 days ago
That's awesome!!!!
@Point.Aveugle
@Point.Aveugle 4 days ago
There's a button on each GGUF page, in the top right: "Use this model". Ollama's in the list now, and then you get an easy option to select the quant.
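For reference, the command that button generates looks roughly like this (the repo name and quant tag below are illustrative examples; substitute any public GGUF repo you have access to):

```shell
# Pull and run a GGUF model straight from Hugging Face via Ollama.
# General form: ollama run hf.co/{username}/{repository}:{quantization}
# The repo and quant below are example values, not a recommendation.
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M
```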
@jaydev8148
@jaydev8148 20 hours ago
I didn't know they had listed Ollama now. The problem is that there are various models on HF, but it isn't directly mentioned how many gigs your system needs to run them locally.
@lokeshamv6242
@lokeshamv6242 7 days ago
@DataEdge please provide the research paper you used for this project.
@DataEdge01
@DataEdge01 3 days ago
It's in the description.
@lokeshamv6242
@lokeshamv6242 7 days ago
Sir, please provide the research paper you used to build this project.
@DataEdge01
@DataEdge01 3 days ago
It's in the description.
@nicolasabosch
@nicolasabosch 7 days ago
It shows KeyError: 'Api-key'. Before you ask: yes, I put in the API key, I just didn't put it in the comments for obvious reasons.
@indrapurnama5826
@indrapurnama5826 10 days ago
It says the local LLM is not defined. Also, please help me find the API base of the local LLM. I installed llama3.2:1b, so should I change the code to that?
@bilaljamal-e1t
@bilaljamal-e1t 11 days ago
Thank you for your time and effort on the new video. ⭐⭐⭐⭐⭐
@Reality_Check_1984
@Reality_Check_1984 12 days ago
I see how to run this from the terminal, but how do we import and run it in a Python file? I have had some issues.
@yashchangare7791
@yashchangare7791 14 days ago
Your title says FULLY LOCAL RAG, yet you are using the Groq API. I don't understand: are you sending data to the Groq cloud and calling it fully local?
@Naejbert
@Naejbert 16 days ago
I got "Batch size 218 exceeds maximum batch size 166" when uploading only 2 PDFs of 1.2 MB.
@existentialbaby
@existentialbaby 20 days ago
thanksss
@LokeshaMV
@LokeshaMV 20 days ago
I am facing a problem with ollama pull.
@DataEdge01
@DataEdge01 20 days ago
Can you specify the error, please?
@LokeshaMV
@LokeshaMV 19 days ago
@@DataEdge01 That's cleared now, but I am facing an issue with: Max retries exceeded with url: /batch/ (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000024A55F111C0>: Failed to resolve 'us.i.posthog.com' ([Errno 11001] getaddrinfo failed)")))
@LokeshaMV
@LokeshaMV 19 days ago
@@DataEdge01 Sir, please provide me the source code. I am a college student.
@LokeshaMV
@LokeshaMV 19 days ago
"Failed to upload not defined" is the error I am getting.
@LokeshaMV
@LokeshaMV 19 days ago
@@DataEdge01 "Failed to upload not defined" is the error we are getting.
@joschelboschel
@joschelboschel 21 days ago
Very nice video. The JSON at kzbin.info/www/bejne/aHO3emaBfs56frc can be made much easier to read if you pipe it through the 'jq' tool.
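A minimal sketch of that jq trick, assuming jq is installed (`apt install jq` or `brew install jq`); the echoed payload here is a made-up stand-in for the API response in the video:

```shell
# Pretty-print compact JSON by piping it through jq.
echo '{"model":"llama3.1","done":true}' | jq .

# Extract a single field with a jq filter (-r strips the quotes):
echo '{"model":"llama3.1","done":true}' | jq -r .model   # prints: llama3.1
```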
@lukasukhita
@lukasukhita 22 days ago
Is there a way to use it without an API key? I don't have one :(
@DataEdge01
@DataEdge01 22 days ago
You don't need an API key with Ollama.
@lukasukhita
@lukasukhita 22 days ago
@@DataEdge01 So I just switch the model to Ollama?
@soldierkitvictor
@soldierkitvictor 22 days ago
I replicated the whole tutorial; it just happens that my Aider is not able to generate the CSS file with the first command, only the HTML is generated. When I ask it to create a style.css file, it is not able to save it to my disk (it skips the yes/no question and says the file is saved, but it actually isn't). It seems Aider needs a bit more automation functionality; otherwise I feel like I could just copy and paste the code from Ollama's response without using Aider. What else can Aider do, if even the file-saving function isn't working?
@DataEdge01
@DataEdge01 22 days ago
Aider has released some updates with added features. Will make a video on that soon!
@aeshapatel919
@aeshapatel919 24 days ago
Sir, when I run my Streamlit app, the chat box for the conversation is not showing; only "healthcare chatbot" is displayed. Please guide me through it.
@DataEdge01
@DataEdge01 22 days ago
It may take some time for the conversation box to display depending on your system spec, especially when running on CPU.
@bigRat4335
@bigRat4335 24 days ago
Do I need a GPU to run this? I only have a Core i5 CPU and 8 GB of RAM.
@DataEdge01
@DataEdge01 22 days ago
CPU works fine.
@derpnaifu
@derpnaifu 26 days ago
Is there any way to increase the token limit? Some sites are more than 1024 tokens after scraping, and I can't seem to pass any parameter in the graph config to change that. I tried different models that are supposed to handle more than 1024 tokens too. Is that some sort of ScrapeGraph limitation, maybe?
@asifequbal
@asifequbal 27 days ago
It does not work; the response also errors out after a long time.
@DataEdge01
@DataEdge01 25 days ago
You may need to optimize your prompt to get a good response when using the models.
@IlllIlllIlllIlll
@IlllIlllIlllIlll 1 month ago
Your mic is very low. I just came from another video.
@DataEdge01
@DataEdge01 1 month ago
Thanks, will check that next time!
@pensiveintrovert4318
@pensiveintrovert4318 1 month ago
Infinite looping with Claude Dev + Ollama + VS Code.
@DataEdge01
@DataEdge01 1 month ago
It may depend on the prompt.
@apbuzzz
@apbuzzz 19 days ago
@@DataEdge01 Same for me. The prompt was exactly like yours ("create a simple to-do list app using HTML, CSS, JS"). I also tried many others, but most of the time I get really weird output.
@mik3lang3lo
@mik3lang3lo 1 month ago
Very cool, thanks. I will give it a try.
@UserB_tm
@UserB_tm 1 month ago
Great video. Really helped me get started with Aider, Ollama, and VS Code.
@DataEdge01
@DataEdge01 1 month ago
Glad it was helpful!
@sasmartty
@sasmartty 1 month ago
May I know your system specifications? OS version and hardware details?
@vishalranjan2429
@vishalranjan2429 1 month ago
I'm getting this error: "module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?"
@DataEdge01
@DataEdge01 1 month ago
Can you give more details on this error? You can send an email.
@HumanHealthLink
@HumanHealthLink 1 month ago
Where are you getting the Anthropic API key?
@DataEdge01
@DataEdge01 1 month ago
There you go: console.anthropic.com/settings/keys
@philemonakuagwu1517
@philemonakuagwu1517 1 month ago
I am getting a bad request; I don't know why.
@DataEdge01
@DataEdge01 1 month ago
Which model are you using? You may also want to be more specific with your prompt.
@ashimov1970
@ashimov1970 1 month ago
Thank you for the vid. I wonder how much disk space all the libraries take.
@DataEdge01
@DataEdge01 1 month ago
A minimum of 15 GB will work fine.
@pabodhaliyanage4779
@pabodhaliyanage4779 1 month ago
What other models besides Mistral 7B can we use for fast replies in a Hugging Face chatbot? Can you give any examples?
@DataEdge01
@DataEdge01 1 month ago
You may try Llama 3.1 or even Groq. I have a similar video on this.
@bilaljamal-e1t
@bilaljamal-e1t 1 month ago
Thank you for your effort on the new video.
@DataEdge01
@DataEdge01 1 month ago
Really appreciate your comment!
@gani2an1
@gani2an1 1 month ago
What is the benefit of Replit? Is Visual Studio an alternative? What role does Replit play here?
@DataEdge01
@DataEdge01 1 month ago
Replit is an AI software development and deployment platform for building and sharing apps faster. It has a lot of features which make deploying your apps much easier compared to VS Code.
@christerjohanzzon
@christerjohanzzon 1 month ago
That is one ugly VS Code theme! :D
@AliAlias
@AliAlias 1 month ago
❤👍✌️
@nithing9303
@nithing9303 1 month ago
Thank you so much!
@lawrencejessejames
@lawrencejessejames 1 month ago
How would you go about using Aider chat to make a Next.js app, seeing as you need to start in a Python env?
@DataEdge01
@DataEdge01 1 month ago
In the shell, just write your prompt with the file format and Aider will generate the code in the desired file format. All you have to do is run the code. I suggest you watch the video to the end.
@lawrencejessejames
@lawrencejessejames 1 month ago
@@DataEdge01 Yes, I watched it all. My understanding is that you need to create a new Python repl in order to install Aider. Once I had that, I asked Aider to create a Next.js project/environment and it was unable to do so.
@mdashiqurrahmanbayezid5037
@mdashiqurrahmanbayezid5037 1 month ago
I do not fully understand this; can you answer a question for me? Can I make simple Android apps using this?
@DataEdge01
@DataEdge01 1 month ago
Yes, you can make apps.
@KABILAN.V.S
@KABILAN.V.S 1 month ago
Hi, at 7:26 they say to pull llama, but you typed run llama. Had you already installed llama3.1:8b? Is this the step that installs it? Please clarify this, sir...
@DataEdge01
@DataEdge01 1 month ago
ollama pull downloads a model from the Ollama model hub, while ollama run runs a downloaded model locally.
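In shell terms, the difference looks like this (the model tag is just an example):

```shell
# Download (or update) the model weights only; nothing is executed:
ollama pull llama3.1:8b

# Start an interactive chat with the model; if the weights are not
# present yet, ollama run pulls them first automatically:
ollama run llama3.1:8b

# List the models already downloaded locally:
ollama list
```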
@KABILAN.V.S
@KABILAN.V.S 1 month ago
@@DataEdge01 So only after pulling llama3.1:8b do we use ollama run, correct? (I have a small doubt: should I pull the model in VS Code or in the Windows shell?) Please clear this up, sir.
@DataEdge01
@DataEdge01 1 month ago
@@KABILAN.V.S You write the pull command in the VS Code terminal.
@dayasagareddya.c.9547
@dayasagareddya.c.9547 1 month ago
Good one. I'm able to set it up, but in chat I'm getting the error below (I did change my model to llama3): async for stream_resp in self._acreate_stream( File "C:\projects\RAGGemmaModel-main\env\Lib\site-packages\langchain_community\llms\ollama.py", line 331, in _acreate_stream raise ValueError( ValueError: Ollama call failed with status code 400. Details: <bound method ClientResponse.text of <ClientResponse(localhost:11434/api/chat) [400 Bad Request]> <CIMultiDictProxy('Content-Type': 'application/json; charset=utf-8', 'Date': 'Tue, 03 Sep 2024 15:33:42 GMT', 'Content-Length': '54')
@TechWithOnpassive
@TechWithOnpassive 1 month ago
Is it expensive?
@TheMetalisImmortal
@TheMetalisImmortal 1 month ago
Thanks man!
@mik3lang3lo
@mik3lang3lo 2 months ago
Great video, thanks
@joaotolovi
@joaotolovi 2 months ago
Python version?
@DataEdge01
@DataEdge01 2 months ago
Python 3.10 or higher.
@beissel_glitch
@beissel_glitch 2 months ago
Can you make a complete video on installing AnythingLLM with Docker, please?
@WirelessGus
@WirelessGus 2 months ago
Great knowledge share! 📚✌️🤝
@solank7620
@solank7620 2 months ago
Thanks for the informative video 👍 How much RAM does your graphics card have? How much RAM does the Llama 8B-parameter model need? Did you use float16 or float32?
@DataEdge01
@DataEdge01 2 months ago
16 GB should work.
@voedito
@voedito 2 months ago
I have an Nvidia GPU, but the response takes too long just to ask how many records there are. I only have 1000 records. I wonder why it is so slow.
@DataEdge01
@DataEdge01 2 months ago
What are the specs of the GPU?
@Henriqueoi
@Henriqueoi 1 month ago
It's probably running on the CPU.
@voedito
@voedito 1 month ago
GeForce RTX 3050 Ti
@sergio2494
@sergio2494 2 months ago
Hey, this is amazing; thanks for sharing the tutorial. As I was playing around, I managed to connect it and have it work as one of my custom GPTs. However, once I tried the VectorShift chat GUI, it didn't show an image in the chat, just a bunch of numbers and text, which I believe is a vector? I contacted support and they said their chat cannot show images, so I'm wondering how I can still use VectorShift on the back end, let users interact with my backend DALL·E (or whatever it may be), and still receive their creations in chat. Is there any other solution?
@DataEdge01
@DataEdge01 2 months ago
Really appreciate your comment, Sergio. I think VectorShift would require some form of subscription to get that feature running. However, you could build your own GUI around it; there are free tools that do this. It depends on your use case.
@sergio2494
@sergio2494 2 months ago
@@DataEdge01 Thanks for the answer. Yeah, that's what I have found so far. Is there any way you could make a simple tutorial around this? If the platform requires a paid subscription, that is fine too. For instance, I would like to create a simple platform where I can set up prompts in the backend and make people pay for tokens each time they create an image. I found that bubble.io may do this too, but I thought this VectorShift solution might be simpler.
@j0hnc0nn0r-sec
@j0hnc0nn0r-sec 2 months ago
Feel the rhythm, feel the rhyme. Get on up, it's development time!
@raabiyahraabiyah8597
@raabiyahraabiyah8597 2 months ago
Is the data secure and private?
@DataEdge01
@DataEdge01 2 months ago
Everything is stored locally on your computer.