Spring AI - Run Meta's LLaMA 2 Locally with Ollama 🦙 | Hands-on Guide

  14,074 views

Java Techie

Days ago

Comments
@dmode1535 4 months ago
Thanks for posting this, sir. I have learned a lot from you. Developers should be the highest-paid group of any field, because we do more learning than any other field. We never stop learning; there is always something new to learn.
@itdev7097 4 months ago
"Developers should be the highest paid group in any field"... exactly the opposite should happen.
@Derrick-f8m 3 months ago
Man, Basant, you are awesome. You are the only guy on YouTube who innovates. You've been doing this for a long time and never quit. No other teacher on this platform has continued to serve his dev community like this. Java will never die.
@TheMrtest123 4 months ago
Thanks for starting a series on AI. This is the need of the hour. Thanks for accepting our request for AI demos.
@_vku 4 months ago
Thanks, Basant sir. As a developer I like to watch your videos; whenever new learning is required, I follow your videos.
@gopisambasivarao5282 4 months ago
Thanks Basant. Appreciate your efforts. God bless you!
@praveenj3112 2 months ago
Thank you for making this video and giving some basic info on gen AI and Spring AI.
@dhavasanth 2 months ago
I appreciate your tireless efforts, thank you! We are eagerly awaiting more on RAG and vector databases.
@SouravDalal1981 4 months ago
Great job. Looking forward to vector database integration with AI.
@grrlgd3835 4 months ago
Thanks for this, JT. Another great video. I'm going to take a look at the ETL framework for data engineering. Looking forward to more content.
@abdus_samad890 4 months ago
Loved it... Yes, you can create more.
@rajasekharkarampudi2669 4 months ago
Great start to the Java AI world, bro. I'll be waiting for the demos in this playlist on RAG applications using a vector DB and function calling. Take your time, and plan a video on an E2E real-world project, deploying it to a server.
@SrkSalman416 4 months ago
Very good video on Spring AI. Your content (videos) is like a Java magnet; it attracts very fast.
@satyendrabhagbole3746 4 months ago
Brilliant explanation
@rabiulbiswas5908 4 months ago
Thanks for the video. I hope more will come on Spring AI soon.
@technoinshan 3 months ago
Yes, please create a detailed video with Spring, Java, and an LLM.
@MrSumeetshetty 4 months ago
Thanks, bro. You are good motivation for all developers to keep learning ❤
@kappaj01 4 months ago
Great video. Can you do a RAG solution using the Weaviate vector DB? I had one running with Spring AI 0.8, but so much has changed in 1.0...
@Javatechie 4 months ago
Sure, I will give it a try, as I am currently learning it 😊
@muralikrishna-qh6vg 4 months ago
Thank you, Basant. Can you please take us through Spring AI with Google Cloud (like VertexAiGemini), and also Vertex AI with Java?
@srinivaschannel6230 3 months ago
Is any course on Spring AI starting?
@Javatechie 3 months ago
No buddy, I haven't planned any.
@mayankkumarshaw635 4 months ago
Very nice ❤❤
@petersabraham7423 4 months ago
Great video. Thank you for always making the effort to update us on the latest technologies. I have two questions though: 1. After implementing the generate REST API, my response takes too long, sometimes up to 7 minutes. I also noticed that you used a GET request while llama's "/api/chat" expects a POST request. Is there a particular reason you used a GET request? 2. Is it possible to train the llama model to recognize and provide responses based on trained data?
@ChandraSekhar-jm3sr 4 months ago
Please share videos on the other models too... they are very helpful.
@psudhakarreddy6548 4 months ago
Thank you
@rajkiran4572 4 months ago
Thanks
@Javatechie 4 months ago
Thanks buddy 🙂👍
@shrirangjoshi6568 3 months ago
Please add the videos to a playlist.
@Javatechie 3 months ago
Sure buddy
@ashokpandit1367 3 months ago
That's what we are waiting for. Let's take it to the Python people left and right now 😅😅
@flutterdevfarm 4 months ago
Sir, are you launching any new Spring Boot & microservices course? The course listed on your site, is it live?
@attrayadas8067 4 months ago
It's pre-recorded!
@Javatechie 4 months ago
It was a recorded session from a past live class. Currently I don't have any plan for a new batch, but if I plan one in the future I will definitely announce it first on my YouTube channel for my audience 😀
@liqwis9598 4 months ago
Hey Basant, nice video. Can you please teach us how to use RAG functionality in locally running LLMs?
@Javatechie 4 months ago
I will do this
@liqwis9598 4 months ago
@Javatechie thank you as always 🙂
@Nilcha-2 3 months ago
Is it possible to augment the model with local files (PDF, txt, docs) and then have llama scan through the files and answer any relevant questions about them?
@Javatechie 3 months ago
Yes, we can do that. I already did a POC on it; do let me know if you want a video on this concept.
@Nilcha-2 3 months ago
@Javatechie Yes sir, I would greatly appreciate a video on that. Asking generic questions can also be done on free ChatGPT and Gemini. The main use case is when a business wants to provide its employees/customers with a chatbot on their own data, e.g. HR policies, corporate plans, etc. A more complex requirement is a chatbot that can parse a database and summarize data. Currently we do that using an Azure chatbot, but management does not like the idea of uploading confidential files. So if Ollama can handle that locally and securely, that would be the main use case.
@theniteshkumarjain 4 months ago
Do we need tokens to generate the response? Also, are these models free?
@Javatechie 4 months ago
No tokens required; yes, these models are open source.
@theniteshkumarjain 4 months ago
@Javatechie thanks
@koseavase 4 months ago
Here comes Spring Boot to challenge Python.
@technoinshan 3 months ago
Hi, I'm getting the below error whenever I run the project:
java.lang.RuntimeException: [500] Internal Server Error - {"error":"model requires more system memory (8.4 GiB) than is available (7.5 GiB)"}
@Javatechie 3 months ago
Let me check this
@vaibhavshetty3781 4 months ago
Getting this error while running:
Error: llama runner process has terminated: signal: killed
@Javatechie 4 months ago
At what step are you getting this error?
@vaibhavshetty3781 4 months ago
@Javatechie while running docker exec -it ollama ollama run llama2
@vaibhavshetty3781 4 months ago
Does it require a higher machine spec?
@Javatechie 4 months ago
No special spec is required.
@abhishekkumar2020 4 months ago
Getting the same error.
@atulgoyal358 4 months ago
I got the below issue after running docker exec -it ollama ollama run llama2:
Error: model requires more system memory (8.4 GiB) than is available (3.9 GiB)
@moinakdasgupta3341 4 months ago
Hi JT, I am running both Ollama and my Spring Boot app using Docker Compose, but the app gets a 500 response when hitting the Ollama API. It is fixed only when I manually run ollama run llama2 or ollama pull llama2 in the ollama container. Is there any way to automatically pull the model on startup from Docker Compose? I tried command: ["ollama", "pull", "llama2"] in the compose file with no luck :(
@Javatechie 4 months ago
Not sure buddy, I will check and update you.
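Not covered in the video, but one common workaround for auto-pulling the model is to override the container's entrypoint so it starts the server, pulls the model once, and then keeps serving. A sketch, assuming the official ollama/ollama image (service and volume names are assumptions):

```yaml
# docker-compose.yml sketch: start Ollama, pull llama2 on boot, keep serving.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama   # cache pulled models across restarts
    entrypoint: ["/bin/sh", "-c"]
    command: "ollama serve & sleep 5 && ollama pull llama2 && wait"

volumes:
  ollama-data:
```

The Spring Boot service can then declare depends_on: [ollama] so it only starts once the Ollama container is up; the volume means the pull is a no-op after the first run.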
@RohitSharma-qb1vw 2 months ago
Awesome video. Please make videos on the remaining models also.
@2RAJ21 4 months ago
After running docker exec -it ollama ollama run llama2, I got the below issue:
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: model requires more system memory (8.4 GiB) than is available (2.9 GiB)
How do I solve this? Please help me.
@atulgoyal358 4 months ago
I got the same issue. Did you find a solution?
@2RAJ21 4 months ago
@atulgoyal358 I haven't touched it since this issue. I think you need to increase Docker's memory size.
@atulgoyal358 4 months ago
@2RAJ21 Need to check how to increase Docker's memory size.
@2RAJ21 4 months ago
@atulgoyal358 No idea, bro. I'm confused between RAM and memory.
@atulgoyal358 4 months ago
@2RAJ21 You need to configure the .wslconfig file in %userprofile%, increase the RAM and processors, and then it will work.
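For readers hitting those "model requires more system memory" errors on Windows with Docker Desktop's WSL 2 backend, the fix described above looks roughly like this (the values are assumptions; size them to your machine):

```ini
; %UserProfile%\.wslconfig on the Windows host (Docker Desktop, WSL 2 backend)
[wsl2]
; llama2 7B reports needing ~8.4 GiB, so give WSL 2 comfortable headroom
memory=10GB
processors=4
```

After saving the file, run wsl --shutdown in a terminal and restart Docker Desktop so the new limits take effect.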
@rishiraj2548 4 months ago
😎👍🏻💯🙏🏻
@CenturionDobrius 4 months ago
As usual, great job ❤ Please, if possible, work on your microphone recording quality.
@Javatechie 4 months ago
Hello buddy, thanks for your suggestion. Actually the mic quality is good but things are echoing, so I will definitely try to improve it.