You're right to warn us that things change faster than we think. Thank you for your content!
@arnaudpoutieu1331 6 days ago
Many thanks!
@ArpitPublic 10 days ago
A good use case for jlink and jpackage could be packaging and shipping JavaFX applications.
@DevXplaining 10 days ago
Yes, and speaking of which, been looking for an excuse to dip into JavaFX again. :)
@higherbeats 28 days ago
This blog entry is very good; an update with the situation today would be awesome.
@DevXplaining 28 days ago
Ahha! Thank you! A good idea too!
@gonzalooviedo5435 a month ago
A vector database like Supabase is necessary for reading documents as well.
@gonzalooviedo5435 a month ago
A vector database is necessary to allow context savings. Thanks!
@TheExcentro a month ago
Thanks a lot!
@DenzilFerreira a month ago
Step by step, a house is built 😉
@DevXplaining a month ago
Or, at least stairs! :)
@NoProg a month ago
Ollama on a Pi 5. While possible, it's idiotic.
@DevXplaining a month ago
Haha, depends on the definition of idiotic, since it covers multiple useful use cases when models are lightweight enough, for example the better Alexa I'm working on. But crazy it definitely is! :)
@JesseHuang-w1k 25 days ago
EDIT: Now that I read it again, it sounds a little bit idiotic... It is not idiotic. A lot of us are making cyberdecks with a Pi 4 or 5, and we expect to use them when we are "surviving" in the wild or a war zone or something similar (which will probably never happen, hopefully). And it is fun. I have made a cyberdeck with a Pi 5, and I am running Ollama offline, also with the full Wikipedia archived and tons of pictures of mushrooms, plants, etc. classified. I am also training a model so I can use the Pi camera to take a picture of a mushroom and it can tell me whether it is poisonous. This is not idiotic at all!
@stormfox81 a month ago
I can't even run Ollama without 100% CPU usage on my Dell PowerEdge R630 server.
@DevXplaining a month ago
Hahaha, yes, if you have too much processing power at your fingertips, running LLMs locally is the modern solution to that problem for sure :)
@DenzilFerreira a month ago
If you do "ollama run llama3.1 --verbose" you get the tokens/second after the answer 😉
@DevXplaining a month ago
Ooo, lovely! I'll try that!
@fastmamajama a month ago
Wow, I run Llama 3 via Ollama on my PC. It gets stuck every once in a while, so I have a batch script called killollama.cmd that runs pskill every 10 minutes. I use Ollama to recognize flying saucers in shots, taken using Haar datasets of saucers, tic-tacs, and orbs. We are getting there slowly.
@heyzeeshan a month ago
anime titties community in your reddit 💀
@markurionn9004 a month ago
This year GPT-4 will enter the ring for the leaderboard.
@DevXplaining a month ago
Oh for sure, this year's Advent is going to be very different again; there are multiple levels of AI use and no easy way to police against that. But for those who enjoy honing their own skills, the fun is still there to be had. I just hope the people who put sweat and effort in every year to come up with the fun puzzles don't get discouraged by people choosing to cheat. On the other hand, even before LLM popularity, those who choose to cheat, cheat. Of course, it's good to note that after the leaderboards are filled, and that typically happens rather fast, everything is fair game as long as you're having fun. And for those who wish to hone their AI skills, there's a treasury of puzzles from previous years, all fair game to try and crack as fast as one can.
@rafacarmona a month ago
Thank you! Very nice!
@DevXplaining a month ago
Thank you!
@saundi a month ago
Really interesting, thank you very much! Have you already tried applying Spring AI to data analysis or data ingestion?
@DevXplaining a month ago
Hi, so far I'm just exploring it and sharing my findings here. For actual analysis and data ingestion, right now it's more about using ready-made tools and services. But playing with Spring lets me understand more about how things are built, what the obstacles and challenges are, etc. RAG is closest to that data ingestion, and much more; I already have one Spring AI RAG video out and am working on an updated one. It's an essential part of pretty much any non-trivial LLM solution right now.
@firstandforemost-z9b 2 months ago
When I try to run the Spring Boot web application, I get a NoClassDefFoundError. Why is that? Could it be because I'm using Java 17 instead of Java 23?
@DevXplaining 2 months ago
Hi, I tried it again with a fresh copy to make sure it's not messed up, and it works for me on Java 23. I did create this one with Java 23, so if you want to run on Java 17, the safest bet would be to create your own app with Spring Initializr on Java 17. I cannot say for certain if that is the cause (typically it's a good lead to see which class it does not find), but initializing your own should give you a good start. There's nothing Java 23 specific so far that would not work on Java 17. But the solution in my Git for example has this: <java.version>23</java.version>
@firstandforemost-z9b 2 months ago
@DevXplaining I will try to clone your repo and try again. If not, I'll try to do it without using @Bean. If this doesn't work, I don't know what I'm doing wrong, because I've followed every step you've shown to a T.
@firstandforemost-z9b 2 months ago
It works with your cloned repo! Thank you. Though I don't know why it wasn't working in my original file.
@mr2441139 2 months ago
Interested
@fastmamajama 2 months ago
Wow, this is really good stuff. I got it running on Windows. For some reason it does not work in Linux.
@DevXplaining 2 months ago
Thank you! It's been a while since I did these, but it was and still is really fun tech. I also had Linux problems with the drivers; it's always a bit of a pain to get working. I think I had the best luck on Windows, too.
@DenzilFerreira 2 months ago
This was cool, and looking forward to seeing the web side of things 💪
@DevXplaining 2 months ago
Thank you, Sir! :)
@bowenwang 2 months ago
VS Code
@DevXplaining 2 months ago
Hahaha, that's definitely a good answer! :)
@bowenwang 2 months ago
@DevXplaining It keeps getting better
@ladro_magro5737 2 months ago
It doesn't appear that EmbeddingClient exists. Am I missing some dependency, or is it deprecated?
@DevXplaining 2 months ago
Yes, this video is a bit outdated now that Spring AI is at 1.0 and out there; things have changed, and as you guessed, EmbeddingClient is gone. Thanks for the heads-up! I should probably revisit this topic, perhaps redo it with the current version and dig a bit deeper, but meanwhile, here's up-to-date documentation on the 1.0 approach: docs.spring.io/spring-ai/reference/api/embeddings.html#_api_overview Not a lot of changes, but move to the EmbeddingModel interface instead. Here is an example of how to use that one (for OpenAI): docs.spring.io/spring-ai/reference/api/embeddings/openai-embeddings.html
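For anyone curious what comparing embeddings actually involves under the hood: vector stores typically rank results by cosine similarity between embedding vectors. This is a minimal plain-Java sketch of that math only, not Spring AI's EmbeddingModel API; the class name is made up for illustration.

```java
public class EmbeddingSimilarity {
    // Cosine similarity between two embedding vectors:
    // dot(a, b) / (|a| * |b|). Values close to 1.0 mean the texts
    // behind the vectors are semantically similar; near 0.0 means unrelated.
    public static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

In a real Spring AI setup, the vectors would come from an EmbeddingModel call and the similarity ranking would be done by the vector store for you.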
@ladro_magro5737 2 months ago
@DevXplaining Thank you so much! I will try to implement it. I tend to have difficulties learning from documentation only. Any advice? Do you work as a mentor, maybe?
@balnenikhil6094 2 months ago
Is it possible to handle the file at runtime, where the user provides the file via an API, and then chat based on the content of that file?
@DevXplaining 2 months ago
Hi, thanks for your question! The short answer is yes. The longer answer: each LLM has a context window, the number of tokens it may take as input. That roughly translates to how much text it can take in. So for a runtime solution with modern LLMs, a lot of text, or even the contents of a single file, can typically fit into the context window, and thus there would actually be no need to involve RAG or a vector database like here. Just include the file contents/text along with any questions you would like to ask, simple as that. The RAG/vector database approach comes in useful in two scenarios: 1) we have a lot of files, or a book, or books, and would like to ask questions about all of them; and 2) we would like to improve the quality of the answers by NOT including a lot of text, but instead including just the relevant text we have retrieved that is most likely necessary to answer the question. So yes, you can pass in a file via API or UI, somehow parse its contents into something resembling structure, and include it in the LLM call. Or you start keeping a database of files, and set them up so it's easy for you to search and find the relevant chunks.
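To make the "chunk, then retrieve only the relevant text" idea concrete, here is a deliberately toy plain-Java sketch: real RAG pipelines use overlap-aware splitters and embedding similarity against a vector store, whereas this one just splits on words and scores chunks by keyword overlap. All class and method names are invented for illustration.

```java
import java.util.*;

public class NaiveRetriever {
    // Split a document into fixed-size word chunks. Real splitters
    // respect sentence boundaries and add overlap between chunks.
    public static List<String> chunk(String text, int wordsPerChunk) {
        String[] words = text.split("\\s+");
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < words.length; i += wordsPerChunk) {
            int end = Math.min(i + wordsPerChunk, words.length);
            chunks.add(String.join(" ", Arrays.copyOfRange(words, i, end)));
        }
        return chunks;
    }

    // Score each chunk by how many query words it contains, and return
    // the best match: the text you would paste into the LLM prompt
    // instead of the whole document.
    public static String bestChunk(List<String> chunks, String query) {
        String best = "";
        int bestScore = -1;
        for (String c : chunks) {
            Set<String> chunkWords = new HashSet<>(Arrays.asList(c.toLowerCase().split("\\s+")));
            int score = 0;
            for (String q : query.toLowerCase().split("\\s+")) {
                if (chunkWords.contains(q)) score++;
            }
            if (score > bestScore) { bestScore = score; best = c; }
        }
        return best;
    }
}
```

Swapping the keyword score for embedding similarity is exactly the step where a vector database earns its keep.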
@mikaelfiil3733 3 months ago
Nice and well-structured explanation!
@DevXplaining 3 months ago
Thank You, much appreciated!
@ydexpert131 3 months ago
Awesome, mate...
@DevXplaining 3 months ago
Thank You!
@unbroken-hunter 3 months ago
Very helpful!
@DevXplaining 2 months ago
Thank You!
@27meghtarwadi99 3 months ago
Hahaa, just what I was looking for. Now I will make an anime waifu with a PNG shown beside its response: if the response is positive the PNG will be one image, and if it's negative the waifu PNG will be different. It will be more like fastfetch.
@DevXplaining 3 months ago
Hahaa, omg, I think I created a monster!! ;)
@stanTrX 3 months ago
Thanks. How do you create a conversational agent with LangChain?
@talktotask-ub5fh 3 months ago
Great content. Very easy to understand the tutorial. +1 like and +1 subscriber.
@DevXplaining 3 months ago
Thank you! Appreciated!!
@okra3000 3 months ago
I'm trying to turn my RPi 5 into a local virtual assistant that only communicates data from PDFs, with low latency. I've installed a 1TB Samsung 980 PRO PCIe 4.0 NVMe M.2 SSD in it, hoping it will help with all the PDF data as well as whatever LLM I decide to install. But I'm in a rut; I'm not familiar with RAG, not too great at coding, and I'm not even sure the RPi 5 can handle all this (I've been alternatively considering the Jetson Orin Nano Developer Kit) 😮‍💨... Could you please offer your wise counsel?
@DevXplaining 3 months ago
Hi, thank you for sharing this, sounds awesome! Your SSD will handle things just fine, but as you can see in my video, unfortunately the performance when run on a Raspberry is not going to be low latency. The typical speed of a Raspi (depending on the model, memory, and LLM being used) tends to be around 1 token per second. Meaning, it's going to be slowwww, and there are not many ways around it. So it's okay for use cases where the response does not need to be immediate, but it's pretty far from low latency. Mostly great for experimentation, I would say. I've been toying with virtual assistants that I actually use myself, and for this, a Raspberry won't cut it. You want:
- A heavy machine with a lot of oomph, definitely a good graphics card and working CUDA drivers. The more the better; most models run as a service run on CRAZY hardware. But my personal gaming machine does okay.
- A coding approach centered around streaming, aka: give me the tokens immediately as they are generated, don't wait for the full answer. You have to play a bit with granularity. I think a good starting point would be to grab the tokens and send them to the speech interface when you have full sentences. Otherwise the intonation will be far off.
- The fastest, real-time, GPU-accelerated versions of all parts, so use a very low-latency text-to-speech solution, preferably GPU accelerated, along with the model.
- Unfortunately, offline models you can run locally are somewhat slower and stupider than the big models you use via API. But they make up for that by being more secure, especially with RAG, or at least letting you control the security. And they are potentially more cost-effective, depending on how you calculate costs.
So, just general advice, but if you see the slowness demonstrated in this video, you can see that a Raspberry is not good for the low-latency case.
I built my own virtual assistant on top of a local model, running it on my gaming beast, and it runs with acceptable latency, aka some seconds most of the time. To get natural dialogue going, you actually want faster, and that requires heavier hardware. But all is good for research and experimentation! Latency and speed is an optimization question once you know what you want to be building.
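The "flush streamed tokens to the speech engine only at full sentences" idea above can be sketched in a few lines. This is a hypothetical plain-Java buffer, not tied to any particular LLM or TTS library; names and the simplistic punctuation-based sentence detection are assumptions for illustration.

```java
import java.util.function.Consumer;

public class SentenceBuffer {
    private final StringBuilder buffer = new StringBuilder();
    private final Consumer<String> onSentence;

    // onSentence is whatever hands a full sentence to the TTS engine.
    public SentenceBuffer(Consumer<String> onSentence) {
        this.onSentence = onSentence;
    }

    // Feed tokens as the model streams them; emit a sentence as soon
    // as one has fully accumulated, so speech can start before the
    // whole answer is generated.
    public void accept(String token) {
        buffer.append(token);
        int end;
        while ((end = firstSentenceEnd()) >= 0) {
            onSentence.accept(buffer.substring(0, end + 1).trim());
            buffer.delete(0, end + 1);
        }
    }

    // Naive sentence boundary: first '.', '!' or '?' in the buffer.
    private int firstSentenceEnd() {
        for (int i = 0; i < buffer.length(); i++) {
            char c = buffer.charAt(i);
            if (c == '.' || c == '!' || c == '?') return i;
        }
        return -1;
    }
}
```

A real version would also flush the remainder when the stream closes and handle abbreviations, but sentence-level granularity is the key trick for natural intonation.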
@okra3000 3 months ago
@DevXplaining Would you recommend Nvidia's Jetson Nano then? And thanks, by the way; I appreciate the detailed response.
@MsProtestante 3 months ago
Hi! Thanks for sharing! Very useful! Is it correct to use this idea to read PDF invoices? I have a situation where I need to read PDF invoices and extract the information into JSON format. What is the most correct approach: extracting text from the PDF and sending that text to the LLM, or using this RAG idea? Thanks.
@DevXplaining 3 months ago
Hi, thank you! I would not easily include RAG in anything that touches money, like invoices, except if you just want to get some summary information with flexible queries. It's too easy to get some zeros or decimals wrong there. But it depends entirely on your use case. Reading PDFs in depends on their format; one approach is to use similar code to what I'm using here. Some PDFs are images with no structural content that could be extracted, but I suspect with invoices this should work.
@MsProtestante 3 months ago
@DevXplaining Thanks for your response. Actually, my use case is not that critical; I just need to read the information from the PDF invoice and display it on an HTML form. The user will edit the information and change it if necessary. I feel like I don't need RAG for this, but I am not sure. Maybe if I just extract the text from the PDF and send that text and an empty JSON to be filled in by the LLM, that will do.
@arnaudpoutieu1331 3 months ago
Gratitude for your time and energy! Thank you for sharing such informative content!
@DevXplaining 3 months ago
Thank you! Much appreciated!
@arnaudpoutieu1331 3 months ago
Hi! This is a very well-explained concept in a concise video: great!!! How could you get a general sentiment of the overall comments, meaning getting a sentiment of the whole set of sentences in your case?
@DevXplaining 3 months ago
Thank you! What an interesting question. I haven't thought of that before, so I don't have a good answer on what would be the best way to do it. Please let me know if you find one! If I had to do it now, I'd probably see if this is already solved elegantly by a library. If not, I'd probably still analyze the text in suitably sized pieces, then find a way to somehow summarize/categorize/postprocess. But I haven't done that yet! Hmm, we would also have to define what overall means, aka do we count scores for each and sum them?
@arnaudpoutieu1331 3 months ago
@DevXplaining Yeah, this could be tricky. I was thinking about grouping the sentences by their sentiment score and assigning the most representative one as the overall sentiment of the entire comments ecosystem from the dataset. Maybe this approach is naive. Would appreciate any feedback on this.
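One simple way to postprocess per-sentence scores into an overall sentiment, alongside the grouping idea discussed above, is to average the scores and bucket the mean. This is a toy plain-Java sketch with made-up names and thresholds; the per-sentence scores (in [-1, 1]) are assumed to come from whatever sentiment model is used.

```java
import java.util.List;

public class SentimentAggregator {
    // Average the per-sentence scores (each in [-1, 1]) and bucket
    // the mean. Thresholds are arbitrary and would need tuning.
    public static String overall(List<Double> sentenceScores) {
        if (sentenceScores.isEmpty()) return "neutral";
        double sum = 0;
        for (double s : sentenceScores) sum += s;
        double mean = sum / sentenceScores.size();
        if (mean > 0.2) return "positive";
        if (mean < -0.2) return "negative";
        return "neutral";
    }
}
```

Averaging hides distribution (one very negative comment among many mildly positive ones disappears), which is exactly why grouping by score and picking the largest group can be the better choice.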
@hexadecimalhexadecimal5241 4 months ago
How do we combat the non-existent job market?
@DevXplaining 4 months ago
Hi, rather large question, but thanks for watching the video to the end :) I cannot say much about that, except that there is and always will be demand greater than zero in any market. What that means is that if the job market is tough, it is still possible to beat it. It just requires more work to have something to show. Another way to beat it is simply to weather the storm; in my almost 30 years of software development career I've seen plenty of tough times, and plenty of crazy times, and they tend to follow each other. That being said, I know right now it's extremely tough to break into software development as a junior developer, near impossible. So I would work on things to show: have an awesome and active Git repo of interesting hobby projects, and practice the LeetCode. There are still no guarantees, but you don't need to beat them all, just be ahead of some. For more senior developers, it seems there's plenty of demand even right now. But the skill profiles might be changing, so it's a good idea to also invest in learning new things: the same principle, having something to show, whether it be certificates, knowledge of relevant tech, etc. This is at least true in the job markets near me; they vary by country and even state, but the common thing is that yes, it's difficult right now. And yes, it is possible that overall demand for software developers who mostly just write code will be less in the future than it is now. But people also tend to doom and gloom about the future; we'll see. So no real answers, just speculation. But this might be a topic for a future video, for sure. I agree it's not what we are used to right now.
@kamertonaudiophileplayer847 4 months ago
What's the bytecode version for these new features? I'm building a bytecode analyzer and need to be sure it's compatible with new versions.
@DevXplaining 4 months ago
Hi, I have to say I haven't taken a look at the bytecode level for quite some time. As far as I know, the recent changes (Java 21 to 22) are at the compiler level, not the bytecode level, mostly just syntactic sugar, but the type-system-related changes from Project Valhalla may eventually end up changing things. The only way to know for sure is to do a deeper dive into the release notes and/or run a test with some new-feature code and see if it changed. Java 21 had a potential bytecode change flag for sequenced collections. As far as I know, there are no such changes in Java 22. This page (for each new release) is a great source to keep an eye on. I would not say it's always 100% clear, but at least an attempt is made to signal changes: www.oracle.com/java/technologies/javase/22-relnote-issues.html The JDK release notes page is also typically good for quickly seeing whether any binary/bytecode-level changes are mentioned.
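For the analyzer use case, the class-file version itself is easy to check directly: every .class file starts with the magic number 0xCAFEBABE followed by minor and major version as big-endian u2 values (e.g. major 66 corresponds to Java 22). A minimal sketch, with a made-up class name:

```java
public class ClassFileVersion {
    // Reads the major class-file version from raw .class file bytes.
    // Layout: bytes 0-3 magic 0xCAFEBABE, bytes 4-5 minor version,
    // bytes 6-7 major version (all big-endian).
    public static int majorVersion(byte[] classFile) {
        if (classFile.length < 8) {
            throw new IllegalArgumentException("truncated class file");
        }
        int magic = ((classFile[0] & 0xFF) << 24) | ((classFile[1] & 0xFF) << 16)
                  | ((classFile[2] & 0xFF) << 8)  | (classFile[3] & 0xFF);
        if (magic != 0xCAFEBABE) {
            throw new IllegalArgumentException("not a class file");
        }
        return ((classFile[6] & 0xFF) << 8) | (classFile[7] & 0xFF);
    }
}
```

Comparing the major version against what the analyzer supports is a cheap first compatibility gate before parsing the constant pool.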
@-na-nomad6247 4 months ago
Hear a Finn speak and you know he's a Finn, no matter the language 😁
@DevXplaining 4 months ago
Hahahaa, so true!
@-na-nomad6247 4 months ago
@DevXplaining So I was correct? 😀
@tarunarya1780 4 months ago
Thanks for this and your "How to run ChatGPT in your own laptop for free? | GPT4All" video showing the practicalities of running a language model. I think an important aspect and benefit of a local model is being able to train it. Please cover this or point us to resources. Being able to read PDFs to learn from would be great.
@47s18 4 months ago
Great video! Thank you!
@DevXplaining 4 months ago
Thank you! Appreciated!
@MuayViaura 5 months ago
Good video, but the editor color scheme does not suit broadcast, and dragging the mouse in the code editor is not good.
@DevXplaining 5 months ago
Thank you! I'll take a look at those; details are important.
@ELVISTAKUNDACHIKUDZAH 5 months ago
Hey bro, did you see that the new AWS challenge just gives you the code and you only put in reward functions? Mine is good on the track but it's slow. How do I increase the speed?
@DevXplaining 5 months ago
Yeah, they tend to have great challenges and competition leagues going on most of the time, some with really great prizes, including a trip to Vegas. I won a smaller qualifier and got some sweet swag for it. As for how to win: I try to give some basics in this video, but the fun thing is, there's no right or wrong answer. I always try to compete against myself, make it faster than before. What I do know is that typically the fastest models are trained using waypoints that are specific to the track, and since that requires a human to define optimal routes, it's not what I typically like to do myself. I enjoy letting the algorithm figure it out using 100% machine learning. It does not grant the greatest results, but I enjoy it. I always start with a theory of how to reward. Let's say I reward based on progress on the track and keeping close to the centerline. Then I let the training do its thing. I've surprisingly often had great results with a very simple reward function, just a lot of training on the proper track. I can drop one hint: there's a great Discord community that I'm also part of. They will happily discuss approaches, tips, and tricks, and the community includes some very skilled people who tend to do well in the challenges. Besides, things are often more fun in a community, to share. Here's the link: join.deepracing.io/
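The "progress plus centerline" theory above has a simple shape. Actual DeepRacer reward functions are written in Python against AWS's parameter dictionary; this is just a hypothetical plain-Java sketch of the reward idea itself, with invented names and weights.

```java
public class RewardSketch {
    // Reward progress along the track while rewarding staying near the
    // centerline: centerBonus is 1.0 on the centerline, falling to 0.0
    // at the track edge. The 0.01 progress weight is arbitrary.
    public static double reward(double progress, double distanceFromCenter, double trackWidth) {
        double centerBonus = Math.max(0.0, 1.0 - (distanceFromCenter / (trackWidth / 2.0)));
        return progress * 0.01 + centerBonus;
    }
}
```

Tuning the relative weights of progress versus centering is where most of the experimentation happens.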
@Scarecrow_official- 5 months ago
@DevXplaining Let me try this. Thanks, man.
@SkillisCool 5 months ago
Keep going, dude! 😀
@DevXplaining 5 months ago
Hahaa, thanks! :)
@blazy6907 5 months ago
Hi, I tried this RAG with a PDF document; some PDFs are not parsed by PagePdfDocumentReader.
@DevXplaining 5 months ago
Hi, yes, that's a known issue; it's not perfect. PDF readers often struggle with table data even in the best RAG solutions, and of course cannot easily understand any information that's not already in text format. It's mostly a PoC-level example; to make it work better, much more work is needed. What can be done?
- Check which part is not parsed properly.
- You can isolate the code that parses the PDF and play with it; there are typically adjustments you can make to better meet your needs. But as mentioned, it can mostly handle text chapters and titles; table data and images need some other approach overall.
- Check the library's issues list to see if anyone else is struggling with the same problems.
- Check whether there's a better library available for your needs; there typically are options. This was just one that was fast to get working.
So, PDF documents are not the easiest use case to feed into RAG; something better structured is typically going to give better results. But it's a nice starting place. I'm a bit concerned myself about losing the graphs and images, and about potentially having table data cut in half, etc.; those can alter the results vastly. If you can find any better means of converting the PDF to a structured format that retains what you need, that will improve the results.
@botnnesen8946 5 months ago
You have saved my day - THANKS 🙂
@sugaith 5 months ago
Yes, the best coding videos are very long, and we are able to see the person suffering while coding, as it is in reality.
@DevXplaining 5 months ago
Hahaa, yeah, I can relate to that :)
@LanJulio 6 months ago
Thanks for the video!!! Will try on my Raspberry Pi 5 with 8GB of RAM!!!
@DevXplaining 6 months ago
Perfect! It's gonna be slowwww... But fully local too :)
@leonardosouzaconradodesant6213 6 months ago
Great, thank you. And by the way, I'd like that video using ChromaDB as an application node running in the background. See you!
@DevXplaining 6 months ago
Thank you!
@JoanApita 6 months ago
Thank you for sharing this video. I learned a lot today. Please keep them coming. Thank you!
@DevXplaining 6 months ago
Thank you for your feedback! Much appreciated!
@praveenmail2him 6 months ago
Great video. Please keep posting more examples.
@julienr8114 6 months ago
What a useless feature... and a dirty implementation of templating.
@DevXplaining 6 months ago
Indeed. It seems it's not going to make it into future releases.
@liqwis9598 7 months ago
One doubt: I have Ollama, which will host an AI model, let's say Mistral or Meta's Llama. Now I am connecting this to my Spring Boot app, and let's say I have a PDF about my product. Can you show me how a RAG implementation is done here, so that when an API request is sent, it gives me the data about the PDF?