The most productive 14 minutes of my day, watching and learning from this video :)
@codingcrashcourses8533 · 8 months ago
Great! Thanks for your comment.
@sivi3883 · 5 months ago
Best 15 mins of my day! You explained every single component in the code clearly and crisply! Excited to check out your other videos. Thanks a bunch!
@fuba44 · 8 months ago
UniqueList = list(set(ListWithDuplicates)) would replace those nested for loops. Love your content!
@codingcrashcourses8533 · 8 months ago
That probably doesn't work for complex objects, though ;)
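A minimal sketch of a set-free dedup that does work for complex objects, assuming the duplicates are LangChain Document objects and that two chunks count as duplicates when their page_content matches:

from langchain_core.documents import Document

def deduplicate_documents(docs: list[Document]) -> list[Document]:
    # Track page_content strings already seen; Document itself is not
    # hashable, so list(set(docs)) raises a TypeError.
    seen: set[str] = set()
    unique_docs: list[Document] = []
    for doc in docs:
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            unique_docs.append(doc)
    return unique_docs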
@philgalebach3294 · 25 days ago
Really, really good video. Best I've seen.
@codingcrashcourses8533 · 25 days ago
Thank you, man :)
@codewithbrogs3809 · 7 days ago
This is GREAT!!!
@henkhbit5748 · 9 months ago
Very informative. 👍 Love the UMAP visualization to see the query and the embeddings.
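For anyone curious, a minimal sketch of that kind of plot with umap-learn and matplotlib; doc_embeddings and query_embedding are assumed NumPy arrays from whatever embedding model you use:

import numpy as np
import umap
import matplotlib.pyplot as plt

# Fit the 2D projection on the document embeddings, then map the query
# through the same projection so both live in one plot.
reducer = umap.UMAP(random_state=42).fit(doc_embeddings)
docs_2d = reducer.transform(doc_embeddings)
query_2d = reducer.transform(np.array([query_embedding]))

plt.scatter(docs_2d[:, 0], docs_2d[:, 1], s=10, label="chunks")
plt.scatter(query_2d[:, 0], query_2d[:, 1], marker="x", color="red", s=100, label="query")
plt.legend()
plt.show()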
@roguesecurity · 8 months ago
This channel is a gem 💎
@Pure_Science_and_Technology · 9 months ago
Thanks for the video. Perfect timing… I need this for tomorrow.
@samyio4256 · 6 months ago
Your vids are insanely good. I doubt there is a better AI programming YouTuber.
@codingcrashcourses8533 · 6 months ago
Thank you so much :)
@kenchang3456 · 6 months ago
This video is terrific, I'll give it a try!
@codingcrashcourses8533 · 6 months ago
Thank you!
@felipecordeiro8531 · 4 months ago
I didn't know the UMAP library; it's very interesting. Good explanation of advanced RAG techniques. Success to you!
@codingcrashcourses8533 · 4 months ago
Thank you :)
@vegansinnigeunterhaltung · 21 days ago
In the images you complain that the similarity search returns dots too far away from the red cross. The problem, IMHO, is the UMAP projection: maybe it would look different had you calculated the UMAP projection with the queries included. The projection down from 1024 components to two might lose some important details, so have you manually inspected the allegedly incorrect similarity search results?
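A sketch of the difference being suggested, with the same assumed arrays as above: fit the projection on documents and queries jointly instead of mapping queries through a document-only fit:

import numpy as np
import umap

# Document-only fit: queries are mapped through a projection that never saw them.
reducer = umap.UMAP(random_state=42).fit(doc_embeddings)
queries_2d = reducer.transform(query_embeddings)

# Joint fit (the comment's suggestion): the projection is computed with
# the queries included, then split back apart for plotting.
combined = np.vstack([doc_embeddings, query_embeddings])
joint = umap.UMAP(random_state=42).fit(combined).embedding_
docs_2d_joint = joint[: len(doc_embeddings)]
queries_2d_joint = joint[len(doc_embeddings):]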
@micbab-vg2mu · 9 months ago
Thank you for the great video :)
@codingcrashcourses8533 · 9 months ago
Thanks for your comment. Glad you enjoyed it :)
@andreypetrunin5702 · 9 months ago
Thank you!!
@codingcrashcourses8533 · 9 months ago
You're welcome, Andrej :)
@andreypetrunin5702 · 8 months ago
@@codingcrashcourses8533 I cannot run the code in VSCode. When running the import

from langchain_community.document_loaders import TextLoader, DirectoryLoader

I get the error:

File c:\Python311\Lib\enum.py:784, in EnumType.__getattr__(cls, name)
    782         return cls._member_map_[name]
    783     except KeyError:
--> 784         raise AttributeError(name) from None

AttributeError: COBOL

I have installed the langchain-community library.
@austinpatrick1871 · 9 months ago
Awesome video. So glad I found this channel. Long-shot question: after testing several chunk/overlap combinations, my experimentation indicates an optimal chunk_size=1000 and overlap=200. My RAG contains about 10 medical textbooks (~50,000 pages). However, in every video I see on RAG, nobody uses chunks anywhere near that large. Does it seem improbable that my ideal chunk size is 1,000, or is there likely another variable at play?
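For reference, a sketch of the setup being described, assuming LangChain's RecursiveCharacterTextSplitter and already-loaded documents:

from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk, the commenter's optimum
    chunk_overlap=200,  # characters shared between adjacent chunks
)
chunks = splitter.split_documents(documents)  # `documents` assumed loaded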
@sivi3883 · 5 months ago
Did you find anything? At least in my experience so far, with a fixed-chunk methodology (whatever the chunk size or overlap), it's easier to do a POC but hard to reach production-grade quality. Did you try semantic chunking, or chunking based on sections/headings and then capturing the relationships between chunks via a graph database?
@Reality_Check_1984 · 2 months ago
This isn't backed by any data that I found, but through brute-force trial and error I found that I am served better by different chunk sizes for different document types. Something like sentiment is fine at rather large chunk sizes. Something like a spec sheet I will actually index multiple times with different chunk sizes. I am not saying this is the way, but I certainly found an improvement in finer details and critical information when I do that. My sweet spot has been 1k/1.5k/2k depending on the document type. I am sure less works, but I don't need to go smaller with most context windows, and the greater context of the larger chunk does have a quality aspect. You have to tame that idea by not going too large when you need more than a general pointing direction from your chunk; otherwise you start to get sentiment and not the finer details.
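A sketch of that multi-granularity indexing, with splitter and vector store names assumed:

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split the same documents at several chunk sizes and store every variant
# in one vector store, tagged so results can be filtered by granularity.
all_chunks = []
for size in (1000, 1500, 2000):
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=size, chunk_overlap=size // 5
    )
    for chunk in splitter.split_documents(documents):  # `documents` assumed
        chunk.metadata["chunk_size"] = size
        all_chunks.append(chunk)

vectorstore.add_documents(all_chunks)  # `vectorstore` assumed, e.g. Chroma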
@Reality_Check_1984 · 2 months ago
@@sivi3883 How much latency do the added layers add? Are you running locally or making API calls?
@maxlgemeinderat9202 · 9 months ago
Thanks, always nice videos! Do you have a favorite German cross-encoder?
@codingcrashcourses8533 · 9 months ago
No, I don't! I haven't worked that much with cross-encoders, to be honest.
@vinaychitturi5183 · 6 months ago
Thanks for the video. But while generating queries using llm_chain.invoke(query), I'm facing an exception related to the output parser: OutputParserException: Invalid json output.
@vinaychitturi5183 · 6 months ago
I resolved it temporarily by removing the parser altogether and formatting the output in the next step. Thank you again for the video. It is helpful.
@codingcrashcourses8533 · 6 months ago
Weird. Normally I never have problems with that parser.
@StnImg · 9 months ago
Can you please make a video on retrieving data from SQL using SQL agents and Runnables with LCEL? If that's not possible here, maybe you could add it to the Udemy course. It would help a lot.
@codingcrashcourses8533 · 9 months ago
I would rather do it here than in my Udemy course, since it's quite specific. Give me some time to do something like that, please ;-)
@Sonu007OP · 9 months ago
Looking for a similar video with LangChain templates. A production-level SQL + Ollama app. Greatly appreciated 🙏❤
@codingcrashcourses8533 · 9 months ago
@@Sonu007OP I haven't worked with Ollama yet; I'm afraid my 7-year-old computer won't get it running ^^
@codingcrashcourses8533 · 7 months ago
Videos about this topic will be released on 03/25 and 03/28 :)
@verybigwoods · 6 months ago
How much compute (specifically GPU) is required to run this cross-encoder model?
@codingcrashcourses8533 · 6 months ago
It also works on a CPU.
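A minimal CPU-friendly reranking sketch with sentence-transformers' CrossEncoder; the model name is just a common choice, not necessarily the one used in the video, and `query`/`retrieved_docs` are assumed:

from sentence_transformers import CrossEncoder

# Score each (query, document) pair and sort documents by relevance.
# Runs fine on CPU, just more slowly than on a GPU.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, doc.page_content) for doc in retrieved_docs]
scores = cross_encoder.predict(pairs)
reranked = [doc for _, doc in sorted(zip(scores, retrieved_docs),
                                     key=lambda x: x[0], reverse=True)]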
@sumangautam4016 · 4 months ago
LLMChain() is deprecated, and the output_parser in the examples also causes the json output error. It would be nice if you could update the GitHub code. Thank you!

If anyone is having issues with the json output, here is a fix:

from langchain_core.output_parsers import BaseOutputParser

class LineListOutputParser(BaseOutputParser[list[str]]):
    """Parse the LLM output into a list of query lines."""

    def parse(self, text: str) -> list[str]:
        # Split on newlines and drop empty lines
        lines = text.strip().split("\n")
        return [line for line in lines if line]
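And since LLMChain() is deprecated, the surrounding chain can be rebuilt with LCEL composition; a sketch, with `prompt` and `llm` assumed to be the ones from the video:

# Pipe prompt -> model -> the line parser above instead of LLMChain.
llm_chain = prompt | llm | LineListOutputParser()
queries = llm_chain.invoke({"question": "What is advanced RAG?"})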
@erenbagc9164 · 8 months ago
What's the best way to evaluate this RAG?
@codingcrashcourses8533 · 8 months ago
Difficult topic. Do you mean performance or output quality?
@erenbagc9164 · 8 months ago
@@codingcrashcourses8533 Well, it should be as advanced as possible, since I've got an advanced RAG. I've seen many cases where people used RAGAS, TruLens, etc. I'm indecisive.
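If it helps, a minimal RAGAS sketch; the API shown matches older ragas releases (around 0.1), and column names or metrics may differ in newer ones. The sample row is purely illustrative:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# One illustrative row: question, generated answer, retrieved contexts,
# and a reference answer for the reference-based metrics.
data = Dataset.from_dict({
    "question": ["What is RAG?"],
    "answer": ["RAG combines retrieval with generation..."],
    "contexts": [["Retrieval-augmented generation (RAG) augments an LLM..."]],
    "ground_truth": ["RAG augments an LLM with retrieved context."],
})
result = evaluate(data, metrics=[faithfulness, answer_relevancy, context_precision])
print(result)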