This is what a teacher with deep knowledge of teaching can do. Thank you very much.
@tryit-wv8ui Жыл бұрын
Wow! I finally understood everything. I am an ML student and have already watched half of your videos. Thank you so much for sharing. Greetings from Jerusalem
@wilsvenleong96 Жыл бұрын
Man, your content is awesome. Please do not stop making these videos as well as code walkthroughs.
@DeepakTopwal-sl6bw8 ай бұрын
Learning becomes more interesting and fun when you have a teacher like Umar, who explains everything related to the topic so well that everyone feels like they know the complete algorithm. A big fan of your teaching methods, Umar. Thanks for making all these informative videos.
@abisheksatnur17704 ай бұрын
The 🐐. I wish I had you as my teacher in real life, not just through a screen
@AqgvP07r-hq3vu2 ай бұрын
Among all the videos about HNSW, this is the best. Others don't really understand it; they just pretend. This one, on the other hand, is honest and thorough.
@Rockermiriam8 ай бұрын
Amazing teacher! 50 minutes flew by :)
@faiyazahmad28695 ай бұрын
This is one of the best explanations I have ever seen on YouTube. Thank you.
@sarimhashmi97537 ай бұрын
Wow, thanks a lot. This is the best explanation of RAG I have found on KZbin
@nithinma86972 ай бұрын
This is the type of content we really want
@Ask0ldd12 күн бұрын
Thank you very much for all your hard work. Your channel is a goldmine. 👑
@ramsivarmakrishnan13997 ай бұрын
You are the best teacher of ML that I have experienced. Thanks for sharing the knowledge.
@yuliantonaserudin763010 ай бұрын
The best explanation of RAG
@mohittewari67965 ай бұрын
The way you've explained all these concepts has blown my mind. I won't be surprised to see your subscriber count skyrocket. Subscribed!!
@bevandenizclgn92826 ай бұрын
Best explanation I found on KZbin, thank you!
@redfield126 Жыл бұрын
Waited for such content for a while. You made my day. I think I got almost everything. So educational. Thank you Umar
@suman14san10 ай бұрын
What an exceptional explanation of HNSW algo ❤
@venkateshdesai31506 ай бұрын
Amazing!! I finally understood everything. Good job; all your videos show in-depth understanding.
@alexsguha10 ай бұрын
Impressively intuitive, something most explanations are not. Great video!
@mturja9 ай бұрын
The explanation of HNSW is excellent!
@JRB46310 ай бұрын
This was fantastic (as usual). Thanks for putting it together. It has helped my understanding no end.
@NikhilSharma-o2g Жыл бұрын
One of the best channels to learn and grow
@alirahmanian5127Ай бұрын
Cannot thank you enough! Awesome content!!
@alexandredamiao136510 ай бұрын
This was fantastic and I have learned a lot from this! Thanks a lot for putting this lesson together!
@kiranshenvi262611 ай бұрын
Awesome content, sir; it was the best explanation I have found so far!
@myfolder45619 ай бұрын
Thank you so much - this is a great video, with a great balance of detail and explanation. I have learned a ton and have saved it for future reference.
@vasoyarutvik289711 ай бұрын
Hello sir, I just want to say thanks for creating such good content for us. Love from India :)
@SureshKumarMaddala Жыл бұрын
Excellent video! 👏👏👏
@1tahirrauf Жыл бұрын
Thanks Umar. I look forward to your videos, as you explain the topics in an easy-to-understand way. I would request that you make a "BERT implementation from scratch" video.
@emptygirl296 Жыл бұрын
Hola, coming back with great content as usual.
@umarjamilai Жыл бұрын
Thanks 🤓😺
@tysun27392 ай бұрын
Very nice explanation. Many thanks!
@amblessedcoding Жыл бұрын
Wooo, you are the best I have ever seen.
@maitreyimandal891012 күн бұрын
Such great content!
@amazing-graceolutomilayo504110 ай бұрын
This was a wonderful explanation! I understood everything, and I didn't have to watch the Transformers or BERT videos (I actually know nothing about them, but I have dabbled with vector DBs). I have subbed, and I will definitely watch the Transformer and BERT videos. Thank you! ❤❤ Made a little donation too; this is my first time ever saying $Thanks$ on KZbin, haha
@goelnikhils Жыл бұрын
Amazing content and what a clear explanation. Please make more videos. Keep it up and this channel will grow like anything.
@melikanobakhtian6018 Жыл бұрын
Wow! You explained everything so well! Please make more videos like this.
@MihailLivitski Жыл бұрын
Thanks!
@sethcoast8 ай бұрын
This was such a great explanation. Thank you!
@trungquang15819 ай бұрын
Thank you so much for sharing. Looking forward to more content about NLP and LLMs.
@jeremyregamey495 Жыл бұрын
Just love your videos. So much detail, but extremely well put together.
@akramsalim9706 Жыл бұрын
Awesome paper. Please keep posting more videos like this.
@mdmusaddique_cse745810 ай бұрын
Amazing explanation!
@maximbobrin7074 Жыл бұрын
Man, keep it up! Love your content
@andybhat59885 ай бұрын
Super explanation. Thank you
@dantedt393111 ай бұрын
One of the best videos
@meetvardoriya2550 Жыл бұрын
Really amazing content!! Looking forward to more such content, Umar :)
@NeoMekhar Жыл бұрын
This video is really good, subscribed! You explained the topic super well. Thanks!
@nancyyou754811 ай бұрын
Thank you for the excellent content!
@fernandofariajunior10 ай бұрын
Thanks for making this video!
@IndianGamingMaharaja5 ай бұрын
The whole 48 minutes are worth watching.
@_seeker42310 ай бұрын
Excellent content!
@hientq3824 Жыл бұрын
awesome as usual! ty
@mustafacaml883310 ай бұрын
Great explanation! Thank you so much
@ashishgoyal4958 Жыл бұрын
Thanks for making these videos🎉
@LorenzoMontù10 ай бұрын
Amazing work, very clear explanation. Thank you!
@DatabaseAdministration Жыл бұрын
Glad I've subscribed to your channel. Please do more of these.
@oliva82827 ай бұрын
Best video ever!
@LiuCarl10 ай бұрын
simply impressive
@ShreyasSreedhar210 ай бұрын
This was super insightful, thank you very much!
@rajyadav2330 Жыл бұрын
Great content, keep it up.
@ChashiMahiulIslam-qh6ks10 ай бұрын
You are the BEST!
@RomanLi-y9c Жыл бұрын
Thank you, awesome video!
@sounishnath513 Жыл бұрын
I am so glad I am subscribed to you!
@bhanujinaidu8 ай бұрын
Good explanation, thanks
@DanielJimenez-yy8xk8 ай бұрын
awesome content
@Zayed.R9 ай бұрын
Very informative, thanks
@SanthoshKumar-dk8vs11 ай бұрын
Thanks for sharing, really great content 👏
@manyams520710 ай бұрын
Wow, wonderful explanation. Thanks!
@oliz114810 ай бұрын
so helpful! thx for sharing
@ahmedoumar374111 ай бұрын
Nice lecture, Thank you!
@Jc-jv3wj7 ай бұрын
Thank you very much for the detailed explanation of RAG with a vector database. I have one question: can you please explain how we design the skip list with embeddings? Basically, how do we decide which embedding goes to which level?
@soyedafaria467210 ай бұрын
Thank you so much. Such a nice explanation. 😀
@RudraPratapDhara Жыл бұрын
You are a legend.
@koiRitwikHai11 ай бұрын
At 44:00, the order of the linked list is incorrect, isn't it? It should be 1, 3, 5, 9.
@TheGreatMind5511 ай бұрын
I have the same doubt. It should have been sorted, as per the definition.
@HichamElKaissi-g4s Жыл бұрын
Thank you so much man..
@satviknaren96817 ай бұрын
Please bring some more content !
@jrgenolsen329010 ай бұрын
💪👍 Good introduction
@hassanjaved47309 ай бұрын
Awesome, I completely understood RAG thanks to you. Now I have a question: I am using the Llama2 model, and my main concern is that I give it a PDF as context so the user can ask questions about it, but this approach takes time during inference. After watching your video, what I understand is that with a RAG pipeline it is possible to store the uploaded PDF in a vector DB and then query it from there. Am I thinking about this correctly, and is it possible? Thanks.
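As an illustration of the flow described in the comment above, here is a minimal sketch of the "index once, retrieve per question" idea. The `embed()` and `generate()` functions are placeholders (not any specific library's API), and the chunking is simplified.

```python
# Placeholder RAG flow: index the PDF chunks once, retrieve per question.
# embed() and generate() are stand-ins, not any specific library's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real sentence-embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# 1) Offline: split the PDF text into chunks and embed them once.
chunks = ["chunk 1 of the PDF ...", "chunk 2 of the PDF ...", "chunk 3 of the PDF ..."]
index = np.stack([embed(c) for c in chunks])          # shape: (num_chunks, dim)

# 2) Online: embed the question and retrieve the top-k most similar chunks.
def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = index @ q                                 # cosine similarity (unit vectors)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# 3) Build a prompt with only the retrieved context and send it to the LLM.
question = "What does the document say about X?"
context = "\n".join(retrieve(question))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
# answer = generate(prompt)  # generate() would be your Llama2 call
```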
@RomuloBrito-b2z6 ай бұрын
When the algorithm runs to keep the k best scores, does it use a pop operation on the list to remove the nodes that have already been visited?
@李洋-i4j Жыл бұрын
Wow, I saw the Chinese knotting on your wall ~
@12.85111 ай бұрын
Great video!! Shouldn't 5 come after 3 in the skip list?
@Vignesh-ho2dn8 ай бұрын
How would you find the number 3 at 44:01? The algorithm you described will go to 5, and since 5 is greater than 3, it won't go further. Am I right?
@jeromeeusebius6 ай бұрын
I think he is mostly explaining how the skip-list data structure works. In general, with HNSW you are not looking for a particular value (the values are cosine similarity scores); rather, you traverse the graph, moving to neighbors with better similarity scores until you reach a local minimum, and that node is returned. You then repeat the search from another entry point.
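To make the reply above concrete, here is a toy sketch of the greedy "move to the closer neighbor until you hit a local minimum" search, on a hand-made graph with Euclidean distance. It illustrates only the single-layer idea, not the full multi-layer HNSW algorithm.

```python
# Toy version of the greedy search described above: move to whichever neighbor is
# closer to the query, stop at a local minimum. Euclidean distance on 2-D points.
import numpy as np

points = {                                   # node id -> vector
    0: np.array([0.0, 0.0]),
    1: np.array([1.0, 0.5]),
    2: np.array([2.0, 2.0]),
    3: np.array([3.0, 1.0]),
    4: np.array([4.0, 4.0]),
}
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}  # adjacency lists

def greedy_search(query: np.ndarray, entry: int) -> int:
    current = entry
    while True:
        candidates = graph[current] + [current]
        best = min(candidates, key=lambda n: np.linalg.norm(points[n] - query))
        if best == current:                  # no neighbor is closer: local minimum
            return current
        current = best

query = np.array([3.2, 1.1])
# Restarting from different entry points (as described above) and keeping the best
# results is how the search escapes bad local minima.
print({greedy_search(query, entry) for entry in (0, 4)})
```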
@mohamed_akram110 ай бұрын
Thanks
@tomargentin51988 ай бұрын
Hey, big thanks for this awesome and super informative video! I'm really intrigued by the Siamese architecture and its connection to RAG. Could someone explain that a bit more? Am I right in saying it's used for top-K retrieval? Meaning, we create the database with the output embeddings, and then use a trained Siamese architecture to find the top-K most relevant chunks by computing similarities? Is it necessary to use this approach in every framework, or can simply computing similarity between the embeddings sometimes work effectively?
@jeromeeusebius6 ай бұрын
The Siamese network he talked about just provides the details of the Sentence-BERT model that is used for encoding. The connection to RAG is that the Sentence-BERT model encodes both the query and the document chunks fed into the DB. Here, Umar is providing additional information about how Sentence-BERT was developed and why it is better than plain BERT; I think it's important to understand the distinction. The top-K retrieval is done by the vector search. With HNSW, for example, the query is compared with a random entry point, and then you proceed to the neighbors of each vector until you reach a local minimum. You save this point, repeat the process a few times (> k), and keep the top-K results sorted by similarity. So the embeddings from S-BERT are used, but not directly: the retrieval of the top-K embeddings happens at the vector DB search level, and by entering the HNSW graph at different points you get different candidates, from which you take the top-K. I hope this is clear.
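A minimal sketch of the split described in the reply above: a Sentence-BERT-style model produces the embeddings, and the top-K selection is a separate search step (a brute-force cosine search here, standing in for the vector DB / HNSW index). The model name is only an example choice.

```python
# Sketch of the split described above: an S-BERT-style model produces the embeddings,
# and top-K selection is a separate search step (brute force instead of HNSW).
# 'all-MiniLM-L6-v2' is just an example model choice.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Rome is the capital of Italy.",
    "The Colosseum is an ancient amphitheatre.",
    "Pandas are native to China.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)          # (n, dim)

query_vec = model.encode("What is the capital of Italy?", normalize_embeddings=True)
scores = chunk_vecs @ query_vec                                       # cosine similarities
for i in np.argsort(scores)[::-1][:2]:                                # top-2 chunks
    print(f"{scores[i]:.3f}  {chunks[i]}")
```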
@rkbshiva Жыл бұрын
Umar, great content! Around 25:00, you say that we have a target cosine similarity. How is that target cosine similarity calculated? There is no purely mathematical way to compute the similarity between two sentences; all we can do is take a subjective guess. Can you please explain in detail how this works?
@umarjamilai Жыл бұрын
When you train the model, you have a dataset that maps pairs of sentences to a score (chosen by a human being, for example on a scale from 1 to 10). This score can be used as the target for the cosine similarity. If you look at papers in this field, you'll see there are many sophisticated methods, but the training data is always labeled by a human being.
@rkbshiva Жыл бұрын
@@umarjamilai Understood! Thanks very much for the prompt response. It would be great if we could find a bias-free way to do this, as scoring from 1 to 10, especially when done by multiple people and at scale, could become biased.
@jeromeeusebius6 ай бұрын
@@rkbshiva The scoring is not done by random people. Usually specialists, e.g. language specialists, are employed to build the dataset, which reduces the noise in the labels (you'd still get some bias, but it should be small). Google does this for search quality: they have a standard search quality evaluation document that is provided to the evaluators, who use it as a guide for scoring the different documents returned for a given query.
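A minimal sketch of the training setup discussed in this thread, using the sentence-transformers library: human-labeled sentence pairs, with the score rescaled to [0, 1] and used as the target cosine similarity. The example pairs, scores, and model name are made up for illustration.

```python
# Sketch: human-labeled sentence pairs become cosine-similarity targets.
# Scores (here on a 0-5 scale) are rescaled to [0, 1]; pairs, scores and the
# model name are made-up examples.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

train_examples = [
    InputExample(texts=["A man is playing a guitar.",
                        "Someone plays an instrument."], label=4.0 / 5.0),
    InputExample(texts=["A man is playing a guitar.",
                        "A chef is cooking pasta."], label=0.5 / 5.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)   # MSE between cos(u, v) and the label

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```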
@qicao776910 ай бұрын
Cool video about RAG! You could also upload it to Bilibili; since you live in China, you should know it. :D
@rvons2 Жыл бұрын
Are we storing the sentence embeddings together with the original sentences they were created from? If not, how do we map them back (from the top-k most similar stored vectors) to the text they originated from, given that the sentence embedding lost some information when pooling was done?
@umarjamilai Жыл бұрын
Yes, the vector database stores the embedding and the original text. Sometimes it does not store the original text but a reference to it (for example, instead of storing the text of a tweet, you may store the ID of the tweet) and then retrieves the original content using the reference.
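A minimal sketch of the storage pattern described in the reply above: each vector is kept next to a payload, which is either the original text or a reference (e.g. a tweet ID) that is resolved after retrieval. Plain NumPy stands in for a real vector database.

```python
# Sketch of the pattern above: each stored vector keeps a payload, either the
# original text or a reference (e.g. a tweet ID) resolved after retrieval.
import numpy as np

records: list[tuple[np.ndarray, dict]] = []

def add(vec: np.ndarray, payload: dict) -> None:
    records.append((vec / np.linalg.norm(vec), payload))

def search(query: np.ndarray, k: int = 3) -> list[dict]:
    q = query / np.linalg.norm(query)
    best = sorted(records, key=lambda rec: -float(rec[0] @ q))[:k]
    return [payload for _, payload in best]

rng = np.random.default_rng(0)
add(rng.normal(size=8), {"text": "Rome is the capital of Italy."})
add(rng.normal(size=8), {"tweet_id": "1234567890"})   # reference only; fetch the text later

for hit in search(rng.normal(size=8), k=2):
    print(hit)   # map each hit back to its text, or fetch it via the stored reference
```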
@parapadirapa11 ай бұрын
Amazing presentation! I have a couple of questions, though. What chunk size should be used with Ada-002? Does that depend on the embedding model, or is it about optimizing the granularity of the queryable embedded vectors? And another thing: am I correct in assuming that, in order to capture as much context as possible, I should embed a 'tree-structured' object (like a complex object in C#, with multiple nested object properties of other types), sectioned from the most granular level all the way up to the full object (first the children, then the parents, then the grandparents)?
@prashantharipirala76522 ай бұрын
Can you explain how the hierarchical structure supporting HNSW is created?
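For reference: in the original HNSW paper, each inserted vector is assigned a maximum layer at random, with an exponentially decaying probability, so higher layers contain exponentially fewer nodes (like the upper lanes of a skip list); the node is then linked into every layer from 0 up to that level. A minimal sketch of the level assignment (the constant mL is a tunable parameter):

```python
# Level assignment from the HNSW paper: l = floor(-ln(U) * mL), with U ~ Uniform(0, 1).
# Higher layers get exponentially fewer nodes, like the upper lanes of a skip list;
# each vector is then inserted into every layer from 0 up to its sampled level.
import math
import random
from collections import Counter

def random_level(m_L: float = 1.0) -> int:
    u = 1.0 - random.random()          # in (0, 1], avoids log(0)
    return int(-math.log(u) * m_L)

levels = Counter(random_level() for _ in range(10_000))
for level in sorted(levels):
    print(level, levels[level])        # node counts decay roughly like e^-level
```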
@chhabiacharya30711 ай бұрын
Thank YOU :)
@Tiger-Tippu Жыл бұрын
Hi Umar, does RAG also have the context window limitation that prompt engineering techniques have?
@adatalearner86838 ай бұрын
Why is the context window size limited? Is it because these models are based on transformers, and for a given transformer architecture, detecting long-distance semantic relationships is bounded by the number of words / context length?
@christopherhornle4513 Жыл бұрын
Great video, keep up the good work! :) Around 19:25 you say that the embedding for "capital" is updated during backprop. Isn't that wrong for the shown example / training run where "capital" is masked? I always thought only the embeddings associated with non-masked tokens could be updated.
@umarjamilai Жыл бұрын
You're right! First of all, ALL embedding vectors of the 14 tokens are updated (including the embedding associated with the MASK token). What actually happens is that the model updates the embeddings of all the surrounding words in such a way that it can rebuild the missing word next time. Plus, the model is forced to rely (mostly) on the embeddings of the context words to predict the masked token, since any word may be masked, so there's not much useful information in the embedding of the MASK token itself. It's easy to get confused when you make long videos like mine 😬😬 Thanks for pointing it out!
@christopherhornle4513 Жыл бұрын
I see, didn't know that the mask token is also updated! Thank you for the quick response. You really are a remarkable person. Keep going!
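A minimal sketch of the point made in the reply above: because of self-attention, the loss computed at the masked position sends gradients back to the embeddings of all input tokens, including the [MASK] row. Toy vocabulary and dimensions, untrained weights.

```python
# Sketch: with self-attention, the loss at the masked position sends gradients
# back to the embeddings of ALL input tokens, including the [MASK] row.
import torch
import torch.nn as nn

vocab_size, d_model, mask_id = 20, 32, 0
emb = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True), num_layers=1)
lm_head = nn.Linear(d_model, vocab_size)

tokens = torch.tensor([[5, 7, mask_id, 9]])    # position 2 holds the [MASK] token
target = torch.tensor([11])                    # true id of the masked word
masked_pos = 2

hidden = encoder(emb(tokens))                  # attention mixes all positions together
loss = nn.functional.cross_entropy(lm_head(hidden[:, masked_pos]), target)
loss.backward()

grad_norms = emb.weight.grad.norm(dim=1)
for tok in tokens[0].tolist():
    print(tok, float(grad_norms[tok]))         # every input token's embedding row gets a gradient
```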
@YouHaveToLoveMe3 ай бұрын
Masterpiece :)
@ltbd7811 ай бұрын
Legend
@parichehresmailian9 күн бұрын
How can I start a deep dive into RAG? I studied software engineering at university for my BSc, and these days I'm studying E-Commerce for my MSc, which is completely intertwined with AI, and I really want to continue into RAG 🥺
@汪茶水 Жыл бұрын
keep it up!
@peabrane806718 күн бұрын
奥利奥 (Oreo) is so cute!
@UncleDavid Жыл бұрын
Salam Mr. Jamil, I was wondering if it's possible to use the BERT model provided by Apple in Core ML for sentiment analysis when talking to Siri, and then have a small GPT-2 model, fine-tuned for conversational intelligence, give a response that Siri reads out.
@ravindarmadishetty7364 ай бұрын
This is a really awesome session. Of course it is lengthy, but nice. There seems to be a problem with the Git repo: I'm unable to access the Python and PDF files.