Can't believe your channel is this underrated, you're making really good content, keep it up!
@CooperCodes 1 year ago
I can't believe I missed this! Thank you so much for your support it means a lot :)
@Anton_Sh. 9 months ago
What is the source for this cool 3D embeddings viz at 1:04?
@munkuo5 10 months ago
What is the font you are using? Love it.
@Leo_Oxford 1 month ago
Loved the concise explanation! Thank you
@Pritex2121 10 months ago
What's the tool used for the schemas?
@willyjauregui6541 5 months ago
Thanks for the explanation. When storing embeddings, how does the system determine which phrases or words are similar to each other? Does it assign weights according to some previous training or knowledge? Also, in the vector DB, do we have the text associated with the embedding, or is it just the arrays? If it's just the arrays, does the system need to convert them back to text when retrieving the data?
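On the second part of this question: vector stores normally keep the original text (or a document ID) right next to each embedding, so nothing has to be converted back from numbers at retrieval time. A minimal toy in-memory sketch of that pattern (no real vector DB library; the three-number "embeddings" are made up, real ones come from a trained model and have hundreds of dimensions):

```python
import math

# Toy "vector DB": each entry keeps the original text next to its embedding.
store = []

def add(text, embedding):
    store.append({"text": text, "embedding": embedding})

def cosine(a, b):
    # Cosine similarity: higher means the two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_embedding, k=1):
    # Rank stored entries by similarity and return their original text.
    ranked = sorted(store, key=lambda e: cosine(e["embedding"], query_embedding),
                    reverse=True)
    return [e["text"] for e in ranked[:k]]

# Pretend these vectors came from an embedding model.
add("cats are small pets", [0.9, 0.1, 0.0])
add("stock prices fell today", [0.0, 0.2, 0.9])

print(search([0.8, 0.2, 0.1]))  # the nearest entry's text comes back directly
```

The "which things are similar" part is decided entirely by the embedding model's training, not by the database: the DB only measures distances between the vectors the model hands it.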
@itskittyme 6 months ago
2:50 I don't understand how text is grouped. Who or what decides they're grouped by the fact that they're athletes and not by their nationality? And why is everything that says "Cooper" grouped together, but not grouped with all YouTubers, humans, or programmers?
@ImNotActuallyChristian 4 months ago
It's a bit of a simplified model. What's not really shown here is that the text isn't embedded in 3 dimensions but in thousands of dimensions. Some concepts may be close to each other in some dimensions but far apart in others. Even this is a very simplified mental model.
@itskittyme 4 months ago
@@ImNotActuallyChristian So are we searching in all dimensions at the same time?
@MageDigest-c1z 21 hours ago
It's the LLM's embedding technique that does all the magic in the background: it stores text in an n-dimensional vector space based on the semantic nature of the word/sentence/document.
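On the "all dimensions at the same time" question above: yes, a single similarity formula weighs every dimension together in one pass. A sketch with made-up random vectors standing in for model embeddings (1536 dimensions is just an illustrative size; real models vary):

```python
import math
import random

random.seed(0)
DIMS = 1536  # illustrative embedding size; real models vary

# Two made-up high-dimensional vectors standing in for embeddings.
a = [random.uniform(-1, 1) for _ in range(DIMS)]
b = [random.uniform(-1, 1) for _ in range(DIMS)]

# Cosine similarity: one sum over every dimension at once, so "searching
# in all dimensions at the same time" is literally just this formula.
dot = sum(x * y for x, y in zip(a, b))
sim = dot / (math.sqrt(sum(x * x for x in a)) *
             math.sqrt(sum(x * x for x in b)))
print(round(sim, 4))  # near 0 for unrelated random vectors
```

Vectors that point in similar directions score near 1, unrelated ones near 0, which is why no single dimension (athletes vs. nationality) dominates: every dimension contributes to the same score.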
@russelnormandia2876 1 year ago
thanks for the concise explanation
@Username23134 10 months ago
Nice video!
@parkerrex 1 year ago
good explainer
@mindfulnessrock 2 months ago
I still don't quite get what those embeddings are. Are they some kind of base64 encoding of data to provide context for AI models, or just a reference to my data stored in the remote/local AI model's memory?
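Neither, actually: an embedding is a plain array of floats that an embedding model computes from your text; it is not an encoding you can decode back, and not a pointer into the model's memory. A deliberately crude stand-in for a real model (a bag-of-words count over a tiny hypothetical vocabulary) shows the shape of the data:

```python
# Hypothetical toy vocabulary; real models use learned dense features,
# not word counts, but the output shape is the same: a list of floats.
VOCAB = ["cat", "dog", "stock", "market"]

def toy_embed(text):
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

vec = toy_embed("the cat sat near another cat")
print(vec)  # [2.0, 0.0, 0.0, 0.0] -- just numbers, nothing encoded or referenced
```

Because it's just numbers, the original text is lost unless the vector store saves it alongside the vector, which is what most setups do.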