The most important step here is left to the end. You can only use RAG with a transparent or locally built LLM.
@amritbro · 7 months ago
Very simple and clear explanation. Cheers to IBM.
@HustlerCoder · 27 days ago
Corrections: 1. Vector databases use mathematical representations (vectors), not arrays; array-like similarity is an oversimplification. 2. Transparency reduces bias but cannot entirely eliminate it without robust training data. 3. Embeddings are typically static; new embeddings are created for updated data.
@thedevmachine · 20 days ago
Re: 3, I think generating a new embedding is easier than updating the old one. How would you update the old one by similarity score? What if a word was changed in an earlier document? Also, scanning the existing embeddings and matching them against the new one costs more time and power.
@paulaichniowski968 · 7 months ago
Shawn & Luv!!!! Awesome job!!!!
@emteiks · 7 months ago
There is a good point about AI hallucinations, and the video unfortunately does not address it. Data governance does not solve this issue: we can still have a scenario where the input is valid but the output generated by the AI is garbage.
@vintastic_ · 7 months ago
What's the solution?
@frackinfamous6126 · 7 months ago
@vintastic_ You have to make sure the relevant data is going to the model. Good info into the database is only half the battle: semantic chunking, chunk size, types of search, type of vector database used. For example, pgvector is a Postgres extension and is usually nowhere near as good at retrieval as something like Pinecone.
@frackinfamous6126 · 7 months ago
The prompt used can also tremendously affect the model. You have to put it in the right context and use industry-specific terms when prompting. Even a genius needs context or a bit of time to think. No matter how good the model, you have to know a bit about the specific industry to obtain great results. It's like explaining a noise to your mechanic, or telling them you have a misfire on cylinder 1.
@miraculixxs · 6 months ago
@@vintastic_ human expert review for every response
@joeillingworth1141 · 11 days ago
Have it use Python code to extract the info once it finds it, then have it check that against what it actually tells you, and have it provide receipts for how it determined what to look at (keywords, etc.).
@myidanny · 3 months ago
Thought Shawn was very flirty until I realised he didn't say "love" but "Luv"
@saminhaque2233 · 3 months ago
Same Here XD
@aaryaxz · 7 months ago
Hey! If I ask a RAG-based language model, "Tell me the features of the iPhone 17," what will it tell me? Will it say it doesn't know or will it hallucinate? I understand that once the iPhone 17 is released, the database will be updated to provide the correct information. But what happens if I ask about it before its release?
@EduardoSantAnna · 6 months ago
I can see two scenarios here: if it is indeed RAG-based, then you have provided info about the yet-to-be-released iPhone 17, so the LLM will respond based on that. If you don't have it in your additional documents/vector DBs, then I'd recommend always adding something along the lines of "only answer with facts you have access to" to your system prompt, and setting Temperature to a low number. (Temperature is a parameter in LLMs that defines how "creative" the model can be.) Great question, and it highlights the importance of having experts in GenAI guiding enterprises on how to implement this in a way that suits their use cases.
@ChanceTEK · 2 days ago
Excellent! Thank you. 🔥
@ShivamKumar-kx6rb · 1 month ago
Great quick session, thanks!!
@LorenzoMarkovian · 5 months ago
Clear and simple, thank you guys and thank you IBM
@akshitgoel1183 · 5 months ago
Hi, I have a few questions; please find time to answer.
0. Are we filtering the data in the vector DB? (If yes:)
1. How are we filtering the relevant data from our vector DB to augment our prompt for the LLM?
1.1 Who is doing this process: another LLM, our own code, or some different tool?
1.2 Are we feeding the complete data as a whole to the LLM?
1.3 If we are filtering the vector data using a rule-based mechanism, then what is the use case of the LLM? How is the power of the LLM being drawn if we are the ones deciding what to feed to it as relevant data?
@satish1012 · 4 months ago
Hi, this is my understanding:
1. We store all our relevant data (documents, images, etc.) in a vector DB as embeddings.
2. When the user searches, the query does not hit the LLM directly; it is converted into embeddings and the matching plain text is returned.
3. We then send that text to the LLM for summarization.
4. The LLM returns the summarized text to the user.
Storing data as embeddings: this is a valid approach; embeddings represent the data in a high-dimensional vector space, capturing its semantic meaning. Ensure the embedding model suits your data type; for images you might use a different model (e.g., CLIP) than for text.
Searching with embeddings: converting the query into embeddings and comparing them with those stored in the vector DB enables semantic search, which is more effective than keyword search. Ensure the conversion and similarity calculations (e.g., cosine similarity) are implemented correctly and that the returned text is actually relevant to the query.
Summarization by the LLM: sending the retrieved plain text to an LLM for summarization is appropriate; LLMs are designed to generate concise summaries from input text. Ensure the LLM is configured for summarization and given clear prompts.
Returning the summarized text: returning the LLM's summary to the user is the final step and standard practice. Validate that the summary is accurate and meaningful.
@fanzhang8823 · 5 months ago
1. The vector DB may or may not store information relevant to the question. 2. The LLM may already have the information, or more accurate information, for the question. So RAG may not always be helpful for GenAI applications.
@angelinagokhale9309 · 1 month ago
Very well explained! Thank you so much.
@LinkingL-x4v · 3 months ago
Clean database, stable generator and clever retriever.
@5uryaprakashPi · 6 months ago
So with a RAG approach, can I say that we can update the original vector DB with our own processed data?
@paoloalberti8792 · 2 months ago
IMO, data governance management seems to be the same as a correct database data-input workflow. The prompt is another way to query the database: the LLM avoids the use of a query language to interrogate the DB, so ordinary people can query it. It appears to be a good idea to fine-tune a large LLM, but this is not fine-tuning; it is more similar to data embedding. In RAG, how does the vector database interact with the LLM? Does the vector database grow the LLM's latent space? Is there a possibility that the LLM's parameters overlap the vector parameters, making a mix of knowledge?
@SocialSketch · 2 months ago
Is there a pane of glass in front of them, or is this some other technology?
@thedesignbro · 2 months ago
Yes, it's glass.
@yinlan2672 · 5 months ago
How is this video made? Is the screen used as the board?
@bangyor5949 · 6 months ago
Do LLMs store our sensitive data when using RAG?
@satish1012 · 4 months ago
No, the vector DB stores all the sensitive and confidential data. Once retrieved, that data is sent to the LLM to be summarized, because the vector DB would have returned only the data matching the query string.
@research2you-su9om · 7 months ago
Thanks for the straightforward description of RAG.
@ColmFearon-dc8dr · 6 months ago
Do these guys write on the whiteboard backwards or how does that work?
@JShaker · 5 months ago
Yes they learned to write backwards for this video because it's cheaper to do that than to run the algorithm to flip a video horizontally
@mackkaputo8989 · 5 months ago
@@JShaker 😂😂😂
@stephaniakossman2923 · 4 months ago
Very clear explanation, thanks! How do you manage to avoid using "blackbox" models?
@complexity5545 · 1 month ago
I suggest prioritizing a terminal rather than a drawing board.
@hathanh4650 · 6 months ago
Thank you for sharing! Very helpful and easy to follow. Just one question: is there any way we can test or reinforcement-train the model to make sure the outputs are appropriate?
@zehaankhan1 · 2 months ago
The guy on the left wanted to laugh out loud at that sketch 😂
@mictow · 4 months ago
Did you need to learn to write backwards for these videos? Or is there a product that helps you with this nice board?
@nguyentran7068 · 4 months ago
They record the video and then mirror the image. They simply write on the clear board.
@vvishnuk · 7 months ago
Neat and detailed explanation.
@maxcontini348 · 3 months ago
Nice explanation. Well done boys 😁
@gatsby66 · 7 months ago
Nobody's ever been fired for buying RAGs from IBM.
@miraculixxs · 6 months ago
... yet
@dak2009 · 3 months ago
Love the 1980s meme.
@AgentBangla · 7 months ago
Excellent video, love it
@CumaliTurkmenoglu-zn7hp · 5 months ago
Great explanation. Thank you
@commentatorxyz5514 · 1 month ago
The unhappiness in their eyes and their frowns tell of the pain of working in artificial-intelligence jobs.
@DiegoSarasua-jn2wh · 7 months ago
Thanks guys, very clear!
@Roy-h2q · 7 months ago
Interesting, thanks both.
@THEaiGAI · 7 months ago
Awesome video
@yojackleon · 1 month ago
Meh, they leave the biggest question unanswered: how are enterprises expected to govern the data that was used to train the LLM?
@bayesian7404 · 7 months ago
Good job. I still need to learn more about data accuracy in an LLM.
@DeanJohn7 · 1 month ago
Bias within LLMs is a topic that needs more light shed on it.
@PretendCoding · 27 days ago
You'd think with this being IBM, the mic would be better.
@mzimmerman1988 · 7 months ago
nice work.
@akhil7110 · 7 months ago
This does not address how you validate that the Q1 results returned are accurate. You should have a process, parallel to querying the LLM, of actually checking the results and training the LLM to address any discrepancies, if that is possible, or correcting them.
@ricko13 · 4 months ago
good explanation
@Ash-bc8vw · 3 months ago
Exactly love.
@nachoeigu · 7 months ago
Cool explanation
@siddid7620 · 5 months ago
7:10 sounds like support for open source.
@ammularajeshsagar7953 · 7 months ago
Informative
@jonniuss · 7 months ago
ok, gotcha 👌
@brianvandermey4223 · 1 month ago
Great backwards writing skills!
@satishrachaiah · 17 days ago
They are just talking through the high-level concept.
@PaulMoreira · 2 months ago
This was an excellent explanation! Thank you.
@hassanabida · 7 months ago
Gotcha.
@datoalavista581 · 7 months ago
Thanks for sharing!!
@HazelChikara · 2 months ago
My left arm started tingling. Quite hard to concentrate now. 😅
@BlueBearOne · 7 months ago
And then a widespread global crisis is brought to light wherein our gold-standard "books" (peer-reviewed journals) are rife with bad and corrupt data due to mismatched incentivization and misalignment of directives; and we then realize: how much good data from science do we really have? A shame we polluted the books we are supposed to be able to trust, now that we have this magnificent technology. 😭
@godlymajins · 7 months ago
Dude, keep on topic. This isn't the place for your grievances.
@gcg8187 · 1 month ago
I know, right? It's too bad we don't have unbiased data to make the most of this technology :(
@dheerajkumar-uk6ec · 6 months ago
exactly love
@topcommenter · 6 months ago
🤔I think Luv saw the connection the entire time
@EasyCodingStudio · 1 month ago
good
@sk3ffingtonai · 7 months ago
👏👏
@markmotarker · 6 months ago
lol. must be annoying to talk to "Luv". "Hi, Luv", "Exactly, Luv"
@vrohan07 · 7 months ago
Kabhi haans bhi liya karo (laugh a little sometimes, bros)
@maliciousinferno · 7 months ago
How in the world is this dude writing inverted for us to read straight? lol
@inriinriinriinriinri · 6 months ago
They mirror the video. You can notice that most of the people writing on a glass board appear left-handed (in reality about 90% of the planet's population is right-handed); that's also because the video is mirrored.
@ObscuredByCIouds · 6 months ago
It's a skill only left-handed people have
@SidStrong-l9t · 6 months ago
Terrible analogy! When a journalist wants to do research, he goes to a library and asks the librarian?? As opposed to doing a Google search? This scenario is from last century, before Luv was born.