The most important point here is left until the end: you can only use RAG with a transparent or locally built LLM.
@amritbro · 6 months ago
Very simple and clear explanation. Cheers to IBM!
@akshitgoel1183 · 4 months ago
Hi, I have a few questions, please find time to answer:
0. Are we filtering the data in the vector DB? (If yes:)
1. How are we filtering the relevant data from our vector DB to augment our prompt for the LLM?
1.1 Who is doing this process: another LLM, our own code, or some different tool?
1.2 Are we feeding the complete data as a whole to the LLM?
1.3 If we are filtering the vector data using a rule-based mechanism, then what is the use case of the LLM? How is the power of the LLM being drawn on if we are the ones deciding what to feed it as relevant data?
@satish1012 · 3 months ago
Hi, this is my understanding:

1. We store all our relevant data (documents, images, etc.) in a vector DB as embeddings.
2. When the user searches, the query does not hit the LLM directly; it is converted into embeddings and matched against the vector DB, which returns the result as plain text.
3. We then send that text to the LLM for summarization.
4. The LLM returns the summarized text to the user.

Storing data as embeddings. Correctness: storing data (documents, images, etc.) as embeddings in a vector DB is a valid approach; embeddings represent the data in a high-dimensional vector space, capturing its semantic meaning. Consideration: ensure the embedding model you use is appropriate for your data type; for images you might use a different model (e.g., CLIP) than for text.

Searching with embeddings. Correctness: converting the search query into embeddings and comparing them with those stored in your vector DB is correct; this allows for semantic search, which is more effective than keyword-based search. Consideration: ensure the conversion process and similarity calculations (e.g., cosine similarity) are implemented correctly, and that the returned plain text is actually relevant to the query.

Summarization by the LLM. Correctness: sending the retrieved plain-text content to an LLM for summarization is appropriate; LLMs are designed to generate summaries and concise explanations from input text. Consideration: ensure the LLM is configured for summarization tasks and give it clear instructions or prompts to achieve the desired quality.

Returning the summarized text to the user. Correctness: returning the LLM's summary to the user is the final step and standard practice. Consideration: validate that the summarized content meets user expectations and provides accurate, meaningful information.
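The four numbered steps above can be sketched end-to-end in a few lines of Python. Everything here is a toy for illustration: `embed` is a bag-of-words stand-in for a real embedding model (e.g., a sentence-transformer), a plain list stands in for the vector DB, and the final prompt is where a real LLM summarization call would go.

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,!?")
        counts[word] = counts.get(word, 0) + 1
    return counts

def cosine_similarity(a, b):
    # Standard cosine similarity over sparse word-count vectors.
    dot = sum(v * b.get(w, 0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: store documents in the "vector DB" (a plain list here) as embeddings.
documents = [
    "Refunds are processed within 5 business days.",
    "Our headquarters are located in Armonk, New York.",
    "The warranty covers parts and labor for one year.",
]
vector_db = [(doc, embed(doc)) for doc in documents]

# Step 2: embed the user's query and retrieve the most similar document.
query = "how long do refunds take"
q_vec = embed(query)
best_doc, _ = max(vector_db, key=lambda pair: cosine_similarity(q_vec, pair[1]))

# Steps 3-4: a real system would now send this prompt to an LLM
# and return the summarized answer to the user.
prompt = f"Summarize an answer from this context only:\n{best_doc}\n\nQuestion: {query}"
print(best_doc)
```

A real pipeline swaps `embed` for a learned model and the list for an actual vector store; the retrieve-then-prompt shape stays the same.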
@fanzhang8823 · 4 months ago
1. The vector DB may or may not store information relevant to the question. 2. The LLM may already have the information, or more accurate information, about the question. So RAG may not always be helpful for GenAI applications.
@aaryaxz · 6 months ago
Hey! If I ask a RAG-based language model, "Tell me the features of the iPhone 17," what will it tell me? Will it say it doesn't know or will it hallucinate? I understand that once the iPhone 17 is released, the database will be updated to provide the correct information. But what happens if I ask about it before its release?
@EduardoSantAnna · 5 months ago
I can see two scenarios here: if it is indeed RAG-based, then you have provided info about the yet-to-be-released iPhone 17, and the LLM will respond based on that. If you don't have it in your additional documents/vector DBs, then I'd recommend always adding something along the lines of "only answer with facts you have access to" to your system prompt, and setting the temperature to a low number. (Temperature is a parameter in LLMs that defines how "creative" the model can be.) Great question, and it highlights the importance of having experts in GenAI guiding enterprises on how to implement this in a way that suits their use cases.
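That "only answer with facts you have access to" advice can also be enforced in code rather than left entirely to the prompt: if retrieval returns nothing, short-circuit before calling the model at all. This is a hypothetical sketch; `call_llm` is a placeholder for whatever chat-completion client you use, and `temperature=0.1` is just an illustrative low setting.

```python
SYSTEM_PROMPT = (
    "Only answer using facts from the provided context. "
    "If the context does not contain the answer, say you don't know."
)

def call_llm(system, user, temperature=0.1):
    # Placeholder for a real chat-completion call. A low temperature
    # limits how "creative" (i.e., speculative) the model is allowed to be.
    return f"[model reply at temperature={temperature}]"

def answer(question, retrieved_chunks):
    # Guard clause: no retrieved context means no grounded answer,
    # so refuse instead of letting the model hallucinate.
    if not retrieved_chunks:
        return "I don't have any information about that yet."
    context = "\n".join(retrieved_chunks)
    user_msg = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(SYSTEM_PROMPT, user_msg)

# Before the iPhone 17 exists in the knowledge base, retrieval is empty:
print(answer("Tell me the features of the iPhone 17", []))
```

The explicit empty-retrieval check is cheap insurance: the model never even sees questions the knowledge base cannot ground.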
@myidanny · 2 months ago
Thought Shawn was very flirty until I realised he didn't say "love" but "Luv"
@saminhaque2233 · 2 months ago
Same here XD
@ShivamKumar-kx6rb · 3 days ago
Great quick session, thanks!!
@stephaniakossman2923 · 3 months ago
Very clear explanation, thanks! How do you manage to avoid using "blackbox" models?
@complexity5545 · 9 days ago
I suggest prioritizing a terminal rather than a drawing board.
@LinkingL-x4v · 2 months ago
Clean database, stable generator and clever retriever.
@emteiks · 6 months ago
There is a good point here about AI hallucinations, and the video unfortunately does not address it. Data governance does not solve this issue: we can still have a scenario where the input is valid but the output generated by the AI is garbage.
@vintastic_ · 6 months ago
What's the solution?
@frackinfamous6126 · 6 months ago
@@vintastic_ You have to make sure the relevant data is going to the model. Good info into the database is only half the battle: semantic chunking, size of chunks, type of search, type of vector database used. For example, pgvector is a Postgres plugin and is usually not nearly as good at retrieval as something like Pinecone.
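Chunking strategy is easy to experiment with before committing to any particular vector DB. A minimal fixed-size chunker with overlap (the usual baseline before trying semantic chunking) might look like this; the 40-character chunk size is purely illustrative, and real systems typically chunk by tokens or sentences.

```python
def chunk_text(text, chunk_size=40, overlap=10):
    # Fixed-size chunking with overlap: each chunk repeats the tail of
    # the previous one, so content split at a boundary is not lost.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "Good info into the database is only half the battle for retrieval."
for c in chunk_text(doc):
    print(repr(c))
```

Tuning `chunk_size` and `overlap` against your own queries is one of the cheapest ways to improve retrieval before switching databases.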
@frackinfamous6126 · 6 months ago
The prompt used can also tremendously affect the model. You have to put it in the right context and use industry-specific terms when prompting. Even a genius needs context or a bit of time to think. No matter how good the model, you have to know a bit about the specific industry to obtain great results. It's like explaining a noise to your mechanic versus telling them you have a misfire on cylinder 1.
@miraculixxs · 5 months ago
@@vintastic_ Human expert review for every response.
@paulaichniowski968 · 6 months ago
Shawn & Luv!!!! Awesome job!!!!
@angelinagokhale9309 · 11 days ago
Very well explained! Thank you so much.
@DeanJohn7 · 9 days ago
Bias within LLMs is a topic that needs more light shed on it.
@hathanh4650 · 5 months ago
Thank you for sharing! Very helpful and easy to follow. Just one question: is there any way we can test or reinforcement-train the model to make sure the outputs are appropriate?
@commentatorxyz5514 · 2 days ago
The unhappiness in their eyes and frowns tell the pain of working for Artificial Intelligence jobs
@SocialSketch · a month ago
Is there a pane of glass in front of them, or is this some other technology?
@thedesignbro · a month ago
Yes, it's glass.
@maxcontini348 · 2 months ago
Nice explanation. Well done boys 😁
@LorenzoMarkovian · 4 months ago
Clear and simple, thank you guys and thank you IBM
@bayesian7404 · 6 months ago
Good job. I still need to learn more about data accuracy in an LLM.
@vvishnuk · 6 months ago
Neat and detailed explanation.
@paoloalberti8792 · a month ago
IMO, data governance management seems to be the same as a correct database data-input workflow. The prompt is just another way to query the database; the LLM avoids the use of a query language to interrogate the DB, so ordinary people can query it. It appears to be a good idea for fine-tuning a large LLM, but it is not fine-tuned training; it is more similar to data embedding. In RAG, how does the vector database interact with the LLM? Does the vector database grow the LLM's latent space? Is there a possibility that the LLM's parameters overlap the vector parameters, making a mix of knowledge?
@research2you-su9om · 6 months ago
Thanks for the straightforward description of RAG.
@ricko13 · 3 months ago
Good explanation.
@zehaankhan1 · a month ago
The guy on the left wanted to laugh out loud at that sketch 😂
@nachoeigu · 6 months ago
Cool explanation
@mzimmerman1988 · 6 months ago
Nice work.
@brianvandermey4223 · 16 days ago
Great backwards writing skills!
@THEaiGAI · 6 months ago
Awesome video
@5uryaprakashPi · 5 months ago
So with a RAG approach, can I say that we can update the original vector DB with our own processed data?
@CumaliTurkmenoglu-zn7hp · 4 months ago
Great explanation. Thank you
@Roy-h2q · 6 months ago
Interesting, thanks both.
@AgentBangla · 6 months ago
Excellent video, love it
@gatsby66 · 6 months ago
Nobody's ever been fired for buying RAGs from IBM.
@miraculixxs · 5 months ago
... yet
@dak2009 · 2 months ago
Love the 1980s meme.
@datoalavista581 · 6 months ago
Thanks for sharing!!
@Ash-bc8vw · 2 months ago
Exactly love.
@hassanabida · 6 months ago
Gotcha.
@mictow · 3 months ago
Did you need to learn to write backwards for these videos? Or is there a product that helps you with this nice board?
@nguyentran7068 · 3 months ago
They record the video and then mirror the image. They simply write on the clear board.
@DiegoSarasua-jn2wh · 6 months ago
Thanks guys, very clear!
@yojackleon · 11 days ago
Meh, they leave the biggest question unanswered: how are enterprises expected to govern the data that was used to train the LLM?
@ColmFearon-dc8dr · 5 months ago
Do these guys write on the whiteboard backwards or how does that work?
@JShaker · 4 months ago
Yes they learned to write backwards for this video because it's cheaper to do that than to run the algorithm to flip a video horizontally
@mackkaputo8989 · 4 months ago
@@JShaker 😂😂😂
@ammularajeshsagar7953 · 6 months ago
Informative
@akhil7110 · 6 months ago
Does not address how you validate that the returned results are accurate. You should build a process, parallel to querying the LLM, of actually checking the results and training the LLM to address any discrepancies (if that is possible) or correcting them.
@PaulMoreira · a month ago
This was an excellent explanation! Thank you.
@bangyor5949 · 5 months ago
Do LLMs store our sensitive data when using RAG?
@satish1012 · 3 months ago
No, the vector DB stores all the sensitive and confidential data. Once the matching chunks are retrieved, only that data is sent to the LLM to be summarized, because the vector DB returns only the data relevant to the query string.
@yinlan2672 · 4 months ago
How is this video made? Is the screen used as the board?
@EasyCodingStudio · 10 days ago
good
@jonniuss · 6 months ago
ok, gotcha 👌
@HazelChikara · a month ago
My left arm started tingling. Quite hard to concentrate now. 😅
@BlueBearOne · 6 months ago
And then a widespread global epidemic crisis is brought to light, wherein our gold-standard "books" (peer-reviewed journals) are rife with bad and corrupt data due to mismatched incentivization and misaligned directives; and we then realize... how much good data from science do we really have? A shame we polluted the books we are supposed to be able to trust, now that we have this magnificent technology. 😭
@godlymajins · 6 months ago
Dude, keep on topic. This isn’t the place for your grievances
@gcg8187 · 16 days ago
I know, right? It's too bad we don't have unbiased data to make the most of this technology :(
@topcommenter · 5 months ago
🤔I think Luv saw the connection the entire time
@siddid7620 · 4 months ago
7:10 sounds like support for open source.
@dheerajkumar-uk6ec · 5 months ago
Exactly, Luv.
@markmotarker · 5 months ago
lol. Must be annoying to talk to "Luv": "Hi, Luv", "Exactly, Luv".
@sk3ffingtonai · 6 months ago
👏👏
@vrohan07 · 6 months ago
Smile a bit sometimes, bros! (kabhi haans bhi liya karo)
@maliciousinferno · 6 months ago
How in the world is this dude writing inverted for us to read it straight? lol
@inriinriinriinriinri · 5 months ago
They mirror the video. You can notice that most people writing on a glass board appear to be left-handed (in reality about 90% of the planet's population is right-handed); that's also because the video is mirrored.
@ObscuredByCIouds · 5 months ago
It's a skill only left-handed people have
@SidStrong-l9t · 5 months ago
Terrible analogy! When a journalist wants to do research, he goes to a library and asks the librarian?? As opposed to doing a Google search? This scenario is from the last century, before Luv was born.