Check out all the AI videos at Google I/O 2024 → goo.gle/io24-ai-yt
@gangababu2063 (6 months ago)
Can't find this notebook: IO2024_Multimodal_RAG_Demo.ipynb
@PeterLappo (5 months ago)
Pretty useless video without sample code.
@jprak123asd (5 months ago)
I wanted to extend my heartfelt thanks for the excellent session on how Retrieval-Augmented Generation (RAG) can be used with Large Language Models (LLMs) to build expert systems in the retail, software, automotive, and other sectors. Your explanation was incredibly clear and insightful, making a complex topic easily understandable. I truly felt like Dr. Watson listening to Sherlock Holmes unravel the mysteries of the universe, marveling at the clarity and depth of the information presented. Your efforts in breaking down the concepts and applications of RAG in such a straightforward manner have left me feeling both enlightened and excited about the potential this technology holds for our industry. Thank you once again for your time and for sharing your expertise. I look forward to exploring and implementing these innovative solutions in our own projects.
@sarvariabhinav (4 months ago)
Where is the sample code??? It's very frustrating to showcase this but not share the code.
@diegomoralessepulved (2 months ago)
@googledevelopers I second this comment. Could you please share that notebook?
@noetic4681 (1 month ago)
Agreed.
@thyagarajesh184 (4 months ago)
Impressive technology. Look forward to using it for my project.
@charlesbabbage6786 (6 months ago)
Couldn't find the exact notebook used here.
@dumbol8126 (6 months ago)
Will there be an open-source version of this, or at least a paper?
@zuowang5185 (4 months ago)
How do you handle terabytes of enterprise data? Just create embedding groups? Should you generate sub-questions first? And how do you handle a large number of users?
@homeandr1 (1 month ago)
Hello Jeff, there may be a mistake at 24:00: in the for loop, instead of "for i, s in enumerate(texts + table_summaries + image_summaries)" it should be "for i, s in enumerate(text_summaries + table_summaries + image_summaries)".
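For anyone checking that spot in the talk, here is a minimal sketch of the multi-vector indexing step being discussed, assuming a LangChain MultiVectorRetriever as shown in the demo. The function name and parameters are illustrative, not the notebook's exact code; the key point is that the loop enumerates the summaries that get embedded, while the raw documents are stored separately under the same IDs.

```python
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain_core.documents import Document


def index_summaries(retriever: MultiVectorRetriever,
                    raw_docs: list,
                    summaries: list,
                    id_key: str = "doc_id") -> None:
    """Embed the summaries in the vector store; keep the raw docs in the docstore."""
    assert len(raw_docs) == len(summaries), "each raw doc needs exactly one summary"
    doc_ids = [str(uuid.uuid4()) for _ in raw_docs]
    summary_docs = [
        Document(page_content=s, metadata={id_key: doc_ids[i]})
        # Enumerate the summaries (text_summaries + table_summaries + image_summaries),
        # not the raw inputs -- this is the mix-up the comment above points at.
        for i, s in enumerate(summaries)
    ]
    retriever.vectorstore.add_documents(summary_docs)       # what gets searched
    retriever.docstore.mset(list(zip(doc_ids, raw_docs)))   # what gets returned
```

It would be called as, e.g., index_summaries(retriever, texts + tables + images, text_summaries + table_summaries + image_summaries).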
@nestorbao2108 (2 months ago)
Why use a multimodal embedding model if you summarize images and ground them in text?
@mariaescobar8003 (6 months ago)
When I use RAG, am I sharing my data with the model provider, or does it stay private (perhaps at an extra cost)?
@vichupayyan (6 months ago)
RAG is an architecture, I believe. With or without it, whatever happens to your data applies the same way.
@hitmusicworldwide (5 months ago)
Not necessarily. You can keep the data local and use the LLM only for its ability to summarize and generate responses as well as queries.
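A minimal sketch of that split, assuming a local Chroma index with local sentence-transformer embeddings; only the final prompt (the question plus the retrieved snippets) is sent to the hosted model, and the model names and parameters here are assumptions, not the talk's exact setup.

```python
from langchain_chroma import Chroma
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_huggingface import HuggingFaceEmbeddings

# Embeddings and the vector index both run and live on your own machine.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Chroma(collection_name="private_docs",
                     embedding_function=embeddings,
                     persist_directory="./chroma_db")

def answer(question: str) -> str:
    # Retrieval stays local; only the question and the retrieved snippets
    # leave the machine in the generation call below.
    docs = vectorstore.similarity_search(question, k=4)
    context = "\n\n".join(d.page_content for d in docs)
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.invoke(prompt).content
```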
@mohamedkarim-p7j (1 month ago)
Thanks for sharing 👍
@hasszhao (6 months ago)
Where is this notebook in the cookbook repo?
@d.d.z. (6 months ago)
Same question
@shubhamsharma5631 (6 months ago)
33:18
@Chitragar (6 months ago)
I have a notebook on Kaggle named Multimodal RAG Gemini that should help; YouTube keeps removing links for some reason.
@d.d.z. (6 months ago)
@Chitragar Thank you.
@cullenharris1837 (6 months ago)
@shubhamsharma5631 I challenge you to find it. That is simply a link to the general GitHub repo, which is convoluted, not the exact notebook, which is difficult to find.
@TL735 (2 months ago)
Nice, but why not develop a simple drag-and-drop RAG? E.g., I add a Drive folder link and Google generates a RAG chat based on its content.
@evanrfraser (2 months ago)
Fantastic. Thank You!
@yadav-r (1 month ago)
Can I use the fine-tuned Gemini RAG model via API from a mobile app?
@nagpalvikas (6 months ago)
Is "unstructured" the best choice here for parsing PDFs? Any better alternatives?
@ai_asymmetric (5 months ago)
LlamaParse
@You_Only_LiveOnce (5 months ago)
LangChain would be a good choice.
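For reference, a minimal sketch of PDF parsing with the "unstructured" library mentioned above; the file name, strategy, and flags are illustrative and vary by library version, so check the docs for your install.

```python
from unstructured.partition.pdf import partition_pdf

# Layout-aware parsing of a PDF into typed elements (text, titles, tables, images).
# "hi_res" needs the optional high-resolution extras; flag names differ across versions.
elements = partition_pdf(
    filename="owners_manual.pdf",     # hypothetical input file
    strategy="hi_res",
    infer_table_structure=True,       # keep tables as structured elements, not flat text
    extract_images_in_pdf=True,       # pull embedded images out for separate summarization
)

texts = [el for el in elements if el.category == "NarrativeText"]
tables = [el for el in elements if el.category == "Table"]
```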
@nagarathnabheggade8410 (5 months ago)
This example covers text and PDFs. Is there anything for video? How do we use RAG and a vector store for video? Can anyone share a reference?
@descarded (5 months ago)
I'm not sure if there are existing libraries for that; maybe check the docs. Here is my intuitive approach, though: video is basically a series of images with some history/context attached to previous and subsequent frames. If you keep that history across frames intact, either by providing previous frames as input or by keeping a local vector store of it all, you can make it work. Not sure if it's the best approach, but I'm open to discussion.
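As a rough sketch of that frame-by-frame idea (OpenCV for sampling; `summarize` and `embed_and_store` are hypothetical stand-ins for your own vision-model call and vector store):

```python
import cv2

def index_video(path: str, every_n_seconds: int = 5) -> None:
    """Sample frames, summarize each with a rolling text history, embed the summaries."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = int(fps * every_n_seconds)
    context = ""                                   # rolling history of earlier frames
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            summary = summarize(frame, context)    # hypothetical vision-LLM call
            embed_and_store(summary, {"video": path, "t": frame_idx / fps})
            context = (context + " " + summary)[-2000:]  # keep only recent history
        frame_idx += 1
    cap.release()
```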
@RiccardoCarlessoGoogle (1 month ago)
Is there a link to the Python notebook? I'd love to play with it!
@IndianLeopard7 (6 months ago)
What about copyright and ethical issues? How much do you charge for using your model? And according to IBM and Oracle, embeddings are nothing new, so why use yours?
@ammarfasih3866 (4 months ago)
Where is the notebook? Can someone please share the link?
@julianayue402 (2 months ago)
Can you please provide the source code? It would be a great help! Thank you!
@ajanieniola9172 (29 days ago)
Wonderful
@SonuChaudhary (3 months ago)
Where is the code link?
@pra8495 (6 months ago)
GitHub link, please.
@shubhamsharma5631 (6 months ago)
33:18
@kaushikdas5115 (6 months ago)
@shubhamsharma5631 Can we run the code without a subscription?
@oldmansgoldenwords (6 months ago)
You can get BlueDriver and get all the error codes and examples.
@adithiyag4616 (6 months ago)
Please share the Colab link.
@gnanasenthil654321 (6 months ago)
Yes, please do.
@shubhamsharma5631 (6 months ago)
33:18
@ai_asymmetric (5 months ago)
Dense embeddings are never enough for a RAG system.
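One common way to act on that concern is hybrid retrieval, mixing sparse (BM25) keyword search with dense embeddings. A minimal sketch using LangChain's EnsembleRetriever; the documents, weights, and embedding model here are illustrative, not from the session.

```python
from langchain.retrievers import EnsembleRetriever
from langchain_chroma import Chroma
from langchain_community.retrievers import BM25Retriever
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings

docs = [
    Document(page_content="Hold the SET button for three seconds to reset the TPMS light."),
    Document(page_content="Tire pressure should be checked when the tires are cold."),
]

bm25 = BM25Retriever.from_documents(docs)                       # sparse / keyword signal
dense = Chroma.from_documents(
    docs, HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
).as_retriever(search_kwargs={"k": 2})                          # dense / semantic signal

# Merge the two rankings with tunable weights.
hybrid = EnsembleRetriever(retrievers=[bm25, dense], weights=[0.4, 0.6])
results = hybrid.invoke("how do I reset the tire pressure warning?")
```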
@SB-md2km (6 months ago)
OK, but someone could literally look any of this up online or in a manual, etc., without using AI...
@dr.p.srinivasaragavanperum2911 (4 months ago)
Happy
@dr.p.srinivasaragavanperum2911 (4 months ago)
🎉
@dr.p.srinivasaragavanperum2911 (4 months ago)
❤
@user-xx3mr6vx9u (5 months ago)
Haha we just need your browsing history
@KitchenAIdev (20 days ago)
Hmm... interesting...
@fast-path (6 months ago)
🥺
@JH-bb8in (6 months ago)
This shows how garbage LangChain is as a library: extremely verbose and opaque.
@imai-pg3cz (6 months ago)
Is there any framework better than LangChain?
@gokusaiyan1128 (5 months ago)
Can you tell me more about it, please? :)
@Inceptionxg (6 months ago)
After Muaadh Rilwan's post on LinkedIn
@ohmatokita5990 (4 months ago)
So if I'm using the second approach, what would the name of the multimodal model be?