Clinical Trial Accelerator
10:20
21 days ago
PropSentinel
6:31
21 days ago
LitPilot
13:41
21 days ago
Marketing Content Enhancer
7:59
21 days ago
Agentic Real Estate Humberto Gerardo
7:33
Smart Lease Navigator
9:54
21 days ago
Publicus
10:41
21 days ago
Drug Alert AI
9:53
3 months ago
Comments
@sohailalip.r1626
@sohailalip.r1626 11 hours ago
I am getting OpenAI key error 😢😢😢
@tanguero2k7
@tanguero2k7 4 days ago
This was quite an amazing (and unexpected) GPU "deep dive" 🤯. Thanks for the care you put into all of what you share with us!
@AI-Makerspace
@AI-Makerspace 4 days ago
Glad you enjoyed it @tanguero2k7 … we did too! We’ll keep going to wherever exploration of the LLM Edge takes us!
@AI-Makerspace
@AI-Makerspace 5 days ago
Flash Attention - AIM Event: colab.research.google.com/drive/1-OPCDWnK3sQQncg6gm0bSMr9Nmx3lAil?usp=sharing
Event Slides: www.canva.com/design/DAGXB3D4Yd0/ufpIRKB21NdBjzCwNYJOXA/view?DAGXB3D4Yd0&
@mmasa1
@mmasa1 5 days ago
Dr Greg and the Wiz, what a dream team! Thanks for your wonderful, truly wonderful efforts. I am certainly getting better following you guys. God bless.
@MrMoonsilver
@MrMoonsilver 10 days ago
AI engineering +1, but eventually I'll be interested in getting into more detail and understanding LLM engineering better.
@AI-Makerspace
@AI-Makerspace 10 days ago
The most classic pattern we've seen from our community!
@thuhuongnguyen6543
@thuhuongnguyen6543 10 days ago
Very useful! Thank you for sharing!
@mentorships3309
@mentorships3309 11 days ago
Dr Greg, you are an absolutely incredible teacher. Keep it up!
@AI-Makerspace
@AI-Makerspace 12 days ago
Replit: replit.com/@replit/Anthropic-Computer-Use
Event Slides: www.canva.com/design/DAGWX1fpgkA/AYTV-idbZfIoz-0uoosImQ/view?DAGWX1fpgkA&
@rakeshkumarrout2629
@rakeshkumarrout2629 12 days ago
Hey, did you record the large action model live session?
@fredflintstone7924
@fredflintstone7924 15 days ago
It was a great video and RLHF was explained perfectly, thanks!
@AI-Makerspace
@AI-Makerspace 10 days ago
Thanks!
@lucindalinde4198
@lucindalinde4198 16 days ago
Thank you for boldly diving into these highly technical topics in plain English! Your videos are a gift to the community.
@yangwang8232
@yangwang8232 18 days ago
Thank you for introducing our VPTQ project. VPTQ is an ongoing project and we welcome suggestions from everyone. We have currently open-sourced the inference code and algorithm code. : D
@yangwang8232
@yangwang8232 18 days ago
3. Inference Performance: Thank you both for helping me explain this :) Currently, our inference code is just a very naive inference example based on Torch. We haven't done any optimizations specifically for inference yet. I am currently trying to integrate VPTQ into vLLM. I believe this will greatly enhance the inference speed with vLLM. I am currently discussing this in the vLLM Slack channel and with the sig-quant group. Stay tuned!
@yangwang8232
@yangwang8232 18 days ago
2. Entropy: I am very interested in using Shannon entropy and information theory to quantitatively analyze the impact of model quantization. In fact, an unresolved academic issue in the quantization field is how to quantitatively describe the effects of quantization on the final loss, or even model accuracy, especially since there are many nonlinear layers involved. The GPTQ/VPTQ series of algorithms, which are based on second-order optimization, actually have very strong assumptions (as analyzed in the paper WoodFisher). These assumptions simplify the impact of quantization on model loss to the impact on proxy error, thus simplifying the algorithm. I believe this is a more fundamental problem. Currently, I am still conceptualizing and do not yet have a clear idea on how to perform quantitative analysis.
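The second-order methods mentioned here (GPTQ/VPTQ) minimize a layer-wise proxy error rather than the true model loss. A minimal, dependency-free sketch of that quantity, with toy matrices that are purely illustrative (this is not the VPTQ implementation):

```python
# Layer-wise proxy error used by GPTQ-style methods: ||W X - W_q X||_F^2,
# where W is the original weight matrix, W_q its quantized version, and
# X a batch of calibration inputs. Tiny 2x2 example in pure Python.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def proxy_error(w, w_q, x):
    """Squared Frobenius norm of (W X - W_q X)."""
    y, y_q = matmul(w, x), matmul(w_q, x)
    return sum((y[i][j] - y_q[i][j]) ** 2
               for i in range(len(y)) for j in range(len(y[0])))

W   = [[1.00, -0.50], [0.25, 0.75]]   # original weights (invented)
W_q = [[1.00, -0.50], [0.25, 0.50]]   # after a hypothetical quantization
X   = [[1.0, 0.0], [0.0, 1.0]]        # calibration inputs (identity here)

print(proxy_error(W, W_q, X))  # only the one perturbed entry contributes
```

The point of the comment above is that this proxy stands in for the true effect on loss or accuracy only under strong assumptions, which is the open question being described.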
@yangwang8232
@yangwang8232 18 days ago
I'll try to address the last few questions from the video: 1. The results of VPTQ on smaller models. I have actually tried compressing smaller models like Qwen2.5 1B-3B, and achieved good results around 3-4 bits. I will release some of these results later. Indeed, on smaller models, compressing the model weights is quite challenging. I am currently working on improvements and will continue to provide updates. Thank you.
@AI-Makerspace
@AI-Makerspace 19 days ago
VPTQ - Vector Post-Training Quantization: colab.research.google.com/drive/1yItAepbYh9HVs3SEBqtZclc5AY4aJIAa?usp=sharing
Event Slides: www.canva.com/design/DAGVuEs7C_s/cohyWXMzy7TD_MzR8-MA_g/view?DAGVuEs7C_s&
@aryansaurabhbhardwaj
@aryansaurabhbhardwaj 21 days ago
Hi Greg and Wiz. Great tutorial. I am actually applying it to my own application. I was wondering what you would suggest doing if the whole document is very large, more than 700 pages. It won't fit into the contextual chunking function, and if I take the ~500 pages around the chunk, the caching won't work. Can you please advise? Thanks, Saurabh
@AI-Makerspace
@AI-Makerspace 20 days ago
I would build some metadata, in this case, like a summary/outline and use that to generate contextual augments.
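The workaround in the reply above can be sketched roughly as follows: build a compact outline/summary of the oversized document once, then pair each chunk with that outline instead of the full text when generating contextual augments. The helper name and prompt wording here are hypothetical, not from the event notebook:

```python
# Sketch: contextual augments for documents too large to fit in the
# contextualizer's window. The outline stands in for the full document.

OUTLINE_PROMPT = (
    "Document outline:\n{outline}\n\n"
    "Chunk:\n{chunk}\n\n"
    "Write a short context situating this chunk within the document."
)

def build_augment_prompt(outline: str, chunk: str) -> str:
    """Compose the prompt asking an LLM to contextualize one chunk."""
    return OUTLINE_PROMPT.format(outline=outline, chunk=chunk)

prompt = build_augment_prompt(
    outline="1. Safety data  2. Efficacy endpoints  3. Adverse events",
    chunk="Grade 3 events were observed in 4% of the treatment arm.",
)
print(prompt.splitlines()[0])  # → "Document outline:"
```

Because the outline is small and identical for every chunk, it can also be cached as a shared prompt prefix.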
@PhiliBuster-i7t
@PhiliBuster-i7t 22 days ago
I'm super happy about this video; Supernova was on my radar last month.
@MegaClockworkDoc
@MegaClockworkDoc 25 days ago
I am trying to understand the unique concepts of this paper. It sounds like this is a workflow of agents and programmatic validators to synthetically generate DPO data. Is the system self learning as well?
@AI-Makerspace
@AI-Makerspace 20 days ago
It can be online, yes.
@rakeshkumarrout2629
@rakeshkumarrout2629 25 days ago
Hey, this is quite useful. Can you help me understand how large action models work? For example, the recent Claude computer use, OmniParser and language models, or how RabbitMQ works. Can you share code references or an implementation? Thank you!
@AI-Makerspace
@AI-Makerspace 25 days ago
We're planning to cover computer use in an upcoming event soon - probably Nov 13. Stay tuned!
@hoopNscoops
@hoopNscoops 25 days ago
Great information here! Thanks for making it public. I think you're going to build a sizeable community around these live streams. Q: where in the code is prompt caching invoked?
@AI-Makerspace
@AI-Makerspace 20 days ago
Caching is offered by Anthropic's endpoint by default - and is being taken advantage of under the hood here.
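For readers who want to invoke caching explicitly rather than rely on defaults, Anthropic's API lets you mark a content block with a cache_control field so the large shared prefix is cached across calls. The payload below is illustrative only (model name and wording are assumptions; check Anthropic's current docs before relying on the exact parameter names):

```python
# Sketch of an Anthropic Messages API payload with explicit prompt
# caching: the big shared document is placed in a system block marked
# with cache_control, so repeated calls reuse the cached prefix.

big_document = "...hundreds of pages of text..."  # shared across calls

payload = {
    "model": "claude-3-5-sonnet-20241022",  # illustrative model name
    "max_tokens": 512,
    "system": [
        {
            "type": "text",
            "text": big_document,
            "cache_control": {"type": "ephemeral"},  # cache this prefix
        }
    ],
    "messages": [
        {"role": "user", "content": "Summarize section 2."}  # varies per call
    ],
}

print(payload["system"][0]["cache_control"]["type"])  # → "ephemeral"
```

Only the small user turn changes between calls, which is what makes per-chunk contextual augmentation affordable.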
@AI-Makerspace
@AI-Makerspace 26 days ago
Calibrated reward: github.com/huggingface/trl/pull/2155/files
Mixture of judges: github.com/huggingface/trl/pull/2159/files
CGPO Trainer (single task single objective): github.com/huggingface/trl/pull/2190/files
Event Slides: www.canva.com/design/DAGVDvGDG54/kEflcFEuGxDKMTYb6Rj2vA/view?DAGVDvGDG54&
@MarkDavisRocks
@MarkDavisRocks 26 days ago
At 00:24:15 you give a formula for faithfulness; I think it is slightly flawed. It should be (# claims from the answer which exist in the context) / (# claims in the answer). Otherwise the result could be > 1.
@AI-Makerspace
@AI-Makerspace 24 days ago
Can you be more specific about what the flaw is? Also, why do you choose the word "exist" rather than "inferred from"?
Here's what appears to be true from the documentation: "To calculate this a set of claims from the generated answer is first identified. Then each one of these claims are cross checked with given context to determine if it can be inferred from given context or not."
Three steps to the calculation:
1. Break the generated answer into statements
2. For each statement, verify if it can be inferred from the context
3. Calculate Faithfulness!
It seems that the condition "if (and only if) it can be inferred from the context" will keep the faithfulness calculation from going higher than 1.0.
@MarkDavisRocks
@MarkDavisRocks 17 hours ago
@AI-Makerspace You might be right, but at the point referenced in the video, it talks about the context, not the generated answer. So a context like "Paris is a bustling French capital and center of culture and art" could contain 2-3 claims, but the answer to "What is the capital of France?" may contain one claim, "Paris is the capital of France". The faithfulness would be 3/1 in that case if they were not related to the golden-truth answer. I may be missing something! Great video though, thanks!
@MarkDavisRocks
@MarkDavisRocks 17 hours ago
Ah I get it now, duh - the element of the formula "number of claims that can be inferred from the given context" I was reading as the number of claims that can be inferred from the context alone. It's really the number of claims in the generated answer which can be inferred from the given context.
@AI-Makerspace
@AI-Makerspace 9 hours ago
"It's really the number of claims in the generated answer which can be inferred from the given context." Nice follow-up @Mark! Let's gooo! We find it helpful to look directly at the prompted examples in the source code here: github.com/explodinggradients/ragas/blob/7d051437a1a5d8e9ad5c42252bf1debf51679140/src/ragas/metrics/_faithfulness.py#L52 You can see how FaithfulnessStatements turn into SentencesSimplified, with an example and in general, via the instruction given in NLIStatementPrompt: "Your task is to judge the faithfulness of a series of statements based on a given context. For each statement you must return verdict as 1 if the statement can be directly inferred based on the context or 0 if the statement can not be directly inferred based on the context."
@niting1978
@niting1978 27 days ago
Great work here Richard and Gil - loved the demo
@richardgower4890
@richardgower4890 27 days ago
Love this guys. Great job!
@johnini
@johnini 1 month ago
Hello!! Thank you a lot for the videos! What is the best way to interact with a workflow in a sort of chat engine or chat loop?
@AI-Makerspace
@AI-Makerspace 1 month ago
Can you expand on your request?
@johnini
@johnini 1 month ago
@@AI-Makerspace Thank you for answering! I'm curious about the best practices for building a chat engine or chatbot that can interact in a continuous loop with a workflow. Currently, we receive one response at a time from the workflow, but I was wondering if we could enhance this by buffering the chat memory and continuing the conversation. Should this be achieved with a loop? I seem to remember a LlamaIndex or LangChain tool that kept the chat engine running, but I might be mistaken; maybe I was just re-querying. Also, how can I ensure other workflows share the same context? Additionally, is it possible to store interactions as vectorized semantic and episodic memories in a vector database, allowing the system to recall past conversations and, in the future, query from those memories as well as the RAG index, and maybe do some type of reranking?
@johnini
@johnini 1 month ago
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from colorama import Fore, Style, init

init(autoreset=True)

def chat():
    print(f"{Fore.CYAN}Loading documents...")
    index = VectorStoreIndex.from_documents(
        SimpleDirectoryReader("./data").load_data()
    )
    chat_engine = index.as_chat_engine()
    print(f"{Fore.GREEN}Ready! Type 'quit' to exit")
    while True:
        query = input(f"{Fore.GREEN}You: {Style.RESET_ALL}").strip()
        if query.lower() == 'quit':
            break
        if query:
            print(f"{Fore.BLUE}Assistant: {Style.RESET_ALL}{chat_engine.chat(query)}")

if __name__ == "__main__":
    try:
        chat()
    except Exception as e:
        print(f"{Fore.RED}Error: {e}")
@MegaClockworkDoc
@MegaClockworkDoc 1 month ago
Great video, but using text that the model was already trained on is a bad test case
@AI-Makerspace
@AI-Makerspace 1 month ago
Agreed! We typically stick with easy-to-consume toy examples, however!
@solyarisoftware
@solyarisoftware 1 month ago
Hi! I really appreciated your video. BTW, I wrote an article titled "SWARMing Conversational AI: Integrating No-Code and Code in Agent-Based Workflows," which you can find online. I would love to hear your feedback on my perspective (Swarm's emphasis on blending no-code instructions with hardcoded conversational steps). Thanks! Giorgio
@AI-Makerspace
@AI-Makerspace 1 month ago
Definitely will check it out!
@AI-Makerspace
@AI-Makerspace 1 month ago
OpenAI Swarm - Multi-Agent: colab.research.google.com/drive/1NumpfFNIPxsyjmruJ3jzyxxX2HY8V0MO?usp=sharing
Event Slides: www.canva.com/design/DAGUZ0A-Zpc/uctbkE6-rHzlRfjxVFPAlg/view?DAGUZ0A-Zpc&
@scitechtalktv9742
@scitechtalktv9742 1 month ago
Are the slides available?
@AI-Makerspace
@AI-Makerspace 1 month ago
Sure thing, just pinned the slides and notebook in a comment!
@enespacalar
@enespacalar 1 month ago
Congratulations dude
@danielusvyat
@danielusvyat 1 month ago
Great video! I'm excited to dive into contextual retrieval next week. When it comes to productionizing hybrid retrieval with BM25, I'm considering using Elasticsearch; any other recommendations? My main concern with hybrid retrieval is the added complexity it brings to production.
@AI-Makerspace
@AI-Makerspace 1 month ago
Elasticsearch is a great tool for this!
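On the complexity concern: one lightweight way to combine the BM25 and dense result lists is reciprocal rank fusion (RRF), which needs no score normalization and runs on top of any two ranked lists. Elasticsearch also offers native hybrid support, but this sketch (with invented doc IDs) shows the fusion step itself:

```python
# Reciprocal rank fusion: each ranked list contributes 1/(k + rank + 1)
# per document; summing across lists rewards docs that rank well in
# both keyword and vector retrieval. k dampens the tail of each list.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc IDs into one ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits  = ["d3", "d1", "d7"]   # keyword (BM25) ranking
dense_hits = ["d1", "d5", "d3"]   # vector ranking
print(rrf([bm25_hits, dense_hits]))  # d1 and d3 rise to the top
```

Because RRF only consumes ranks, the two retrievers can stay completely independent in production, which keeps the added complexity contained to this one merge step.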
@cmagganas
@cmagganas 1 month ago
🚨MERT ALERT
@seanbergman8927
@seanbergman8927 1 month ago
Great video and demo as always! I learn much from your content. The contextual retrieval paper said if your corpus is less than 200k tokens, just skip RAG and dump the entire corpus into the prompt for every question; they will cache it (but only for a short time) and just use long-context Q&A. I didn't see them publish any metrics comparing long context to RAG, so I take it with a grain of salt. They do want customers to spend as many tokens as possible... But I'm very intrigued at the same time. Maybe you could do a video comparing the two methods? That would be amazing research.
@AI-Makerspace
@AI-Makerspace 1 month ago
Great insights and instincts @Sean! We'll keep the content recommendation in mind for sure! This is the farthest we've gotten on long-context and evaluation for the big-window LLMs: kzbin.infoBrwhbjh3boU?si=V24z6pagQ0EQ8Ms1
@micbab-vg2mu
@micbab-vg2mu 1 month ago
thanks :)
@AI-Makerspace
@AI-Makerspace 1 month ago
Would the results be even better when combined with semantic chunking? Answer: research.trychroma.com/evaluating-chunking
@AmanBansil
@AmanBansil 1 month ago
RAG-ception 0:55 - Context of the contextually generated chunks. Got it...got..it.......got it....ok wait what? Need to watch the whole thing.
@AI-Makerspace
@AI-Makerspace 1 month ago
Re: Would the results be even better when combined with semantic chunking? For more on semantic chunking strategies: research.trychroma.com/evaluating-chunking
@AI-Makerspace
@AI-Makerspace 1 month ago
Contextual Retrieval: colab.research.google.com/drive/1KGVxiwc2zoY9v6f3IQfs8qJIZeGeMKAq?usp=sharing
Event Slides: www.canva.com/design/DAGTv5ofV8g/-wFTZpoCu8yYzseTb_kx2g/view?DAGTv5ofV8g&
@weiwennie40
@weiwennie40 11 days ago
The Ragas part of the code in the notebook is not working. Could you fix it?
@givanildogramacho441
@givanildogramacho441 1 month ago
Very interesting
@givanildogramacho441
@givanildogramacho441 1 month ago
Where is the complete video? I'd like to understand the loss function and the Hessian matrix.
@AI-Makerspace
@AI-Makerspace 1 month ago
Hey Givanildo! The full event is here: kzbin.infoxmaG4al2A6E?si=bdHM0wzlll5XkXWJ
To learn more about loss functions, check out this one! kzbin.infoiB8FWR9aD5Q?si=4oABKIf-DDNQQv1R
@andresrubio2015
@andresrubio2015 1 month ago
Top
@AI-Makerspace
@AI-Makerspace 1 month ago
GPTQ - AIMS: colab.research.google.com/drive/1iZQ_Byo9F-bM6IGywtb3sN9TIEcE_Mqd?usp=sharing
Event Slides: www.canva.com/design/DAGTF8vsIIM/9qbXt5T-pvt-KIHDtC_4nA/view?DAGTF8vsIIM&
@saurabhahlawat425
@saurabhahlawat425 1 month ago
Thanks for the great video. Subscribed! Question - we saw here that "similar pairs" were trained, where a pair is a (question, context). Is it possible to get good results by fine-tuning on a "similar questions" dataset, i.e. (question1, question2), where the difference between the two questions is usually one word/phrase? So question1 would have the full form of an entity, and question2 the acronym of the same entity. The reason I'm doing this is that I'm storing a mix of questions and contexts in my vector database. If the user's query matches a question, then I look up the corresponding answer (a static answer that almost never changes - so no LLM required). If the match is a context instead, then LLM generation takes over.
@AI-Makerspace
@AI-Makerspace 1 month ago
Yes, that is a decent way to approach that problem.
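The data-prep side of that approach can be sketched as below: generate (question1, question2) pairs by swapping an entity's full form for its acronym. The acronym map and questions are invented; real pairs like these would then feed a contrastive objective such as sentence-transformers' MultipleNegativesRankingLoss:

```python
# Sketch: build "similar questions" training pairs where the only
# difference is full form vs. acronym of the same entity.

ACRONYMS = {
    "Retrieval Augmented Generation": "RAG",
    "Reinforcement Learning from Human Feedback": "RLHF",
}

def make_pairs(questions: list[str]) -> list[tuple[str, str]]:
    """For each question mentioning a full form, emit a (full, acronym) pair."""
    pairs = []
    for q in questions:
        for full, short in ACRONYMS.items():
            if full in q:
                pairs.append((q, q.replace(full, short)))
    return pairs

pairs = make_pairs(["What is Retrieval Augmented Generation used for?"])
print(pairs[0][1])  # → "What is RAG used for?"
```

Training on such pairs pulls the two phrasings together in embedding space, which is exactly what the question-lookup path in the vector database needs.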
@nazmusas
@nazmusas 1 month ago
How do I get a scholarship?
@AI-Makerspace
@AI-Makerspace 1 month ago
We don't currently have scholarships available @nazmusas! We are working to get our business model right and to grow our partnerships in the US so we can best serve our community members around the world in the long term. In short, stay tuned!
@yerson557
@yerson557 1 month ago
Where does ground truth come from? Is this a human-annotated property? I understand that ground truth in RAGAS refers to the correct answer to the question, and it's typically used for the context_recall metric. But how do we get it? Human in the loop? LLM-generated? More documents from retrieval? Thank you!
@AI-Makerspace
@AI-Makerspace 1 month ago
"Ground Truth" can come from any of these sources! Of course, getting it straight from the people who perform whatever tasks you're automating is the right idea, but this can be very expensive. In the case of RAGAS, the "Ground Truth" is represented by the output you get when you provide [question, retrieved context] pairs as input to a generator. That is, we are not actually using a RAG system, but passing "correct" [question, context] pairs as input. These are "correct" because they were synthetically generated and are known to be correct; see Synthetic Test Data Generation: docs.ragas.io/en/stable/concepts/testset_generation.html Note that Ground Truth is different from "Answer", because "Answer" actually uses the RAG application that you're building, while "Ground Truth" passes [question, context] pairs in directly.
@cmagganas
@cmagganas 1 month ago
LOVE LOVE LOVE the Snatch Dags meme
@andres.yodars
@andres.yodars 1 month ago
lovely
@AI-Makerspace
@AI-Makerspace 1 month ago
Thanks Andres!