Dr Greg and the Wiz, what a dream team! Thanks for your wonderful, truly wonderful efforts. I am certainly getting better following you guys. God bless.
@MrMoonsilver 10 days ago
AI engineering +1, but eventually I'll be interested in getting into more detail and understanding LLM engineering better.
@AI-Makerspace 10 days ago
The most classic pattern we've seen from our community!
@thuhuongnguyen6543 10 days ago
Very useful! Thank you for sharing.
@mentorships3309 11 days ago
Dr Greg, you are an absolutely incredible teacher. Keep it up!
Hey, did you post the large action model live session?
@fredflintstone7924 15 days ago
It was a great video and RLHF was explained perfectly, thanks!
@AI-Makerspace 10 days ago
Thanks!
@lucindalinde4198 16 days ago
Thank you for boldly diving into these highly technical topics in plain English! Your videos are a gift to the community
@yangwang8232 18 days ago
Thank you for introducing our VPTQ project. VPTQ is an ongoing project and we welcome suggestions from everyone. We have open-sourced the inference code and algorithm code. :D
@yangwang8232 18 days ago
3. Inference Performance: Thank you both for helping me explain this :) Currently, our inference code is just a very naive inference example based on Torch. We haven't done any optimizations specifically for inference yet. I am currently trying to integrate VPTQ into vLLM. I believe this will greatly enhance the inference speed with vLLM. I am currently discussing this in the vLLM Slack channel and with the sig-quant group. Stay tuned!
@yangwang8232 18 days ago
2. Entropy: I am very interested in using Shannon entropy and information theory to quantitatively analyze the impact of model quantization. In fact, an unresolved academic issue in the quantization field is how to quantitatively describe the effects of quantization on the final loss, or even model accuracy, especially since there are many nonlinear layers involved. The GPTQ/VPTQ series of algorithms, which are based on second-order optimization, actually have very strong assumptions (as analyzed in the paper WoodFisher). These assumptions simplify the impact of quantization on model loss to the impact on proxy error, thus simplifying the algorithm. I believe this is a more fundamental problem. Currently, I am still conceptualizing and do not yet have a clear idea on how to perform quantitative analysis.
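For readers following along: the "proxy error" that GPTQ-style second-order methods optimize is a layer-wise reconstruction objective rather than the true end-to-end loss, roughly

\[
\hat{W} = \arg\min_{\hat{W}} \; \lVert W X - \hat{W} X \rVert_F^2,
\]

where \(W\) is the original weight matrix, \(\hat{W}\) its quantized counterpart, and \(X\) a batch of calibration inputs. The strong assumption is that minimizing this per-layer error is a good stand-in for minimizing the change in the final loss.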
@yangwang8232 18 days ago
I'll try to address the last few questions from the video: 1. The results of VPTQ on smaller models. I have actually tried compressing smaller models like Qwen2.5 1B-3B, and achieved good results around 3-4 bits. I will release some of these results later. Indeed, on smaller models, compressing the model weights is quite challenging. I am currently working on improvements and will continue to provide updates. Thank you.
Hi Greg and Wiz. Great tutorial. I am actually applying it to my own application. I was wondering what you would suggest if the whole document is large, more than 700 pages. It won't fit in the contextual chunking function, and if I take the 500 pages around the chunk, the caching won't work. Can you please advise? Thanks, Saurabh
@AI-Makerspace 20 days ago
I would build some metadata in this case, like a summary/outline, and use that to generate the contextual augments.
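A minimal sketch of that approach, assuming Anthropic's Messages API and a precomputed outline string (the model name and prompt wording are illustrative, not from the video):

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

def contextualize_chunk(outline: str, chunk: str) -> str:
    # Situate a chunk using a document outline/summary instead of the
    # full 700-page document, so the prompt stays within the context window.
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=150,
        messages=[{
            "role": "user",
            "content": (
                f"<document_outline>\n{outline}\n</document_outline>\n"
                f"<chunk>\n{chunk}\n</chunk>\n"
                "Write a short context that situates this chunk within the overall document."
            ),
        }],
    )
    return response.content[0].text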
@PhiliBuster-i7t 22 days ago
I'm super happy about this video; SuperNova was on my radar last month.
@MegaClockworkDoc 25 days ago
I am trying to understand the unique concepts of this paper. It sounds like this is a workflow of agents and programmatic validators that synthetically generates DPO data. Is the system self-learning as well?
@AI-Makerspace 20 days ago
It can be online, yes.
@rakeshkumarrout2629 25 days ago
Hey, this is quite useful. Can you help me understand how large action models work? The recent Claude computer use, or OmniParser and language models, or how the Rabbit works. Can you help with code references or an implementation? Thank you.
@AI-Makerspace 25 days ago
We're planning to cover computer use in an upcoming event soon, probably Nov 13. Stay tuned!
@hoopNscoops 25 days ago
Great information here! Thanks for making it public. I think you're going to get a sizeable community around you because of these live streams. Q: where in the code is prompt caching invoked?
@AI-Makerspace 20 days ago
Caching is offered by Anthropic's endpoint by default - and is being taken advantage of under the hood here.
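If you want to mark the cache breakpoint explicitly rather than rely on the endpoint, here's a sketch using Anthropic's cache_control blocks (long_document is a placeholder for the large shared prefix; older SDK versions may also require the prompt-caching beta header):

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=300,
    system=[{
        "type": "text",
        "text": long_document,  # the big, reused prefix you want cached
        "cache_control": {"type": "ephemeral"},  # cache everything up to here
    }],
    messages=[{"role": "user", "content": "Summarize section 2."}],
)

# response.usage reports cache_creation_input_tokens / cache_read_input_tokens,
# so you can verify cache hits across repeated calls.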
@AI-Makerspace 26 days ago
Calibrated reward: github.com/huggingface/trl/pull/2155/files
Mixture of judges: github.com/huggingface/trl/pull/2159/files
CGPO Trainer (single task, single objective): github.com/huggingface/trl/pull/2190/files
Event Slides: www.canva.com/design/DAGVDvGDG54/kEflcFEuGxDKMTYb6Rj2vA/view?DAGVDvGDG54&
@MarkDavisRocks 26 days ago
At 00:24:15 you give a formula for faithfulness; I think it is slightly flawed. It should be (# claims from the answer which exist in the context) / (# claims in the answer). Otherwise the result could be > 1.
@AI-Makerspace 24 days ago
Can you be more specific about what the flaw is? Also, why do you choose the word "exist" rather than "inferred from"?

Here's what appears to be true from the documentation: "To calculate this a set of claims from the generated answer is first identified. Then each one of these claims are cross checked with given context to determine if it can be inferred from given context or not."

Three steps to the calculation:
1. Break the generated answer into statements
2. For each statement, verify whether it can be inferred from the context
3. Calculate faithfulness!

It seems that the condition "if (and only if) it can be inferred from the context" will keep the faithfulness calculation from going higher than 1.0.
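A minimal sketch of why that keeps the score bounded (can_infer is a hypothetical stand-in for the LLM-judged verdict):

def faithfulness(answer_claims: list[str], context: str, can_infer) -> float:
    # Faithfulness = (# answer claims inferable from context) / (# answer claims).
    # can_infer(claim, context) returns 0 or 1, so the score stays in [0, 1].
    supported = sum(can_infer(claim, context) for claim in answer_claims)
    return supported / len(answer_claims)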
@MarkDavisRocks 17 hours ago
@AI-Makerspace You might be right, but at the point referenced in the video, it talks about the context, not the generated answer. So a context like "Paris is a bustling French capital and center of culture and art" could contain 2-3 claims, but the answer to "What is the capital of France?" may contain one claim, "Paris is the capital of France." The faithfulness would be 3/1 in that case if they were not related to the golden truth answer. I may be missing something! Great video though, thanks!
@MarkDavisRocks 17 hours ago
Ah, I get it now, duh: I was reading the element of the formula "number of claims that can be inferred from the given context" as the number of claims that can be inferred from the context alone. It's really the number of claims in the generated answer which can be inferred from the given context.
@AI-Makerspace 9 hours ago
"It's really the number of claims in the generated answer which can be inferred from the given context." Nice follow-up @Mark! Let's gooo!

We find it helpful to look directly at the prompted examples in the source code here: github.com/explodinggradients/ragas/blob/7d051437a1a5d8e9ad5c42252bf1debf51679140/src/ragas/metrics/_faithfulness.py#L52

You can see how FaithfulnessStatements turn into SentencesSimplified, with an example, and in general via the instruction given in NLIStatementPrompt: "Your task is to judge the faithfulness of a series of statements based on a given context. For each statement you must return verdict as 1 if the statement can be directly inferred based on the context or 0 if the statement can not be directly inferred based on the context."
@niting1978 27 days ago
Great work here Richard and Gil - loved the demo
@richardgower4890 27 days ago
Love this, guys. Great job!
@johnini a month ago
Hello!! Thank you a lot for the videos! What is the best way to interact with a workflow via a sort of chat engine or chat loop?
@AI-Makerspace a month ago
Can you expand on your request?
@johnini a month ago
@AI-Makerspace Thank you for answering! I'm curious about the best practices for building a chat engine or chatbot that can interact in a continuous loop with a workflow. Currently we receive one response at a time from the workflow, but I was wondering if we could enhance this by buffering the "chat memory" and keeping the conversation going. Should this be achieved with a loop? I seem to remember a LlamaIndex or LangChain tool that kept the chat engine running, but I might be mistaken; maybe I was just re-querying. Also, how can I ensure other workflows share the same context? Additionally, is it possible to store interactions as vectorized semantic and episodic memories in a vector database, allowing the system to recall past conversations and, in the future, query from those memories as well as the RAG index, and maybe do some type of reranking?
@johnini a month ago
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from colorama import Fore, Style, init

init(autoreset=True)

def chat():
    print(f"{Fore.CYAN}Loading documents...")
    index = VectorStoreIndex.from_documents(
        SimpleDirectoryReader("./data").load_data()
    )
    chat_engine = index.as_chat_engine()
    print(f"{Fore.GREEN}Ready! Type 'quit' to exit")
    while True:
        query = input(f"{Fore.GREEN}You: {Style.RESET_ALL}").strip()
        if query.lower() == 'quit':
            break
        if query:
            print(f"{Fore.BLUE}Assistant: {Style.RESET_ALL}{chat_engine.chat(query)}")

if __name__ == "__main__":
    try:
        chat()
    except Exception as e:
        print(f"{Fore.RED}Error: {e}")
@MegaClockworkDoc a month ago
Great video, but using text that the model was already trained on is a bad test case
@AI-Makerspace a month ago
Agreed! We typically stick with easy-to-consume toy examples, however!
@solyarisoftware a month ago
Hi! I really appreciated your video. BTW, I wrote an article titled "SWARMing Conversational AI: Integrating No-Code and Code in Agent-Based Workflows," which you can find online. I would love to hear your feedback on my perspective (the SWARM emphasis on blending no-code instructions with hardcoded conversational steps). Thanks! Giorgio
Sure thing, just pinned the slides and notebook in a comment!
@enespacalar a month ago
Congratulations dude
@danielusvyat a month ago
Great video! I'm excited to dive into contextual retrieval next week. When it comes to productionizing hybrid retrieval with BM25, I'm considering using Elasticsearch; any other recommendations? My main concern with hybrid retrieval is the added complexity it brings to production.
@AI-Makerspace a month ago
Elasticsearch is a great tool for this!
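For reference, a sketch of what a hybrid (BM25 + dense) query can look like in Elasticsearch 8.x with reciprocal rank fusion; the index and field names are hypothetical, query_embedding is assumed precomputed, and RRF availability depends on your ES version and license:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="docs",
    query={"match": {"text": "contextual retrieval"}},  # BM25 leg
    knn={
        "field": "embedding",
        "query_vector": query_embedding,  # dense leg
        "k": 10,
        "num_candidates": 100,
    },
    rank={"rrf": {}},  # fuse the two ranked lists
    size=10,
)
hits = [hit["_source"]["text"] for hit in resp["hits"]["hits"]]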
@cmagganas a month ago
🚨MERT ALERT
@seanbergman8927 a month ago
Great video and demo as always! I learn a lot from your content. The contextual retrieval paper said that if your corpus is less than 200k tokens, just skip RAG and dump the entire corpus into the prompt for every question; they will cache it (but only for a short time) and you just use long-context Q&A. I didn't see them publish any metrics comparing long context to RAG, so I take it with a grain of salt. They do want customers to spend as many tokens as possible... But I'm very intrigued at the same time. Maybe you could do a video comparing the two methods? That would be amazing research.
@AI-Makerspace a month ago
Great insights and instincts @Sean! We'll keep the content recommendation in mind for sure! This is the farthest we've gotten on long context and evaluation for the big-window LLMs: kzbin.infoBrwhbjh3boU?si=V24z6pagQ0EQ8Ms1
@micbab-vg2mu a month ago
thanks :)
@AI-Makerspace a month ago
Re: "Would the results be even better when combined with semantic chunking?" Answer: research.trychroma.com/evaluating-chunking
@AmanBansil a month ago
RAG-ception 0:55 - context of the contextually generated chunks. Got it... got it... got it... ok wait, what? Need to watch the whole thing.
@AI-Makerspace a month ago
Re: "Would the results be even better when combined with semantic chunking?" For more on semantic chunking strategies: research.trychroma.com/evaluating-chunking
The Ragas part of the code in the notebook is not working. Could you fix it?
@givanildogramacho441 a month ago
Very interesting
@givanildogramacho441 a month ago
Where is the complete video? I'd like to understand this loss function and the Hessian matrix.
@AI-Makerspace a month ago
Hey Givanildo! The full event is here: kzbin.infoxmaG4al2A6E?si=bdHM0wzlll5XkXWJ
To learn more about loss functions, check out this one: kzbin.infoiB8FWR9aD5Q?si=4oABKIf-DDNQQv1R
Thanks for the great video. Subscribed! Question: we saw here that "similar pairs" were trained, where a pair is a (question, context). Is it possible to get good results by fine-tuning on a "similar questions" dataset, i.e., (question1, question2), where the difference between the two questions is usually one word/phrase? So question1 would have the full form of an entity, and question2 the acronym of the same entity. The reason I'm doing this is that I'm storing a mix of questions and contexts in my vector database. If the user's query matches a question, then I look up the corresponding answer (a static answer that almost never changes, so no LLM required). If the match is a context instead, then LLM generation takes over.
@AI-Makerspace a month ago
Yes, that is a decent way to approach that problem.
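A minimal sketch of that kind of fine-tune with sentence-transformers and in-batch negatives (the model choice and example pairs are illustrative):

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-MiniLM-L6-v2")

# (full-form question, acronym question) positive pairs
pairs = [
    ("What is the Net Promoter Score policy?", "What is the NPS policy?"),
    ("How do I reset multi-factor authentication?", "How do I reset MFA?"),
]
train_examples = [InputExample(texts=[q1, q2]) for q1, q2 in pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# In-batch negatives pull each pair together and push other questions apart
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)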
@nazmusas a month ago
How do I get a scholarship?
@AI-Makerspace a month ago
We don't currently have scholarships available @nazmusas! We are working to get our business model right and to grow our partnerships in the US so we can best serve our community members around the world over the long term. In short, stay tuned!
@yerson557 a month ago
Where does ground truth come from? Is this a human-annotated property? I understand the ground truth in RAGAS refers to the correct answer to the question; it's typically used for the context_recall metric. But how do we get this? Human in the loop? LLM-generated? More documents from the retrieval? Thank you!
@AI-Makerspace a month ago
"Ground truth" can come from any of these sources! Of course, getting it straight from the people who perform whatever tasks you're automating is the right idea, but this can be very expensive.

In the case of RAGAS, the "ground truth" is represented by the output you get when you provide [question, retrieved context] pairs as input to a generator. That is, we are not actually using a RAG system, but passing "correct" [question, context] pairs as input. These are "correct" because they were synthetically generated and are known to be correct; see Synthetic Test Data Generation: docs.ragas.io/en/stable/concepts/testset_generation.html

Note that "ground truth" is different from "answer": "answer" actually uses the RAG application that you're building, while "ground truth" passes [question, context] pairs in directly.