Shame that the hook with these videos is to start open source and then draw people into handling the supporting functionality through a commercial platform (for $$$)
@apdurden • 1 day ago
Yeah, this is cool but not helping. I can open the LiveKit interface but can't find a way to get the agent to connect. API keys are all correct.
@apdurden • 17 hours ago
I think the track management piece has changed since you made this. Running into a "no local_participant attribute" error on the Room object.
@ChefDomein • 2 days ago
Sir, your AI voice assistant demos are among the most valuable and appreciated YouTube videos I have come across. Please keep them coming; it would also be great to see a demo with Groq to solve the latency issue. You are doing great work, man, and your students really appreciate it! Thanks a lot, brother!
@DevrajKhairwar • 3 days ago
Excellent video, very helpful. THANK YOU!
@swibeijason • 4 days ago
Man, this is a very instructive video! Thank you for filming it!
@Van-Helssen • 5 days ago
Maestro! Very well explained, Santiago. A 10+ from me 😊
@Aldotronix • 5 days ago
In summary: add a missing indicator.
@RedCloudServices • 6 days ago
DeepGram does not look available to use; their website is simply a contact form.
@s1nistr433 • 7 days ago
My only gripe with Mojo is that I wish it was marketed less towards AI and more towards general usage. It literally fixes the two major issues with Python, namely the lack of static typing and poor performance. If they marketed it as general purpose, it could be a drop-in replacement for Python.
@HowardKeziah • 7 days ago
I love how you explain your code!
@concept-theory • 8 days ago
Will this help with automatic1111?
@PratheekBabu • 8 days ago
Thanks for the amazing content. Can we use Giskard without an OpenAI key?
@mehmetbakideniz • 8 days ago
Great video as always. Does this system keep chat history?
@pratheekbabu272 • 8 days ago
Can you do one using Gemini Pro?
@Singasongwithme2004 • 12 days ago
Your teaching style is so unique and so good 👍
@marncore7048 • 13 days ago
This is a very interesting benchmark. Another example is something like: given one datapoint, e.g. an image of a creature you haven't seen before, being able to classify 100 images by whether that creature is visible in them or not. You, me, or even a young child can easily solve a problem like this. Current neural nets require enormous amounts of data and still make mistakes a human would never make. All this "AI is going to be smarter than us" hype will only be true once we discover algorithmic ideas that get machines to learn from very little data. We absolutely need new ideas.
@AmitMarx-ei8tt • 13 days ago
Got stuck with the API keys; I'm not sure how to set them.
@allanmogley • 14 days ago
How do you make those models that interact with data? I once saw someone create something really amazing that interprets data from a database and makes interpretations and reports from the data without hallucinating (it only fetches from the underlying DB).
@dheerajmadaan866 • 14 days ago
This was really cool stuff. Thanks for sharing such quality content. I ran it in VS Code and it worked. The main problem is the latency: it took about 10 seconds per conversation turn. Not sure if that's because of the free account or an issue with their WebSocket API.
@muhammadhilal5807 • 14 days ago
Hey there, the last book you mentioned, "Generative AI with LangChain", really clarified what I want. Thank you.
@sehrishilyas8416 • 16 days ago
Loved watching it... You explain everything so nicely!!! ❤
@GEORGE.M.M • 16 days ago
Hi there! I'm new to AI and RAG systems and have spent the past few days diving into your tutorial to understand each step and debug along the way. I have to say, THIS IS A GREAT TUTORIAL! I do have a question about alternative local LLM approaches. Regarding the accuracy of asking questions and retrieving relevant information from PDF documents like research papers and books using local models, would you recommend this approach over using tools like Ollama UI and LM Studio? Are there specific advantages or disadvantages to consider when choosing between these methods?
@miguellameiro2498 • 17 days ago
Not sure if this is representative of AI in general. DeepMind and other AIs have solved games (e.g. chess, Go) using similar examples and training. Thoughts?
@underfitted • 16 days ago
This is way harder for AI… so far, nobody has cracked it
@girirajalone7825 • 17 days ago
Best video on YouTube regarding RAG. Must watch.
@psychicseahorse9222 • 17 days ago
"AI can't solve it yet!" I don't agree that today's LLMs are not finding solutions for these problems. They are finding solutions, just wrong ones. It's a bit funny at first glance, but in reality what you have shown is a 35% success rate. It's not like it's at 0%. That 35 will grow, and in time it's going to be 85. They are faking reasoning, better and better each day.
@underfitted • 17 days ago
39% was not using an LLM. An LLM is not capable of getting to 5%.
@ddre54 • 17 days ago
Great content! Extremely helpful as a hands-on introduction to LLMs and RAG. Thank you!
@Leo-ph7ow • 17 days ago
At last, some sanity on the web!
@liferacerad • 18 days ago
Hi, sir. I'm in college studying for a BCA (Bachelor of Computer Applications). What would you recommend I learn for a long-term goal? I'm confused; please share your thoughts 💭 Thank you, love your videos ❤
@martinprince8253 • 18 days ago
What is love?
@HosseinOutward • 18 days ago
Baby, don't hurt me
@Leo-ph7ow • 17 days ago
No more
@charith493-4 • 18 days ago
Thank you for sharing these with us ❤
@sirojiddinnuriyev2839 • 18 days ago
There is a lot of content about how to fine-tune LLMs with LoRA or QLoRA. You gave us the same food, just with an "apple genius" keyword.
@underfitted • 17 days ago
I’m glad you knew everything I said already! Good for you.
@vinlin-lk5no • 18 days ago
Can the video-call part of your video be run here?
@kloklojul • 19 days ago
You are using an LLM to create a question, an LLM to get another answer, and then letting an LLM evaluate both answers. But how do you evaluate the output of the initial tests? At this point you are trusting the facts of an LLM by trusting the answers of the LLM.
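The evaluation loop this comment describes can be sketched as below. All three model calls are stubbed out with placeholder functions (the names are hypothetical, not from the video's code); in a real pipeline each stub would be an actual LLM or RAG call. The spot-check at the end is one common mitigation for the circularity the comment points out:

```python
import random

def llm_generate_question(chunk: str) -> str:
    # Stub for the question-generation LLM call
    return f"What does this passage say? [{chunk}]"

def rag_answer(question: str) -> str:
    # Stub for the RAG pipeline under test
    return f"Answer based on: {question}"

def llm_judge(question: str, answer: str) -> bool:
    # Stub for the judge LLM that scores the answer
    return question in answer

chunks = ["chunk A", "chunk B", "chunk C"]
verdicts = []
for chunk in chunks:
    q = llm_generate_question(chunk)
    a = rag_answer(q)
    verdicts.append((q, a, llm_judge(q, a)))

# Mitigation for "an LLM grading an LLM": have a human review
# a random sample of the judge's verdicts to estimate how often
# the judge itself is wrong.
human_review_sample = random.sample(verdicts, k=2)
```

The sample size and judge logic here are arbitrary; the point is only the shape of the loop and the human spot-check.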
@davoodzare8952 • 19 days ago
Thanks, brother!!! 😄😄😄
@limotto9452 • 19 days ago
You sound like a hero of the whole Machine Learning realm. Thank you very much for the video, sir.
@DarkXappHiRe • 19 days ago
The moment I saw this video, I knew I had to subscribe ASAP. I tried it and it's up and running, though I had some errors because I am not a Python programmer. But I want to use it to build an AI that can be installed on a vessel (ship). Thanks for sharing ❤
@charith493-4 • 19 days ago
Thanks a lot for this awesome content❤ It would be super helpful if you could make a video for people starting their IT careers in 2024. Maybe cover what areas they should focus on. Thanks again!
@underfitted • 19 days ago
I recorded a video on how to start: a roadmap. Check my past content.
@alaeldinabdulmajid6576 • 19 days ago
Oriented - great job👍
@arielponce8586 • 20 days ago
All open-source AI models are bad. They hallucinate a lot and give wrong answers. Don't use them. Don't waste your time. Ask your open-source model what the longest beach in the world is and see if it answers correctly. If it gets it wrong, ask again. The answer is Cassino Beach.
@underfitted • 19 days ago
Well, it's more nuanced than that. Many open-source models work perfectly fine on many tasks. For example, Llama 3 is great, and cheaper than proprietary models, for summarization tasks. The differences are usually negligible. Also, fine-tuning an open-source model will give you 100x better results than a proprietary model.
@huangphoenix • 20 days ago
Great video, keep going. Just wondering: can you add a barge-in function?
@chineduezeofor2481 • 21 days ago
I think Python also supports it, e.g.:
grade: str
if age < 11: grade = "Elementary"
...
The rest of the code block remains the same.
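For reference, the pattern this comment describes does work in plain Python: annotate the variable's type first, then assign it inside a conditional. A minimal sketch (the variable names and the age cutoff come from the comment; the `else` branch and its value are assumed for illustration):

```python
age = 9

grade: str                 # type annotation only; no value assigned yet
if age < 11:
    grade = "Elementary"
else:
    grade = "Secondary"    # assumed example value, not from the comment

print(grade)  # -> Elementary
```

Note that the bare annotation does not create the variable at runtime; it only records the intended type for tools like mypy, so one of the branches must still assign before `grade` is used.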
@isaiases • 21 days ago
I've become more and more disappointed with Mojo. The resulting language is nothing like Python, which is great for its simplicity and readability. And the overall performance is not really better than Rust or other languages...
@perc-ai • 20 days ago
Not true. Mojo will be the fastest language among all of them because it's optimized for data centers, while Rust and other languages aren't.
@the_mastermage • 20 days ago
I do have to concur. I hope it becomes great, but I honestly hoped for Mojo to go more Pythonic instead of adding separate function and class annotations. I would have preferred if Mojo had just added gradual typing (similar to how it works in Julia) and then worked from there.
@bobweiram6321 • 18 days ago
@the_mastermage Strong typing is your friend. You'll appreciate it if you ever work on large code bases.
@bobweiram6321 • 18 days ago
It is a lot more readable and safer than Python because it uses less type inference. It's already faster than Rust and in some cases faster than C++.
@perc-ai • 18 days ago
@bobweiram6321 That is absolutely correct.
@peterhjvaneijk1670 • 21 days ago
Love the video. Great breakdown. Would like to see more detail on the evaluation results (e.g. it is now 0.73; is that good? WTH...!?), how tweaking the pipeline changes the eval results, and e.g. Ragas versus Giskard.
@AmbrishYadav • 22 days ago
Thanks! Exactly what I was looking for. I've been cracking my head over how the hell to test a RAG system. How the hell is the business going to give me 1000+ questions to test, and how can a human verify the responses? Top content.
@juanmanuelzwiener4447 • 22 days ago
Santiago, are the assistant's voices only in English, or also in Spanish? A hug, champ!
@underfitted • 22 days ago
They speak Spanish too
@mabadolat • 22 days ago
This is great stuff, Santiago! I wish you had posted this video a few weeks ago. We just completed our final class project, where we trained five different BertClassifier models on five different tasks. Our fine-tuning and inference code structure is very similar to yours. We definitely could have used this approach and kept just the specialized adapters instead of the full BERT models. However, I have one question: I'm not clear whether the full model will ever be used during this process after we get the fine-tuned adapters, or just the fine-tuned weight matrix for evaluation and inference?
@underfitted • 22 days ago
You need to use both: the general model + the fine-tuned adapter. The adapter describes how the general model should change on the fly.
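The arithmetic behind that reply can be sketched with toy NumPy matrices (shapes, rank, and the scaling factor are illustrative, not from any real model): a LoRA adapter stores only two small low-rank factors, and at inference their product is added to the frozen base weight, which is why the base model is still required.

```python
import numpy as np

d, r = 6, 2                       # toy hidden size and adapter rank
rng = np.random.default_rng(0)

W_base = rng.standard_normal((d, d))  # frozen base-model weight (never fine-tuned)
A = rng.standard_normal((r, d))       # adapter factor A: trained during fine-tuning
B = rng.standard_normal((d, r))       # adapter factor B: trained during fine-tuning
alpha = 1.0                           # illustrative scaling factor

# The effective weight applied "on the fly" at inference:
W_effective = W_base + alpha * (B @ A)

x = rng.standard_normal(d)
y = W_effective @ x               # same result as W_base @ x + alpha * B @ (A @ x)
```

The adapter alone (B and A, d*r values each) is useless without W_base; that is why both the general model and the adapter must be loaded for evaluation and inference.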
@gilbertoparra5255 • 22 days ago
Thank you for the content, very helpful. I have one question: how do I know it is running locally? I mean, for every model we used the LangChain library as if it were accessed through the API.
@underfitted • 21 days ago
They are running locally because I'm using Ollama to host the models on my computer. We use LangChain exactly as we would use it to access online models. That's good: it means we can switch models without changing the code.
@bibintony876 • 22 days ago
For some weird reason, the bot won't run on Windows. I had to run it using WSL2. Hope this helps anyone taking this code out for a spin :)