I remember when creating apps was the biggest trend. Now it seems the future lies in developing agents and LLMs and in fine-tuning models. Is this the next big evolution?
@deeplearningpartnership · 9 days ago
Yes.
@MohanRadhakrishnan · 12 days ago
Distributed ML framework development is interesting. How should I pursue it?
@m_ke · 14 days ago
What a great combo, you should do a weekly show together
@runfarbenice · 1 month ago
I like the part with real world applications, a deeper dive on that would be valuable
@dizungu8547 · 1 month ago
Great podcast. Learned a lot.
@JasonRaneses · 1 month ago
Ravin and I worked together at sg. One of the brightest minds out there, nice to see him on the pod.
@ravink · 1 month ago
Hugo, always a pleasure to talk with you. And thank you, folks in the audience, for all the wonderful questions.
@BobKane-g6x · 4 months ago
Isn't this the same as function calling in LLMs?
@jit-r5b · 4 months ago
thank you!
@MatijaGrcic · 4 months ago
The course was amazing, Hamel and Dan were perfect hosts, making the experience both insightful and enjoyable.
@vigneshpadmanabhan4088 · 5 months ago
One of the best interviews!
@thomaswheeler4045 · 5 months ago
what interview / when is this from?
@vanishinggradients · 5 months ago
kzbin.info/www/bejne/g4Wteamdf9mspLc
@jean-francoisbabin8951 · 6 months ago
I can't hear it.
@vanishinggradients · 6 months ago
Sorry, I'm a bit confused: what's the issue?
@randomguy75 · 6 months ago
What is the Hamel course? 1:35:46
@vanishinggradients · 6 months ago
It was this course! maven.com/s/course/e457fa822c
@plattenschieber · 6 months ago
This is just a gem of a conversation. Feels like sitting in on a nerdy sofa discussion round during my math master's ❤
@AAL3087 · 6 months ago
Is there a link to the report? Cheers
@vanishinggradients · 6 months ago
yep! applied-llms.org/
@mottot · 6 months ago
I'm blessed to regularly do AI with the missus.
@chineduezeofor2481 · 6 months ago
I really enjoyed this episode. Sébastian is a great person.
@ChatQuestTheNLP3AdventureGame · 6 months ago
0:04:55 Prompt Engineering Fulltime
@vinayjose7757 · 7 months ago
TIL: A̷t̷t̷e̷n̷t̷i̷o̷n̷ P̶y̶d̶a̶n̶t̶i̶c̶ Rizz is All you Need
@nosferatuboi · 7 months ago
Having come to CS from the humanities, I've found Jason unique in his ability to make broad associations between seemingly unrelated concepts, something I've found difficult for many engineers. He's got a knack for language, and that goes really far in working with these new systems in my experience. Love to see it.
@SOMEWORKS456 · 7 months ago
TL;DW

00:00 Introduction: Hugo Bowne-Anderson, host of Vanishing Gradients, introduces Jason Liu, an independent consultant specializing in RAG applications. The episode aims to uncover how understanding how to build terrible AI systems can lead to building better ones. Hugo promotes his and Jason's upcoming course on LLMs and his upcoming live stream on in-context learning.

04:58 Leveraging Language Models: Jason shares his journey from skepticism to embracing language models. He describes how ChatGPT's conversational nature convinced him of the reasoning capabilities of LLMs, moving them beyond mere "toys".

06:25 Delivering Value with LLMs: Jason believes chat interfaces aren't scalable for delivering value. He advocates for document generation, emphasizing how reports that guide decision-making are more impactful than simple question-answering. He showcases real-world applications in fields like investment and due diligence, where the value of generated reports can be measured in millions of dollars.

15:53 Similarities Between RecSys and RAG: Jason highlights the parallels between his experience with recommendation systems and the current landscape of RAG. He sees RAG as a natural extension of his expertise, as it involves similar infrastructure and metrics. He points out that the focus remains on precision and recall, regardless of the specific LLM technology.

17:43 Consulting Playbook for RAG Systems: Jason details his playbook for building RAG systems in organizations. This involves understanding user queries through topic clustering and capability analysis. He emphasizes the need to identify areas where the system is strong and areas needing improvement, using metrics like relevance and user satisfaction.

42:42 Advice for Conducting Experiments: Jason encourages starting simple and learning by suffering the consequences of design decisions. He recommends focusing on specific datasets that pose challenges, rather than easily processed content. He advocates for continuous iteration and micro-experiments, emphasizing that the velocity of learning outweighs the need for perfectly engineered code.

46:16 The Future of AI Engineering Roles: Jason sees AI engineering as a natural evolution of data science and machine learning roles. He emphasizes the importance of quantitative skills, but believes the most valuable skill is "data sense": the ability to critically assess a system and identify potential issues.

53:57 The Role of Charisma: Jason attributes his charisma to a conscious decision to embrace non-technical skills, particularly public speaking. He emphasizes the importance of consistent practice and believes confidence emerges from doing the work and delivering value.

58:05 Lessons from a Hand Injury: Jason shares how his hand injury challenged his sense of self-worth, which was tied to his ability to work hard. He describes his journey to embrace the concept of "being enough," a turning point that shifted his focus to maximizing leverage rather than simply coding more.

1:04:07 Maintaining Simplicity in AI Systems: Jason suggests prioritizing code that is easily deleted, recognizing that rapid technological advancements necessitate adaptability. He emphasizes the Unix-like philosophy of building modular systems with clear interfaces. He stresses the importance of comprehensive documentation, arguing that a well-documented system is more valuable than one with extensive but unclear code.

1:25:46 Synthetic Data Generation: Jason draws parallels between computer vision and the use of synthetic data in language models. He argues that generating high-quality synthetic data can significantly compensate for limited real data, especially considering the high parameter counts in LLMs.

1:30:45 Reflections on Work and Self-Worth: Jason highlights the concept of "auto-exploitation" in the entrepreneurial world, where the pressure to achieve success is often internalized. He emphasizes the importance of embracing choices and actively deciding what is truly valuable, rather than constantly striving for more. He encourages others to cultivate a sense of "being enough" to avoid self-imposed pressure and to prioritize choices aligned with their values.

1:37:43 Maintaining Mathematical Skills: Jason and Hugo discuss the importance of maintaining math skills for AI practitioners. They argue that a focus on data problems will naturally lead to the development and application of necessary math concepts. Jason highlights the importance of conceptual understanding and problem-solving skills over memorizing complex formulas. He believes practicing these skills in everyday work is sufficient.
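The precision-and-recall framing mentioned in the RecSys/RAG segment can be made concrete. Here is a minimal sketch (not from the episode; the document IDs are made-up illustrations) of scoring a retriever against a labeled set of relevant documents:

```python
def precision_recall(retrieved: list[str], relevant: set[str]) -> tuple[float, float]:
    # Precision: fraction of retrieved docs that are actually relevant.
    # Recall: fraction of relevant docs that the retriever found.
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the retriever returns 4 docs,
# and 2 of the 3 truly relevant docs are among them.
p, r = precision_recall(["d1", "d2", "d3", "d4"], {"d2", "d4", "d7"})
print(round(p, 2), round(r, 2))  # 0.5 0.67
```

The same two numbers apply whether the retriever is BM25, embeddings, or anything else, which is the point Jason makes about the metrics outliving the specific LLM technology.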
@harrivayrynen · 7 months ago
Nice, found this video from the great LLM fine-tuning course
@chethanuk · 7 months ago
Which course? Link please
@vanishinggradients · 7 months ago
maven.com/s/course/e457fa822c
@ChatQuestTheNLP3AdventureGame · 7 months ago
0:14:00 Structured Thinking
@LikeALeafOnTheWind · 9 months ago
this was an excellent talk. thanks to all the participants.
@danielbentes · 9 months ago
Next time Hugo: don't interrupt so much!
@Amapramaadhy · 9 months ago
Are they using an LLM for their unit tests? With LLM output, how do you unit test, since you are not completely in charge of the unit?
@abiraja5160 · 8 months ago
It's bad naming. They are assertion tests against the LLM output, not unit tests.
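To illustrate the distinction, here is a minimal sketch of an assertion test against LLM output: you don't control the exact text, so you assert on its structure and properties instead. (`call_llm` and its JSON response are hypothetical stand-ins, not a real API.)

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call;
    # returns a canned response here so the sketch is runnable.
    return '{"summary": "RAG pairs retrieval with generation.", "sources": ["doc-1"]}'

def assert_valid_answer(raw: str) -> None:
    data = json.loads(raw)  # the output must at least be valid JSON
    assert "summary" in data, "missing summary field"
    # Property checks, not exact-match checks: cite at least one source,
    # and keep the summary reasonably short.
    assert isinstance(data.get("sources"), list) and data["sources"], "must cite a source"
    assert len(data["summary"]) <= 500, "summary should stay concise"

assert_valid_answer(call_llm("Summarize RAG and cite sources as JSON."))
```

Unlike a classic unit test, this never pins the exact output string; it only rejects responses that violate the contract.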
@shaheerfardan2705 · 10 months ago
"Unix-like philosophy" -- I like it
@vanishinggradients · 10 months ago
so glad it resonated!
@devon9374 · 1 year ago
Great video
@JeffMorton-k3l · 1 year ago
Thanks to all of you for sharing your wisdom, experience and insights. Much appreciated from someone currently attempting to make decisions regarding the future direction of an AI app.
@TwoSetAI · 1 year ago
ULMFiT not getting enough credit publicly! Definitely the forefather of modern NLP.
@fintech1378 · 1 year ago
Actually, quite a lot has changed since LLMs, although the principles are the same. Jeremy sounds like many very "senior" AI practitioners: quite pessimistic, maybe because they feel (perhaps unconsciously) left behind? It has been just one year and we can already see unprecedented change.
@thefamousdjx · 1 year ago
How is he pessimistic? It's quite clear OpenAI can't figure out how to improve significantly past GPT-4. They've been doing multiple iterations this whole year that get dumber, and adding more data by moving the cutoff dates to 2023 hasn't helped either. There needs to be another breakthrough that's not about the amount of data or who has the biggest processing power.
@mlock1000 · 1 year ago
I lived through the AI winter of the 80s and 90s, and it was so bad that to this day just typing "AI" literally makes me cringe. The old school tried to write algorithms to create AI, and tried for decades and decades with minimal success, barely inching forward. The new school builds neural networks that write the algorithms themselves - that is the game changer - and it's not an easy thing for many in the field to accept (understand?). Once people recognized that deep learning was for real (AlexNet?), the stigma started to go away, the people and money have been turning up, and the progress is staggering.
@kevinbird75 · 1 year ago
Always love to listen to Hamel share his experiences. One of the best there is when it comes to practical advice!
@vanishinggradients · 1 year ago
So glad you feel that way, Kevin!
@MyMpc1 · 1 year ago
Thanks so much for these vids. I subscribe to the podcast, but I don't know why, I always find videos more engaging. Really loving your content; I found you via DataCamp!
@JamesLongOnGoogle · 2 years ago
The best part of this is living through the cringe of realizing how many times I misspoke. It keeps me humble. <bangs head on desk>