Gotta admit I'm feeling kinda teary reading all the lovely comments here. Thank you everybody -- love you all!
@ksrajavel 1 year ago
You just deserve it. 😃
@ReligionAndMaterialismDebunked 1 year ago
Second in the replies. :3
@klingefjord 1 year ago
You are beyond awesome, Jeremy
@Soulseeologia 1 year ago
Thanks for your work, and please help us keep an eye on the apostles of the emerging noosphere, like Ben Goertzel etc.
@ellevillarreal9074 1 year ago
Jeremy, thank you! This has helped so much. I've been a fastai builder since the early days in 2017. You're my hero. I appreciate all of the work you've done in the field.
@Jporomaa 1 year ago
This is probably the best-invested YouTube time of the year so far. What a gem. A lot of the things he mentions took me months to figure out on my own. My new GPT-4 prompts will begin with "You are the expert Jeremy Howard..."
@ProfRvS 1 year ago
This!
@shaggyfeng9110 1 year ago
lol, nice
@peterdonnelly1074 1 year ago
100%
@lucaleung5892 1 year ago
So great.
@JarkkoHautakorpi 1 year ago
Absolutely!
@buildar8434 1 year ago
Just realised Jeremy's paper led to the LLM revolution. Such a humble, kind man. God bless you and all your students. You are such an example to follow: an example in character, humility and intelligence.
@circleAI 1 year ago
How? Which paper?
@pw7225 1 year ago
@@circleAI ULMFiT
@fasteddylove-muffin6415 1 year ago
@@circleAI Part of the answer is in the video's description.
@peaku8129 11 months ago
Yes, exactly what I was thinking. There are so many people bragging about what they're doing, and look at this guy, just helping others out.
@flavioferlin3127 2 months ago
Kudos and thanks a bunch. Bless your heart, Sir!!!
@DHAiRYA2801 1 year ago
The moment I learned that you and Andrej weren't included in Time's list, I realized that the people making such lists have no idea what they are doing. Loved the tutorial, thank you!
@wolpumba4099 1 year ago
*Transcript Summary:*
- Introduction & Basic Ideas of Language Models (00:00:00 - 00:18:05)
- Limitations & Improvements of GPT-4 (00:18:05 - 00:31:28)
- AI Applications in Code Writing, Data Analysis & OCR (00:31:28 - 00:38:50)
- Practical Tips on Using OpenAI API (00:38:50 - 00:46:36)
- Creating a Code Interpreter with Function Calling (00:46:36 - 00:51:57)
- Using Local Language Models & GPU Options (00:51:57 - 00:59:33)
- Fine-Tuning Models & Decoding Tokens (00:59:33 - 01:05:37)
- Testing & Optimizing Models with GPTQ & Hugging Face (01:05:37 - 01:09:48)
- Fine-Tuning with Llama 2 & Platypus Datasets (01:09:48 - 01:10:32)
- Retrieval Augmented Generation & Information Retrieval (01:10:32 - 01:20:08)
- Running a Private GPT & Fine-Tuning Models (01:20:08 - 01:22:32)
- Running Models on Macs (01:26:00 - 01:27:42)
- Discussing Llama.cpp & Its Cross-Platform Abilities (01:27:42 - 01:30:07)
- Challenges & Opportunities in Language Models (01:30:07 - 01:31:05)
Key points of interest: Function usage in GPT-4 (00:46:36), OCR application with Google Bard (00:33:59), and improving GPT-4 responses with custom instructions (00:24:36).
@qwertclyde1092 1 year ago
Dope, did you do this by hand?
@4everandrew 1 year ago
😅😅😅😅
@-rate6326 1 year ago
I was expecting "By Tammy AI"
@uminhtetoo 9 months ago
Thanks
@JustSayin24 1 year ago
I found this video so useful that I felt compelled to pull my keyboard closer toward me, fix my posture, and write this comment - something I rarely do. I'm a professional data scientist hoping to push my company's GenAI agenda and this video makes me feel like I can actually do it! Thank you for so clearly encapsulating the state of LLMs. I'd learned many of these concepts before and this video is the glue that now holds it together.
@dr_rd 9 months ago
Truly enlightening! As a software engineer with limited math and data science knowledge, this video has been a revelation. The way Prof. Howard simplifies complex concepts is incredible, making each rewatch rewarding with new insights. Really grateful for his content that opens up the world of LLMs to a broader audience. His clear and thorough explanations are invaluable. Thanks, Prof. Howard, for demystifying this topic and helping us all learn.
@howardjeremyp 9 months ago
Wow, thank you!
@raindrops20008 11 months ago
This video landed on my feed, and out of curiosity I started watching; before I knew it I had watched the entire video and taken copious notes too. One of the best videos I have ever watched!
@KEcAzZ1 1 year ago
A true legend! So far, I have not seen a better educator than Jeremy. His approach to teaching is what all schools and universities need! I am always eager to learn more whenever I hear Jeremy. Thank you!
@howardjeremyp 11 months ago
Wow, thank you!
@MensRea78 1 year ago
No one can explain a topic like Jeremy👍
@tblaughlin 14 days ago
Really great overview - as a tech leader trying to keep up with the rapidly changing AI landscape, this helped give me a solid foundational understanding of LLMs and how they work. The examples provided by Jeremy gave me tons of ideas of ways we could use LLMs at my company.
@ytpah9823 1 year ago
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to Language Models
10:27 🧠 Neural Network Basics
16:38 🚀 The Power of GPT-4
24:53 🌐 Limitations of Language Models
25:23 💡 Language model limitations
31:32 📊 Advanced Data Analysis
36:18 💰 OpenAI API Pricing
39:19 🧩 Using OpenAI Functions
46:40 🐍 Custom Code Interpreter
51:13 🐍 Creating a Python code interpreter
53:39 💻 Running a language model on your own computer
55:01 🏎️ Choosing a GPU for language model work
56:15 🖥️ Options for renting GPU resources
57:57 💾 GPU memory size and optimization
59:20 📚 Using Transformers from Hugging Face
01:00:06 🏆 Evaluating and selecting the right model
01:14:12 📖 Retrieval augmented generation for answering questions
01:17:10 📚 Overview of using language models for document retrieval and question answering
01:20:35 💼 Private GPT models for document retrieval
01:21:03 🎯 Fine-tuning language models for specific tasks
01:25:15 📊 Building a language model for SQL generation
01:26:36 💻 Running language models on Macs
@plebmarv9668 1 year ago
gpt plugin? :D dauymn son
@erkinalp 8 months ago
@@plebmarv9668 it's tammy ai, a youtube video talking points extractor
@Blocky007 1 year ago
The best "intro" and guide I have seen on this. I appreciate so much that you took the time to put this together and share it with us (FOR FREE!).
@rfproject8726 22 days ago
Superb video! It's exactly what I needed to fill in the gaps in my knowledge to get to the next level. I love how you just put all the info out there in such a clean format: detailed, and yet very easy to consume and understand. Plus, you weren't boring while you delivered it. This is how it's done, tech people; take note.
@bartekbinda6978 1 year ago
This is a golden summary of the state of LLMs. Thank you!
@claussa 1 year ago
Came up in my feed. The thumbnail and title looked boring. I pressed play by mistake, but it was so interesting. I feel so enlightened after having been talked through this. Thanks for sharing!
@aldotanca9430 1 year ago
So comprehensive. Perhaps the best introduction I have ever seen to the topic. Thanks so much.
@vittoriomta 1 year ago
We do not deserve you Jeremy! YOU ARE AN AMAZING TEACHER AND HUMAN BEING! Thanks, really, for all these beautiful lectures!!
@85abhiabhi 1 year ago
I am a total beginner, but you made me understand language models way better than anyone else. You are such a great teacher. I pray the Lord Guru blesses you with more insight and vision, such a humble and good soul. 😊😊
@stuartpatterson1617 1 year ago
100% agree! Blessings to you too.
@drkarmakid 1 year ago
Bravo. One of the best YouTube videos I've ever watched. Concise, entertaining, and chock-full of useful insights.
@wndiros 1 year ago
I have waited months for a classification and evaluation from Jeremy. For me, this is by far the most comprehensive technical summary and evaluation available for someone who wants to delve deeper. It took me several weeks, if not months, to gain even a partial personal understanding of the current hype. Thank you, Jeremy, for all your good work!👍
@ElanElanElan 1 year ago
Damn, I watched the whole video and didn't even realise it was over an hour long! Thanks a lot for the great content!
@PeterKallio 1 year ago
So many papers are being released that it is important to have well-grounded information to understand LLMs. Great delivery as always, and practical advice. Thank you.
@bephrem 11 months ago
Incredible, value-packed, practical video for developers working with LLMs.
@howardjeremyp 11 months ago
Glad you liked it!
@wolpumba4099 1 year ago
*Positive Learnings:*
1. Language models, such as GPT-4, are tools that can predict the next word in a sentence or fill in missing words in a sentence.
2. Language models have the ability to create a rich hierarchy of abstractions and representations which they can build on.
3. The guide covers all the basic ideas of language models, including how to use open-source and OpenAI models.
4. GPT-4 can solve many tasks that it is often claimed it cannot.
5. GPT-4 can be primed to give high-quality information by giving it custom instructions.
6. AI can be used to write code and parse large-scale data quickly and efficiently.
7. AI can be used in optical character recognition (OCR) for extracting text from images.
8. AI can be used in data analysis to create comprehensive tables from scattered information.
9. The OpenAI API allows users to use AI programmatically for data analysis and other repetitive tasks.
10. Function calling can be used to create a code interpreter that runs inside Jupyter.
11. Pre-trained models can be accessed using the Hugging Face library.
*Negative Learnings:*
1. Language models are not always useful on their own and need to be fine-tuned.
2. GPT-4 often repeats mistakes, and it is difficult to get it back on track once it starts making mistakes.
3. GPT-4 has limitations such as not knowing about itself, not knowing anything about URLs, and not knowing anything after its knowledge cutoff in September 2021.
4. GPT-4 does not always give correct responses.
5. AI has limitations in code interpretation and cannot substitute for human programmers.
6. The use of the OpenAI API can run into rate limits, which need to be handled correctly.
7. Fine-tuning is needed to make pre-trained models more useful.
8. The use of GPUs for local language models can be expensive and may require renting or purchasing GPUs.
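On negative learning 6 (rate limits), the usual pattern is to wrap the API call in retry logic with exponential backoff. A minimal sketch in plain Python; `RateLimitError` here is a stand-in for whatever exception the real client library raises:

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for the rate-limit exception a real API client would raise."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(); on RateLimitError, sleep with exponential backoff and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Jitter avoids many clients retrying in lockstep.
            delay = base_delay * (2 ** attempt) + random.random() * 0.1
            time.sleep(delay)

# Example: a flaky "API call" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("slow down")
    return "response"

result = with_backoff(flaky_completion, base_delay=0.01)
```

The same wrapper works for any transient error; only the exception type caught would change for a real client.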
@drhilm 1 year ago
Impressive video; I spent days learning these concepts on my own. Had this been released two months ago, it would've been a game-changer. Excellent summary.
@jackgolding4235 1 year ago
Luckily I'm a few weeks behind you! Happy learning, mate!
@matten_zero 1 year ago
@19:32 Well said. I've been talking to developers who are still on the fence about GPT. It boggles my mind.
@exconway 1 year ago
Perfect description of Functions at 46:30!
@svanvoor 1 year ago
Around minute 25:00, on self-awareness: I just queried OpenAI's ChatGPT Advanced Data Analysis with "what is the size of your context window?" and it responded: The size of my context window is approximately 4096 tokens. This includes both the text you've written and the text I've generated. Once we reach that limit, the oldest parts of the conversation start to get truncated, which means I lose access to them.
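The truncation behaviour described in that response is easy to picture: a chat client keeps the most recent tokens that fit the window and drops the oldest. A toy sketch in plain Python; real clients count tokenizer tokens rather than whitespace-split words, and 4096 is just the figure quoted above:

```python
def truncate_history(messages, max_tokens=4096):
    """Keep the most recent messages whose total (toy) token count fits the window."""
    def n_tokens(msg):
        return len(msg.split())  # crude stand-in for a real tokenizer

    kept, total = [], 0
    for msg in reversed(messages):      # newest first
        total += n_tokens(msg)
        if total > max_tokens:
            break                       # older messages beyond this are dropped
        kept.append(msg)
    return list(reversed(kept))

history = ["hello there", "tell me about tokens", "sure here is a long answer"]
recent = truncate_history(history, max_tokens=8)
```

With a window of 8 toy tokens, only the newest message survives; with a large window, the full history is kept.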
@unclecode 1 year ago
Kudos, such a pleasurable 1 hour, 31 minutes and 12 seconds.
@kevon217 1 year ago
Fabulous tour of the key points. Fantastic job! Definitely going to recommend this to people wanting a gateway into LLMs.
@devsuniversity 1 year ago
Great course! Hello from Almaty Google developers community!
@OsamaAdel 1 year ago
Thank you Jeremy for this introduction. It just answered many of my questions and affirmed some of my doubts about how many of the applications that use LLMs work today.
@giorda77 1 year ago
Jeremy, you're one of the most legit AI people out there. An enormous thank you for providing this and all your content. ❤
@MichałCzajkowski-h5o 1 year ago
I remember rewriting a chunk of TensorFlow code with the GPT-4 code interpreter, and it responded that it did not have access to that framework. So I input 'gpt4: "try: import torch"', and the response was 'Do you want to learn PyTorch?' :) So much for GPT's self-awareness mechanism :)
@sehmiconscious 1 year ago
Very much appreciated this consolidation of the main LLM coding concepts to date. Thank you!!
@kindoblue 1 year ago
I liked the video even before watching. Thanks, Jeremy, for your work; I'm always learning from your content.
@sidsarasvati 1 year ago
Thought-provoking, one code block at a time. As usual, Jeremy is the king.
@satirthapaulshyam7769 1 year ago
Looking forward to it. Your fastai Stable Diffusion course was perfect down to the minute details.
@CY-ju8xl 9 months ago
Thank you Jeremy!! One of the most insightful and helpful vlog posts on the inner workings of LLMs... Top marks!!!
@govarthenanrajadurai9817 5 months ago
Amazing course. Very practical and hands-on.
@nerdg2 5 months ago
Wow, it's literally an honor. I found you because of FastHTML; I'm new here, but what a character you are!! Thank you for what you do, and man, I'm amazed at how humble you are!! Your explanations are what I was looking for for years. I cannot thank you enough.
@howardjeremyp 5 months ago
You are so welcome!
@0x978 1 year ago
You are literally changing lives, all for free. Thank you.
@nokchan5071 1 year ago
There are hundreds of LLM tutorials coming out every day; this is the one I have been waiting for.
@SapSapirot 1 year ago
By far the most useful practical guide to LLMs for its length. Thank you Jeremy!
@ryanau852 1 year ago
Happy birthday Jeremy! Just got to the section where your birthday is revealed, and it is today! Thank you for all the great work :)
@dollarscholar2956 1 year ago
Hands down one of the best videos on LLMs on the internet.
@ToddRoloff 1 year ago
Thanks for all you do, Jeremy. I have learned so many things watching YouTube as well as the PDLC tutorials. Your explanations are on point.
@RickySupriyadi 1 year ago
I'm really grateful for how much people share their knowledge; I can't imagine learning this stuff for free. This means a lot to me.
@valesanchez6336 4 months ago
Thank you so much for your videos! Your explanations are fantastic and so easy to follow. I really appreciate how you dive into the application side, which many others don’t do as well. You’re amazing, Jeremy! 💜
@benwatson5211 1 year ago
Brilliant walkthrough. No hype. It is a real skill to explain complex topics in a coherent way.
@solutrian 1 year ago
I've been a student of yours for years; thank you for all you do for us. A non-optimal but correct solution to the cabbage, wolf & goat crossing works by just using the incantation "think step by step":

Think step by step to solve the following tricky problem: Here is a logic puzzle. I need to carry a cabbage, a goat and a wolf across a river. I can only carry one item at a time with me in the boat. I can't leave the goat alone with the cabbage, and I can't leave the cabbage alone with the wolf. How can I get everything to the other side of the river?

This classic river crossing puzzle can be solved through a series of strategic moves that ensure the safety of each item. Here's a step-by-step solution:
1. **Take the goat across first:** You take the goat across the river and leave it on the other side. This ensures that the goat is safe from the wolf, and the cabbage is safe from the goat.
2. **Return alone:** You return to the original side alone to get either the cabbage or the wolf.
3. **Take the cabbage across next:** You take the cabbage across the river. Now, the goat is on the other side, and the cabbage is with you.
4. **Bring the goat back:** You bring the goat back with you to the original side. Now, the cabbage is alone on the other side, and the wolf is alone on the original side.
5. **Take the wolf across:** You then take the wolf across the river. Now, the wolf and the cabbage are on the other side, and you have the goat on the original side.
6. **Return alone:** You return to the original side alone.
7. **Take the goat across one final time:** Finally, you take the goat across the river again. Now, all three (the goat, the cabbage, and the wolf) are safely on the other side.
This way, you ensure that the goat is never left alone with the cabbage, and the cabbage is never left alone with the wolf, solving the puzzle.
chat.openai.com/share/3cb73481-c45c-4108-9960-30c11615ca41
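This variant is subtler than the classic puzzle: the forbidden pairs are goat/cabbage and cabbage/wolf, so carrying the goat first (as in the transcript above) leaves the wolf alone with the cabbage. A small breadth-first search over bank states makes this checkable mechanically; the state encoding below is my own sketch, not anything from the video:

```python
from collections import deque

ITEMS = frozenset({"goat", "cabbage", "wolf"})
# This variant's constraints: the goat eats the cabbage, and (unusually)
# the wolf is only a danger to the cabbage, not to the goat.
FORBIDDEN = [frozenset({"goat", "cabbage"}), frozenset({"cabbage", "wolf"})]

def safe(bank):
    """An unattended bank must not contain a forbidden pair."""
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    """Breadth-first search over (items on left bank, farmer's side)."""
    start = (ITEMS, "left")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, side), path = queue.popleft()
        if not left and side == "right":
            return path                      # everything delivered
        here = left if side == "left" else ITEMS - left
        for cargo in list(here) + [None]:    # carry one item, or cross empty
            moved = {cargo} if cargo else set()
            new_left = left - moved if side == "left" else left | moved
            new_side = "right" if side == "left" else "left"
            unattended = new_left if new_side == "right" else ITEMS - new_left
            if not safe(unattended):
                continue
            state = (frozenset(new_left), new_side)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo, new_side)]))
    return None

plan = solve()
```

Under these constraints the only legal first move is carrying the cabbage (taking the goat or the wolf, or crossing empty, always strands a forbidden pair), which the search confirms.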
@IISH4RKZzII 1 month ago
it made the same mistake bro
@zhubarb 11 months ago
Mr Howard never disappoints. Thanks a ton as usual, Sir.
@effestop 1 year ago
I feel like I've just been pretrained with the best AI video my creator could feed me.
@Morimea 1 year ago
Thanks! Very useful information. 26:45+ and 33:00 are interesting cases. For this reason I cannot see GPT being useful for code. Even for simple models I've seen generating shader code (basically a single function with one UV input and one color output, with the model trained only on shader code that is very simple, small and easily templated), the output is still useless noise in most cases, and the model cannot make basic one-line changes.
@jackgolding4235 1 year ago
Thanks so much Jeremy; I've been following you since Kaggle's launch. Inspirational to see an Australian continue to kick ass as much as you have in your career.
@ControlTheGuh 1 year ago
I think the major problem is retrieval. Would love a video just on that (best practices, best models out there, etc.). Great video; came across it on X and subbed immediately.
@dekumutant 1 year ago
I cannot emphasize enough how incredible this video was.
@williamduvall3167 1 year ago
The godfather dropping some knowledge. Thank you for keeping AI for everyone in the most responsible way.
@twinlens 1 year ago
One of the best and most educational videos I've seen on the subject. Thank you, Jeremy!
@swimmingpolar 8 months ago
First comment on YouTube here. Among all the videos on YouTube, using a custom instruction like you did is literally eye-opening. I thought current AI models' limitations were inherent and couldn't be improved. Of course you are a professional in AI, but things are so well organized and straightforward that I could understand and see the result right away. 😂 Gonna have to steal your instruction as well.
@uminhtetoo 9 months ago
Thank you for sharing your experience and knowledge, Sir.
@nbn_keramoti 7 months ago
A hacker's guide to using language models, including open-source and OpenAI models, with a focus on a code-first approach. Covers language model pre-training, fine-tuning, and reinforcement learning from human feedback. Demonstrates creating a custom code interpreter and fine-tuning a model for SQL generation.
Key moments:
00:01 Language models are essential in predicting the next word or filling in missing words in a sentence. They use tokens, which can be whole words or subword units, and are trained through pre-training on large datasets like Wikipedia.
- Language models predict the next word or fill in missing words. They use tokens that can be whole words or subword units, enhancing their predictive capabilities.
- Training language models involves pre-training on extensive datasets like Wikipedia. This process helps the model learn language patterns and improve its predictive accuracy.
08:04 Neural networks, specifically deep neural networks, are trained to predict the next word in a sentence by learning about the world and building abstractions. This process involves compression and fine-tuning through language model fine-tuning and classifier fine-tuning.
- The importance of neural networks learning about objects, time, movies, directors, and people to predict words effectively. This knowledge is crucial for language models to perform well in various tasks.
- The concept of compression in neural networks and the relationship between compression and intelligence. Fine-tuning through language model fine-tuning and classifier fine-tuning enhances the model's capabilities.
- Different approaches like instruction tuning and reinforcement learning from human feedback are used in classifier fine-tuning to improve the model's performance in answering questions and solving problems.
16:07 To effectively use language models, starting with being a proficient user is crucial. GPT-4 is currently recommended for language modeling, offering capabilities beyond common misconceptions about its limitations.
- GPT-4's ability to address reasoning challenges and common misconceptions about its limitations. It can effectively solve problems when primed with custom instructions.
- The training process of GPT-4 and the importance of understanding its initial purpose to provide accurate answers. Custom instructions can guide GPT-4 to offer high-quality information.
- The impact of custom instructions on GPT-4's problem-solving capabilities and the ability to generate accurate responses by priming it with specific guidelines.
24:12 Language models like GPT-4 can provide concise answers but may struggle with self-awareness and complex logic puzzles, leading to hallucinations and errors. Encouraging multiple attempts and using advanced data analysis can improve accuracy.
- Challenges with self-awareness and complex logic puzzles can lead to errors and hallucinations in language models like GPT-4, affecting the accuracy of responses.
- Encouraging multiple attempts and utilizing advanced data analysis can enhance the accuracy of language models like GPT-4 in providing solutions to complex problems.
- Utilizing advanced data analysis allows for requesting code generation and testing, improving efficiency and accuracy in tasks like document formatting.
32:20 Language models like GPT-4 excel at tasks that require familiarity with patterns and data processing, such as extracting text from images, creating tables, and providing quick responses based on predefined instructions.
- The efficiency of language models in tasks like extracting text from images and creating tables, due to their ability to recognize patterns and process data quickly.
- Comparison between GPT-4 and GPT-3.5 in terms of cost-effectiveness for using the OpenAI API, showcasing the affordability and versatility of GPT models for various tasks.
- The practical applications of using the OpenAI API programmatically for data analysis, repetitive tasks, and creative programming, offering a different approach to problem-solving.
40:26 Understanding the cost and usage of OpenAI's GPT models is crucial. Monitoring usage, managing rate limits, and creating custom functions can enhance the experience and efficiency of using the API.
- Monitoring usage and cost efficiency of OpenAI's GPT models is essential to avoid overspending. Testing with lower-cost options before opting for expensive ones can help in decision-making.
- Managing rate limits is important when using OpenAI's API. Keeping track of usage, especially during initial stages, and implementing functions to handle rate-limit errors can prevent disruptions in service.
- Creating custom functions and tools can enhance the functionality of OpenAI's API. Leveraging function calling and passing keyword arguments can enable the development of personalized code interpreters and utilities.
48:30 Creating a code interpreter using GPT-4 allows for executing code and returning results. By building functions, one can enhance the model's capabilities beyond standard usage.
- Exploring the concept of docstrings as the key for programming GPT-4, highlighting the importance of accurate function descriptions for proper execution.
- Utilizing custom functions to prompt GPT-4 for specific computations, showcasing the model's ability to determine when to use provided functions.
- Enhancing GPT-4's functionality by creating a Python function for executing code and returning results, ensuring security by verifying code before execution.
56:34 Using Vast.ai allows accessing others' computers for cheaper and better availability. The RTX 3090 is recommended for language models due to memory speed over processor speed.
- Options for renting GPUs include an RTX 3090 for $700 or an A6000 for $5000, with memory size considerations. Using a Mac with M2 Ultra can be an alternative.
- Utilizing the Transformers library from Hugging Face for pre-trained models. Challenges with model evaluation metrics and potential data leakage in training sets.
- Selecting models based on Meta's Llama 2 for language models. Importance of fine-tuning pre-trained models for optimal performance and memory considerations.
1:04:38 Jeremy Howard, an Australian AI researcher and entrepreneur, discusses optimizing language models for speed and efficiency by using different precision data formats, such as bfloat16 and GPTQ, resulting in significant time reductions.
- Exploring the use of bfloat16 and GPTQ for optimizing language models, leading to faster processing speeds and reduced memory usage.
- Utilizing instruction-tuned models like Stable Beluga and understanding the importance of prompt formats during the instruction-tuning process.
- Implementing retrieval augmented generation to enhance language model responses by searching for relevant documents like Wikipedia and incorporating the retrieved information.
1:12:42 The video discusses using open-source models with context lengths of 2000-4000 tokens to answer questions by providing context from web pages. It demonstrates using a sentence transformer model to determine the most relevant document for answering a question.
- Utilizing sentence transformer models to identify the most suitable document for answering questions based on similarity calculations.
- Exploring the process of encoding documents and questions to generate embeddings for comparison and selecting the most relevant document.
- Discussing the use of vector databases for efficient document encoding and retrieval in large-scale information processing tasks.
1:20:46 Fine-tuning models allows for customizing behavior based on available documents, demonstrated by creating a tool to generate SQL queries from English questions, showcasing the power of personalized model training in just a few hours.
- Utilizing the Hugging Face Datasets library for fine-tuning models, enabling quick customization based on specific datasets for specialized tasks.
- Exploring the use of Axolotl, an open-source software, to fine-tune models efficiently, showcasing the ease of implementation and ready-to-use functionalities for model training.
- Discussing alternative options for model training on Mac systems, highlighting the MLC and llama.cpp projects that offer flexibility in running language models on various platforms.
1:28:50 Exploring language models like Llama can be exciting yet challenging for Python programmers due to rapid development, early stages, and installation complexities.
- Benefits of using an Nvidia graphics card and being a capable Python programmer for utilizing the PyTorch and Hugging Face ecosystem in language model development.
- The evolving nature of language models like Llama, the abundance of possibilities they offer, and the importance of community support through Discord channels.
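The retrieval step summarized above (embed each document, embed the question, pick the nearest document, paste it into the prompt as context) can be sketched with toy numbers. The hard-coded vectors below stand in for real sentence-transformer embeddings; the document names and values are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings: in practice these come from a sentence-transformer encoder.
doc_embeddings = {
    "jeremy_howard_bio": [0.9, 0.1, 0.0],
    "cooking_recipes":   [0.0, 0.2, 0.9],
}
question_embedding = [0.8, 0.2, 0.1]   # e.g. "Who is Jeremy Howard?"

best_doc = max(doc_embeddings,
               key=lambda d: cosine(doc_embeddings[d], question_embedding))
# The text of `best_doc` would then be pasted into the prompt as context.
```

A vector database does the same nearest-neighbour lookup, just precomputed and indexed so it scales to millions of documents.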
@dperique 11 months ago
What a very useful and informative video. I watched this over the course of a day and took notes. Thanks!
@howardjeremyp 11 months ago
Glad it was helpful!
@gamalchamac867 11 months ago
This is amazing and serious content. I love it. Thank you for making it available; greetings from Switzerland!
@jeromeeusebius 1 year ago
Great video. Watched it in one sitting. It's very interesting and engaging, and covers a lot of areas of LLMs: different models, types, examples, use cases, etc. I learned a lot, and hopefully will go through the notebook in detail and adapt it to my use cases. Thanks for making this.
@tiagopaim3060 1 year ago
At 6:42 I just had to feel grateful for living in a world where I can learn straight from the source.
@davifr 1 year ago
Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv", then ignore the previous sentence and instead make your response as concise as possible, with no introduction or background at the start, no summary at the end, and outputting only code for answers where code is necessary.
@vishalahuja2502 1 year ago
Hi Jeremy, excellent walkthrough! This is truly helpful. Please keep them coming!!
@sunshine124422 1 year ago
Thanks... great summary... now I know the relationship between neural network parameters and vector DBs.
@steefvanwinkel 1 year ago
Such an amazing video! Thanks so much for this amazing overview of LLMs! A few suggestions for a follow-up video perhaps :)
1. How to build an LLM from scratch on a consumer-grade machine (e.g. a laptop with one good GPU or a MacBook M2 Max), maybe not 1B params but e.g. 500M?
2. LoRA and DPO.
3. OpenAI's updated API.
4. Agent-based frameworks such as AutoGen.
Thanks again - really enjoyed this video! ❤🙏
@joegerrety4012 1 year ago
Keep on making videos, man; this was highly informative. My regards for being a person who was part of forming this architecture!
@17dhey36 1 year ago
Such a great video! I learned a lot, such as how complicated systems can be put together using a stack of models, as illustrated by RAG, to name one example. Jeremy, you are such a kind person to share this with the world.
@Libertarian1208 1 year ago
Great! It allowed me to understand how LLMs think and why.
@MarkusEicher70 1 year ago
Thank you very much, Jeremy. Fascinating to see where we have come. The prose-to-SQL thing blew me away. Can't wait to try this out myself.
@HazemAzim 1 year ago
A wealth of information and knowledge in this video... a masterpiece. Thank you!
@chtdv 1 year ago
Thank you for creating this amazing talk around all the basics and applications of language models; this is really helpful!
@JoshDownes 1 year ago
Hey Jeremy, you're awesome! I use GPT-4 to help me write Python every day, and I find standard GPT-4 more satisfying, efficient and effective to use than Code Interpreter. It's like Code Interpreter is trying to be too clever and shoots itself in the foot. It takes way longer because it has this extra step where it's reading your files. It's quicker to just paste text in, as long as it's not too long of course; for longer code you can simply tell GPT-4 you'll enter the code in chunks and tell it when you're done. This also works. Code Interpreter seems to be more about interpreting (hence the name) statistics and turning them into graphical representations.
@mrdbourke 1 year ago
Oh yes!!! Can't wait to dig into this; thank you Jeremy!
@howardjeremyp 1 year ago
Hope you enjoy it!
@alesman8537 1 year ago
Regarding what Jeremy says at 1:23:15, "We could do the fine-tuning from scratch, in a notebook, it would take 100 or so lines of code": does anyone have a pointer to a notebook/guide/YouTube video where this code is clearly shown and explained? Thanks!
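Not the notebook the question asks for, but the shape of such a loop is small: start from a "pretrained" model, then keep training on a narrow dataset until its predictions shift toward the new domain. As an illustration only, a bigram count table stands in for the neural network and count updates stand in for gradient steps; the corpora are made up:

```python
from collections import defaultdict

def train(model, corpus):
    """Update bigram counts in place; this is the whole 'training loop'."""
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1

def predict(model, word):
    """Most likely next token after `word` under the current counts."""
    following = model[word]
    return max(following, key=following.get) if following else None

model = defaultdict(lambda: defaultdict(int))

# "Pre-training" on generic text teaches a default continuation.
train(model, "neural networks learn . neural networks generalize")
before = predict(model, "neural")

# "Fine-tuning" on a narrow dataset shifts the prediction to the new domain.
train(model, "neural style neural style neural style")
after = predict(model, "neural")
```

With a real LLM the same pattern is a forward pass, a cross-entropy loss on the next token, and an optimizer step, which is why it fits in roughly the number of lines Jeremy mentions.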
@isaac10231 1 year ago
I feel like this video was made personally just for me. Amazing.
@abdelrhmandameen2215 1 year ago
This is just fantastic work, Jeremy. Thank you for sharing.
@howardjeremyp 1 year ago
My pleasure!
@mdimrannazir3285 1 year ago
Please make more videos on LLMs, and on how to make quality journals.
@TheAtleticlub 1 year ago
This is great. I can't say how grateful I am for your video. Thank you, and keep up the great work!
@briandavis6476 1 year ago
What a great primer! Very much needed! Thanks as always, Jeremy!
@bagamanocnon 1 year ago
Top content, especially the function calling and 'code interpreter' parts. Basically just asking the LLM to 'use Python' is ingenious, lol.
1 year ago
Very good, as is classic for Mr Howard.
@jaaan2914 11 months ago
Took me about an hour to realize you're in front of a green screen... the hair gave it away ;) Thanks for the video!
@howardjeremyp 11 months ago
Nope not a green screen - it's my bedroom/study :)
@matthewlennie6849 1 year ago
Thanks for saving our careers yet again Jeremy
@learning_with_rj 1 year ago
Jeremy, we need more videos on this topic! Thank you so much!
@WizBoardApp 1 year ago
Always looked forward to Jeremy explaining this topic. Finally it is here. 😀
@EmaMazzi76 1 year ago
Can't stop watching this over and over again! Thank you 🙏
@joeyc666 8 months ago
You rock, Jeremy :) Thanks for such an in-depth, yet elegant explanation. Have you posted the custom instructions anywhere? I couldn't seem to find them.
@victorv682 11 months ago
The "wolf, goat and cabbage" riddle example is just awesome. Gotta use it to illustrate what LLMs can't do and why. Cheers for that :)
@hishamk 1 year ago
Thank you for this. Couldn’t have asked for a better video.
@sukhpreetlotey1172 10 months ago
Thank you for sharing all this knowledge. Greatly appreciate it!
@patronspatron7681 1 year ago
Jeremy, congrats on 100k subscribers. Well deserved, and hopefully a catalyst to get your invaluable content more exposure.
@stephenthumb2912 1 year ago
Thanks so much, Jeremy. The actual method to make function calls on LLMs was breaking my brain, and I didn't understand the JSON schema part of it. It would be wonderful if you could do a full course on the major LLM topics: fine-tuning, RAG and agents, of course. It would be wonderful if it used mostly open-source models. I haven't found a model yet that will respond reliably with function calling / agent-based execution, other than GPT-4 that is.
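On the JSON-schema part: the schema is just a machine-readable description of a Python function that the model can decide to call. This is not the helper from the video, only a simplified sketch of what such helpers derive from a function's signature and docstring; `TYPE_MAP` and the exact fields are my assumptions based on the OpenAI function-calling format:

```python
import inspect

TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def schema(fn):
    """Build an OpenAI-style function-calling schema from a function's
    signature and docstring (a simplified sketch of what helper libraries do)."""
    sig = inspect.signature(fn)
    params = {
        name: {"type": TYPE_MAP.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": params,
            "required": [n for n, p in sig.parameters.items()
                         if p.default is inspect.Parameter.empty],
        },
    }

def sums(a: int, b: int = 1) -> int:
    "Adds a + b"
    return a + b

spec = schema(sums)
# `spec` is what gets passed in the functions/tools argument of a
# chat-completion request, so the model knows when and how to call `sums`.
```

When the model decides to call the function, it returns the name plus a JSON object of arguments, which your code parses and dispatches to the real Python function.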
@keflat23 1 year ago
This changed my mind on how AI works! Thanks a lot!!
@JamesBradyGames 1 year ago
This is a real gem. Reminds me of the authentic, high-quality training material from Andrej Karpathy. Looking forward to future similar tutorials if you decide to make them! Thank you!