OpenAI Q&A: Finetuning GPT-3 vs Semantic Search - which to use, when, and why?

66,878 views

David Shapiro

1 day ago

Comments: 115
@justcreate1387 · 1 year ago
Every video is like a hot new album drop 🔥
@qusek6446 · 1 year ago
Word 🥵🥰😍😳😩
@duan. · 1 year ago
I was literally trying to explain this to my boss an hour ago but couldn't find the best way to express it! This is perfect, just sent them this link.
@petersolimine5175 · 1 year ago
Hey, this is Duan’s boss. Thanks for the link
@johanngerberding5956 · 1 year ago
same
@zd676 · 1 year ago
Hey, Duan’s skip here. Good work!
@kedbreak136 · 1 year ago
Perfect! I was looking exactly for this, and your pragmatic hands on style is such a good way to share your thoughts and experiences. Keep it up!
@hotrodhunk7389 · 1 year ago
I haven't been so interested in something since smartphones started coming out: that energy of anything seemingly being possible. A really, really interesting time to be alive. ChatGPT was so amazing to me that I have to figure out how it works. I'm currently learning programming in an effort to understand it better.
@venkat1195 · 1 year ago
Hey man! I am also in the same boat. Can I message you and ask you a few questions/tricks? Thanks!
@Rushpatil · 1 year ago
This is the exact video I needed to set me in the right direction for my project. Thank you!
@ADHDOCD · 1 year ago
Wow just Wow! Saved me a bunch of time. David's like a philosopher; makes you ask why you do something before doing anything. 99% of YT creators do the opposite; throw content at you.
@AvizStudio · 1 year ago
So much value every day
@stycket · 1 year ago
I was just thinking about how to solve this yesterday, thanks a lot!
@thabua5963 · 1 year ago
David, thank you for sharing your knowledge on YouTube and on the OpenAI forums. You are a light to many of us who are curious about this AI world. I don't have any AI or computer background, but I'm able to pick things up slowly, and watching your videos has opened many neural paths in this realm. Thank you! - Troy
@6lack5ushi · 1 year ago
One of the best videos I've seen on LLMs and fine-tuning. I've seen so many people fine-tune, get poor results, and complain about the cost. So THANK YOU.
@kmindoo · 1 year ago
Great video! OK, we should look more into semantic search and use recursive summarization to get an answer. Here's another idea for a video: how to create longer results than the GPT-3 token limit? E.g. Codex creates a very long Java class, or Davinci writes a whole book. That would probably be the reverse pattern: sketch ideas first, write the outline, then expand each chapter/paragraph until the code or text is detailed enough.
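The reverse pattern described in this comment (sketch ideas, write an outline, expand each entry) can be sketched as a tiny driver loop. Here `expand` is a hypothetical stand-in for an LLM completion call, not a real API; the point is that each expansion runs independently, so no single completion has to fit the whole document in its token window:

```python
def expand(heading: str) -> str:
    # Stand-in for an LLM call that turns one outline entry into full prose.
    return f"{heading}. [Several detailed paragraphs would be generated here.]"

def write_long_document(ideas: list[str]) -> str:
    # Step 1: turn the rough ideas into an outline (an LLM could draft this too).
    outline = [f"Chapter {i + 1}: {idea}" for i, idea in enumerate(ideas)]
    # Step 2: expand each entry separately, then stitch the results together.
    return "\n\n".join(expand(heading) for heading in outline)

doc = write_long_document(["Setup", "Conflict", "Resolution"])
print(doc)
```

The same trick nests: each expanded chapter can itself be outlined and expanded again until the text is detailed enough.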
@jermainebrown188 · 11 months ago
Thank you for sharing your time and knowledge; the video was flawless.
@davidl3383 · 1 year ago
You explain so well. Everything is clear. Thank you for your help.
@junwatu · 1 year ago
Thanks, this is precious insight! I was thinking of fine-tuning GPT-3 to do a simple Q&A, but there is a better and cheaper way to do that!
@pankymathur · 1 year ago
Thank you for this. I was getting exhausted from encountering and debating with numerous self-proclaimed AI experts who continue to approach NLU tasks in 2023 as if it were still 2018. Now, I can simply direct them to this video. :)
@Dan-oj4iq · 1 year ago
I am having so much fun listening to this guy without the technical background that should make this enjoyable. Simple answer: narrative delivery. Content takes a distant place compared to delivery. It's a gift having little to do with actual knowledge.
@boon4568 · 1 year ago
Thank you for making the differences so clear and easy to understand!
@user-du8hf3he7r · 10 months ago
‘If you can’t explain it to others, you don’t understand it yourself.’ - Paraphrase of a quote attributed to the late great Physics Nobel Laureate Richard Feynman.
@MrRulos1 · 1 year ago
Amazing job David! I'm very thankful; you explained exactly what I was looking for.
@RowanSheridan · 1 year ago
Thanks David, loved the presentation and delivery style. Straight and to the point!
@alexo7431 · 1 year ago
Thanks David for sharing your thoughts, very valuable information.
@jonathanacuna · 1 year ago
Mic drop! Absolutely gold!
@GerardSans · 1 year ago
This series is getting some love. Excellent work. Much appreciated even if it’s not the whole package
@mentimental · 1 year ago
Would love to see you get into the tradeoffs of using DaVinci vs fine-tuning smaller models for different use cases!
@amador1997 · 1 year ago
I was thinking about the same thing or just an easy way to prototype
@spacedust8061 · 1 year ago
Thank you a lot, this really helps!
@victorpintotapia4874 · 1 year ago
Great explanation, greetings from Ecuador.
@evyborov · 1 year ago
Thanks for the interesting video. My 2 cents here: while FT costs will go down and it will become more affordable, I totally agree with your use-case definitions, i.e. what it is good for and bad for. However, we now see an uplift of RLHF approaches, and if I understand them correctly, this might be a better FT going forward. Especially once we figure out (maybe it has already happened?) running those RLHF layers without the HF component :) I mean building a discriminator based on the same LLM. That could be fun. Or maybe I'm just dreaming...
@HarpreetPaul · 1 year ago
Just a perfect video, so very well explained.
@MK-jn9uu · 1 year ago
I can now see the difference between someone who knows what they're talking about and someone regurgitating YouTube summaries.
@pixelperfectpravin · 1 year ago
Thanks for making these videos
@kimie126 · 1 year ago
15:44 Wow, this is atomic notes and zettelkasten.
@hirefiedinc6313 · 1 year ago
You started ads, and I can't believe I'm saying this, but I totally support you on that. :) Your content is top-notch!
@rogermarquez1314 · 1 year ago
I would love for you to make a video on fine-tuning your GPT model for blogging purposes. I am not a coder or someone with a programming background, but I consider myself a very tech-savvy guy. I can write my own dataset following OpenAI's documentation for fine-tuning GPT. My biggest challenge is that I can't find enough documentation on the elements (I hope I am using the correct wording here) that I can use to fine-tune my model so it can generate the output I want, in terms of not only the tone of voice (which I know is an element I can use) but, more importantly, transitional phrases and writing in first person. I believe these last two elements are something most AI content assistant tools like Jasper lack, and they would give a more "human-like" touch to the output. Anyway, I hope this message reaches you, and keep up the great work!
@hjups · 1 year ago
I think there is a bit more nuance to the blanket statement that finetuning does not add new information to the model. That's not entirely true as you can get GPT-3 to repeat examples from the finetuning training set if the LR is too high or finetuning uses too many steps (which implies that the information in the training set is added to the model). But since OpenAI likely only unfreezes a few layers (the last few?) as you said, this information addition is not going to perform the way one might expect and semantic search is a better approach. Also, even though it's stated in the title, it's specific to the method OpenAI uses to finetune the models. Finetuning can add information (in a useful way) to other transformer models like the ones from EleutherAI (I was under the impression that the finetuning limitation held for all transformer models when I watched your previous video on finetuning, but that turns out not to be the case for those models - all layers are unfrozen so it's essentially the same as regular training).
@Dron008 · 1 year ago
Thank you, I also had this misunderstanding. So much left to learn.
@chrisr236 · 1 year ago
This is refreshingly insightful
@creneemugo94 · 1 year ago
I figured out the hard way how fine-tuning works. I have to start all over and take the embeddings approach.
@haissayf · 6 months ago
I can't believe there is such a useful channel. Thank you. I was growing tired of the usual "fine-tune in 10 minutes" nonsense.
@phi6934 · 1 year ago
Great video! Thanks
@pasqualescaife899 · 11 months ago
You're hilarious. Great video - lol "You don't! " got me at the beginning :' D
@raphauy · 1 year ago
thank you!
@lorinma · 1 year ago
Amazing video! Just amazing
@Sir.Black. · 1 year ago
Man thanks for this vid, I was very confused about these 2 concepts... I'm very clear now! I hope it's not too late... yet?
@sebastianterrazas9658 · 1 year ago
Great video!
@truckfinanceaustralia1335 · 1 year ago
great vid!
@soraygoularssm8669 · 1 year ago
Your videos are awesome, I'm a huge fan. Also, I had a question: have you seen agents and tools in LangChain? Can we implement such a thing with embeddings? Because that takes most of the tokens and is expensive.
@DaveShap · 1 year ago
Pretty sure LangChain is based off my book NLCA. Anyway, I've moved beyond basic techniques like that.
@Strkrjk · 1 year ago
"Like using a hammer to drive a screw through a board on your knee" 😂😂
@ozorg · 1 year ago
Great stuff!
@gileneusz · 1 year ago
That's a very informative video. I know you have many videos on your channel about fine-tuning, although they are long. If you need some inspiration for a new video, I would ask for short videos with examples of fine-tuning vs semantic search, just to show not only theoretically but also practically how they differ and what the use cases are.
@kingarthur0407 · 1 year ago
Subscribed so frickin hard after watching this, what a stellar video. In my project map, I pitted semantic search against text embeddings, and fine tuning against prompt engineering (I have several script-like prompts you can use even with chatgpt to tune it to different fields and answer styles). Is my understanding not accurate? I thought today's systems like docgpt or privategpt with local document access used text embeddings, and plugins like keysearch ai or seo app on chatgpt used semantic search (on their end). Could I trouble you for any insight on this?
@JohnDlugosz · 1 year ago
I'm used to QA meaning Quality Analysis. Not to be confused with Q&A.
@Siyar-sb2ub · 6 months ago
Hi David, thanks for the incredible value you're providing. One question: I understand that you can teach the model with fine-tuning, but can you teach the model how to retrieve data from the knowledge base and then output it in a certain way? E.g. for an e-commerce product recommendation chatbot, I want to feed it, let's say, 1,000 products. Can I use fine-tuning to make the model ONLY use products from the knowledge base, without recommending products that are not on the list? This question is related to a problem I have: my model sometimes overlooks products from the list, and other times it recommends products that are not on the list. I'm guessing semantic search is the right way to develop a product recommendation chatbot? Thanks in advance!
@bestieboots · 1 year ago
I wish I could subscribe to this channel twice. Thank you for the content :-). There's so much filler and clickbait about this stuff right now and I don't know how to cut through it. Could you recommend any other channels that talk about similar content?
@DaveShap · 1 year ago
Jonas Tyroller does AI and gaming
@MaynzeTV · 1 year ago
I also created a Curie model to write fiction. I built a script that takes an array of pages from books I've enjoyed and summarizes them in one or two lines of text. The data gets saved into a JSONL file with the summary as the prompt and the page as the response. I find it works well around 60% of the time, but with fine-tune pricing I feel it's better to use text-davinci-003 with a few examples. I'm wondering if you do something similar with fine-tuning, or if I'm off a bit? haha
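For anyone curious, the (summary → page) pairs this commenter describes would be serialized to the legacy OpenAI prompt/completion JSONL format roughly like this. The filename, separator, and sample text are illustrative, not taken from the video:

```python
import json

# Illustrative (summary, page) pairs; in practice a script would generate these.
pairs = [
    ("Two rivals meet at a masquerade ball.", "The hall glittered with candlelight..."),
    ("A storm strands the crew on an island.", "By dawn the ship was splinters..."),
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for summary, page in pairs:
        # One JSON object per line; a fixed separator at the end of the prompt
        # and a leading space in the completion are common conventions.
        record = {"prompt": summary + "\n\n###\n\n", "completion": " " + page}
        f.write(json.dumps(record) + "\n")
```

The separator tells the model where the prompt ends, so the same token can be used as a stop signal at inference time.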
@sebastianterrazas9658 · 1 year ago
I have a question: if I want to train an open-source pre-trained model (like BLOOM) on a corpus of data, how do I do it?
@chetang1964 · 10 months ago
This was quite useful @DaveShap. Follow-up question: would training it to produce a new programming language from a given intent best be done using fine-tuning?
@DaveShap · 10 months ago
Constructing a programming language is too complicated for an LLM and fine-tuning.
@guillemgarcia3630 · 1 year ago
Great vid! I'm left with a doubt: should I fine-tune one model to solve multiple specific tasks, or rather do multiple fine-tunes, one per task? 🤔
@AvizStudio · 1 year ago
Multiple fine-tunes, one task each.
@RickLindstrom · 1 year ago
Transfer learning: if you can dodge a wrench, you can dodge a ball.
@DaveShap · 1 year ago
Exactly!
@rogerganga · 1 year ago
David, thank you so much for this knowledge. There is a lot of misinformation about fine-tuning, and you explained it pretty well! My question: with semantic search, your answers are limited to only what is in the vector database. Is there a way to make QA more like ChatGPT plus the data in the vector database? I guess semantic search makes all the answers stay within the domain of the PDFs it indexes. (Rewording my question: how do you combine the results of extractive AI with generative AI like ChatGPT and return the results to users?)
@nathanverni9143 · 1 year ago
Thank you so much for this, very helpful. To extend your library metaphor, I'm trying to understand what the approach would be for answering a question like "How many times is this Shakespeare quote mentioned in the entire library?".
@DaveShap · 1 year ago
You need a combination of chain of thought reasoning and API calls
@rafaellopezmunoz6812 · 1 year ago
🎯 Key Takeaways for quick navigation:
00:01 🤔 Fine-tuning GPT-3 on a corpus does not enable efficient question-answering. Fine-tuning is for teaching new tasks, not imparting new knowledge.
02:34 📚 Semantic search uses semantic embeddings for fast and scalable database searching based on content meaning. It's more suitable for NLU tasks than fine-tuning.
05:02 🚫 Fine-tuning is not the same as imbuing an AI with knowledge. It lacks epistemological understanding and cannot distinguish true knowledge from confabulation or hallucination.
10:17 💰 Fine-tuning is slow, difficult, and expensive. Semantic search is fast, easy, and cheap, making it a more viable option for many tasks.
11:15 ✅ Fine-tuning can be used for specific tasks, but it is not optimal for question-answering. Instruct models can perform QA without fine-tuning.
14:13 📚 Use a library analogy for QA with semantic search. Index your corpus with semantic embeddings, use a large language model to generate queries, and leverage the LLM to read and summarize relevant documents.
Made with HARPA AI
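The retrieval flow in these takeaways can be sketched end to end. A toy word-count embedding stands in here for a real embedding model (e.g. text-embedding-ada-002), and the final read-and-summarize step is left out; the corpus and query are made up for illustration:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would call an embedding model.
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: index the corpus once (embed every document up front).
corpus = [
    "Fine-tuning teaches a model new tasks and output patterns.",
    "Semantic search retrieves documents by meaning using embeddings.",
    "Vector databases store embeddings for fast nearest-neighbor lookup.",
]
index = [(doc, embed(doc)) for doc in corpus]

# Step 2: embed the query and rank documents by similarity; the top
# passage would then be handed to an LLM to read and answer from.
query = "how do I search documents by meaning?"
q = embed(query)
best_doc, _ = max(index, key=lambda pair: cosine(q, pair[1]))
print(best_doc)
```

In production the linear scan over `index` is replaced by an approximate nearest-neighbor search in a vector database, but the logic is the same.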
@rymedina5196 · 1 year ago
I know a lot of people are trying to use AI for RPGs. For a D&D AI that can play the role of the DM and create the campaigns, would it be better to fine-tune a GPT model, or get a blank AI model and feed it all the source material so that all it has to know is D&D? I'm not sure a GPT-3 model will be able to remember campaign details from early on, keep track of hit points during combat, or know when players have to make certain check rolls (/roll). With enough playing around I can get GPT-4, and less effectively 3.5, to kind of do those things. But my hope would be for the AI to be consistent every time a user wants to play a campaign. Any idea?
@AlexandreFuchsNYC · 1 year ago
How does your analysis of the general inapplicability of fine-tuning change if the subject of the fine-tuning is more qualitative than quantitative? Meaning, I clearly get the point about trying to retrieve/infer over objectively boolean facts (statutes, regs), but what if you want to train an agent to mimic a personality in its interactions with you? Think of an advice-giver where the advice is generally more qualitative (but still needs to reflect a particular POV and personality/process), rather than fearing hallucination about decision-making inputs. This is not well achieved with semantic search, retrieval, and chaining.
@manfromthewest · 1 year ago
Thanks for the video, David. But I'm still wondering which way to go if I wanted to build a bot that knows all the articles on my blog and would recommend the most fitting one (from my blog only) for a question or keyword I prompt it with. I worked with LangChain and it worked from time to time, but it started to give me articles from other websites the more I asked.
@fong555 · 11 months ago
Thank you for another great presentation! 🎉 Could you please help me understand the relationship between semantic search and generative AI technology? Is semantic search part of generative AI, or is it separate from generative AI? Specifically, RAG vs semantic search. Thank you very much!
@SussexSEO · 1 year ago
So fine-tuning JSONL datasets are questions and answers but do not help with Q&A; they feed a set of patterns and work as an extra layer on the output of an LLM to modify the patterns of its output?
@dalinhays1458 · 1 year ago
Hi David, would I use embeddings to connect the GPT API to my 50 MB of code? If I don't use embeddings, then 50 MB of code would be about 20.8 million tokens. I am altering a large set of code that is not mine, and I want to find a way to identify functions within the code that are relevant to the features I want to create. In short, I want to find the right hooks within 3,000 files of code to modify for the functions I desire. How might you suggest chunking the code up to convert it to vectors? (I am not a programmer.)
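One common answer to the chunking question here, assuming no tooling beyond plain Python: split each file into fixed-size chunks with some overlap, so a function cut at a boundary still appears whole in at least one chunk. Each chunk (together with its file path) would then be sent to an embedding model and stored. The sizes below are illustrative:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the rest of the file is already covered
    return chunks

source = "x" * 2500  # stand-in for one source file's contents
chunks = chunk_text(source)
print(len(chunks))  # → 3
```

A query like "where is login handled?" is then embedded the same way, and the most similar chunks point back to the files worth modifying.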
@nattyzaddy6555 · 1 year ago
So fine-tuning can't add to the corpus, but it can add to the tasks it is capable of doing?
@nattyzaddy6555 · 1 year ago
Also, does ChatGPT use semantic search? Can you teach new tasks with semantic search?
@creativeuser9086 · 1 year ago
Can you please do more videos on all the currently existing types of vector embedding methods for semantic search, and how we can fine-tune those (not the whole model)?
@maubaron9372 · 1 year ago
David, please correct me if I'm wrong, I watched the video that you recommend at the end. It was a great video and using your code and ideas, I applied it to my own use case which was a certain Mexican law case just to try it out. My question is, is this actually scalable? Running the code on my computer and using the newer "text-embedding-ada-002" for embeddings and "gpt-3.5-turbo" for the LLM completion the whole process took around 5 minutes to complete. Is there a way to optimize this in a way to get answers within seconds (thinking of an already deployed model to the market which needs to be fast) . I understand that this knowledge is very valuable to you and that you would not want to give away certain valuable insights, I would really appreciate it if you could only provide resources for further research, I'm very interested in this topic. Thanks a lot man, really.
@DaveShap · 1 year ago
Optimizing is a whole other can of worms. You will want to use a search engine like Pinecone, as well as parallelization. But also, if you just switch to GPT-4 32k, you can get answers much, much faster because of the larger context window.
@maubaron9372 · 1 year ago
@@DaveShap Thanks David, really appreciate it.
@dr.mikeybee · 1 year ago
David, It's very likely that if we get really strong AI or AGI that it will solve enough medical problems to keep us alive for a long time. Share your knowledge so that others can make progress too. Money is useless after you're dead. This tech is too important to slow it down for money. You don't want a medical issue that's killing you to be solved a day after you die. It's too late then.
@DaveShap · 1 year ago
That's my goal. If I add enough value to the world, then money will pale in comparison to living in a post-scarcity, post-nihilistic world.
@jpsl5281 · 1 year ago
What do you think is the best vector DB right now? Pinecone?
@DaveShap · 1 year ago
Depends on your requirements.
@yorth8154 · 1 year ago
Hey! I know that you have very little time and can't answer all your comments, but I'll throw this question out just in case. When it comes to making a model like GPT-2 or GPT-3 produce better poems or prose, is it better to fine-tune on a poem/fiction database, or do we use semantic search in this case? Thank you very much in advance.
@yorth8154 · 1 year ago
Also, as a follow-up question: does the answer change if, rather than wanting the model to generate any poem, I want it to write in the style of Edgar Allan Poe? Do I fine-tune it on a corpus of his work, or do I use semantic search?
@Edo692 · 1 year ago
Try building a complete app for AI content generation (text classification, summarization...); it would add more value to your channel. Good luck!
@tylerlawlerDEVGRU · 1 year ago
'The news you may have missed'. Holy Innocents School of the Northwest.
@jasonduprat3781 · 1 year ago
Great job on these videos! Do you have a consulting service? I am planning to create an app and would love to hear your opinion on how best to make it happen. I have a development company who says they can do it but I'd feel much better crosschecking them since it's a multiple 5 figure investment.
@DaveShap · 1 year ago
Five figures sounds right. Just make sure they've actually done the kind of work before.
@al-aminibrahim1394 · 1 year ago
Sir, what can you say about table question answering?
@octavianpiano694 · 1 year ago
Hi David, is it a good idea to write the fine-tuning dataset with a question as the prompt and its answer as the completion? Can anyone help me with this?
@joehplumber447 · 1 year ago
Why do you not support what you are explaining with code? Seems sus to me. Other AI developers show by example in code.
@FrancescoSblendorio · 1 year ago
Hi. I have to write a chatbot system to help users of a certain product. I have a knowledge base made of about 1,400 paragraphs describing the product, plus troubleshooting paragraphs. What is the best way to:
- instruct the system with that knowledge, making it persistent
- let people ask questions about the product and receive answers
@DaveShap · 1 year ago
Happy to answer questions if you sign up for my Patreon!
@FrancescoSblendorio · 1 year ago
@@DaveShap which plain for having that suggestion?
@FrancescoSblendorio · 1 year ago
*which PLAN (sorry for typo). There are three.
@DaveShap · 1 year ago
Check out the descriptions for all tiers: www.patreon.com/daveshap/membership
@kristoferkrus · 1 year ago
Could you explain what a cognitive architecture is?
@DaveShap · 1 year ago
I'll make a video
@vulnerablegrowth3774 · 1 year ago
With respect to a model knowing what it knows and doesn't know: Anthropic has a paper called "Language Models (Mostly) Know What They Know" where they test whether the model can predict if it internally knows certain information, and it seems to do quite well at that. As for OpenAI, yeah, they are focused on scale right now because they believe most of the capability gains will come from scale, and it won't require as much effort to add the other components once they decide to do so. That said, they are working on stuff like WebGPT for a reason!
@XCmdr007 · 1 year ago
Smoking hot!
@T8ersalad · 1 year ago
I’m so mad he said fusion to power homes is over kill. 😂😂😂😂
@DaveShap · 1 year ago
I mean technically solar power is fusion power...
@T8ersalad · 1 year ago
@@DaveShap I figured that you meant developing a personal fusion reactor to power a single home is overkill. Wasn’t really “mad”, Although a fusion reactor per city couldn’t be more reasonable and further from overkill…
@intracompiler · 1 year ago
Finally found someone just willing to talk about GPT-related LLM stuff. Had to wade through thousands of "Use ChatGPT to make $30,000 quick!" videos... Love the stuff. Instant sub.
@amador1997 · 1 year ago
I found this out after some quick prototyping. I'm thinking maybe a modular approach, with two bots: a classifier or semantic search, and something like ChatGPT, where one checks the other.
@miguelalba2106 · 1 year ago
The main problem with transfer learning (including inductive transfer learning) is catastrophic forgetting: even fine-tuning a small portion of a network makes the entire thing susceptible to forgetting. There are ways to mitigate this, but most of that research is prohibitively expensive to apply to LLMs.
@hennerz6964 · 1 year ago
David, why don't you create a private members' community where you share information on projects such as your Curie scene work?
@DaveShap · 1 year ago
I thought about it but that's way too much to organize. Information wants to flow. Speaking of, let me go ahead and set that to public.