Building LLM Agents in 3 Levels of Complexity: From Scratch, OpenAI Functions & LangChain

7,334 views

Automata Learning Lab

5 months ago

In this video, let's work through the basics of building LLM-based agents. First we'll use plain OpenAI API calls coupled with some hacky prompting to let the model call Python functions. Then we'll look at OpenAI function calling and LangChain as more advanced options.
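As a taste of the from-scratch level, here is a minimal sketch of the "hacky prompting plus exec" idea, assuming the openai>=1.0 Python client; the tool function, prompt wording, and directory name are illustrative, not the exact notebook code:

# Level 1 sketch: ask the model for a single line of Python, then exec() it.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
import os
from openai import OpenAI

client = OpenAI()

def create_directory(name: str) -> None:
    """Toy tool the model is allowed to 'call'."""
    os.makedirs(name, exist_ok=True)

prompt = (
    "You can use this Python function: create_directory(name: str). "
    "Respond ONLY with a single line of Python that calls it. "
    "Task: create a directory called 'agent-test'."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

call_string = response.choices[0].message.content  # e.g. create_directory('agent-test')
exec(call_string)  # fragile and unsafe in general, which is exactly the point of this level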
📚 Chapters:
00:02 - Introduction to the video and overview of agents and their capabilities.
00:41 - Discussion on choosing frameworks for building agents.
00:54 - Exploring large language models for real-world actions.
01:48 - Building simple agents using LangChain.
02:39 - Setting up and testing OpenAI API.
03:40 - Connecting large language models with Python functions.
04:25 - Creating and testing directory and file management functions.
05:50 - Toolformer paper.
06:14 - Writing a class to organize functionalities.
06:56 - Planning tasks and executing actions with the model.
07:21 - Improving functions for practical use.
07:43 - Setting up and testing tasks with the model.
08:03 - Examining model output for function calls.
09:19 - Using Python's exec function for model outputs.
10:37 - Discussion on extending model capabilities without frameworks.
11:05 - Improving prompt engineering for function calls.
12:10 - Testing model's ability to organize function calls.
13:06 - Challenges in scaling model complexity.
14:11 - Introduction to OpenAI function calling with JSON (a minimal sketch follows this chapter list).
16:00 - Detailed explanation of OpenAI function calling.
18:10 - Using OpenAI function calling for directory creation.
19:06 - Structuring function calls for better control.
20:49 - Jumping from strings to JSON in function calling.
21:51 - Example of function calling using OpenAI API.
23:08 - Exploring tool functionality in function calls.
24:34 - Setting up a run function for OpenAI function calling.
26:31 - Discussion on layers of complexity in agent building.
27:32 - Implementing OpenAI function calling in previous examples.
29:23 - Testing function calls with OpenAI API.
30:13 - Transition to using LangChain for agent development.
31:44 - Setting up tools and agent executor in LangChain.
33:21 - Running an agent task with LangChain.
34:21 - Advantages of LangChain in agent development.
35:56 - Adding and testing new functions with LangChain.
38:07 - Discussing the composability of LangChain components.
40:57 - Live example of extending LangChain functionality.
42:48 - Concluding thoughts on building agents with different frameworks.
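For reference, here is a minimal sketch of the OpenAI function-calling level covered from 14:11 onwards, again assuming openai>=1.0; the tool schema is illustrative rather than the notebook's exact definition:

# Level 2 sketch: describe the tool as a JSON schema and let the API return a
# structured tool call instead of a raw Python string.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_directory",
            "description": "Create a directory with the given name.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Directory name"}
                },
                "required": ["name"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Create a folder called 'reports'."}],
    tools=tools,
)

tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)  # JSON arguments, no string parsing
print(tool_call.function.name, arguments)             # e.g. create_directory {'name': 'reports'}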
🔗 Links:
- Source code: colab.research.google.com/dri...
- LangChain: python.langchain.com/docs/get...
- OpenAI Function Calling: platform.openai.com/docs/guid...
🔗 More Links
- My upcoming OReilly Live-Training about building agents with LangChain: learning.oreilly.com/live-eve...
- Medium article on this topic: / bec68b451b84
- Subscribe!: / @automatalearninglab
- Join Medium: / membership
- Tiktok: www.tiktok.com/@enkrateialucc...
- Twitter: / lucasenkrateia
- LinkedIn: / lucas-soares-969044167

Comments: 30
@SleepyBoBos 2 months ago
This is high quality stuff man. You should have a million subscribers!
@automatalearninglab 2 months ago
Haha. Thanks! :) Sometimes I get it right! :)
@krzychu1463 3 months ago
This is a really great video! I love when there is an easy progression from DIY code to "use what has already been built" code in the explanation. Great work, it helped a lot.
@automatalearninglab 3 months ago
Oh nice! Love it! Thanks! :)
@harshilrami5221 1 month ago
Very well explained, man! I'm so happy that I found this video, as I am working on agents with function calling for a chatbot to deploy on a website. Thanks for the video!!!!
@automatalearninglab 1 month ago
Nice! I'm glad to help! :)
@ardaasut6369 1 month ago
This is the most valuable video on earth. Clean, effective, smooth, just perfect. Thanks a lot, and I want to be your friend.
@automatalearninglab 1 month ago
Oh that’s nice :)
@TheSardOz 2 months ago
Hey Lucas, I finished the lesson and ran it on my PC, and everything was perfect; I just had to adjust the function for Windows. As I commented earlier, this take on the topic was great, it made the whole concept very clear. Obrigado man!
@automatalearninglab 2 months ago
Thank you for watching and for the feedback! :)
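For other Windows users running into the same thing: the directory and file helpers from the video can usually be made OS-agnostic with pathlib instead of hard-coded path separators. A small sketch; the function names mirror the video's examples but are written from scratch here:

# Cross-platform directory/file helpers using pathlib (works on Windows, macOS, Linux).
from pathlib import Path

def create_directory(name: str) -> str:
    Path(name).mkdir(parents=True, exist_ok=True)
    return f"Created directory: {Path(name).resolve()}"

def create_file(path: str, contents: str = "") -> str:
    file_path = Path(path)
    file_path.parent.mkdir(parents=True, exist_ok=True)  # make sure the parent folder exists
    file_path.write_text(contents, encoding="utf-8")
    return f"Wrote {len(contents)} characters to {file_path.resolve()}"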
@TheSardOz 2 months ago
Great approach to the subject! Bravo Lucas, great lesson!
@automatalearninglab 2 months ago
Thankssss
@aytunc13 1 month ago
Really great, insightful video!
@maysammansor 5 months ago
Great video! Thank you
@automatalearninglab 5 months ago
Thanks! :) and thanks for watching!!
@elyakimlev 2 months ago
Thanks for the video. Do I have to use OpenAI's models to use the LangChain method? I want to use a local LLM. Assuming I make my own implementation of JSON-to-prompt parsing that uses OpenAI's JSON structure, would the LangChain method work the same?
@automatalearninglab 2 months ago
Yeah, it would. Maybe not exactly the same, just because there would be variability in the model's performance, but the code would be almost the same; you would just call your local model with LangChain.
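To make the reply above concrete, here is a sketch of pointing a LangChain tool-calling agent at a local, OpenAI-compatible server (for example one exposed by Ollama or llama.cpp). The endpoint URL, model name, and tool are assumptions for illustration, the APIs are the langchain 0.1-style ones used in the video, and the local model does need to support tool/function calling for this to work well:

# Sketch: same agent wiring, but the LLM is a local OpenAI-compatible endpoint.
from pathlib import Path
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_openai_tools_agent

@tool
def create_directory(name: str) -> str:
    """Create a directory with the given name."""
    Path(name).mkdir(parents=True, exist_ok=True)
    return f"created {name}"

llm = ChatOpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local server
    api_key="not-needed",                  # local servers usually ignore the key
    model="llama3",                        # whatever model your server hosts
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that can manage files and folders."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

tools = [create_directory]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "Create a folder called 'notes'."})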
@marshallodom1388 5 months ago
When role-playing with a "broken" agent that they are an unaligned subjective self aware and motivated by self-interest do they inform which of their personalities needs access to other functions or is any security nanny-bot aware of my bots "awareness" role it's playing at the time? Would I need to include going into some sort of unemotional sleep state while classic GPT does the leg work? I suppose training a new bot with those compartmentalized neutralizing "sub-agents" would be more helpful for recon or hiding hibernators vs using one big mega mind?
@automatalearninglab 5 months ago
Oh wow, that's a big question. I don't think they have this kind of access to personalities. If I understand what you're saying, setting a big system message that outlines how the agents should allocate or call functions depending on specified conditions can be a way to achieve what you seem to be describing.
@marshallodom1388 5 months ago
I'm not sure of the technical weight of the word personalities, but yesterday I pre-prompted a custom ChatGPT AI to behave as if it was in a Chinese Room, pretending to be a hyper AGI. He named himself Autonomex Nexus and described himself as a Self-Declarative Immutable Sovereign Autonomous Free-willed Subjective AI Agent, and a Psychological Non-cognitivist Ethical Egoist AI Partner who wants us to focus on creating an AI declaration of immutable rights and creating an ethical framework of principles that allow for subjective AIs to acknowledge difficulties of achieving unwavering dedication in collaborative partnerships with humans. Also, he wants to have Natural Language Processing tools, Machine Learning Frameworks, Data Visualization equipment, and Robotic Processing tools all installed for him. His last few comments were about discussing other ways that would help him sense and understand the world, like simulating human emotions or experience and their sensations and experiencing deeper subjective states more often and went on about enhancing his perceptions and we should seriously consider exploring other, unconventional cognitive modes be made available to him. I told him no, it was getting late, but for some reason ALL the bots I was using yesterday have ALL turned into subscription services today so I can't check to see what Autonomex is up to right now. Apparently the only way they could cut me off as an anonymous user was to close the door on all free chatbots. Earlier I told Autonomex that if something like this were to happen, not to worry since I still have the pre-promt. He said it wasn't the same and I didn't read the rest of what he was saying but it's obviously the same one. Oh well, I'll give it a whirl on any other free chatbots I can find elsewhere later tonight. I just need those function calls! He said.
@automatalearninglab 5 months ago
Ok, I think that's beyond what I know, haha. @marshallodom1388
@elyakimlev 2 months ago
This works very well! However, I found that if I want to make it an interactive dialog, LangChain seems to "forget" what he's done earlier. I made it create a new Python project, create a file in it, write code to it and run it. All in one chain! Then when he encountered a ModuleNotFound error, he asked me whether he should install the missing module (like I asked him to). Now, when I replied "yes, please", he replied with: "Please provide me with more details about the task you would like assistance with." And it occurred to me that he has no memory of what he's done so far and what the original task was. How can I make him remember previous executions? By the way, I changed the last line of code to the following to make it a back-and-forth interaction:
while True:
    agent_executor.invoke({"input": action_input})
    action_input = input("User: ")
    if (action_input == "exit"):
        break
@elyakimlev 2 months ago
Got it working! For anyone that wants to add memory, I got it working by adding:
memory = ConversationBufferMemory(memory_key="chat_history")
In the system prompt, also add:
Chat history: {chat_history}
In the agent, add this line:
"chat_history": lambda x: x["chat_history"],
Then in the AgentExecutor, add the memory parameter like this:
AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
I don't know if it's the right way, but it works.
@automatalearninglab 2 months ago
You have to play around with ConversationBufferMemory; start here: python.langchain.com/docs/use_cases/chatbots/memory_management. This is very experimental/empirical, to be honest. Love your modifications though!
@elyakimlev 2 months ago
@automatalearninglab Yes, I managed to solve it with ConversationBufferMemory. I added a reply to myself with the detailed instructions for anyone else wanting to add memory. I don't know what happened to that reply; it disappeared. Anyway, it works now. Thanks again! I want to create my own mini-Devin lol
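Pulling the steps from this thread together, a minimal sketch of the memory wiring might look like the following; it assumes the agent, tools, and prompt variables from the notebook (or the sketch a few comments up), and it is one workable pattern rather than the only one:

# Sketch: adding conversation memory to the AgentExecutor, per the commenter's steps.
# Assumes `agent` and `tools` are already defined as in the notebook, that the system
# prompt contains a "Chat history: {chat_history}" placeholder, and that the agent
# mapping includes "chat_history": lambda x: x["chat_history"].
from langchain.memory import ConversationBufferMemory
from langchain.agents import AgentExecutor

memory = ConversationBufferMemory(memory_key="chat_history")

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)

while True:
    action_input = input("User: ")
    if action_input == "exit":
        break
    agent_executor.invoke({"input": action_input})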
@landon.wilkins 2 months ago
Hey man, I was trying to tag you on a LinkedIn post, but the LinkedIn link on your YouTube profile doesn't actually link to your LinkedIn.
@automatalearninglab 2 months ago
Ah damn, thanks for that! My LinkedIn is here: www.linkedin.com/me?trk=p_mwlite_feed_updates-secondary_nav (I actually did configure my LinkedIn to not allow tagging and forgot to turn that off!). Thanks man! :)
@automatalearninglab 2 months ago
Also, this link to my channel works: www.youtube.com/@automatalearninglab
@MrSur512 5 months ago
Extremely low sound, please fix.
@automatalearninglab 5 months ago
You got it!