The BEST walkthrough and sample code on tool calling on YouTube!
@therealsergio 6 months ago
Adam, your content is very helpful and thoughtfully put together. It's clear a lot of hours go into its preparation.
@terryliu3635 2 months ago
The best walkthrough on this topic ever!! Thank you!!!
@Alexfnplays 24 days ago
crystal clear and right to the point!!!
@sunitjoshi3573 5 months ago
Nice video! I was just thinking about function calling…and your video showed up! Thanks 😊
@AdamLucek 5 months ago
Perfect!
@PragyaKaushik-q8f 2 months ago
This is amazing content! Quite concise and interesting! Thank you so much!!
@JustinHennessy 2 months ago
Amazing rundown on tools. Thanks so much for sharing.
@BagusSulistyo 21 days ago
Hi Adam, thanks for the video, awesomeeeee!!
@Horizont. 6 months ago
Great video. Dude's been speaking nonstop for 30min straight. Now, I need a 2 hour break.
@amk2298 6 months ago
😂😂😂😂
@Karthik-ln7eg 6 months ago
great video. very clear explanation.
@AdamLucek 6 months ago
Thanks!
@JimHeil 6 months ago
Awesome video! Thank you!
@AdamLucek 6 months ago
Thanks!!!
@sirishkumar-m5z 5 months ago
An intriguing exploration of LLM function calling! Investigating more AI tools may deepen your understanding even further.
@billblair3155 2 months ago
U Da Man 😎
@coolmcdude 6 months ago
My favorite subject
@chrisjhandy a month ago
awesome video!
@rowcolumn7499 10 days ago
Thank you, Adam, for your effort in making this video. May I ask: in 2025, what are the best tools, in your opinion, for developing an app that does RAG and external tool calling? LlamaIndex/LangGraph/LangChain? Thanks in advance!
@skibidi-skibidi-bap 3 months ago
Wait, so does LangChain create the whole JSON for you using the @tool decorator?
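(Editor's note: the thread leaves this unanswered, but roughly yes: LangChain's @tool decorator inspects the function's signature, type hints, and docstring to build the tool's JSON schema for you. Below is a minimal stdlib sketch of the idea; the helper name `tool_schema` is illustrative, not LangChain's actual internals.)

```python
# Sketch of what a decorator like LangChain's @tool derives from a plain
# Python function: an OpenAI-style tool definition built from the
# signature and docstring. Illustrative only, not LangChain's internals.
import inspect
import json

PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style tool definition from a function's signature."""
    sig = inspect.signature(fn)
    props = {}
    required = []
    for name, param in sig.parameters.items():
        # Map the Python type hint to a JSON Schema type (default: string)
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required argument
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": props,
                "required": required,
            },
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(json.dumps(tool_schema(add), indent=2))
```

The generated dict is the same shape as the `tools` entries passed to `client.chat.completions.create(...)` elsewhere in this thread.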
@aishwaryaallada0925 5 months ago
Awesome video! Is the code shared anywhere? 😊
@AdamLucek 5 months ago
Yes! In the description, and direct link here: github.com/ALucek/tool-calling-guide
@youtubemensch 5 months ago
How is this possible with an LLM running on your own server, i.e. without an AI API? Like a Llama model. Is this possible with a specific model?
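(Editor's note: this goes unanswered in the thread. Generally yes: self-hosted servers such as Ollama and vLLM expose an OpenAI-compatible chat completions endpoint, so the same `tools` payload works, provided the model itself supports tool calling, e.g. Llama 3.1. A stdlib-only sketch; the URL, port, and model name below are assumptions.)

```python
# Sketch: building a tool-calling request for a self-hosted model behind
# an OpenAI-compatible endpoint. The request is constructed but not sent,
# so the sketch runs without a local server.
import json
import urllib.request

payload = {
    "model": "llama3.1",  # assumed: any locally served tool-capable model
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",  # Ollama's default port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it; the response carries
# tool_calls in the same format as OpenAI's hosted API.
```

Because the wire format matches, the official `openai` client can also point at such a server via `OpenAI(base_url=..., api_key="unused")`.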
@siddheshwarpandhare1698 6 months ago
Hi Adam, all the content is well explained. I need to disable parallel calling, but I'm not sure where to put parallel_tool_calls=False. Can you help me in this case?
@AdamLucek 6 months ago
Sure, that's placed here with OpenAI's API:

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=first_tools,
        tool_choice="auto",
        parallel_tool_calls=False  # disable parallel function calling
    )

or with LangChain during the bind_tools stage:

    llm_tools = llm.bind_tools(tools, parallel_tool_calls=False)
I was getting that too, then I updated my OpenAI package with pip and it fixed it! Make sure you restart your kernel afterwards if you're in a notebook environment.
@siddheshwarpandhare1698 6 months ago
Thank you @AdamLucek
@DaleIsWigging 5 months ago
The tutorial doesn't say how to insert your API key. To do this, replace

    client = OpenAI()

with:

    import os
    from dotenv import load_dotenv

    # Load environment variables
    load_dotenv()

    # OpenAI API configuration
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Then you can use a .env file with:

    OPENAI_API_KEY="sk-proj..."
@AdamLucek 5 months ago
Also possible to do:

    import os
    os.environ["OPENAI_API_KEY"] = "your-api-key-here"

and it will automatically pull the environment variable when creating the client, etc. Thanks for pointing it out!
@gani2an1 6 months ago
Where can we see the functions that are available with each model?
@maxpalmer8660 6 months ago
Kinda hate this dude's voice ngl. Anyone else with me?