HOW to Make Conversational Form with LangChain | LangChain TUTORIAL

15,567 views

Sam Witteveen


How to Make Conversational Form with LangChain
Colab link: drp.li/QAelt
My Links:
Twitter - / sam_witteveen
Linkedin - / samwitteveen
Github:
github.com/sam...
github.com/sam...

Comments: 75
@datupload6253 · 1 year ago
Hi, sorry my question is not video related, but what language model would you recommend for training on a 24GB GPU from scratch if I have my own dataset and want to try from scratch? I don't want to use the pre-trained model because I want to have my own tokenizer and the dataset is not in English. I've played around a bit with GPT-NeoX with models with sub-1B parameter sizes, but I'm thinking that's a pretty old project and that maybe something more up to date (faster) has come out in past months. Thanks
@samwitteveenai · 1 year ago
You probably don't want to train an LLM from scratch; you need a few hundred billion tokens to get it to take off, and most of the LLMs that are decent were pretrained on 1T+ tokens. You want to fine-tune a model that has been made with a multilingual tokenizer. A number of the open LLaMA clones do have 50k tokenizers that are more multilingual friendly. A lot of it depends on what the language is.
@ghrasko · 1 year ago
Thanks, this was extremely useful! You emphasized that this is still a memory-less version, but because of this it is really limited, and I don't know yet how to build on it. For example, I need to collect a date from the user. In the prompt I can inject the current date, and thus the AI would be able to resolve input like "this Friday" or similar. However, as this is memory-less, the parser chain will not be aware of the prompt with the current date or any other contextual info about the date. I am new to LangChain, so any hint on how to proceed would be appreciated.
@VeeDCreate · 5 months ago
Most of this is deprecated now. Run vs Invoke changes a lot of things. The function create_structured_output_runnable should be used instead of create_tagging_chain_pydantic. Plus, Pydantic format is different too (dict issues).
@samwitteveenai · 5 months ago
Yes, this is probably a year old or so now.
@VeeDCreate · 5 months ago
@samwitteveenai I wanted to thank you for your content; I didn't do that in the earlier comment. Thank you for all the work you have put into these easy-to-understand tutorials.
@gautamsk502 · 1 year ago
Hi @sam, I am getting the error below when I execute the run command in the Colab you shared. Any idea what could be the reason?
ValidationError Traceback (most recent call last)
----> 1 res = chain.run(test_string)
ValidationError: 1 validation error for AIMessage
content: none is not an allowed value (type=type_error.none.not_allowed)
@thequantechshow2661 · 1 year ago
This is GOLD
@jayhu6075 · 1 year ago
What a great example of OpenAI functions; hopefully more examples of this stuff. Many thanks.
@VijayDChauhaan · 3 months ago
Please provide an alternative solution for using this with open-source models.
@samwitteveenai · 3 months ago
This kind of thing works with the Llama 3 models; you often just need to play with the prompts a bit.
@VijayDChauhaan · 3 months ago
@samwitteveenai Should I use Ollama?
@kilopist · 5 months ago
Amazing tutorial, Sam! How could I give the user the option to ask clarifying questions? I guess memory in the ask_for chain?
@carrillosanchezricardo2594 · 5 months ago
Is there a way to start this "flow" from a previous conversation? For example, User: "Yes, I would like to book a flight", and then start this flow of asking. I think this could be possible by wrapping this in an agent or something. Am I wrong?
@ChrisadaSookdhis · 1 year ago
I was totally surprised to see Chatree Kongsuwan in one of the examples. Do you know the musician?
@samwitteveenai · 1 year ago
P'Ohm is actually an old friend, and I am in Bangkok this week. I also wanted to show that it would work with non-'western' names, so I put his name in there. Cool that someone noticed :D
@huislaw · 1 year ago
Nice. How can I use this properly as a tool for an agent? I'm trying to create a tool for users who want to get in touch, which will collect their name and email. When I tried to use it in an agent, it would trigger the tool if the user said they want to get in touch, but when the agent asked for the name and the user replied with it, the tool no longer got triggered. The agent simply said, "Hi {name}, nice to meet you."
@joey424242 · 2 months ago
This is great! What would you do in a case where I want to make a sort of chatbot quiz that asks questions in a certain order? The questions will either be multiple choice or answered by free text, meaning the LLM would either need to display the multiple-choice options and then record whether the user chose the right one, or receive a free-text answer and then grade the user on how close it is to the actual textual answer. The app will need to keep score and then finally grade the user. Would you use agents or functions here?
@lughinoo · 1 year ago
Great video. I would love to see an alternative way of doing conversational forms using open-source models.
@trulymittal · 10 months ago
Did you find the alternative way using agents or something else, as Sam said in the video at 1:00?
@timttttast9793 · 1 year ago
I love it!!! A huge thanks for sharing your knowledge Sam. One thing that's been on my mind lately - suppose a user initially introduces himself with a nickname, like Sam, and later wants to rectify it by saying, 'Apologies, but my actual name is Samuel, not Sam.' Is there an efficient way to manage this within the system and keep the other answers that were already given? I'm genuinely curious to learn more about handling such situations.
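Not something the video covers directly, but one minimal way to sketch this (assuming the collected details live in a simple record like the tutorial's Pydantic model; the dataclass and the merge_details helper here are illustrative stand-ins, not the notebook's code) is to merge each newly parsed result into the running record, letting non-empty new values overwrite old ones while everything else keeps its previous answer:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class PersonalDetails:
    # Mirrors the tutorial's shape: every field is optional.
    first_name: Optional[str] = None
    city: Optional[str] = None
    email: Optional[str] = None

def merge_details(current: PersonalDetails, new: PersonalDetails) -> PersonalDetails:
    """Overwrite a field only when the newly extracted value is non-empty,
    so a correction like 'my actual name is Samuel' replaces 'Sam' but
    leaves the already-collected city and email untouched."""
    updates = {
        f.name: getattr(new, f.name)
        for f in fields(new)
        if getattr(new, f.name) not in (None, "")
    }
    return PersonalDetails(**{**current.__dict__, **updates})

# First turn: user says "Hi, I'm Sam, I live in Bangkok"
record = PersonalDetails(first_name="Sam", city="Bangkok")
# Later turn: "Apologies, my actual name is Samuel" -> the parser fills only the name
correction = PersonalDetails(first_name="Samuel")
record = merge_details(record, correction)
print(record)  # PersonalDetails(first_name='Samuel', city='Bangkok', email=None)
```

Running the tagging/extraction step on each new utterance and merging like this keeps prior answers while still allowing corrections.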
@monirehebrahimi6141 · 5 months ago
Is there any chance to connect it with Llama 3 instead of OpenAI?
@shrvn110 · 1 year ago
I hope you get everything you want in life! thank you for your videos Sam!
@tomjerry5144 · 1 year ago
This is what I had been looking for for a long time! Thank you for sharing.
@yasminesmida2585 · 3 months ago
Please, why did you use model='gpt-3.5-turbo-0613' for chain 1 and model='gpt-3.5-turbo', the default model, for chain 2?
@samwitteveenai · 3 months ago
They would have been the same (at the time of recording); one is a pinned version and one is not.
@yasminesmida2585 · 3 months ago
Great video thank you very much. What is the next step after creating the two chains? Should I create a global function to call them, develop an API, or consider another approach?
@samwitteveenai · 3 months ago
It really depends on what you want to do with it.
@ChrisadaSookdhis · 1 year ago
This use case is similar to one I had been considering for a while. When companies put a contact drop form on the web, the prevailing wisdom is to keep the form as short as possible, lest you risk turning users away. But marketers always want more info, and we know SOME users are OK with sharing it. My idea is to have a conversational chatbot that tries to collect additional data fields after the contact form submission. The bot would collect only as much as users are happy to share, then stop and add the gathered info to the previously submitted form. If users do not want to, they don't have to share anything more. Win-win.
@samwitteveenai · 1 year ago
You can certainly do that, especially if you took out the ask_for part and just had some more generic prompts etc. One main point was that I wanted to show you probably don't want the filtering part on every utterance, just on ones you think will be useful.
@PaulBenthamcom · 1 year ago
I kind of have this working as a feedback form, but it's a bit clunky, and every question starts with "I need to gather some feedback...", so it repeats the "explain you need to get some info" part on every loop. I also had to include "not mentioned" in the 'empty' conditions. I can't help but think it needs memory to contextualise what it has already asked, but this might be expensive with regard to token usage. Maybe you could add a "yes" or "no" as to whether the question is answered, and then have the parser review the memory to pick out the answers from the conversation history? I've not had any luck with that yet, though.
@svenandreas5947 · 1 year ago
Just brilliant... I would like to ask how you would ensure that a user gives an indication of whether they are happy with the answer or not? I played around with adding this question to the prompt template (including memory) but was not so successful. It works most of the time, but for whatever reason it is unable to deal with a simple yes/no answer. Looking forward to your tutorial. Eye opener!
@RedCloudServices · 1 year ago
Sam, what if the use case has dependent picklists? For example, if the form has categories of fruits and vegetables, and subcategory enumerators based on the value of the category?
@caiyu538 · 8 months ago
Great
@MeanGeneHacks · 1 year ago
I often end up with pydantic validation errors when I add other fields, such as "issue description" for a customer service bot. Any idea why that's happening?
--- I understand there was a problem with your order. In order to assist you better, could you please provide me with a description of the issue you encountered?
---> my shipment didn't arrive.
--- pydantic.error_wrappers.ValidationError: 4 validation errors for OrderDetails
issue_description: field required (type=value_error.missing)
email: field required (type=value_error.missing)
phone: field required (type=value_error.missing)
order_number: field required (type=value_error.missing)
@samwitteveenai · 1 year ago
Without playing with it a bit, my first guess would be that your description of what the field should be is not detailed enough for the OpenAI Functions. Also try it with GPT-4 now that it is available.
@Mrbotosty · 1 year ago
We can even have validation here: check if the email is valid and ask again if not! It's a great use case for LLMs.
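As a rough sketch of that idea (the regex and helper names below are made up for illustration, not from the video; real email validation can be looser or stricter), a simple check can decide whether an extracted email is accepted or blanked out so the ask-for loop requests it again:

```python
import re

# Deliberately simple pattern for illustration only.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def accept_email(candidate: str) -> bool:
    """Return True if the extracted value looks like an email address."""
    return bool(EMAIL_RE.match(candidate))

def validate_field(details: dict) -> dict:
    """Blank out an invalid email so the form loop treats it as still missing
    and asks the user for it again."""
    email = details.get("email") or ""
    if email and not accept_email(email):
        details["email"] = None  # still-missing as far as the ask-for step is concerned
    return details

print(validate_field({"email": "sam@example"}))      # invalid -> will be asked again
print(validate_field({"email": "sam@example.com"}))  # accepted
```

The same gate works for phone numbers, dates, or any field where the LLM's extraction should be sanity-checked before it is recorded.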
@rashedulkabir6227 · 1 year ago
Make a video about the new SDXL and how to run it on Google Colab.
@MeanGeneHacks · 1 year ago
Is it possible to include Optional fields in the Pydantic class and have the model include them if provided by the user, but not ask for them specifically? Every time I add an Optional field, it seems to break the chain.
@samwitteveenai · 1 year ago
Yes, totally. All the fields I had in there were optional; the only reason it asked was because I had a separate function for that. The Pydantic class had nothing marked as required.
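A rough stdlib sketch of that separation (the dataclass and function below are illustrative stand-ins, not the notebook's exact code): the schema requires nothing, and a separate helper decides what the bot still needs to ask for.

```python
from dataclasses import dataclass, fields
from typing import List, Optional

@dataclass
class PersonalDetails:
    # Nothing is required at the schema level; every field may stay None.
    first_name: Optional[str] = None
    city: Optional[str] = None
    email: Optional[str] = None

def check_what_is_empty(details: PersonalDetails) -> List[str]:
    """The 'separate function': the schema doesn't require any field;
    this decides which ones the bot should still ask about."""
    return [
        f.name for f in fields(details)
        if getattr(details, f.name) in (None, "")
    ]

details = PersonalDetails(first_name="Chatree")
missing = check_what_is_empty(details)
print(missing)  # ['city', 'email'] -> feed these into the ask-for chain
```

Keeping "what is missing" out of the schema is what lets extraction succeed even when the user volunteers only some of the fields.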
@matthewmansour3295 · 1 year ago
Pure gold. Thanks for making this.
@shreyasharma6074 · 5 months ago
This is amazing! Thank you so much!
@micbab-vg2mu · 1 year ago
Great video - thank you :)
@konradhylton · 1 year ago
Thanks for sharing. Hi Sam, do you know if LangChain's JavaScript components are also able to do this?
@samwitteveenai · 1 year ago
They should be able to; all of this is just a different style of prompting. The part they may struggle with is the Pydantic class; my guess is they would use a generic object that is similar.
@Weberbros1 · 1 year ago
Hey man, FYI this video seems to be blacklisted from showing up in YouTube search results. Not sure what you did to piss off the algorithm lol, but YouTube hates this video for some reason.
@Weberbros1 · 1 year ago
Never mind, it shows up in search results again. No problem anymore.
@FreestyleTraceur · 1 year ago
Very cool. Your videos are great.
@vinsi90184 · 1 year ago
Hey Sam, can you also explain the reason for choosing the tagging chain instead of the extraction chain? I was trying this out with extraction, and it gives back a list. The plus there is that I can also collect information about a group of users rather than one, but it also creates extra errors. Say the cars you own may be one or more: when I used the extraction chain and said I live in Melbourne, Australia and own a Volkswagen and a Tesla, the extraction created two entries, one with my name etc. and one car, and the other with blank entries elsewhere and the other car. Whereas if I use the tagging chain, both cars are joined in one field. Happy to hear your thoughts on tagging vs extraction chains and their respective pros and cons.
@samwitteveenai · 1 year ago
Tagging seems to have been made more for classification, e.g. sentiment analysis etc. It sounds like what you are doing is more extraction than classification, so it makes sense to use that one.
@vinsi90184 · 1 year ago
As always, starting with thanks. I am always catching up with your videos. I am curious about the use of field descriptions in the Pydantic class. What purpose do they serve? Are they picked up by the LLM as well to understand what the fields mean? Also curious about how to use few-shot learning with the tagging chains you have created.
@samwitteveenai · 1 year ago
Yes, exactly. The descriptions help the LLM work out what to do.
@kenchang3456 · 1 year ago
Hi Sam, thank you for another great example to learn from :-) When you decided to use Pydantic, was it that you had experience with it and it fit this use case?
@samwitteveenai · 1 year ago
I have used Pydantic before with FastAPI etc., but apparently even OpenAI are using it for this, so it makes sense to use as it works really well.
@pmshadow · 1 year ago
Thanks a lot! Fantastic content!!
@grandplazaunited · 1 year ago
Thanks Sam. This made my day :)
@kennethleung4487 · 1 year ago
Great work, Sam! Super useful
@rossanobr · 1 year ago
JS videos please 🥹🥹
@yasminesmida-qc9ce · 3 months ago
Can I use any other open-source LLM model? Which one do you recommend?
@samwitteveenai · 3 months ago
I would go with Llama 3 now for an open-source version, or Mistral.
@yasminesmida2585 · 3 months ago
@samwitteveenai Thank you very much. What is the next step after creating the two chains? Should I create a global function to call them, develop an API, or consider another approach?
@yasminesmida2585 · 3 months ago
@samwitteveenai Are Llama and Mistral better than GPT-3.5 for this task?
@VijayDChauhaan · 3 months ago
@yasminesmida2585 Are you able to recreate this with Llama 3? If so, how? Did you use an API, Ollama, or something else?
@davidw8668 · 1 year ago
Thanks. This is pretty cool for all sorts of interactive content and lead generation, but could also be imagined for personalised experiences.
@samwitteveenai · 1 year ago
Yes, totally. One use case I have been working on where I used something like this was exactly lead gen.
@ahmadzaimhilmi · 1 year ago
I was thinking of a more complex conversation: you start a main conversation, then have a side discussion about a subtopic, come to a conclusion, and use the conclusion to chart the next step. Something like that.
@samwitteveenai · 1 year ago
Yes, this will work; just plan out the possible paths. I will make a video on routers soon, and they can be used for that.
@JermaineCheah · 1 year ago
@samwitteveenai Thanks for going into these depths, as many other content creators always stop at the chatbot-with-your-docs or conversation-memory stage. What I am trying to achieve is very similar to what Ahmad Zaim mentioned: piecing all the puzzle pieces together to create a much better conversational chatbot for inbound customer service. Will you still be doing intent at the routing level, to handle cases where a user in, say, a refund juncture is filling in the conversational form but provides a message that is not what the refund juncture is expecting, and how would you handle and route those? Really excited to see what you have in the pipeline. Keep up the great work. Do you have a Patreon?
@guanjwcn · 1 year ago
Is it really true that you mostly live in Singapore?
@samwitteveenai · 1 year ago
That is certainly true, though I recorded this in Bangkok and am in BKK this week.
@蔡瀚緯-w4j · 1 year ago
Hi Sam, thank you for your amazing videos that have helped me a lot. I've learned a great deal about LangChain recently. I'm currently working on developing a tool that can process 30 to 50 hotel reviews at once. The goal is to classify the priority level of each review based on predefined rules, allowing hotel staff to quickly respond to complaint reviews. The rules may look like this:
high_priority_standard = ["Unwilling to visit again", "Customer injured due to hotel", "Serious hygiene issues", "..."]
medium_priority_standard = ["Price perception gap", "Unsatisfactory staff service", "..."]
low_priority_standard = ["Issues that cannot be improved in the short term (location disadvantage, outdated hotel)", "Internet connection problems", "..."]
My question is, which LangChain tool should I use if I want to automate and reliably process customer reviews each day? I tried using csvAgent, inputting 30 reviews, but it only gave me 4 outputs, and the quality of the outputs did not meet my expectations. I would appreciate it if you could provide me with some advice. Thank you!
@蔡瀚緯-w4j · 1 year ago
I solved the problem with a template and multiple input_variables. Thanks. But the output is still unstable; the LLM will miss 3-5 reviews.
@蔡瀚緯-w4j · 1 year ago
I have watched your excellent video about parsers on your channel, which helped me understand the functionality of Pydantic. I am wondering if I can use Pydantic to define the desired number of outputs from the LLM. Currently, I have only seen Pydantic being used to enforce the format of each output. For example, initially I relied on len(data) to determine that the LLM output should match the number of input reviews. However, it doesn't always work well, as sometimes the LLM still outputs fewer results. Below is my original code, and I would appreciate your suggestions.

# The priority classification rules (in Chinese)
priority_standards = [
    {"priority": "high_priority", "standards": ["不願意再光顧", "顧客因為飯店受傷", "重大衛生問題", "顧客個人財物丟失或被竊", "顧客隱私洩露", "評論內容顯示顧客情緒極度憤怒"]},
    {"priority": "medium_priority", "standards": ["可以短期改善的問題(服務流程,動線問題....)", "價格認知落差", "員工服務不滿意", "飯店設施故障", "房間清潔問題", "評論內容顯示顧客情緒不太愉快"]},
    {"priority": "low_priority", "standards": ["無法短期改善的問題(地點不優、飯店老舊....)", "網絡連接問題", "飯店噪音問題", "客觀來講並非飯店業者問題"]},
]

llm_16k = ChatOpenAI(model_name='gpt-3.5-turbo-16k', temperature=0)

# Define the prompt template
prioriy_classify_prompt3 = PromptTemplate(
    input_variables=["text_input", "priority_standard", "review_count"],
    template="""
To ensure that all the {review_count} input reviews are processed according to the provided rules, please use the following instructions for each review:

Extract the following information:

priority: As an expert in public relations and crisis management for a five-star hotel, please leverage your extensive experience to classify the priority of each review based on the provided {priority_standard}. The priority classification should yield one of three outcomes: high_priority, medium_priority, or low_priority.

priority_reason: Explain the reasons behind the priority classification. Please provide a concise description, in Traditional Chinese (Taiwan), of your rationale, customer sentiments, severity of the situation, and other relevant factors. Limit the explanation to 30 words to ensure brevity.

key_fact: Please summarize the key facts of each review without including your own opinion. Keep the key facts in short sentences and in Traditional Chinese (Taiwan) whenever possible.

Make sure the number of final outputs is equal to {review_count}, and format as JSON with the following keys:
key_fact
priority
priority_reason

reviews: '''{text_input}'''
""",
)

prioriy_classify_chain3 = LLMChain(llm=llm_16k, prompt=prioriy_classify_prompt3)
prioritys3 = prioriy_classify_chain3.predict_and_parse(
    text_input=data,
    priority_standard=priority_standards,
    review_count=len(data),
)
print(prioritys3)