Thanks for this! More LangChain, please! More people seem to be warming up to it recently, and I bet it will be a marketable skill to have in the near future, if not already.
@youneverknow7096 · 9 months ago
Please do more AI based projects Dave, love your work !
@resuleliyev186 · a day ago
Hi bro, thanks a lot for the LangChain video, you're the man! My English is not good, but I can still understand you from zero to hero. I appreciate you.
@vutuan-p3d · 9 months ago
So great to see this type of topic from you. I learned Next.js from you, and now LangChain 😊. LangChain and LangGraph are such a huge topic, and this made my day, Dave. Thank you so much!
@DaveGrayTeachesCode · 9 months ago
Glad it was helpful!
@Amy-gr3xc · 8 months ago
Thank you so much for this and the explanations. I'm surprised how fast the docs were out of date between the video and today. It was frustrating.
@omarelhassani8452 · 9 months ago
That's exactly what I was looking for. Thank you so much, Dave!
@DaveGrayTeachesCode · 9 months ago
You're welcome!
@jefrycayo4582 · 9 months ago
@@DaveGrayTeachesCode hello my friend
@TheNerdTyler · 8 days ago
Dude earned my sub. Thanks for the info
@BilalKundi-lm5zz · a month ago
Hi @Dave, thank you for sharing this amazing video! I have a couple of questions about AI with Next.js: Why aren't you using open-source models like Llama 3.2/2.3, which are free of cost? And how can a chatbot be deployed and configured to work with both a website and WhatsApp? Looking forward to your insights! 😊
@DaveGrayTeachesCode · a month ago
I'm trying to remember, as this was months ago, but I think the choice of LLMs was limited at the time for this integration, and I believe more people were familiar with ChatGPT / OpenAI. Fewer config barriers as well. Again, just going from memory here. As for integrating an app with both a website and WhatsApp: WhatsApp would need to offer an API so you can integrate with it. The answer is the same for any third-party integration such as Twitter, Bluesky, GitHub, etc.
@namaefumei · 8 months ago
Soooo helpful and easy to understand. I made this with Vue and Nuxt and it worked nicely. Thanks a lot!
@ahmad-murery · 9 months ago
Interesting! I started a simple app in the past to answer student questions based on provided PDF/Word/text documents and then read the answer out loud using a TTS API. I was very motivated until I realized that it's sometimes hard to get access to these services if you're located in the wonderland 🤦♂🤔. Thanks Dave!
@DaveGrayTeachesCode · 9 months ago
You're welcome, Ahmad - I hope you find this one interesting!
@ahmad-murery · 9 months ago
@@DaveGrayTeachesCode Yes, it really is
@mfarnell · 6 months ago
Simple, clear, and very helpful tutorial, thanks :)
@bigj3508 · 6 months ago
This was really helpful thank you! Subscribed.
@BilalKundi-lm5zz · 29 days ago
I have a question regarding Example 4, where you load a JSON document. In a more real-world scenario, if we have a Word document containing company data and want to train the model on that data, how would we approach this? How can we extract and use data from the Word document for training purposes?
@jurgensen_eth · 5 months ago
amazing video, thanks!
@Dvir1225 · 8 months ago
Is there a way to implement custom logic so that the chain doesn't stop at the "?" itself, but rather after it? Returning a question to the user without a question mark is rather meh 😅
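One possible workaround (an assumption, not something shown in the video): since a `stop: ["?"]` setting halts generation before the stop token is emitted, the route handler could pipe the LLM stream through a `TransformStream` that re-appends the token once the stream closes. A minimal, dependency-free sketch:

```typescript
// Hypothetical helper: re-appends a stop token that a stop-sequence
// setting cut off the end of the stream. Plain web-streams API,
// no LangChain dependency.
function appendStopToken(stopToken: string): TransformStream<string, string> {
  let sawAnyChunk = false;
  return new TransformStream<string, string>({
    transform(chunk, controller) {
      sawAnyChunk = true;
      controller.enqueue(chunk); // pass model output through untouched
    },
    flush(controller) {
      // When the upstream stream ends, emit the token it swallowed.
      // If nothing streamed at all, append nothing.
      if (sawAnyChunk) controller.enqueue(stopToken);
    },
  });
}
```

Usage would be something like `chainStream.pipeThrough(appendStopToken("?"))` before handing the stream to the response. Note this blindly appends a "?" even if the model stopped for another reason, so a real implementation might want to check the finish reason first.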
@TomAsh519 · 8 months ago
Awesome job! Could you now make an Example 5 and load the states into a vector database? And later from a file?
@mutasimahmed8975 · 9 months ago
More such apps with AI and Next.js, please!
@DaveGrayTeachesCode · 9 months ago
Yes, enjoy!
@NotAllHeroesWearCapes-101 · 9 months ago
Fantastic video, thanks!
@DaveGrayTeachesCode · 9 months ago
Glad you liked it!
@collinsk8754 · 3 months ago
🎉🎉 Excellent
@mikem-zz4ui · 3 months ago
First, thank you for this video! Everything worked as explained (which is rare). Would you be willing to show us how to integrate a SQL database? I checked out your Patreon but didn't see anything like that. This particular model doesn't seem to handle a large number of JSON files well. Cheers!
@DaveGrayTeachesCode · 3 months ago
@@mikem-zz4ui Great suggestion!
@JomarAmomas · 9 months ago
Thanks for the video. Any plans to create Deno learning videos or a course in the future? More power to you.
@aymenbachiri-yh2hd · 9 months ago
Awesome videos, thank you
@DaveGrayTeachesCode · 9 months ago
You're welcome!
@REINOSO195 · 8 months ago
Thanks for the video. I have a question: to adapt this project to use a free model on Hugging Face, what should I consider? I don't have a credit card to use, so I would like a free alternative.
@DaveGrayTeachesCode · 8 months ago
Look at the Vercel AI SDK docs. They do offer integrations with other models.
@AjayCoding · 8 months ago
Super cool, Dave!
@biniyamfikru4464 · 9 months ago
Thank you, Dave. On an unrelated topic: are there any workarounds to implement fullscreen on iOS, since the Fullscreen API doesn't work on iPhones?
@BIDSAH · 7 months ago
Hi there, this is a really fabulous video. Please make another one with a persistent database like Qdrant, where the text is converted into embeddings and retrieved. Thanks from Jeddah, Saudi Arabia!
@DaveGrayTeachesCode · 7 months ago
Great request!
@AIdevel · 7 months ago
It is an excellent app. Please make one in React with RAG; RAG is very much needed.
@Biglu193 · 4 months ago
I got an idea: modify the RAG so that, among other things, in addition to querying data, it also generates one-word instructions according to which, for example, Puppeteer performs interactions in my application. Is that even possible? :D
@tsajm6bh · 3 months ago
I'd like to ask about an issue I encountered while working on the project; I didn't see related information in the comments section. The problem is that when the chatbot's response time exceeds about ten seconds, it stops even if the reply is not yet complete. I'm wondering if this is due to a limit on the Vercel free tier, as I've tried several models and hit the same issue.
@tsajm6bh · 3 months ago
I think I asked a silly question; I found the answer myself. It turns out the Vercel Hobby plan has a default response time limit of 10 seconds. I added 'export const maxDuration = 60;' to the route file, and the issue was resolved.
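For reference, the fix described above sits at the top of a Next.js App Router route file. A minimal sketch (the file path and handler body are placeholders; the 60-second value assumes a plan that allows it):

```typescript
// app/api/chat/route.ts (sketch)
// Vercel Hobby functions default to a 10s execution limit; exporting
// maxDuration raises it so a slow LLM response can finish streaming.
export const maxDuration = 60; // seconds

export async function POST(req: Request): Promise<Response> {
  // Placeholder body: the real handler would build the chain,
  // invoke it with the incoming message, and stream the answer back.
  return new Response("ok");
}
```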
@rezashah22 · 9 months ago
Hi Dave, can we use LangChain to conditionally load data based on the first user question? For example, the user asks "I need to know about US states" (load state data) or "I need to know about cars" (load car data).
@DaveGrayTeachesCode · 9 months ago
Yes, there are many patterns. I was thinking about covering conditionals and database queries.
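A minimal sketch of the conditional-loading pattern asked about above, in plain TypeScript rather than LangChain (the keyword router and stub loaders are hypothetical; a real app might ask an LLM to classify the question instead):

```typescript
type DataSource = "states" | "cars" | "general";

// Hypothetical keyword-based router for the first user question.
function pickDataSource(question: string): DataSource {
  const q = question.toLowerCase();
  if (/\bstates?\b|\bcapital\b/.test(q)) return "states";
  if (/\bcars?\b|\bvehicle\b/.test(q)) return "cars";
  return "general";
}

// Stub loaders standing in for real document loaders; each would
// normally fetch and parse the matching dataset before the chain runs.
const loaders: Record<DataSource, () => string[]> = {
  states: () => ["Alabama", "Alaska" /* ... */],
  cars: () => ["sedan", "coupe" /* ... */],
  general: () => [],
};

function loadForQuestion(question: string): string[] {
  return loaders[pickDataSource(question)]();
}
```

The same branching idea can be expressed inside LangChain itself (e.g. with a conditional runnable), but the decision logic is the same: classify first, then load only the data the chain needs.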
@namaefumei · 8 months ago
How does this handle big data, say 5 GB of data? Is it possible to do chunking? And even if it could handle that, do we have to give it the data every time we initiate a chat?
@DaveGrayTeachesCode · 8 months ago
You would want that in a vector db. It will need data access to process each new addition to the chat.
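For context on the chunking question above: "chunking" usually means splitting text into overlapping windows before embedding each piece into the vector DB. A rough, dependency-free sketch (LangChain's text splitters do a smarter, separator-aware version of this; sizes here are illustrative):

```typescript
// Naive fixed-size chunker with overlap. Overlapping windows help a
// retrieved chunk keep enough surrounding context to stand alone.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

With the chunks embedded once into a vector store, each new chat message only triggers a similarity search over them, so the 5 GB corpus is not re-sent to the model every time.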
@BilalKundi-lm5zz · a month ago
@@DaveGrayTeachesCode Which vector DB do you recommend that is open source and can be downloaded and set up on a local system?
@en_kratia · 8 months ago
Hello Dave, do you know how to read cookies in a server component while preserving SSG at the same time? It looks like the getSession method in next-auth can do this, but I don't know how to replicate it. Thank you!
@sebastianeden · 8 months ago
Thank you for the great content! I implemented this to chat with a PDF document that is quite big (170 pages), and I hit the token limit of 60k tokens/minute (I generate about 70k-80k). Is there any workaround for this? I want to use the whole document as context to answer the question.
@nikoscuatro7251 · 9 months ago
Is this going to work with books in PDF files? And is there a chance to make the GPT keep memory of that topic?
@DaveGrayTeachesCode · 9 months ago
Yes, if you look at the loaders in the docs, there is a PDF loader. In this example, the route handler loads the data. The chat history context is maintained.
@nikoscuatro7251 · 9 months ago
@@DaveGrayTeachesCode I will come back to try it. Thanks!
@fscubetech7219 · 9 months ago
Best content.
@abdul.rehman.ikram1458 · 5 months ago
Hi, I have a question. It seems the model, chain, etc. are instantiated in the same route where we invoke the chain with the question and get a response. Does that mean every time the user sends a message, all of that code has to run again? If it doesn't have to, is there a way to make it so the code doesn't run again and again unnecessarily? Or am I just misunderstanding it?
@sumger-project-og · 2 months ago
Can you use Ollama?
@BilalKundi-lm5zz · a month ago
That's my question too, @Dave, so that the whole thing is done with open source.
@prashlovessamosa · 8 months ago
Please use Cloudflare workers for the next project.
@kumargupta7149 · 9 months ago
On the current trend ❤
@DaveGrayTeachesCode · 9 months ago
Thank you!
@programmerpro-hs8ro · 6 months ago
How many words can I send as JSON to the AI model?
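For context on the question above (not an answer from the video): there is no fixed word count; the real limit is the model's token context window, which varies by model. A common rough heuristic for English is about 4 characters per token, so a hedged back-of-the-envelope estimator looks like this (real counts would come from the model's own tokenizer):

```typescript
// Very rough heuristic: ~4 characters per token for English text.
// Only an estimate; actual tokenization varies by model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Checks whether a JSON payload plausibly fits in a given context
// window, reserving some tokens for the model's answer.
function fitsInContext(
  jsonPayload: unknown,
  contextTokens: number,
  reservedForAnswer = 1000
): boolean {
  const tokens = estimateTokens(JSON.stringify(jsonPayload));
  return tokens + reservedForAnswer <= contextTokens;
}
```

For data larger than the window, the usual answer is the vector-DB approach discussed elsewhere in this thread: embed the JSON in chunks and retrieve only the relevant pieces per question.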
@sanjeev2525 · 5 months ago
How would you build this with vector embeddings?
@StephenRayner · 8 months ago
Any update for the new Vercel AI changes?
@DaveGrayTeachesCode · 8 months ago
Might be worth another video.
@StephenRayner · 8 months ago
@@DaveGrayTeachesCode If so, please also add in a sprinkle of LangGraph. I'm running through your tutorial now with the newer approaches. The extra code for handling the stream can just be dropped; it works nicely with:

```
import { LangChainAdapter, type Message, StreamingTextResponse } from "ai";
// ...
const prompt = PromptTemplate.fromTemplate(TEMPLATE);
const chain = prompt.pipe(model.bind({ stop: ["?"] }));
const stream = await chain.stream({
  chat_history: formattedPreviousMessages.join(" "),
  input: message,
});
return new StreamingTextResponse(LangChainAdapter.toAIStream(stream));
```

See the docs for LangChainAdapter.
@StephenRayner · 8 months ago
Also, for the next video, here's an example you can run through using LangGraph or just LangChain:
1. User inputs text to translate.
2. The LLM translates it and saves the source/target translation in a vector DB (pgvector).
3. User inputs the SAME text to translate; RAG (or RAT translation) retrieves existing translations instead of always using the LLM.
4. The agent prompt template is told to reuse existing translations if identical.
5. Since the translation is a vector, include the matching score too, and now tell the prompt to create a new translation if the score is .70 or higher.
6. All of the above can be done using LangChain, but you can use LangGraph and have it decide to do a Google search if it doesn't have any translations, then use those to produce a better-informed one.
7. You can also add another workflow that reflects on the source text and the translation, checking whether it still holds the original meaning, etc.
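The retrieval steps in the suggestion above amount to a similarity-thresholded cache. One possible interpretation, as a dependency-free sketch (in practice pgvector and a real embedding model would supply the vectors; the 0.70 cutoff echoes the number suggested above):

```typescript
interface CachedTranslation {
  embedding: number[]; // vector for the source text
  target: string;      // stored translation
}

// Standard cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Reuse a cached translation only when the best match scores at or
// above the threshold; otherwise signal that a fresh LLM call is needed.
function lookupTranslation(
  queryEmbedding: number[],
  cache: CachedTranslation[],
  threshold = 0.7
): string | null {
  let best: CachedTranslation | null = null;
  let bestScore = -Infinity;
  for (const entry of cache) {
    const score = cosineSimilarity(queryEmbedding, entry.embedding);
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best && bestScore >= threshold ? best.target : null;
}
```

A LangGraph version would wrap this lookup as one node and route to a "translate fresh" (or web-search) node whenever it returns `null`.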
@StephenRayner · 8 months ago
Oh! Streaming UI! Maybe have it show the user a form to collect a selection, like feedback on changing the translation: "The translation I have is..." and you can edit it and save it back.
@StephenRayner · 7 months ago
@@DaveGrayTeachesCode The LangChain channel released a decent video on generative UI that hit the spot nicely: LangGraph, TypeScript, AI SDK. Check it out if you haven't seen it yet.
@KandhwaySubhash · 8 months ago
this guy is married
@amranmohamed377 · 3 months ago
😂😂😂
@techvoiceenglish · 28 days ago
💯
@epic_c_lips · 8 months ago
Tech aside, please reveal your skincare routine.
@DaveGrayTeachesCode · 8 months ago
Ha! I don't have one. Good genetics?
@KandhwaySubhash · 8 months ago
bro help me to get married
@Neunzahn · 8 months ago
I wish more tutorials would use free models like Llama. Everyone just uses OpenAI.