It's literally been one day since the release of GPT-4 and we already have a video about how to practically use it by James... 🤩🤩🤩🤩🤩 mad respect for you man, keep it up 🥳🥳🥳
@jamesbriggs a year ago
thanks man, appreciate your comments 🙏
@antoninleroy3863 a year ago
and we don't even have access to the API 🤣
@blackenedblue5401 a year ago
Also Fireship watches your channel lol
@jamesbriggs a year ago
@@blackenedblue5401 really how do you know this? 👀
@jackmartin1146 a year ago
Great video, my friend. Are there any good advanced resources you have come across to help GPT-3.5/4 answer "follow-ups"? For example, the first generated message is good using prompt engineering, but generating the next message in response to the user's reply requires another prompt template or extra context to be added. If we change the prompt template again and again, it seems like we are only doing rule-based programming (if/else) rather than using the capabilities of an AI system. Would love to know your thoughts/resources on how to handle such cases. Please do share! Thanks
@ferguing a year ago
EXCELLENT video! Subscribing + sharing with all my colleagues
@jamesbriggs a year ago
awesome, thank you!
@paraconscious790 a year ago
Great demo, the last part is crucial; it opens the imagination to infinitely powerful tools. Thank you very much!
@jamesbriggs a year ago
there's a ton of possibility, it's really awesome
@adamal93 a year ago
Thanks James! How do you properly query the Pinecone database when a user asks a follow-up question? Can you prime a chat model to rephrase the question and pass that output to the database query?
@jamesbriggs a year ago
we can use the ReAct agent in LangChain, I'll cover this in the LangChain series very soon, but in the meantime you can find the code we'll be going through here: github.com/pinecone-io/examples/blob/master/generation/langchain/handbook/06-langchain-agents.ipynb
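For anyone who wants a rough idea before that video lands, here's a minimal sketch of the pattern, assuming the early-2023 langchain, openai, and pinecone-client APIs; the index name, keys, and tool description are placeholders, and class/agent names may differ in newer releases:

```python
import pinecone
from langchain.agents import Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferWindowMemory
from langchain.vectorstores import Pinecone

# connect to an existing Pinecone index that already holds the doc embeddings
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_ENV")
index = pinecone.Index("gpt-4-langchain-docs")  # example index name

embed = OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_KEY")
vectorstore = Pinecone(index, embed.embed_query, "text")  # "text" = metadata field holding the chunk

# retrieval QA chain wrapped as a tool the agent can call
llm = ChatOpenAI(model_name="gpt-4", temperature=0.0)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever())
tools = [Tool(
    name="Knowledge Base",
    func=qa.run,
    description="useful for answering questions about the indexed docs",
)]

# the conversational agent keeps chat history, so follow-up questions get
# rewritten into standalone queries before the tool (and Pinecone) is hit
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=5, return_messages=True)
agent = initialize_agent(
    tools, llm,
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=True,
)

agent.run("What is an LLMChain in LangChain?")
agent.run("Does it work with GPT-4?")  # follow-up resolved from the chat history
```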
@henkhbit5748 a year ago
As always, great video James. Yes, for specific information, or information ChatGPT wasn't trained on, a vector database like Pinecone is needed. Btw: Alpaca, a new LLM based on the 7-billion-parameter LLaMA and fine-tuned with ChatGPT, is also a very interesting development where good results are possible with little training...
@mariodelatorre6047 a year ago
Excellent video! What software do you use to make the drawings with the pen?
@rabanal_josh64 a year ago
Exactly what I've been looking for. Amazing
@hvbris_ a year ago
Thanks for the great video James, is it possible for this type of bot to have memory?
@JasonMelanconEsq a year ago
You did exactly as you said! Thank you! This will help out my devs tremendously! ⭐⭐⭐⭐⭐
@jamesbriggs a year ago
glad to hear it!
@StephenPasco a year ago
Great video. I wish you had included a token counter so we could see the cost incurred at each step.
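For anyone who wants to add that themselves, a tiny counter with tiktoken works; the $0.03 per 1K prompt tokens used below is just an assumed figure for illustration, not a statement of current pricing:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Count how many tokens `text` uses with the given model's tokenizer."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

prompt = "Answer the question based on the context below.\n\nContext: ..."  # placeholder prompt
n = count_tokens(prompt)
print(f"{n} prompt tokens, about ${n / 1000 * 0.03:.4f} at an assumed $0.03/1K rate")
```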
@tajwarakmal a year ago
Awesome video! Question: is there a way to set it up such that you can ask clarifying questions? As in continue the chat to dig in deeper?
@6lack5ushi a year ago
I respect the AI researchers keeping up with the pace of innovation in this space. It's scary to learn anything; it's terrifying to learn something that will be out of date within weeks...
@jamesbriggs a year ago
yeah I have no idea how those people are doing it, it's wild
@Phasma6969 a year ago
@@jamesbriggs They're probably hyper intelligent and neurodivergent. That does it for me.
@eRiicBelleT a year ago
Do you apply GPL to the data before embedding? What chunking techniques do you use apart from overlapping chunks? You and David Shapiro are the best at explaining ChatGPT-related stuff. Your videos are amazing, thanks
@johnwallis1626 a year ago
keep up the great work James
@GayathriG-h5h a year ago
Can we use GPT-4 in an LLMChain call instead of ChatCompletion?
@mikemansour1166 a year ago
Hello, I was wondering about chunking and chunk overlap. Does the language model connect the chunks, and what are the benefits of smaller vs. bigger chunk sizes?
@slayermm1122 a year ago
your code works. thanks very much for sharing.
@wdgxhitman3423 a year ago
Can we save it as an updated pre-trained model, so that next time we can use it directly without feeding in the new data?
@jamesbriggs a year ago
unfortunately there's no way to do this right now using gpt-4. You could try 'distilling' the knowledge from gpt-4 into a smaller local model: create a ton of training data with gpt-4 and then use that to train your smaller local model
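A rough sketch of that distillation idea, assuming the pre-1.0 openai Python SDK; the chunks, system prompt, and output file below are placeholders, and the resulting JSONL would still need reshaping into whatever format your smaller model's fine-tuning pipeline expects:

```python
import json
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

docs = ["<chunk 1 of your documents>", "<chunk 2 of your documents>"]  # placeholder data

with open("distillation_data.jsonl", "w") as f:
    for chunk in docs:
        # ask gpt-4 to produce a question + answer pair grounded in the chunk
        res = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Write one question a user might ask about the text, then answer it."},
                {"role": "user", "content": chunk},
            ],
        )
        record = {"source": chunk, "generated": res["choices"][0]["message"]["content"]}
        f.write(json.dumps(record) + "\n")
# the JSONL can then be used to fine-tune a smaller open-source model
```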
@EleanorPrashamshini a year ago
Hi, this is a great intro. Thank you! Where can I find the code for this?
@ylazerson a year ago
awesome video - thanks again! Keep em coming!
@DarioMader a year ago
Such a good video🎉
@0730pleomax a year ago
Will James add LLM-related sections to the Udemy course?
@absta1995 a year ago
So does this search and create embeddings automatically by itself, or do you have to search yourself and create the embeddings beforehand?
@imran.shabbir a year ago
Amazing, great explanation
@venkateshr6127 a year ago
Can we use T5 as the generator in a RAG QA model? If yes, how can we use it?
@sergedadesky5638 a year ago
James, brilliant demo. Great explanations. I was able to replicate most of this, but ran into a snag trying to call the model: InvalidRequestError: The model: `gpt-4` does not exist. I am signed up for gpt-4, I believe. Has this model been deprecated?
@FelipeNovaesRocha a year ago
Make sure your openai package is on the latest version.
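If it helps anyone hitting the same error, a quick check with the pre-1.0 openai SDK shows which models your key can actually see; at the time, gpt-4 API access was also gated behind a waitlist, so a missing `gpt-4` usually meant no access yet rather than a deprecated model:

```python
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

print("openai package version:", openai.__version__)
model_ids = [m["id"] for m in openai.Model.list()["data"]]
print("gpt-4 available to this key:", "gpt-4" in model_ids)
```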
@lutune a year ago
Great content!
@jamesbriggs a year ago
thanks Luke!
@chatwithdata a year ago
Good stuff 🙌
@jonatan01i a year ago
Based on how language models work, isn't it better to put the question before all the information you ask it about? That way it can "read" the context already knowing what it needs to pay attention to.
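Which ordering works better is an empirical question, but it's cheap to test both; a hypothetical template for comparing them (the question and context strings are placeholders):

```python
question = "How do I create a Pinecone index?"             # placeholder question
contexts = ["<retrieved chunk 1>", "<retrieved chunk 2>"]  # placeholder retrieved chunks

# question first, context after (the ordering suggested above)
question_first = (
    f"Question: {question}\n\n"
    "Answer using only the context below.\n\n"
    "Context:\n" + "\n---\n".join(contexts)
)

# context first, question last (the more common pattern)
context_first = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n---\n".join(contexts) +
    f"\n\nQuestion: {question}"
)
```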
@sersoage a year ago
One question regarding the splitter: what if I don't want to split into fixed-size chunks, just by a separator?
@jamesbriggs a year ago
you could just do "your text goes here".split(" ") -- but replace " " with whatever separator you'd like to use
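A tiny plain-Python illustration of that, with a made-up separator and text:

```python
text = "First section.\n\n---\n\nSecond section.\n\n---\n\nThird section."  # placeholder text
separator = "\n\n---\n\n"  # use whatever separator your documents already contain

chunks = [c.strip() for c in text.split(separator) if c.strip()]
print(len(chunks), "chunks:", chunks)
```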
@jimmynguyen3386 a year ago
James is the man!
@DarrenTarmey a year ago
Where can I find all the instructions to do this?
@jamesbriggs a year ago
for the openai side of things, I think I used their ChatCompletion docs ( platform.openai.com/docs/guides/chat ), for Pinecone the pinecone docs ( pinecone.io/docs ), and if you just want the code, I put it all here: github.com/pinecone-io/examples/blob/master/generation/gpt4-retrieval-augmentation/gpt-4-langchain-docs.ipynb
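For quick reference, here's a condensed sketch of the flow from that notebook, assuming the pre-1.0 openai SDK and pinecone-client 2.x, and assuming each chunk's text is stored under a "text" metadata field; the keys and index name are placeholders:

```python
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_KEY"                                   # placeholder
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_ENV")   # placeholder
index = pinecone.Index("gpt-4-langchain-docs")                       # example index name

query = "How do I use an LLMChain in LangChain?"

# 1. embed the query
xq = openai.Embedding.create(
    input=[query], engine="text-embedding-ada-002"
)["data"][0]["embedding"]

# 2. retrieve the most relevant chunks from Pinecone
res = index.query(vector=xq, top_k=5, include_metadata=True)
contexts = [m["metadata"]["text"] for m in res["matches"]]

# 3. build the augmented prompt and ask gpt-4
prompt = (
    "Answer the question based on the context below.\n\nContext:\n"
    + "\n---\n".join(contexts)
    + f"\n\nQuestion: {query}\nAnswer:"
)
chat = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You answer questions about the LangChain docs."},
        {"role": "user", "content": prompt},
    ],
)
print(chat["choices"][0]["message"]["content"])
```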
@midnightmoves7976 a year ago
I think the market is well and truly cornered now.
@arnoldlai822 a year ago
This way of cutting the text is mechanical and may lose contextual meaning. Is there a better way?
@Phasma6969 a year ago
Vector database and embeddings
@arnoldlai822 a year ago
@@Phasma6969 it is using the Pinecone vector DB and OpenAI embeddings right now
@Phasma6969 a year ago
@@arnoldlai822 oh oops, yes, that's the way for now, until we get longer context lengths through other mechanisms. I saw some discussion on the OpenAI subreddit
@chrisalmighty a year ago
A better way is to actually do proper text processing into reasonable breaks by heading or topic.
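A minimal sketch of that heading-based approach for markdown docs (the file name is hypothetical, and real pipelines usually still need a fallback size limit for very long sections):

```python
import re

markdown = open("docs_page.md", encoding="utf-8").read()  # hypothetical input file

# split just before each level-1 or level-2 heading so every chunk stays on one topic
sections = re.split(r"\n(?=#{1,2} )", markdown)
chunks = [s.strip() for s in sections if s.strip()]
print(len(chunks), "topic-level chunks")
```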
@WereOutOfBuziness a year ago
Kudos to you!
@KyleRosebk a year ago
Good video
@ricardocosta9336 a year ago
That Wacom pen. I was here before it was cool.
@jamesbriggs a year ago
you've been here from the start haha
@Dr.Trustmeonthisone a year ago
nice
@JohnMcclaned a year ago
Copy pasting the OG GPT-3 videos with a GPT-4 variation 😆