Claude-2 meets LangChain!

12,494 views

Sam Witteveen

1 day ago

Comments: 32
@channel_panel193 11 months ago
It's really hard to get Claude to return objects. It just wants to print.
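A common workaround for the behaviour described here is to ask for JSON explicitly and then pull the object out of the chatty reply yourself. A minimal stdlib sketch, with a made-up reply string standing in for actual model output:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of a chatty model reply.

    Claude often wraps structured output in prose, so we search for
    the outermost {...} block instead of parsing the whole string.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Simulated chatty reply of the kind the comment describes:
reply = ('Sure! Here is the object you asked for:\n'
         '{"name": "Claude", "version": 2}\n'
         'Let me know if you need more.')
obj = extract_json(reply)
```

LangChain's output parsers do essentially this (plus retries and format instructions), so the same idea works inside a chain.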
@klammer75 1 year ago
I’ve been waiting almost 3 months now for Anthropic API access… bookmarking this for when I get it 🥹🙏🏼🦾
@liy0na 11 months ago
Hey Sam. Thanks for the video. I couldn't find this in your GitHub repository. Do you have a link for it?
@samwitteveenai 11 months ago
Is the link in the description? I have had YT remove some of them for some strange reason. I will look into it. Could be I missed uploading it to GitHub.
@heyheni 1 year ago
Hey Sam. I'm into bicycle touring, and Claude 2 and GPT-4 32k do a good job planning routes, adding campsites, tourist attractions and restaurants. However, it refuses to export the route in full to a GPX XML navigation file for a bicycle computer; it only does 3 examples in XML, even when I amend the required data step by step, like GPS coordinates and elevation. Could I circumvent that using LangChain and a vector database like in your video? Example route: Zurich, Chur, Bozen, Lienz, Ljubljana, 21 days. Thank you for your very educational videos.
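One way around the truncation described here is to have the model emit only waypoint data (name, lat, lon) per chunk and assemble the GPX file locally, so no single response has to produce the whole XML. A stdlib sketch with made-up coordinates for the route in the comment:

```python
import xml.etree.ElementTree as ET

# Hypothetical waypoints; in practice each chunk of the route would
# come from a separate LLM call, and only this small tuple format
# (not the full XML) would need to fit in one response.
route = [
    ("Zurich", 47.3769, 8.5417),
    ("Chur", 46.8508, 9.5320),
    ("Bozen", 46.4983, 11.3548),
    ("Lienz", 46.8294, 12.7687),
    ("Ljubljana", 46.0569, 14.5058),
]

# Assemble the GPX document ourselves instead of asking the model for it.
gpx = ET.Element("gpx", version="1.1", creator="sketch")
for name, lat, lon in route:
    wpt = ET.SubElement(gpx, "wpt", lat=str(lat), lon=str(lon))
    ET.SubElement(wpt, "name").text = name

xml_out = ET.tostring(gpx, encoding="unicode")
```

The model then only has to be reliable at producing short structured lists, which is a much easier ask than a complete, valid GPX file.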
@barefeg 1 year ago
Could you clarify what you mentioned in the last video about Anthropic allowing on-prem deployment?
@samwitteveenai 1 year ago
Best to reach out to their sales people for this.
@julian-fricker 1 year ago
I'll have to try your method. I tried a basic prompt with an embedding search and it wouldn't work; it kept giving me the default response I taught it for when it couldn't answer a question. But if I used the LangSmith trace tool, I could run it through their playground and it worked really well.
@clray123 1 year ago
As to the 100K token limit, you must also consider that the attention algorithm essentially averages the values of the tokens it attends to (and normally that is all tokens, to some extent). I have great doubt that as you increase the number of tokens you average over, you will get any sensible information out of them. The whole point of attention is to be selective with respect to the evaluated tokens, but the more tokens there are, the less selective it is going to become.
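The averaging this comment refers to is visible directly in the attention formula; for a query $q$ over $n$ key/value pairs of dimension $d$:

```latex
\mathrm{Attn}(q) = \sum_{i=1}^{n} \alpha_i \, v_i,
\qquad
\alpha_i = \frac{\exp\!\left(q \cdot k_i / \sqrt{d}\right)}
                {\sum_{j=1}^{n} \exp\!\left(q \cdot k_j / \sqrt{d}\right)}
```

Since the weights $\alpha_i$ always sum to 1, a 100K-token context spreads the same total weight over 100,000 values; unless the softmax is sharply peaked on a few tokens, each individual token's contribution is diluted toward $1/n$.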
@Dani-rt9gk 1 year ago
So you’re saying the 100K system will require more algorithmic complexity? Or is entirely pointless?
@barrettjacobsen6614 1 year ago
Then do some tests and assuage (or confirm) your doubt. Feed it a chapter of something as well as in different instances the full book and varying levels of synopsis and have it explain how the chapter relates to the whole. Report back with your results.
@samwitteveenai 1 year ago
I agree that the 100k contexts seem to lose information in the middle. I totally disagree with the people saying you don't need vector stores etc.; personally, I see the best way to use this as just giving the model more semantic results from retrieval. The other thing is I am not sure they have actually fully said how they are doing the 100k context. There are a number of rival ways to do it, and more coming out each day. Also, getting people to use 100k tokens each time when they don't need to seems a great way to increase business for these LLM startups. While there are some, not many use cases can justify spending a few dollars each time you ping the API.
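The "more semantic results" idea above amounts to keeping retrieval but raising top-k, since the bigger window can hold more chunks. A toy pure-Python sketch (the documents and 3-d embeddings are made up; a real setup would use a vector store and an embedding model):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": (text, embedding) pairs with made-up 3-d vectors.
docs = [
    ("claude supports 100k contexts", [0.9, 0.1, 0.0]),
    ("langchain wraps many llms", [0.1, 0.9, 0.0]),
    ("vector stores return top-k chunks", [0.8, 0.2, 0.1]),
    ("typhoons hit hong kong in summer", [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k):
    """Return the k most similar document texts to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A big context window lets you raise k instead of dropping retrieval:
context = retrieve([1.0, 0.0, 0.0], k=3)
```

With a 4k window you might only afford k=3 or 4 chunks; with 100k you can pass dozens while still filtering out the irrelevant bulk of the corpus.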
@clray123 1 year ago
@@Dani-rt9gk "Algorithmic complexity" is a property of an algorithm, not of the data size, so it is constant. Depending on the attention algorithm you use, this complexity is quadratic (if you want to process each token in context, as in the original transformer) or linear (if you cut corners by processing fewer of them, e.g. Linformer or the newest "dilated attention"). So you may actually have a huge context on paper together with an algorithm that does not really use much of it, to make it fast enough. I suspect there are tradeoffs to be made, but my intuition is that making contexts super-long will not really make the systems much better. I consider context length similar to human short-term memory: you can do a great deal of work with not so much memory if the interface to the longer-term memory makes sense. But currently this interface doesn't really exist in LLMs; there are vector DBs and there are tokens in context, but these do not really interact or exchange information of their own accord, and it is left to the application to do it "somehow" (probably poorly).
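The quadratic-vs-linear distinction in this comment can be made concrete by counting attention score computations; a toy sketch (the window size of 512 is an arbitrary illustrative choice):

```python
def full_attention_scores(n: int) -> int:
    # Original transformer: every query attends to every key -> n^2 scores.
    return n * n

def windowed_attention_scores(n: int, w: int) -> int:
    # Sparse/local variants: each query sees at most w keys -> n*w scores.
    return n * min(n, w)

# At 100k tokens the gap is what makes full-attention long contexts expensive:
full = full_attention_scores(100_000)           # 10 billion score computations
local = windowed_attention_scores(100_000, 512)  # ~51 million, nearly 200x fewer
```

This is why a provider can advertise a huge context while, under the hood, each token may only "really" attend to a small slice of it.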
@clray123 1 year ago
@@barrettjacobsen6614 I don't even have access to the API at this point.
@nicewook 1 year ago
Thanks for the video. :-)
@GeoffY2020 1 year ago
Dear Sam, thanks for your wonderful efforts again. In this Claude 2 demo, are the text files structured or unstructured (I need to do structured files)? Greetings from Hong Kong after Typhoon Talim just passed!
@ZhengChen-w7t 1 year ago
Typhoon signal No. 8 🤣
@KararaJawaab 1 year ago
Any plan for videos explaining self-attention or the dilated attention mechanism in simple terms?
@samwitteveenai 1 year ago
I would have thought there are 100+ vids on self-attention out there now. I am thinking of making more DL coding vids etc. if there is the appetite for it.
@nandakishorejoshi3487 1 year ago
Did you try generating Python or SQL code using this model?
@micbab-vg2mu 1 year ago
Thank you for another great video, Sam. I am still waiting for the API. I will use my company email; I work for a large pharmaceutical company with over 100,000 employees. Maybe this information will help me get it.
@AlonsoGHS 8 months ago
I just got the API Key today 😊
@micbab-vg2mu 8 months ago
Me too - too late - we have Mistral now :) @@AlonsoGHS
@AIWRLDOFFICIAL 1 year ago
Nice vid!
@jgfitzpatrick 1 year ago
I gave up on ever getting access to the Anthropic API lol
@Peter-oz1oo 1 year ago
LangChain is a pain in the ass
@FarhadKumer 1 year ago
langchain is amazing
@clray123 1 year ago
Hallucinations are actually the elephant in the room for all language models. The point is that making up ANY stuff destroys trust, and the effect is magnified when the party who does it is otherwise competent. Even if the bot doesn't hallucinate 99% of the time, the one time it does will feel like a betrayal to the human it communicates with. It is equivalent to an intelligent person whom you previously found reliable starting to make up stuff or lie to you one day. In human relations it seldom works out as "oh well, they must have had a bad day". We may have this leniency toward kids, but not toward responsible adults. Instead it acts like poison and creates suspicion about all future interactions, and we humans especially do not like to deal with other humans who are repeatedly unreliable. So there you have it, the rather bleak future of language model AI (unless the "I don't know" response is sorted out to work correctly 100% of the time).
@nickmillerable 1 year ago
Nicely put. Will you be sitting this one out then?
@henkhbit5748 1 year ago
Very informative, interfacing Claude 2 via code. 👋 Can we also count the tokens used, like in OpenAI? Testing how the system role compares with OpenAI's would be nice...
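On token counting: older releases of the anthropic Python SDK exposed a counter (e.g. client.count_tokens(text); this call name is an assumption, so check your SDK version). As a self-contained stand-in, a crude character-based estimate is often good enough for budgeting:

```python
def rough_token_estimate(text: str) -> int:
    """Crude token estimate using the ~4 characters/token rule of thumb
    for English text. For exact counts, use the tokenizer shipped with
    the anthropic SDK (assumed API: client.count_tokens(text))."""
    return max(1, len(text) // 4)

# The old Human:/Assistant: prompt format used by the Claude completions API:
prompt = "Human: summarise this report\n\nAssistant:"
estimate = rough_token_estimate(prompt)
```

This kind of estimate is only for rough cost planning; billing is based on the provider's real tokenizer, which can differ noticeably on code or non-English text.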