These videos from LangChain are so great because they mix a focus on actionably useful things, explanation, and some technical context, so they're a nice way to get a sense of what's happening in building with AI. A strong contrast to content elsewhere that is either too high-level or too deep in technical details.
@mrchongnoi a year ago
At 34:00, when the discussion turned to episodic and semantic memory, the thought that came to mind is that these two types of memory have a relationship. For example, viewing this YouTube webinar is an episodic experience that contributes to semantic memory. Maybe a few weeks from now, I can remember that I listened to a video on CoALA, which requires a semantic lookup of what I learned during this video. There seems to be a bidirectional relationship; not sure if bidirectional is the correct term to use. As episodic memory fades (TTL), it becomes purely semantic. Old guy thinking out loud in Singapore.
@FreakyStyleytobby a year ago
Isn't semantic knowledge the kind that we manage to retain, the kind that we practice? Episodic memory fades because we don't practice it.
@mrchongnoi 11 months ago
@FreakyStyleytobby Went back to review the video and saw your reply. There are some events that are just burned into our memory. I saw a 60-minute video about individuals who remember almost every minute of their lives.
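For what it's worth, here is a minimal Python sketch (not from the webinar) of the episodic-to-semantic handoff discussed in the exchange above: episodic entries carry a TTL, and expired episodes are distilled into semantic facts before being dropped. The `Memory` class and the `distill` callback are hypothetical stand-ins; in a real agent, `distill` would be an LLM call that extracts the general lesson from a specific episode.

```python
import time
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Episode:
    """A single experience, stamped with its creation time."""
    text: str
    created_at: float = field(default_factory=time.time)


class Memory:
    """Hypothetical two-store memory: episodic entries expire (TTL) and are
    consolidated into semantic facts, so the two stores stay linked."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.episodic: list[Episode] = []
        self.semantic: list[str] = []

    def record(self, text: str) -> None:
        # New experiences land in episodic memory first.
        self.episodic.append(Episode(text))

    def consolidate(self, distill: Callable[[str], str]) -> None:
        # Move expired episodes into semantic memory as generalized facts.
        now = time.time()
        expired = [e for e in self.episodic if now - e.created_at > self.ttl]
        self.episodic = [e for e in self.episodic if now - e.created_at <= self.ttl]
        for episode in expired:
            self.semantic.append(distill(episode.text))
```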
@attilavass6935 a year ago
Please share the link to Shunyu's presentation! :)
@galmoore3193 a year ago
Super interesting topic. Is this planned to be a LangChain package? I couldn't find any code or discussion of the implementation.
@LeonidGaneline a year ago
"learn = Write long-term memory" It is not so simple. What about processing data BEFORE writing? When we consider just DB, then we don't care about this preprocessing. Maybe there is some format moderation and data cleaning. But when we write the procedural memory, the LLM, the preprocessing is important. We generalize data before writing. It is not trivial and it is not solved up-to-date. Sure, we can look at this in a pragmatic way: write into the LLM = training the LLM. It means the LLM (as the long-term memory) takes care of the generalization internally. Still, taking the data generalization step is important, IMO.
@badashphilosophy9533 3 months ago
I don't know much about computers, but what if someone trained an LLM on the binary underlying the operating system, along with coding and English? Could it become an operating system that we could just feed chips to, talking directly to the chip to perform the functions of the coding language? Or would that not work because it really only works on, and from, the interface level of things?
@RedCloudServices a year ago
Harrison rendering all the academics' speeches 😂
@MsDuketown a year ago
People can be so sentient when reasoning... Is this podcast about the Adam's apple or the larynx? Remember, don't underestimate the qualities of the tongue. Either way, a mature workstation PC, at home or at work, can always be helpful. Even an Ultrabook will do in most scenarios. Btw, don't get distracted by occasional gaming on the same rig.
@UncleDavid 11 months ago
y’all should take a communication class or sumn bruh