You are a real life saver. I was trying to figure out how to divide by elephants for my job. I might just get a raise for this! Thanks
@oshodikolapo2159 · 6 months ago
🤣🤣
@wurstelei1356 · 6 months ago
I love how the AI finds the average weight of an elephant. This LLM is just going to replace your job if you don't adapt...
@wurstelei1356 · 6 months ago
This is fast becoming my favorite channel. Keep it up!
@TailorJohnson-l5y · 6 months ago
Another great one. Keep em coming! You have an interesting thought process, thank you
@Beauty.and.FashionPhotographer · 6 months ago
Task for you: get 2 chatbots (Mistral and Claude LLMs?) to discuss with each other, exchanging arguments and reasoning and convincing each other, about the 5 most important human projects that could change humanity for the better in the next 2 weeks... Let 2 AI chatbots figure this out with each other! I wonder what those 5 resulting projects would become
@waqaskhan-uw3pf · 6 months ago
Please make a video about Romo AI (super AI tools in one place) and Learnex AI, the world's first fully AI-powered education platform. My favorite AI tools
@N_Wisdom · 6 months ago
How does the system handle invalid JSON responses during its operation?
@nic-ori · 6 months ago
Useful information. Thank you!👍👍👍
@USBEN. · 6 months ago
Solid model for these tasks.
@drhenkharms6514 · 6 months ago
Ok.. new kid on the block here... where do I find this example on the members' GitHub?
@phdcosta · 6 months ago
thanks!
@ewasteredux · 6 months ago
Hi Kris. I have heard that there are some versions of llama3 that have a very large context window. If this is true, is there an easy way to feed a bunch of articles into the model using RAG and query it easily? An example implementation might be a troubleshooting system in which there might be a knowledge base with many articles on common troubleshooting steps for varied issues where the KB is all read into the LLM and someone can ask questions of the LLM and it can determine the most likely cause based on the problem description. Any suggestions?
@aaaAaAAaaaaAa1aAAAAaaaaAAAAaaa · 6 months ago
Higher context requires more RAM, so keep that in mind. Just search Hugging Face for llama3 GGUF models.
@wurstelei1356 · 6 months ago
I think Kris has a video on this. You have to generate an optimized query (by AI), then feed your docs chunk by chunk, together with your query/question, piece by piece into the context. Each step creates a summary, which is then analyzed at the end.
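The chunk-by-chunk approach described above is essentially map-reduce summarization. A minimal sketch, assuming a hypothetical `call_llm` function standing in for whatever chat-completion API you use (here it just truncates text so the example runs without a model):

```python
# Map-reduce sketch: summarize each chunk against the question (map),
# then analyze all per-chunk summaries in one final pass (reduce).
# `call_llm` is a placeholder for a real LLM call, not an actual API.

def call_llm(prompt: str) -> str:
    return prompt[:200]  # stand-in: a real implementation would query a model

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so info at boundaries isn't lost."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def map_reduce_answer(docs: list[str], question: str) -> str:
    # Map step: one summary per chunk, focused on the question.
    summaries = []
    for doc in docs:
        for chunk in chunk_text(doc):
            summaries.append(
                call_llm(f"Question: {question}\nSummarize relevant info:\n{chunk}")
            )
    # Reduce step: combine the summaries into a final answer.
    return call_llm(
        f"Question: {question}\nAnswer using these notes:\n" + "\n".join(summaries)
    )
```

This fits a model with a small context window: only one chunk (or the joined summaries) ever has to fit in the context at once.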
@dineshardjoen · 6 months ago
Hi Kris, I paid for a Master (14,99) sub but didn't receive anything... do you need to approve it or something?
@behroozgames · 6 months ago
Still 10000% useless compared to the future I came from
@Carnivore69 · 6 months ago
You forgot to divide your percentage by elephant weight. Repeat query.
@realorfake4765 · 6 months ago
I know, it's like living in the Dark Ages before we had replicators and point to point wormholes