(I have been thinking) Two things would be useful while talking to an LLM: #1 Feedback: The LLM tells you what it understood you wanted it to do. This clarifies whether it has grasped your exact intentions. #2 Pros and cons: The LLM weighs what you are trying to achieve, and when it thinks the result will contradict your goal it raises plenty of cons (and likewise pros). These may help counter the fact that the LLM wants to please you and has no critical thinking of its own, though they may also lead to misunderstandings. Disclaimer: This is not a disclaimer. Thank you, you are a good sensei.
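The two ideas above could be sketched as a reusable prompt preamble. This is a minimal illustration, not anything from the video; the constant and function names here are hypothetical.

```python
# Hypothetical sketch: a system-prompt preamble that asks the model to first
# restate its understanding (feedback), then list pros and cons, before
# answering. Nothing here is tied to a specific LLM API.

FEEDBACK_PROS_CONS_PREAMBLE = """\
Before answering, do the following:
1. FEEDBACK: Restate, in one or two sentences, what you understood the user
   wants you to do, so any misunderstanding is caught early.
2. PROS AND CONS: List the main pros and cons of the requested approach,
   especially cons suggesting the result may contradict the user's goal.
3. ANSWER: Only then give your actual answer.
"""

def build_prompt(user_request: str) -> str:
    """Prepend the feedback/pros-cons preamble to a user request."""
    return FEEDBACK_PROS_CONS_PREAMBLE + "\nUser request: " + user_request

if __name__ == "__main__":
    print(build_prompt("Rewrite my README in a more formal tone."))
```

The resulting string can be passed as the system or first message to any chat model; the numbered steps nudge it to surface misunderstandings and objections before committing to an answer.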
@FiveBelowFiveUK 18 days ago
Thanks very much :) These are excellent points! I have addressed this in the past, though not in this video. I have a bunch of agent primers in my GitHub that were originally built on GPT-3 and updated for GPT-4, then released open source as template cognitive frameworks. They mostly force the LLM to consider and check user instructions, then iteratively self-inform for various tasks. Check my ACE-HOLOFS prompt template for more information :) I'm sure you will find it useful. ACE = Adaptive Capacity Elicitation (my most powerful cognitive primer). HOLOFS = Holographic Filesystem (a major leap forward in the organisation of LLM cognition, with a pseudo-Unix interface that still works conversationally). These two combined represent my most cutting-edge LLM methodology. I'll be doing more on this in the future; it's just a shame the algorithm punished this content, as I believe it worked hand in hand with all my image/video gen content.