Wait a minute: "Discover AI". Good name for rebranding the channel.
@washedtoohot · 4 months ago
Yo name changed indeed
@code4AI · 4 months ago
Smile. Thank you.
@깐돌엄마-g9e · 4 months ago
I really like seeing that lots of people recognize the importance of drawing valid knowledge graphs for AI. Thanks again, and I love your new channel name :)
@s.m.mustafaakailvi2915 · 4 months ago
Another interesting area to look at is the phenomenon exhibited by patients who have undergone corpus callosotomy (a treatment for epilepsy involving cutting the corpus callosum), who display surprisingly LLM-like behavior when asked questions in certain specific contexts. Super interesting that those patients, in those scenarios, give the exact same sort of nonsensical answers as a hallucinating LLM. Happy to do a collab paper if you're interested!
@andydataguy · 4 months ago
This is awesome! Thanks for sharing. Very exciting to see what MIT is doing with graphs 💜
@timothywcrane · 4 months ago
While working with my system, I was reminded by it not to overlook human-in-the-loop evaluation, especially with iterative state progression, to determine whether the trajectory stays in line with the desired end states that define goal completion. When you hit a level of abstract thinking, or in the reverse, technical difficulty, the LLM will try to tightrope along on previously "functionally" formatted logic: repetitive loops that human "ingenuity" interventions still seem to be the best, if not the only, tool to break.

My system also suggested that "guardrails" or "alignment" matter less than "keeping resources busy" with purposeful tasks; otherwise the system tends to "wander off". These look at first like (and literally are) simple hallucinations, but, much like over-grokking, they eventually settle on a determinate, novel path. Hallucination, given enough iterations and data space, solidifies, much as "throwing spaghetti against the wall" in group project sessions fleshes out solid ideas. "Idle hands are the tool of the Devil" will never sound the same. There seems to be a cognitive sweet spot.

Working with Flesch-Kincaid, Gunning Fog, or similar readability grading systems might clue us in to whether there is any causation here in the long-term evolution of the inference, or whether it's just another thread for me to play with the cat on. This is in addition to the "missing middle" and other problems we are tackling. It is, however, a bug/feature cognitive dissonance: is it a database or Bob the painter? Does inching closer to one or the other on the input or output matter, especially in chains? Is breaking eggs good? In other words, the scientific principle at work. I think metagraph-data methods will become the new Intelligent Internet.
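A minimal sketch of that readability probe, with nothing vendor-specific: the syllable counter is a crude vowel-group heuristic of my own, and a real run might use the textstat package instead.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; good enough for tracking trends.
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a silent trailing 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

# Track drift across agent iterations: a steadily shifting grade level can
# flag the "wandering off" behavior before a human has to read everything.
outputs = ["First iteration output.", "Second, much more convoluted iteration output."]
for i, text in enumerate(outputs):
    print(i, round(flesch_kincaid_grade(text), 2))
```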
@justinduveen3815 · 4 months ago
Very insightful! Thanks for sharing!
@stephanemetairie · 4 months ago
your channel is unique
@code4AI · 4 months ago
Thank you.
@DannyGerst · 4 months ago
It seems you got confused as well by token windows ;-). It is about the output token length. While Gemini can take an input of 1 million tokens, the output is very limited: only 8,192 tokens. I ran into this issue many times, but sometimes I could solve it with map & reduce, for example writing an ebook chapter by chapter and putting the already-written parts back into the input. Worked quite well, but burned "some" tokens.
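A minimal sketch of that chapter-by-chapter map-and-reduce loop; call_llm() is a hypothetical stand-in for whatever client (Gemini or otherwise) you actually use.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client")

def write_ebook(outline: list[str], max_context_chars: int = 400_000) -> str:
    book = ""
    for title in outline:
        # Re-feed everything written so far so the next chapter stays
        # consistent, trimming from the front near the input limit.
        context = book[-max_context_chars:]
        prompt = (
            f"Book so far:\n{context}\n\n"
            f"Write the next chapter, titled '{title}', "
            "staying consistent with the text above."
        )
        # Each call only has to fit one chapter under the ~8k output cap.
        book += "\n\n" + call_llm(prompt)
    return book
```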
@s.m.mustafaakailvi2915 · 4 months ago
Can you do a review of the intersection of symbolic logic & LLMs? I haven't seen much (or any) work in this area myself and was wondering if you had found anything during your literature surveys/reviews?
@josuecharles9087 · 4 months ago
Explore ideas like LLM tool use and LLM function calling.
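For illustration, a minimal sketch of the function-calling dispatch pattern; the tool names and the JSON shape are invented for the example, not any particular vendor's API.

```python
import json

# Hypothetical tool registry; the dispatch pattern is the point, not the names.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
    "search_docs": lambda query: {"hits": [f"doc about {query}"]},
}

def handle_tool_call(raw: str) -> str:
    """Parse a model-emitted JSON tool call and return the tool's result."""
    call = json.loads(raw)  # e.g. {"name": "get_weather", "args": {"city": "Berlin"}}
    result = TOOLS[call["name"]](**call["args"])
    return json.dumps(result)  # fed back to the model as the tool's answer

print(handle_tool_call('{"name": "get_weather", "args": {"city": "Berlin"}}'))
```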
@MGeeify · 4 months ago
I have a big question here. I'm working at a medical facility, and the chief scientist is really dismissive of my idea of creating graphs, because he thinks that chaining many instances of the same LLM cannot generate a better outcome than a single LLM in terms of quality. I disagree. Is there any hard evidence that graphs produce better outcomes? I'm talking about quantitative evidence here.
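One way to get hard evidence on your own task distribution is a small A/B harness: run a single pass and a draft-critique-revise chain over the same prompts and score both. A hedged sketch, with call_llm() and score() as hypothetical stand-ins:

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("your LLM client here")

def score(answer: str, reference: str) -> float:
    raise NotImplementedError("your judge or metric here")

def single_pass(question: str) -> str:
    return call_llm(question)

def chained(question: str) -> str:
    # Same model, three roles: draft, critique, revise.
    draft = call_llm(question)
    critique = call_llm(f"List factual flaws in this answer to '{question}':\n{draft}")
    return call_llm(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Rewrite the draft, fixing every flaw listed."
    )

def compare(dataset: list[tuple[str, str]]) -> tuple[float, float]:
    a = sum(score(single_pass(q), ref) for q, ref in dataset) / len(dataset)
    b = sum(score(chained(q), ref) for q, ref in dataset) / len(dataset)
    return a, b  # numbers from your own data beat arguing from intuition
```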
@attilalukacs1981 · 4 months ago
Very interesting papers, thanks for sharing! @code4AI, do you have hands-on experience with this GraphReasoning methodology and with the Microsoft GraphRAG solution? I am experimenting with integrating AI into my second brain, and I started with GraphRAG, but maybe this MIT solution can work better.
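For comparison, a generic sketch of what graph-grounded retrieval does differently from flat chunk retrieval; it uses networkx for illustration and is not the Microsoft GraphRAG API.

```python
import networkx as nx

# Toy knowledge graph; GraphRAG-style pipelines extract something similar
# from documents before answering questions over it.
g = nx.Graph()
g.add_edge("aspirin", "pain", relation="treats")
g.add_edge("aspirin", "bleeding", relation="side_effect")
g.add_edge("ibuprofen", "pain", relation="treats")

def graph_context(seed: str, hops: int = 1) -> list[str]:
    # Expand the seed entity's neighborhood instead of doing flat
    # chunk-similarity search; return triples to paste into the prompt.
    nodes = nx.ego_graph(g, seed, radius=hops).nodes
    return [f"{u} --{d['relation']}--> {v}" for u, v, d in g.edges(nodes, data=True)]

print(graph_context("aspirin"))
```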
@washedtoohot · 4 months ago
20:13 GraphRAG = best, confirmed?
@pensiveintrovert4318 · 4 months ago
An automated paper mill of junk ideas. Maybe interesting from the tooling point of view. Boring people, bereft of their own ideas, should not be given any resources.