SciAgents Graph Reasoning: Stanford vs MIT

3,947 views

Discover AI

1 day ago

Comments: 17
@jomangrabx 4 months ago
Wait a minute: "Discover AI". Good name for rebranding the channel
@washedtoohot 4 months ago
Yo name changed indeed
@code4AI 4 months ago
Smile. Thank you.
@깐돌엄마-g9e 4 months ago
I really like seeing that lots of people recognize the importance of drawing valid knowledge graphs for AI. Thanks again, and I love your new channel name :)
@s.m.mustafaakailvi2915 4 months ago
Another interesting area to look at is the phenomenon exhibited by patients of corpus callosotomy (a treatment for epilepsy involving the cutting of the corpus callosum), who display surprisingly LLM-like behavior when asked questions in certain specific contexts. Super interesting that those patients in those scenarios give the exact same sort of nonsensical answers as a hallucinating LLM. Happy to do a collab paper if you're interested!
@andydataguy 4 months ago
This is awesome! Thanks for sharing. Very exciting to see what MIT is doing with graphs 💜
@timothywcrane 4 months ago
While working with my system, I was reminded by it not to overlook human-in-the-loop evaluation, especially with iterative state progression, to determine whether the trajectory remains in line with the desired end states that define goal completion. When you hit a level of abstract thinking, or in reverse, technical difficulty, the LLM will try to tightrope along with previous "functionally" formatted logic: repetitive loops that human "ingenuity" interventions still seem to be the best, if not the only, tool available to break.

My system also suggested that "guardrails" or "alignment" are less important than "keeping resources busy" with purposeful tasks; otherwise the system has a tendency to "wander off" in ways that at first seem like (and literally are) simple hallucinations but, just like over-grokking, eventually align on a determinate, novel path. Hallucination, given enough iterations and data space, solidifies, much like "throwing spaghetti against the wall" in group project sessions fleshes out solid ideas. "Idle hands are the tool of the Devil" will never sound the same. There seems to be a cognitive sweet spot. Working with Flesch-Kincaid, Gunning Fog, or similar semantic-level grading systems might clue into whether there is any causation to this in the LTE of the inference, or whether it is just another thread I get to play with the cat on.

This is in addition to the "missing middle" and other problems we are tackling. This is, however, a bug/feature cognitive dissonance. Is it a database or Bob the painter? Does inching closer to one or the other on the input or output matter, especially in chains? Is breaking eggs good? In other words: the scientific principle at work. I think that metagraphdata methods will become the new Intelligent Internet.
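As a rough sketch of the human-in-the-loop evaluation loop described above: every function here is a hypothetical stand-in for a real model call, automatic evaluator, and human review step; nothing below comes from the video.

```python
# Sketch of human-in-the-loop trajectory evaluation: iterate the LLM,
# check the trajectory against the desired end state, and let a human
# break repetitive loops. All functions are hypothetical stand-ins.

def llm_step(state: str) -> str:
    """Stand-in for one iterative LLM update of the working state."""
    return state + " -> next"          # replace with a real model call

def on_track(trajectory: list[str], goal: str) -> bool:
    """Stand-in evaluator: flags repetitive loops in the trajectory."""
    return len(trajectory) < 2 or trajectory[-1] != trajectory[-2]

def ask_human(trajectory: list[str], goal: str) -> str:
    """Stand-in for the human 'ingenuity' intervention that breaks a loop."""
    return input(f"Stuck after {len(trajectory)} steps. New direction for '{goal}': ")

def run(goal: str, max_steps: int = 10) -> list[str]:
    state, trajectory = goal, []
    for _ in range(max_steps):
        state = llm_step(state)
        trajectory.append(state)
        if not on_track(trajectory, goal):   # drifting from the desired end state
            state = ask_human(trajectory, goal)
    return trajectory
```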
@justinduveen3815 4 months ago
Very insightful! Thanks for sharing!
@stephanemetairie 4 months ago
your channel is unique
@code4AI 4 months ago
Thank you.
@DannyGerst 4 months ago
It seems you got confused with token windows as well ;-). It is about the output token length: while Gemini can take an input of 1 million tokens, the output is very limited, at only 8,192 tokens. I ran into this issue many times, but sometimes I could solve it with map & reduce. For example, when writing an ebook chapter by chapter, I put the already-written parts back into the input. It worked quite well, but burned "some" tokens.
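A minimal sketch of that chapter-by-chapter pattern: the `generate` stub and the 8,192-token cap are assumptions for illustration, not any vendor's actual API.

```python
# Sketch of the chapter-by-chapter pattern described above.
# `generate` is a hypothetical stand-in for any LLM API whose output
# is capped (e.g. ~8k tokens) even when the input window is huge.

def generate(prompt: str, max_output_tokens: int = 8192) -> str:
    """Stand-in for one capped LLM call; replace with a real client."""
    return f"[chapter text for: {prompt[:40]}...]"

def write_ebook(outline: list[str]) -> str:
    book = ""
    for chapter in outline:
        # Feed everything written so far back in as input, so each
        # output-capped call still sees the full context.
        prompt = (f"Book so far:\n{book}\n\n"
                  f"Write the next chapter: {chapter}")
        book += "\n\n" + generate(prompt)
    return book

print(write_ebook(["Introduction", "Graph reasoning", "Conclusions"]))
```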
@s.m.mustafaakailvi2915 4 months ago
Can you do a review of the intersection of symbolic logic & LLMs? I haven't seen much (or any) work in this area myself and was wondering if you had found anything during your literature surveys/reviews?
@josuecharles9087 4 months ago
Explore ideas like LLM tool utilization and LLM function calling.
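As a rough illustration of what function calling means in practice: a hedged sketch where the `llm` stub and the JSON tool-call format are assumptions for illustration, not any vendor's actual API.

```python
import json

# Sketch of the LLM function-calling pattern: the model emits a
# structured tool call, the host executes it, and the result is fed
# back as a tool message. All names here are hypothetical stand-ins.

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def llm(messages: list[dict]) -> str:
    """Stand-in: a real model decides when to emit a tool call."""
    return json.dumps({"tool": "get_weather", "args": {"city": "Boston"}})

def run(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    reply = llm(messages)
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply                      # plain-text answer, no tool needed
    result = TOOLS[call["tool"]](**call["args"])   # execute the requested tool
    messages.append({"role": "tool", "content": result})
    return result   # a real agent loop would call llm(messages) again

print(run("What's the weather in Boston?"))
```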
@MGeeify 4 months ago
I have a big question here. I'm working at a medical facility, and the chief scientist is really dismissing my ideas of creating graphs, because he thinks that chaining many instances of the same LLM is not able to generate an outcome better than the single LLM, just in terms of quality. I disagree. Is there any hard evidence that graphs are more efficient in terms of outcome? I'm talking about quantity here.
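To make the disputed claim concrete, here is a minimal sketch of the kind of chaining being debated: several instances of the same LLM drafting, critiquing, and revising, versus a single pass. The `call_llm` stub is a hypothetical placeholder; whether the chain actually beats the single call is exactly the empirical question raised above.

```python
# Sketch of chaining several instances of the same LLM
# (draft -> critique -> revise) versus a single call.
# `call_llm` is a hypothetical stand-in for a real model client.

def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:50]}...]"

def single_pass(question: str) -> str:
    return call_llm(question)

def chained(question: str, rounds: int = 2) -> str:
    answer = call_llm(question)
    for _ in range(rounds):
        # A second instance critiques, a third revises using the critique.
        critique = call_llm(f"Critique this answer:\n{answer}")
        answer = call_llm(
            f"Question: {question}\nDraft: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer.")
    return answer
```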
@attilalukacs1981 4 months ago
Very interesting papers, thanks for sharing! @code4AI do you have hands-on experience with this GraphReasoning methodology and with the Microsoft GraphRAG solution? I am experimenting with integrating AI into my second brain, and I started with GraphRAG, but maybe this MIT solution can work better.
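For readers unfamiliar with the pattern being compared, a hedged sketch of the graph-retrieval idea behind GraphRAG-style tools, using toy facts and networkx; this is an illustration of the general idea, not Microsoft GraphRAG's actual pipeline.

```python
import networkx as nx

# Sketch of graph-based retrieval: store facts as labeled edges,
# then answer a query from the local subgraph around the query node.

g = nx.Graph()
g.add_edge("silk", "spider", relation="produced by")
g.add_edge("silk", "toughness", relation="exhibits")
g.add_edge("toughness", "energy absorption", relation="measured by")

def retrieve_context(query_node: str, hops: int = 2) -> list[str]:
    """Collect relation triples within `hops` of the query node."""
    nodes = nx.single_source_shortest_path_length(g, query_node, cutoff=hops)
    return [f"{u} -[{d['relation']}]-> {v}"
            for u, v, d in g.edges(nodes, data=True)]

print(retrieve_context("silk"))
```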
@washedtoohot 4 months ago
20:13 graph rag = best confirmed?
@pensiveintrovert4318 4 months ago
An automated paper mill of junk ideas. Maybe interesting from the tooling point of view. Boring people, bereft of their own ideas, should not be given any resources.