Why are LLMs bad at Math?

  3,170 views

Prof C's Workshop

1 day ago

Comments: 11
@frankwc0o 4 days ago
Looks like a type-40, behind you.
@m7mo0o 6 months ago
Claude 3.5 Opus helped me solve and explain set theory questions; I haven't tried it on other types of math yet.
@generalawareness101 6 months ago
I am frustrated: I have tried out every LLM on the web, and none of them can do formulas very well.
@anubhav2092 18 days ago
Did you find a solution for that?
@generalawareness101 18 days ago
@anubhav2092 No, and it is a big weakness they are working on for the next-gen LLMs, or so I read.
@DepartmentofNarcissistsDON a year ago
FEEL THE AGI
@phyarth8082 8 months ago
Calculating a binomial probability sum with ChatGPT versus doing it manually in an Excel spreadsheet gives a significant error.
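That kind of error is easy to check independently: the cumulative binomial probability can be computed exactly with integer binomial coefficients. A minimal Python sketch (the n, k, p values here are just illustrative, not from the comment):

```python
from math import comb

def binom_cdf(k, n, p):
    # Exact cumulative binomial probability P(X <= k):
    # sum of C(n, i) * p^i * (1-p)^(n-i) for i = 0..k.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Example: P(X <= 3) for n = 10 trials with p = 0.5
# is (1 + 10 + 45 + 120) / 1024 = 0.171875.
print(binom_cdf(3, 10, 0.5))  # 0.171875
```

Comparing a number like this against an LLM's answer is a quick way to quantify the error the comment describes.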
@Rakanay_Official 10 months ago
Because it is a "language" model and not a "math" model?! Or are there LLMs that are actually good at math, so a dedicated LMM (Large Math Model) isn't needed?! And if our AI is just an algorithm, does that mean the intelligence doesn't really exist? Because an algorithm is like tree roots, picking whichever root is most likely and right... or am I not understanding it?! maybe xD
@WindRipples- 4 months ago
No, you're absolutely right. ChatGPT is not AI, and neither is Claude or any other similar LLM. It's just that the masses are ignorant, so they see it as AI. It's actually just a sophisticated pattern-finding machine. Human minds invent new math, while LLMs can't even get through formulas.
@chriscurry2496 3 months ago
I'd say it's more so because they are statistical models, and statistical models are (ironically) poor tools for generating mathematical truths, particularly inductive ones.

What are inductive truths? Well, consider the process of counting. How is it that we know that if 0+1=1, and 1+1=2, then n+1=x (where x is the correct integer after performing our counting operation accurately)?

A large language model such as ChatGPT is trained statistically, fed millions or billions of associations of 0+1=_, with the most statistically likely value for _ being "1." But it should be obvious that this can't go on for very long and remain even close to the right value. Say we get to 1,403,701 + 2,878,098,801 = _. What is the correct answer for _ here? Well, LLMs can ONLY go on what they've seen before, and if they have ever seen these operands combined before, they are just as likely to have seen them giving wrong answers as right ones.
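The memorization argument above can be sketched as a toy model. This is purely illustrative (a lookup table standing in for pure statistical association; real LLMs generalize somewhat better than this, but the out-of-distribution failure mode is the same idea):

```python
def train_lookup(max_n):
    # "Train" by memorizing every a+b=c association with operands below max_n,
    # mimicking a model that can only reproduce seen examples.
    return {(a, b): a + b for a in range(max_n) for b in range(max_n)}

def predict(model, a, b):
    # The memorizer can only answer questions it has literally seen before.
    return model.get((a, b), "no idea")

model = train_lookup(100)  # has seen all sums with operands < 100

print(predict(model, 3, 4))                      # 7 (memorized)
print(predict(model, 1_403_701, 2_878_098_801))  # "no idea" (never seen)
print(1_403_701 + 2_878_098_801)                 # 2879502502 (exact arithmetic)
```

A symbolic system computes the unseen sum trivially; the memorizer cannot, no matter how many associations it has stored, which is the inductive gap the comment is pointing at.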