Claude 3.5 Opus helped me solve and explain set theory questions; I haven't tried it on other types of math yet.
@generalawareness101 · 6 months ago
I am frustrated; I have tried every LLM on the web and none of them can handle formulas very well.
@anubhav2092 · 18 days ago
Did you find a solution for that?
@generalawareness101 · 18 days ago
@@anubhav2092 No, and it is a big weakness they are working on for the next generation of LLMs, or so I read.
@DepartmentofNarcissistsDON · 1 year ago
FEEL THE AGI
@phyarth8082 · 8 months ago
Calculating a binomial probability sum with ChatGPT gives a significant error compared with doing it manually in an Excel spreadsheet.
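For anyone wanting to cross-check such answers, here is a minimal Python sketch of the exact binomial sum. The values n=20, p=0.3, k=5 are made-up illustrations, not the ones from the comment above:

```python
from math import comb

def binomial_cdf(k: int, n: int, p: float) -> float:
    """Exact cumulative probability P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Made-up example values, purely for illustration:
n, p, k = 20, 0.3, 5
print(f"P(X <= {k}) = {binomial_cdf(k, n, p):.10f}")
# Excel equivalent for cross-checking: =BINOM.DIST(5, 20, 0.3, TRUE)
```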
@Rakanay_Official · 10 months ago
Because it is a "language" model and not a "math" model?! Or are there LLMs that are so good at math that a dedicated LMM (Large Math Model) isn't needed? And if our AI is just an algorithm, does that mean there is no real intelligence in AI? Because an algorithm is like tree roots: it picks whichever root is most likely to be used and right... or am I not understanding it?! maybe xD
@WindRipples- · 4 months ago
No, you're absolutely right. ChatGPT is not AI, and neither is Claude or any other similar LLM. It's just that the masses are ignorant, so they see it as AI. It's actually just a sophisticated pattern-finding machine. Human minds invent new math, while LLMs can't even work through formulas.
@chriscurry2496 · 3 months ago
I’d say it’s more so because they are statistical models, and statistical models are (ironically) poor ways to generate mathematical truths, particularly inductive ones.

What are inductive truths? Well, consider the process of counting. How is it that we know that if 0+1=1, and 1+1=2, then n+1=x (where x is the correct integer after performing our counting operation accurately)? A Large Language Model such as ChatGPT is trained statistically, on millions or billions of associations of "0+1=_", with the most statistically likely value for _ being "1". But it should be obvious that this can't go on for very long and remain even close to the right value. Say we get to 1,403,701 + 2,878,098,801 = _. What is the correct answer for _ here? LLMs can ONLY go on what they've seen before, and if they have seen these operands in combination before at all, they are just as likely to have seen them giving wrong answers as right ones.
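For contrast, the example sum above can be checked by rule rather than by recall; a minimal Python sketch:

```python
# Integer addition follows a rule, so it is exact for operands
# never seen before -- unlike statistical next-token prediction.
a = 1_403_701
b = 2_878_098_801
print(a + b)  # 2879502502
```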