LLMs are fantastic for generating creative outputs like music, text, and visual patterns. However, as a programmer, I'm skeptical about their reasoning abilities. They often struggle with simple coding tasks, frequently producing buggy, error-prone code that requires constant correction.
@Soleft 2 months ago
It works insanely well for web development or systems development; it's doubled my productivity at least. I use GPT; everything else has been a flop for me.
@Neuvalence 2 months ago
Put AI in place of a judge or a physicist and watch hell ensue 😂
@joineryjohn6282 2 months ago
Very interesting video. When you see some of the capabilities of LLMs, it's easy to forget that they do not actually reason, even though they appear to. This has made me rethink; thank you.
@ZiadDanasouri 2 months ago
You're right. I always say that no matter how good it looks, take it with a grain of salt.
@leoym1803 2 months ago
Reminds me of the numerous studies of chess grandmasters who completely fumble and drop to 1000 Elo whenever they're put in front of boards they haven't seen before, i.e. completely random boards. Now, does that imply that humans also cannot reason? :)
@br1ghts0ng 2 months ago
@@leoym1803 I think there's quite a distinction between making mistakes in chess and knowing that small oranges are still oranges 😂
@leoym1803 2 months ago
@@br1ghts0ng Yeah? How so? To me, making mistakes with new, unseen information (chess) = making mistakes with new, unseen information (baby oranges).
@br1ghts0ng 2 months ago
@@leoym1803 But the concept of a small object isn't a new idea to an AI. Chess is a highly complicated game with many different factors and strategies; not recognizing that a small object is still the same object is vastly different. Yes, the AI hasn't stumbled upon that exact problem before, and that's precisely why it's incapable of solving it: it cannot reason. Copying isn't reasoning.

To get into cognitive science: reasoning is defined as the ability to break a problem down into logical components and structure them in logical order. Messing up in chess isn't failing to use reason; it's using reason incorrectly, which makes sense considering how unbelievably complicated chess is, logically speaking. If AI used even an ounce of reason, it would easily know that a small orange is still an orange, since the very simplest logic defines it as such.

This isn't surprising. LLMs aren't designed to develop reasoning. The engineers can try to force it, but it ultimately fails, because LLMs simply copy existing logical structures; they cannot create their own. The whole concept of an LLM is not to create logical structures but to copy what it is given. They would not need nearly the amount of data they are given if they could create logical structures.
@br1ghts0ng 2 months ago
This encapsulates many of my thoughts. AI has absolutely gotten better over the years, but it certainly follows the law of diminishing returns. It's growing more and more difficult to crank out more cognitive power, especially when all it's doing is very inefficiently copying inordinate amounts of data and spitting out whatever looks right for the situation. It's getting better and better at copying but still fails to actually reason. I have no doubt we'll get there someday, but I highly doubt it will be with LLMs.
@ZiadDanasouri 2 months ago
Totally agree!!
@amotriuc 2 months ago
It is not surprising. If AI were so simple that you could just dump some data into an LLM and hope to get AGI, I would suspect human-level intelligence would have evolved independently in several species on Earth. But that is not what we see.
@seanloewen 2 months ago
…yet.
@piotrteter8300 2 months ago
You're so late to the party, it's hilarious.
@simonomega 2 months ago
If you had posted this a year ago, you wouldn't look so ignorant. It's called compound systems. Look up agents. You don't know how AI works now; you're way out of date.
@DrKnowsMore 2 months ago
So you are smarter than researchers at Apple?
@simonomega 2 months ago
@DrKnowsMore The research is about LLMs in isolation; there's nothing new about these findings. Compound systems use multiple LLMs in increasingly complex ways. LLMs are like cortical areas in the brain: we use them as quick, static intuitions with no thinking process involved. Compound systems are like connectome functionality, with no limit to how much extra processing can be done. The AI we have now is infinitely smarter than LLMs by themselves and has reached AGI; it is superintelligence. We can make them sentient and autonomous if we want, or control them in a completely safe way. My brother-in-law is a researcher at Apple, by the way. So yes, definitely smarter than researchers at Apple lol
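For readers unfamiliar with the term, here is a minimal sketch of the kind of generate-and-critique loop that "compound system" and agent setups like this typically describe: one model drafts an answer, a second model call critiques it, and the loop repeats until the critic approves. The names `call_llm` and `compound_answer` are hypothetical placeholders, not a real library API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a chat-completion endpoint and return the reply."""
    raise NotImplementedError("wire this to whatever model API you actually use")


def compound_answer(question: str, max_rounds: int = 3) -> str:
    # First LLM call: produce an initial draft answer.
    draft = call_llm(f"Answer the question:\n{question}")
    for _ in range(max_rounds):
        # Second LLM call: act as a critic of the draft.
        critique = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any logical errors, or reply exactly 'OK' if the draft is sound."
        )
        if critique.strip() == "OK":
            break  # the critic found no flaws; stop iterating
        # Third LLM call: revise the draft using the critique.
        draft = call_llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft to fix the issues in the critique."
        )
    return draft
```

Whether this loop constitutes "reasoning" rather than iterated pattern-matching is exactly the point the thread is arguing about.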
@ВиталийЯковчук-ъ5о 2 months ago
Very nice that YouTube lets you block channels. Bye bye.