2024's Biggest Breakthroughs in Computer Science

475,836 views

Quanta Magazine

1 day ago

Comments: 506
@yas2ne19
@yas2ne19 Ай бұрын
these videos are like society patch notes
@izzuddinafif
@izzuddinafif Ай бұрын
Lol fr
@atHomeNYC
@atHomeNYC Ай бұрын
Are we going to ignore the animations? You better pay your animators, because illustrating these complex concepts is incredible work.
@udaysingh-wr2kw
@udaysingh-wr2kw Ай бұрын
Are they not paying their animators?
@jamesfrancese6091
@jamesfrancese6091 Ай бұрын
@@udaysingh-wr2kw They do; Quanta is not some YouTube channel lol, they're a reputable science journalism publisher funded in part by the Simons Foundation, and their graphic designers make upwards of $110,000/yr
@KuKoVisuals
@KuKoVisuals Ай бұрын
They don't. They get interns
@Shenanigambling
@Shenanigambling Ай бұрын
"Yeah do you mind animating all combinations of 6 learned skills from a universe of 100 possible skills?"
@pirateonmeleeisland2973
@pirateonmeleeisland2973 Ай бұрын
no joke?
@kenankaneki4310
@kenankaneki4310 Ай бұрын
I love this channel, this is more exciting than Spotify Wrapped
@notgaybear5544
@notgaybear5544 Ай бұрын
Spotify Wrapped was so good. I wish I could have that wrap-up for all my content consumption
@kenankaneki4310
@kenankaneki4310 Ай бұрын
@@notgaybear5544 I can give all my data to GPT (or any model) for this
@oswach8706
@oswach8706 Ай бұрын
Intellectual human race wrapped
@james-cal
@james-cal Ай бұрын
@@oswach8706 more like science wrapped
@Nex_Addo
@Nex_Addo Ай бұрын
Not a single link to the actual paper discussed, just a link to your article that also has no links to the paper.
@dennisfast9331
@dennisfast9331 Ай бұрын
Google -> skill mix paper -> paper found!
@quantumskull2045
@quantumskull2045 Ай бұрын
It's not very clearly highlighted but they do generally have links to the original paper somewhere in the article. In the article on skill-mix it's in the middle in one of the links.
@kingmidasthagoat
@kingmidasthagoat Ай бұрын
@@dennisfast9331 zip it, Dennis
@Michael-rk3xg
@Michael-rk3xg Ай бұрын
Maybe you can't read. That's why you can't find the links in the article
@penguinpatroller
@penguinpatroller Ай бұрын
Could one of you post the title of the paper? I could not find it either.
@tghuy8384
@tghuy8384 Ай бұрын
No one remembered Kolmogorov-Arnold Networks? LLMs are cool and all (tho not my favourite/specialty area), but I'd love to see people pay more attention to the scientific machine learning field (e.g. physics-informed models, interpretable models like KAN). I kinda feel like so many other fascinating uses of machine learning are being outshone by LLMs nowadays...
@a_soulspark
@a_soulspark Ай бұрын
not much came out of KANs yet, that's probably why. (with emphasis on **yet** )
@dropbit
@dropbit Ай бұрын
Unfortunately, in computer science (and not only in CS), the more profitable and money-making a piece of technology or an idea is, the more it will be studied and trending
@mb2776
@mb2776 Ай бұрын
afaik from my fairly limited understanding, physics-informed models do need differential equations which are super hard to implement in higher dimensions.
@demetriusmichael
@demetriusmichael Ай бұрын
@@mb2776 they all do in NN as they rely on the chain rule to update params.
@pelegshilo6358
@pelegshilo6358 Ай бұрын
I think your examples also fall into LLM-related hype. The KAN paper is full of marketing language and proves very little with regards to either theory or practical performance. Physics-inspired models are cool ideas that people have been trying for years, and we really need to judge them case by case. Just having a model inspired by Physics does not achieve much
@michaelpeeler7030
@michaelpeeler7030 Ай бұрын
No mention of BB(5) being proven? That’s perhaps the biggest CS news of the year, especially since there’s a fairly good chance we’ll never again prove a busy beaver number for a two-symbol Turing machine.
@holdthatlforluigi
@holdthatlforluigi Ай бұрын
Yeah that's sad
@DistortedV12
@DistortedV12 Ай бұрын
haha you are doing their job for them
@dddddddddddkkkkkkkkkkkkk
@dddddddddddkkkkkkkkkkkkk Ай бұрын
Bro, that has no AI in its name, it will not even make it to the notable mentions
@dashadower
@dashadower Ай бұрын
No buzzwords, just theoretical jargon that the average Joe doesn't care about
@herghamoo3242
@herghamoo3242 Ай бұрын
Imo, computing Busy beaver numbers is like mathematical trivia. Am I impressed they managed to do it? Absolutely, but there's absolutely nothing you can do with this knowledge. Also, the Busy beaver numbers are so dependent on the specific definition of a Turing machine that I don't feel it's that interesting to be honest.
@garyz904
@garyz904 Ай бұрын
Is the "skill-mix" paper we are talking about in the first part? I don't see why this paper is a breakthrough, much less the biggest breakthrough this year. BTW, where exactly is the "mathematically provable argument" presented in the paper? You cannot simply call a paper with equations theoretical.
@KeinGescheiterUser
@KeinGescheiterUser Ай бұрын
It sounds like it - and skill-mix is mentioned in a linked article in the "the year in cs" article accompanying the video. But there are a lot more things wrong with framing the paper in that way. Overstating results is already bad - but the paper does not even convincingly show that models have these skills. Grading models with other models defeats the purpose, their argument that it is more reliable than humans is weak at best. Also: what about prompt variations? Does this observed behavior for these "skills" really generalize? As an NLP researcher, I'm disappointed in quanta magazine for this one.
@Neeku
@Neeku Ай бұрын
It really seems like the paper wasn't properly digested or compared to current and past norms in NNs before including it here. It's nothing more than feature learning.
@acmhfmggru
@acmhfmggru Ай бұрын
🎯
@Sheldonsheldon109
@Sheldonsheldon109 Ай бұрын
At 7:31 the chess board in the background isn't set up correctly! The color complexes are flipped, i.e. the board is rotated 90° from what it should be!
@user-qw9yf6zs9t
@user-qw9yf6zs9t Ай бұрын
that's why we need a 9x9 chess board
@NuncNuncNuncNunc
@NuncNuncNuncNunc Ай бұрын
He's solving symmetry problems in a higher dimension. He dreams Pauli matrices.
@nicksamek12
@nicksamek12 Ай бұрын
It's funny that it's set up to make him look smarter / more sophisticated, but then he goes and sets it up wrong and sends the absolute opposite message.
@TheBooker66
@TheBooker66 Ай бұрын
@@nicksamek12 Exactly my thought process.
@aineshbakshi5556
@aineshbakshi5556 Ай бұрын
It’s isomorphic
@joaoedu1917
@joaoedu1917 Ай бұрын
AI company employee publishes a paper that "proves" that its LLM is even more awesome than we all thought it was. Solid.
@theresalwaysanotherway3996
@theresalwaysanotherway3996 Ай бұрын
if you have a criticism of the actual work or method then bring it up. But dismissing it simply because of the conflict of interest (which they're not trying to hide) doesn't get anyone anywhere.
@xCromlechx
@xCromlechx Ай бұрын
@theresalwaysanotherway3996 but this has nothing to do with "breaking through". It should (!) be dismissed until actual progress is made. I am pretty sure there are other 2024 breakthrough achievements in CS that would be more appropriate to cover...
@Alpha_GameDev-wq5cc
@Alpha_GameDev-wq5cc 18 күн бұрын
@theresalwaysanotherway3996 Here's criticism of the actual work. They have done no work to show how their methodology has any relation to their claims; that part of the paper is a "trust me bro", and very cunningly so, because it's a charade. Not uncommon in today's world of exponential paper publication and stagnant innovation. They either don't know how LLMs work internally or they are pretending not to. I'll put it like this: they've taken the core property behind the function of LLMs and simply tested whether that core function works, and called it research into some futuristic capability. It's like testing whether a car can move without a human inside, and then saying that it's very likely the car can exhibit self-driving behaviors. NO, that's hilariously misleading.
@EvanMildenberger
@EvanMildenberger Ай бұрын
When researchers are so obsessed with AI and quantum, I believe it leaves much less incentive for more classical/pragmatic computer science. But if you've worked in industry before, there are so many systems that are duct-taped together and have so much bloat that it's more cost-effective to just add new code rather than touch the old. The fact that programs require thousands of times more space to run than years ago but are not generally thousands of times faster is actually disappointing. And the fact that many things like the banking systems of the world using COBOL will have practically no one with sufficient experience is worrisome. It seems like a genuine solution to managing complexity and being able to safely transpile an entire legacy system into a modern one is a pressing need, but I think academics all want to be the next Turing or Church rather than solve a more down-to-earth but important problem.
@eSKAone-
@eSKAone- Ай бұрын
AI could help with that
@eternaldoorman5228
@eternaldoorman5228 Ай бұрын
Yes, to see "2024s Biggest Breakthroughs in Computer Science" headlined by a system designed to "distil" knowledge from bad practice is pretty depressing!
@ai_outline
@ai_outline Ай бұрын
I see where you are coming from, but AI is a classical subfield of Computer Science. Quantum not so much, but I believe it will become very important in the future.
@FiniteSimpleFox
@FiniteSimpleFox Ай бұрын
For each of the issues you have highlighted the problem is practical, not conceptual. In theory we have all the tools we need, it's just that big tech and other institutions are not making a meaningful effort to solve them. By definition, academics deal in coming up with new ideas so I do not think it would make sense for them to work on this sort of stuff; this is really a problem for software engineers. That being said I don't want to rule out the potential for research into frameworks for how to better manage large interdependent systems of software although I am not knowledgeable about this sort of thing if it exists.
@morenorueda946
@morenorueda946 Ай бұрын
Just ask an LLM to translate from whatever that old language banks use to a newer one. So hard
@AlfredoMahns
@AlfredoMahns Ай бұрын
A question for somebody who knows these topics, because I struggle to understand: the researcher made a bipartite graph, with one part being skills. The flaw I see is, how can you prove the skills are different/not equivalent? Because you need to prove this for it to work.
@mtw2457
@mtw2457 Ай бұрын
Try to ask LLMs about alternative interpretations of sequences in the OEIS (Online Encyclopedia of Integer Sequences), or to summarize a given sequence. They cannot even get the proper ID matched with the proper sequence description (in my case, sequence ID A000371). In other words, they cannot even regurgitate what is already in the OEIS comments area. The alternative interpretations - if they can come up with anything at all - sound authoritative but are completely WRONG! LLMs are authoritative-sounding noise.
@mb2776
@mb2776 Ай бұрын
yeah, we all know that they just repeat what they already learned. You can brick GPT-4 by just asking for a multiply nested exponential function with a positive x value. It WILL produce code with an overflow. Test it.
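For anyone curious why that kind of code overflows, here is a minimal sketch (my own toy example, not the commenter's actual prompt): nested exponentials exceed the double-precision range (roughly 1.8e308) almost immediately, so naively generated code for them blows up.

```python
# Minimal illustration: deeply nested exponentials overflow 64-bit floats.
import math

x = 3.0                        # any positive x explodes quickly
inner = math.exp(math.exp(x))  # exp(exp(3)) is about 5.3e8, still representable
try:
    result = math.exp(inner)   # exp(5.3e8) is far beyond ~1.8e308
    print(result)
except OverflowError as e:
    print("OverflowError:", e)  # math.exp raises instead of silently returning inf
```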
@acmhfmggru
@acmhfmggru Ай бұрын
A handy tip for lazy students: just ask an LLM to generate citations for your papers. 👹😫
@rubennijhuis
@rubennijhuis Ай бұрын
Been waiting for this one the most
@zacharykosove9048
@zacharykosove9048 Ай бұрын
me too
@ccriztoff
@ccriztoff Ай бұрын
everyone has bro
@s1lli
@s1lli Ай бұрын
welp i assume u're disappointed
@gamalalejandroabdulsalam904
@gamalalejandroabdulsalam904 Ай бұрын
"Minimize the training loss... that is called Emergence 💫" Bruh anyone that's actually trained a model understands that's just hype-talk
@mb2776
@mb2776 Ай бұрын
Emergence is unexpected behaviour. Minimizing the training loss is just using an optimizer like Adam
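For context, here is a minimal sketch of what "minimizing the training loss with an optimizer like Adam" means. The quadratic loss below is a made-up toy example for illustration, not anything from the video or the paper:

```python
# Adam in miniature: maintain running moment estimates of the gradient and
# take bias-corrected steps until the toy loss L(w) = (w - 3)^2 is minimized.
import numpy as np

def adam_minimize(grad_fn, w, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    m = np.zeros_like(w)  # first-moment (mean of gradients) estimate
    v = np.zeros_like(w)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy "training loss" L(w) = (w - 3)^2, with gradient dL/dw = 2 * (w - 3)
w_final = adam_minimize(lambda w: 2 * (w - 3.0), np.array([0.0]))
print(w_final)  # ends up close to 3.0, i.e. the loss has been driven down
```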
@gamalalejandroabdulsalam904
@gamalalejandroabdulsalam904 Ай бұрын
@@mb2776 The exact quote from the video is: "[...] they minimize training loss, or make fewer errors. This sudden increase in performance produces new behaviors - a phenomenon called Emergence 💫" All the while, the typical training loss curve is shown going from untrained to converged. Yeah you don't say, the more they train on the data the better they get. Now that I think about it, they probably were going with "convergence", but decided that wasn't enough hype-talk for the video.
@mrgoodmood28
@mrgoodmood28 Ай бұрын
So, what are we actually looking at here, guys? How brains also "might" be working under different circumstances, so that's why we have different psyches, cultures, beliefs, and such? Skillsets we learn based on nature + nurture produce very unique individuals? Say, that our parameters are very large?
@vidal9747
@vidal9747 Ай бұрын
​@@gamalalejandroabdulsalam904 Emergence is the emergence of behaviours not directly trained. It is not just having your loss function minimized. Let's not be overhyped, but correct.
@acmhfmggru
@acmhfmggru Ай бұрын
@@mrgoodmood28 bro they just recently found like 15 new organelles that we didn't know existed. We don't know how much we don't know (but it's obviously a lot). We most certainly do not understand how our brains work, much less how consciousness and cognition work. Beware the childish graphics and graduate students...
@ai_outline
@ai_outline Ай бұрын
For the people in the back: Computer Science is not programming. It is way more. It is Algorithms, Artificial Intelligence, Quantum Computing, Computer Graphics, etc
@Dextrostat
@Dextrostat Ай бұрын
I think universities need to separate out Computer Science, as it feels like it's only a degree just to get a job. Computer Science is supposed to be a theoretical field on software that focuses on optimization, the math behind it, and new research into algorithms. Software Engineering should be a derivative major of CS, but with more of a focus on industry software, infrastructure, and programming for businesses. Less math (you don't really need Calc, only Discrete) and more practical applications.
@ai_outline
@ai_outline Ай бұрын
@@Dextrostat completely agree. CS degrees should teach all the mathematics behind CS/Informatics. It’s the fundamentals! Never be too focused on frameworks and technical details that are hot today but obsolete tomorrow.
@keyboardtaskforcephi-3689
@keyboardtaskforcephi-3689 Ай бұрын
Nah, the movies told me all you need to do is code really fast to generate a universe
@t3lesph0re
@t3lesph0re Ай бұрын
Spot on!
@XxRiseagainstfanxX
@XxRiseagainstfanxX Ай бұрын
As a mathematician I fail to see why the first one has anything to do with computer science or a scientific breakthrough, but I'm not enticed enough to look up the original paper.
@ai_outline
@ai_outline Ай бұрын
@@XxRiseagainstfanxX why? It’s about AI, so it’s computer science.
@ai_outline
@ai_outline Ай бұрын
But I do understand the lack of analytical details. It’s weird to call it a CS breakthrough without much math and formality.
@EvanMildenberger
@EvanMildenberger Ай бұрын
I agree. That doesn't mean their research isn't useful for testing and improving LLMs, but it doesn't seem to be that much of a breakthrough. People forget that AI is not a new field, even though practical natural language processing models are recent, since ChatGPT. But LLMs are just a part of NLP, which is just a part of machine learning, which is just a part of AI. AI has become a vague buzzword when it used to be an intersection between neuroscience and computer science and focused more on what it means to think or be intelligent rather than just language tasks. Generating human-quality text was an intractable problem for deterministic programming, so LLM advancements *are* very useful. But believing language is the sole path to intelligence seems to forget the importance of math, and history has shown that language with math/logic has generally led to dogmatic thinking rather than advancement.
@Tau-qr7f
@Tau-qr7f Ай бұрын
Can we get a list of the biggest breakthroughs in Computer Science without mentioning AI? AI should have a separate video
@karthikgarimella2131
@karthikgarimella2131 Ай бұрын
Why though? It's part math and part computer science. It started in that domain and it should be part of that domain no? In that case, quantum problems also shouldn't be part of computer science then as mentioned in the video?
@rohanbuch2344
@rohanbuch2344 Ай бұрын
AI = computer science
@ai_outline
@ai_outline Ай бұрын
@@Tau-qr7f a separate video? What? AI is literal computer science… ffs 🤦🏻‍♂️
@ai_outline
@ai_outline Ай бұрын
@@Tau-qr7f do you also want a separate video for electromagnetism (physics)? A separate video for mechanics (physics)? A separate video for topology (math)? Jesus Christ…
@Tau-qr7f
@Tau-qr7f Ай бұрын
@@rohanbuch2344 no, it's not. There are many other interesting topics in CS besides AI. They receive negligible coverage after the release of ChatGPT, although LLMs existed long before that.
@DistortedV12
@DistortedV12 Ай бұрын
The REAL list: o1 (now o3), quantum computing breakthroughs, Veo 2, successful diffusion world models, ARC challenge success, and robotics advances starting to work with ML. I would be curious to see one of these for causality and ML research.
@mmmmmrto958
@mmmmmrto958 Ай бұрын
Also, idk if you mentioned it because I forgot the name, but I saw a video where a development in using photons to connect CPUs and even to do actual computations was made this year. (Although I may be wrong, as I did not look into it a lot)
@R4Y7
@R4Y7 Ай бұрын
Our yearly update :D
@ccriztoff
@ccriztoff Ай бұрын
for those curious, the really important stuff starts at 9:05
@ai_outline
@ai_outline Ай бұрын
Computer science may be more fundamental to the universe than we know!
@deeplearningpartnership
@deeplearningpartnership Ай бұрын
Computation, when it intersects with physics.
@christhurman2350
@christhurman2350 Ай бұрын
Only if computer science keeps evolving
@memecached
@memecached Ай бұрын
After physics, the goal is to combine computation and biology. Imagine biological computers with human-brain-like efficiency and memory.
@christhurman2350
@christhurman2350 Ай бұрын
@ you just described a fungus -
@ai_outline
@ai_outline Ай бұрын
@@christhurman2350 it obviously will. There are still endless algorithms and computational solutions to be discovered yet!
@kevinmilner2072
@kevinmilner2072 Ай бұрын
man, this is what I got a degree in CS for, not *shudders* working in salesforce.
@fr3847
@fr3847 Ай бұрын
yee, most CS majors just end up writing Java or JavaScript in web dev, probably backend. Sad it limits potential, but that's where all the jobs are, and the few hardcore CS jobs are only for geniuses
@henrycgs
@henrycgs Ай бұрын
computer science starts at 6:13
@sleepynoob1000
@sleepynoob1000 Ай бұрын
May I ask why?
@RS-kb2mf
@RS-kb2mf Ай бұрын
Thanks for the heads up! :)
@paxcoder
@paxcoder Ай бұрын
Things that can be applied to programming do not start
@ai_outline
@ai_outline Ай бұрын
What do you mean?
@sleepynoob1000
@sleepynoob1000 Ай бұрын
@paxcoder CS isn't restricted to things you can program? Every program starts from theory you know.
@Ctrl_Alt_Sup
@Ctrl_Alt_Sup Ай бұрын
I said to myself "wow finally an interesting video"... I hadn't seen that I was on Quanta Magazine!
@Caliban314
@Caliban314 Ай бұрын
These videos are awesome, you guys always cover complex topics with such ease! As an aspiring computer science researcher (and science enthusiast in general), this is exactly the kind of content I want. One suggestion - I always want to learn more about a breakthrough but cannot find citations to the original papers. Even the Quanta articles don't have them. It would be great to go directly to the source!
@dennisfast9331
@dennisfast9331 Ай бұрын
Google -> skill mix paper -> paper found!
@absence9443
@absence9443 Ай бұрын
2024 had such sick neural network breakthroughs it's relatively irrelevant what was shown here
@Techmagus76
@Techmagus76 Ай бұрын
I would argue that if generalisation skills occur in LLMs, why do they struggle so hard when a task is changed only very slightly, and then give very different answers? That is very hard to explain in the realm of generalisation, but it fits quite well with the statistical parrot.
@nodavood
@nodavood Ай бұрын
Excellent work. But the scaling law curve was wrong. That's not a scaling law curve.
@KshitijPawar77
@KshitijPawar77 Ай бұрын
Me (being an Indian) feeling proud seeing so many Indian-origin scientists in this video 😊
@ArthasMal
@ArthasMal Ай бұрын
It's more likely for brilliant people to be born from nation that fucks a lot :D
@neils3ji
@neils3ji 28 күн бұрын
Our parents are also watching disappointed that it isn’t us in the video (I’m Indian too) 😂
@KshitijPawar77
@KshitijPawar77 26 күн бұрын
@neils3ji 😆 never show these videos to Indian parents
@_bigman8593
@_bigman8593 Ай бұрын
Why not update the foundation code, or whatever it runs on, to add an explanation function for how the AI came to that conclusion, like a log created on a user-accepted basis with major correct analyses
@sam_o_chess
@sam_o_chess 25 күн бұрын
Where can I find the sound or the music at the end of the video? It's very motivating
@anonymouse5217
@anonymouse5217 Ай бұрын
Ugh, it's really bad that Quanta calls these the biggest breakthroughs in computer science. You will be hard-pressed to find a single researcher who thinks this DeepMind paper is a breakthrough. The Hamiltonian learning paper is a nice achievement, but it hardly makes sense to single out as 'the biggest breakthrough of the year'. Do better, Quanta -- you are losing our trust!
@purplewine7362
@purplewine7362 Ай бұрын
Oh no, they're going to lose a self-described expert in the audience. The horror!
@noobvsprominecraftbuild
@noobvsprominecraftbuild Ай бұрын
I LOVE THE PUBLIC SECTOR!!! I TRUST MY GOVERNMENT AND CENTRAL BANK!!!!!
@samuelbucher5189
@samuelbucher5189 Ай бұрын
Could you link me to some better candidates? Or just tell me about them, if YouTube doesn't like the links.
@CybitClub
@CybitClub Ай бұрын
Nah buddy. If you were in tune with the breakthroughs in computer science, you'd find it pretty absurd to make half of the video about LLMs. Is it cool? Yes. Were there some breakthroughs? Yes. But it doesn't really deserve half the spot, same as the Hamiltonian one. @@purplewine7362
@marcomartins3563
@marcomartins3563 Ай бұрын
can you recommend a better list then?
@Elrond_Hubbard_1
@Elrond_Hubbard_1 Ай бұрын
I am in no way an expert in any of this, but I have a wild idea that I thought of a while back. I did do a BSc and majored in molecular bio, but I didn't really end up using it professionally. But I've always held up my interest and fandom for science. Anyway, my idea came when the ChatGPT thing came out, and I remembered that the human brain is compartmentalised. There is, presumably, I'm not a neurologist, some specific chunk of throbbing brain matter inside of your skull that specialises in language comprehension. Kind of like a ChatGPT AI. But then I thought, well, maybe if you trained one AI to learn language like the ChatGPT, and you taught another AI to learn physics, and you taught another to learn how to analyse visual data, and on and on, and then you somehow bundled them all together, would that not resemble a brain? A bunch of specialised compartment intelligences that combine to make an overall intelligence, or a general AI?
@rustprogrammer
@rustprogrammer Ай бұрын
What about BB(5)?
@NGBigfield
@NGBigfield Ай бұрын
08:40 he sounds like a very smart version of Jianyu (The Good Place) 😅
@isolatedsushi5996
@isolatedsushi5996 Ай бұрын
That chess board being set up wrong at 7:34 annoyed me more than it should have
@johnskarha3575
@johnskarha3575 26 күн бұрын
Same here! Very distracting. Does not help with the credibility of what is being said...
@bitik9847
@bitik9847 12 күн бұрын
It is perfect. Tell us you don't know chess, or increase the video to 4K
@andrwsk23
@andrwsk23 Ай бұрын
Emergence is when complex behaviors are created by combining simple behaviors.
@XenoCrimson-uv8uz
@XenoCrimson-uv8uz Ай бұрын
I may be wrong here, but shouldn't the math at 3:32 be 100!/(100-4)! instead of 100^4, so it doesn't include the same skill multiple times? For example: (Metaphor, Metaphor, Metaphor, Metaphor)? So the answer is 94,109,400 instead of 100 million.
@AlSteve7
@AlSteve7 Ай бұрын
I think it's sampling with replacement
@nessiecz2006
@nessiecz2006 Ай бұрын
I'm pretty sure it's 100 choose 4, which is 100!/(4!*(100-4)!)
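For reference, a quick sketch (an illustrative check of the numbers being debated in this thread, not something from the video):

```python
# Compare the three counts for picking 4 skills from a pool of 100.
from math import comb, perm

print(100 ** 4)      # 100,000,000 - ordered, repeats allowed (what the video uses)
print(perm(100, 4))  # 94,109,400  - ordered, no repeats: 100!/(100-4)!
print(comb(100, 4))  # 3,921,225   - unordered, no repeats: 100 choose 4
```

Which of the three is the right count depends on whether the order of the skills matters and whether the same skill can appear more than once; 100^4 corresponds to ordered selection with repetition.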
@edwardgongsky8540
@edwardgongsky8540 Ай бұрын
I'm amazed at the pace of this progress, scary to think how much our robot overlords will be smarter than us.
@Kohcta
@Kohcta Ай бұрын
I love how this "scientists" forget everything about scientific method when they rush to explain llms. instead of trying to disprove their hypothesis their doing what ever this thing is, But those guys are not biased at all, because google have nothing to gain from AI hype. It is like gihub release their "scientific" paper about how great "github copilot" is
@purplewine7362
@purplewine7362 Ай бұрын
You can say the same thing about protein folding, which was also DeepMind. But I guess assuming ulterior motives behind everything makes you sound smart.
@petrospaulos7736
@petrospaulos7736 Ай бұрын
AI bubble vs Quantum bubble.... which will burst first???
@molybd3num823
@molybd3num823 Ай бұрын
AI
@rebellion112
@rebellion112 Ай бұрын
Not everything is a bubble.
@salihalbayrak-es8ky
@salihalbayrak-es8ky Ай бұрын
neither.
@kylebowles9820
@kylebowles9820 Ай бұрын
Quantum, at least AI can do something
@Rockyzach88
@Rockyzach88 Ай бұрын
From an economic stance? AI definitely because there's no quantum bubble right now and "AI" is basically dominating the stock market atm.
@WalkerSlavich
@WalkerSlavich Ай бұрын
4:15 this is a joke, this is what it spits out when I tell it to write my English essay
@weroleoify
@weroleoify Ай бұрын
The Hamiltonian algorithm representation looks like a loom; could be a nice name for a future quantum computer.
@johnnyBrwn
@johnnyBrwn Ай бұрын
It's not emergence @1:53, Apple literally refuted this. They are parrots. Apple demonstrated that the more weights, the larger the memory. The more data, especially diverse data, the more likely you are to get a sample in the training data set that shows up in the test set and benchmarks.
@sheggle
@sheggle Ай бұрын
One paper a fact does not make
@deeXaeed
@deeXaeed Ай бұрын
Please share paper.
@NickforGreek
@NickforGreek Ай бұрын
The chessboard in the back is wrong...
@soundofbombing
@soundofbombing Ай бұрын
Calling the first one a CS breakthrough is insulting to all of the hard technical work Computer Scientists have done this year.
@rohitagarwal9520
@rohitagarwal9520 Ай бұрын
@3:38 Shouldn't it be 100 choose 4 ways to combine 4 skills to understand a text? Since 100^4 would include cases like choosing the same skill 4 times, which seems redundant in the context of understanding a text.
@kylebowles9820
@kylebowles9820 Ай бұрын
Omg you guys, more than 1 thing happened this year. The reality distortion is insane
@henrycgs
@henrycgs Ай бұрын
this video really annoyed me.
@mattlopez487
@mattlopez487 Ай бұрын
AI AI AI AI AI AI AI HEY GUYS DID YOU HEAR ABOUT AI
@mqb3gofjzkko7nzx38
@mqb3gofjzkko7nzx38 Ай бұрын
@@mattlopez487 I did, it's pretty neat stuff.
@jamesfrancese6091
@jamesfrancese6091 Ай бұрын
The LLM results were just one of four research topics (all different) that they discussed for their 2024 CS "roundup", which they also do for numerous other disciplines (physics, biology, mathematics…) - I guess you didn't watch the whole 10-min video lol, but you could read their articles if you prefer. But tbh I wouldn't; they don't actually talk about "breakthroughs", and as we've been forced to learn, most "scientific progress" is extremely unpredictable and heavily retrospective in nature rather than prescient, as professional scientists are now required to sound in grant applications. Nobel prizes are awarded decades after the work they recognize for a reason - the idea of an "annual breakthrough" story for all major scientific disciplines is a totally artificial emulation of the modern 24/7 news cycle model
@kylebowles9820
@kylebowles9820 Ай бұрын
@@jamesfrancese6091 I did watch the video all the way through, but I already keep up in real time, so it's always interesting to comment on the selection process. Some of Quanta's stuff, the long-form articles, can be really good though
@oneparticle
@oneparticle 19 күн бұрын
This year there's gonna be a new image and video lossless compression algorithm that compresses gigabytes' worth of footage into literal bytes
@xCromlechx
@xCromlechx Ай бұрын
I don't think that LLM research broke through anything this year...
@rudihoffman2817
@rudihoffman2817 Ай бұрын
Interesting to see the second-guessing on this beautifully presented info… easier to criticize than to create real content… my opinion, I could be wrong, but I don't think so.
@hoenchioma
@hoenchioma Ай бұрын
4:13 Did ChatGPT just make a Gojo reference? xD
@drxyd
@drxyd Ай бұрын
Tired of ML
@udaysingh-wr2kw
@udaysingh-wr2kw Ай бұрын
Get ready to live the rest of your life with it
@raphaelfrey9061
@raphaelfrey9061 Ай бұрын
ML already existed before language models. How do you think weather is predicted? Or how voice is perceived by Siri?
@marijn6009
@marijn6009 Ай бұрын
You can choose to be tired of it, or to be inspired by it
@fugamantew
@fugamantew Ай бұрын
The structure that let us all express our disgust towards it
@jimmynoosetron6518
@jimmynoosetron6518 Ай бұрын
lets go back to banging rocks together
@IkedaBC
@IkedaBC 25 күн бұрын
So the first breakthrough is just a group of computer scientists learning how to properly prompt a large language model to do a specific task? Nice
@ArthasMal
@ArthasMal Ай бұрын
1:28 well-placed Oatly ad
@metylen
@metylen Ай бұрын
the background music is totally unnecessary, it's a distraction for someone trying to understand the content...
@TriPham-j3b
@TriPham-j3b Ай бұрын
Energy state equation could predict large body enough to safety and error trapping
@th_bessa
@th_bessa Ай бұрын
Can you do one for Material Science?
@siphosethudlomo
@siphosethudlomo Ай бұрын
LLMs end-point is predicting the future?
@Sharol-b3l
@Sharol-b3l Ай бұрын
I’ve been waiting for this video!! 😂
@manmohanmahapatra6040
@manmohanmahapatra6040 Ай бұрын
AI isn't just computer science; it's interdisciplinary. It's rooted in math (like calculus and probability), shaped by psychology, neuroscience, and linguistics, and powered by hardware innovations from electrical and mechanical engineering. Computers are just tools; AI draws from many fields, and calling it purely computer science ignores the huge contributions of others. Let's give credit where it's due!
@paaabl0.
@paaabl0. Ай бұрын
Kinda bullshit - the only test for the model presenting each skill was "asking it about it", not actually solving a novel task. It's like a reversed argument of the "Chinese room" or "Mary's room" from psychology. Empirically, anyone who's using LLMs on a daily basis knows how dumb they actually are.
@abbaquantum431
@abbaquantum431 Ай бұрын
Picky, picky. This research goes one step at a time, trying stuff out. No need to be rude. Ewin Tang is on to something big. Keep up the good work.
@bumpyfx4900
@bumpyfx4900 Ай бұрын
I think this problem helped Google release Willow, which can reduce the error rate by adding more qubits
@mastergluex
@mastergluex Ай бұрын
Predictive AI is programming the animal kingdom's interpersonal relations and intrapersonal communication to improve prediction. One way to predict the destiny is by being the architect of the destiny.
@Computer-v5e
@Computer-v5e Ай бұрын
Smart channel
@spakecdk
@spakecdk Ай бұрын
Can't wait to see what exciting news quantum computing brings in the near future. Showing it breaking some older cryptography will be very interesting, then we can move on to discussing how vulnerable Bitcoin is to quantum computers.
@jordanwyltk4569
@jordanwyltk4569 Ай бұрын
New title: "Google's research in basic machine learning"
@tanthiennguyen9308
@tanthiennguyen9308 Ай бұрын
Many thanks to all the doctoral students & people, discarding with songs......! So far 1 error has been reported
@luvon1114
@luvon1114 Ай бұрын
Something has gone wrong if you've got computer scientists talking about consciousness and understanding - a field completely different from their own.
@javen9693
@javen9693 Ай бұрын
I hope the Princeton man properly cited the oat milk in his reference library.
@GenaroCamele
@GenaroCamele Ай бұрын
The whole first part about LLMs is a lie; there is no such thing as "emergence". LLMs predict the next word and that's it. People who develop these models, like Yann LeCun, already dismissed reasoning in transformers long ago. This kind of misinformation detracts from the credibility of the channel and only serves the hype of the moment.
@elwendyrofys5944
@elwendyrofys5944 Ай бұрын
Yann LeCun will say what Yann LeCun will say, but he's not the only person in the world working on those things. On top of that, can you prove to me that you are not "just a stochastic parrot"?
@BigSources
@BigSources Ай бұрын
Weird to say "one expert said this so it's true", while casually not acknowledging dozens of other experts saying the opposite thing. LLMs are based on a neural network. That's the thing we have in our brain. If all they do is predict the next word, then what exactly are we doing?
@CuriousKush
@CuriousKush Ай бұрын
They just need content 😂😂
@severaux682
@severaux682 Ай бұрын
Calling it a lie is a bit strong imho. They proved that larger LLMs use a greater set of skills. Whether that is emergence or not is down to semantics that AI companies used out of context for marketing purposes. The more interesting follow-up question is: "How many skills does a human use at a time, how far off are LLMs, and how much scaling was needed to increment the number of skills?"
@Homerow1
@Homerow1 Ай бұрын
Emergence just means simple rules combining to form complex behavior, usually in such complexity that while a person can understand one part, maybe a group of parts, the way the whole works is beyond them.
@curious_one1156
@curious_one1156 Ай бұрын
3:01 - Why does OpenAI not give access ?
@ericchang0704
@ericchang0704 Ай бұрын
You should do Electrical Engineering
@uddhavachikara
@uddhavachikara Ай бұрын
I think first we need to provide them with senses to sense their surroundings, instead of just processing electricity in terms of 1s & 0s... only that may increase the probability of making them evaluate things by themselves. Btw, our lives are just getting interesting, and it makes me sad that I'll miss out on the far future, but it's another kind of pleasure, or maybe this pleasure is just the emergence of my helplessness against the end..✨
@daniloruffino9093
@daniloruffino9093 Ай бұрын
Could you add Italian audio?
@nessiecz2006
@nessiecz2006 Ай бұрын
3:35 supposed to be 100 choose 4: 100!/(4!*96!) = 3,921,225.
@moroteseoinage
@moroteseoinage Ай бұрын
These stochastic parrots regurgitate outputs that are indistinguishable to the layman from human made outputs.
@jadetermig2085
@jadetermig2085 Ай бұрын
Which is not the flex you (maybe?) imply it is. However, it does make it a brilliant way to scam people who are illiterate in that specific area, which is why the marketing hype around AI still hasn't died down. Because enough people haven't realized the scam yet.
@moroteseoinage
@moroteseoinage Ай бұрын
@ I agree with you.
@TriPham-j3b
@TriPham-j3b Ай бұрын
The new breakthrough in data science is like a Rubik's cube: everything inside a cube, and changing color is information, and time resolution is the types, class, prototype, so x difference from y is Planck length in times
@SayutiHina
@SayutiHina Ай бұрын
Visualising tricks of computer science
@aryanmusicacademy944
@aryanmusicacademy944 14 күн бұрын
just another ML/AI video telling us LLMs are getting good and all. CS is not only about ML/AI, there are many fields in CS. Please think before writing a title, since this is just clickbait. I am not undermining your work. Great video, and this is just feedback.
@bandr-dev
@bandr-dev Ай бұрын
animations are crazy
@pain2.
@pain2. 14 күн бұрын
I think this is most likely content aimed at popular science and not at science, in fact
@AnneRavenStar
@AnneRavenStar 16 күн бұрын
"The sky is"... "falling" has more probability than "blue"
@haydencoolidge8679
@haydencoolidge8679 Ай бұрын
My man blinked 21 times in 16 seconds... 5:14
@raxneff
@raxneff Ай бұрын
2 breakthroughs? One just an explanation of LLMs, another an optimization? Doesn't seem a lot has happened in 2024...
@SherlockHolmes-dx7do
@SherlockHolmes-dx7do 15 күн бұрын
can we get 2024 in engineering also please and nuclear science too if possible
@weili-ey9zm
@weili-ey9zm Ай бұрын
Is there any related article?
@TheSpiritualCollective444
@TheSpiritualCollective444 Ай бұрын
It’s so much better when spontaneously sparked
@eSKAone-
@eSKAone- Ай бұрын
Humans are also "just" stochastic parrots.
@jadetermig2085
@jadetermig2085 Ай бұрын
By your logic humans are also "just" bananas because both are "just" atoms.
@mehdizangiabadi-iw6tn
@mehdizangiabadi-iw6tn Ай бұрын
👍🙏 thanks
@coeusuniversity
@coeusuniversity Ай бұрын
It should be obvious by now that the genius and talent behind AI isn't coming from any of these Western organizations; rather, it's the Eastern/Asian brilliance at play in all of them
@asimwilliams861
@asimwilliams861 Ай бұрын
please do this every year
@gauravghodinde2949
@gauravghodinde2949 Ай бұрын
Basically no breakthroughs
@LiamDirk-u8v
@LiamDirk-u8v Ай бұрын
How did you forget to include Willow, Google's first quantum chip
@jlee-mp4
@jlee-mp4 Ай бұрын
random question, but does anyone know the name of the painting at 8:54?
@leoprice4685
@leoprice4685 Ай бұрын
Nighthawks by Edward Hopper
@4115steve
@4115steve Ай бұрын
I was trying to think of a way to describe quantum entanglement in a simple way. I'd describe it as seeing the plane in the sky but there not being a shadow. There is no shadow because light bends around the plane and reforms to the path of least resistance. Light particles are attracted to other light particles and they reform to their original state because of its projected path. Like wave oscillation, eventually things sync back together
@user-powerziko
@user-powerziko Ай бұрын
Can you do one about Earth science?
@genobohez6374
@genobohez6374 Ай бұрын
Every language has so many underlying layers that no system will ever be able to compute this; every ancient culture says language was a gift from the gods. Language even evolves and changes faster than you can predict with your straitjacket computer idiocy